DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart
2016-08-07
The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback of the approach is that, being based on an expansion in Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single-particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We find that both methods consistently reduce the basis set incompleteness error, yielding accurate atomization energies in small basis sets by reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B.
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
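The counterpoise correction referred to in this abstract is simple arithmetic once the component energies are in hand: every term in the corrected binding energy is computed in the full dimer basis. A minimal sketch, with hypothetical energies in hartree standing in for real calculations:

```python
# Counterpoise (CP) correction for basis set superposition error (BSSE),
# illustrated with hypothetical energies in hartree (not real data).

def cp_binding_energy(e_dimer_ab, e_monA_ab, e_monB_ab):
    """CP-corrected binding energy: every term uses the full dimer (AB) basis."""
    return e_dimer_ab - e_monA_ab - e_monB_ab

def uncorrected_binding_energy(e_dimer_ab, e_monA_a, e_monB_b):
    """Uncorrected binding energy: monomers use only their own bases."""
    return e_dimer_ab - e_monA_a - e_monB_b

# Hypothetical numbers: monomers are lowered when "borrowing" the partner's basis.
e_dimer = -152.4125                     # dimer in the dimer basis
e_A_own, e_B_own = -76.2010, -76.2050   # monomers in their own bases
e_A_ab, e_B_ab = -76.2022, -76.2061     # monomers in the full dimer basis

de_raw = uncorrected_binding_energy(e_dimer, e_A_own, e_B_own)
de_cp = cp_binding_energy(e_dimer, e_A_ab, e_B_ab)
bsse = de_cp - de_raw   # BSSE makes the raw binding look stronger (more negative)
print(round(de_raw, 4), round(de_cp, 4), round(bsse, 4))
```

With these illustrative numbers the raw binding energy is overbound relative to the CP-corrected value, which is the behavior the abstract's def2-SVPD discussion relies on.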
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1999-01-01
The atomization energy of Mg4 is determined using the MP2 and CCSD(T) levels of theory. Basis set incompleteness, basis set extrapolation, and core-valence effects are discussed. Our best atomization energy, including the zero-point energy and scalar relativistic effects, is 24.6 ± 1.6 kcal/mol. Our computed and extrapolated values are compared with previous results, where it is observed that our extrapolated MP2 value is in good agreement with the MP2-R12 value. The CCSD(T) and MP2 core effects are found to have opposite signs.
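Basis set extrapolation of correlation energies is commonly done with a two-point inverse-cubic formula, E_X = E_CBS + A·X⁻³, where X is the cardinal number of the basis. Whether this exact form was used in the study above is not stated, so the following is a generic sketch with illustrative (not real) energies:

```python
# Two-point complete-basis-set (CBS) extrapolation assuming the common
# inverse-cubic form E_X = E_CBS + A * X**-3 for correlation energies.
# The input energies below are hypothetical, for illustration only.

def cbs_extrapolate(e_x, x, e_y, y):
    """Solve E_X = E_CBS + A/X^3 and E_Y = E_CBS + A/Y^3 for E_CBS."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical MP2 correlation energies (hartree) in triple-zeta (X=3)
# and quadruple-zeta (X=4) basis sets:
e_tz, e_qz = -0.280, -0.295
e_cbs = cbs_extrapolate(e_tz, 3, e_qz, 4)
print(round(e_cbs, 4))
```

The extrapolated value lies below both finite-basis energies, reflecting the slow X⁻³ decay of the basis set incompleteness error.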
Hill, J Grant
2013-09-30
Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit. Copyright © 2013 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.; Burl, M.
1994-01-01
In this report we consider a decision-making problem of selecting a strategy from a set of alternatives on the basis of incomplete information (e.g., a finite number of observations): the system can, however, gather additional information at some cost.
CCSDT calculations of molecular equilibrium geometries
NASA Astrophysics Data System (ADS)
Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve
1997-08-01
CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths, as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average only by 0.11 pm from experiment.
Muessig, L; Hauser, J; Wills, T J; Cacucci, F
2016-08-01
Place cells are hippocampal pyramidal cells that are active when an animal visits a restricted area of the environment, and collectively their activity constitutes a neural representation of space. Place cell populations in the adult rat hippocampus display fundamental properties consistent with an associative memory network: the ability to 1) generate new and distinct spatial firing patterns when encountering novel spatial contexts or changes in sensory input ("remapping") and 2) reinstate previously stored firing patterns when encountering a familiar context, including on the basis of an incomplete/degraded set of sensory cues ("pattern completion"). To date, it is unknown when these spatial memory responses emerge during brain development. Here, we show that, from the age of first exploration (postnatal day 16) onwards, place cell populations already exhibit these key features: they generate new representations upon exposure to a novel context and can reactivate familiar representations on the basis of an incomplete set of sensory cues. These results demonstrate that, as early as exploratory behaviors emerge, and despite the absence of an adult-like grid cell network, the developing hippocampus processes incoming sensory information as an associative memory network. © The Author 2016. Published by Oxford University Press.
Brandenburg, Jan Gerit; Grimme, Stefan
2014-01-01
We present and evaluate dispersion corrected Hartree-Fock (HF) and Density Functional Theory (DFT) based quantum chemical methods for organic crystal structure prediction. The necessity of correcting for missing long-range electron correlation, also known as van der Waals (vdW) interaction, is pointed out and some methodological issues such as inclusion of three-body dispersion terms are discussed. One of the most efficient and widely used methods is the semi-classical dispersion correction D3. Its applicability for the calculation of sublimation energies is investigated for the benchmark set X23, consisting of 23 small organic crystals. For PBE-D3 the mean absolute deviation (MAD) is below the estimated experimental uncertainty of 1.3 kcal/mol. For two larger π-systems, the equilibrium crystal geometry is investigated and very good agreement with experimental data is found. Since these calculations are carried out with huge plane-wave basis sets they are rather time consuming and routinely applicable only to systems with less than about 200 atoms in the unit cell. Aiming at crystal structure prediction, which involves screening of many structures, a pre-sorting with faster methods is mandatory. Small, atom-centered basis sets can speed up the computation significantly but they suffer greatly from basis set errors. We present the recently developed geometrical counterpoise correction gCP. It is a fast semi-empirical method which corrects for most of the inter- and intramolecular basis set superposition error. For HF calculations with nearly minimal basis sets, we additionally correct for short-range basis incompleteness. We combine all three terms in the scheme denoted HF-3c, which performs very well for the X23 sublimation energies with an MAD of only 1.5 kcal/mol, close to the huge basis set DFT-D3 result.
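For a rough picture of what a semi-classical pairwise dispersion correction of the D3 type looks like, the sketch below sums damped −C6/R⁶ pair terms. The C6 value and damping radius here are placeholders, not the actual D3 parametrization (real D3 uses coordination-number-dependent C6 coefficients and specific damping functions):

```python
import numpy as np

# Minimal sketch of a semi-classical pairwise dispersion correction in the
# spirit of D3: E_disp = -sum_{i<j} s6 * C6_ij / (R_ij^6 + R_damp^6).
# C6 coefficients and damping here are placeholders, NOT real D3 parameters.

def pairwise_dispersion(coords, c6, s6=1.0, r_damp=3.0):
    """coords: (N,3) positions; c6: symmetric (N,N) matrix of C6_ij values."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            # rational damping keeps the correction finite at short range
            e -= s6 * c6[i, j] / (r**6 + r_damp**6)
    return e

# Two-atom toy system separated by 3.0 length units:
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])
c6 = np.array([[0.0, 10.0], [10.0, 0.0]])
print(pairwise_dispersion(coords, c6))
```

The correction is always attractive (negative) and decays as R⁻⁶ at long range, which is the missing physics the abstract describes.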
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, a norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots.
From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those directly obtained by NPSS.
Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data spread across the entire data set. (Abstract shortened by UMI.)
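The gappy POD reconstruction discussed throughout this record can be sketched in a few lines: given a POD basis and a mask marking the observed entries, fit the modal coefficients by least squares over the observed entries only (the "gappy norm"), then reconstruct the full snapshot. This toy example uses synthetic data, not the thesis's data sets, and recovers a snapshot lying exactly in the span of the basis:

```python
import numpy as np

# Minimal sketch of gappy POD reconstruction: fit modal coefficients using
# only the observed entries of a snapshot, then reconstruct the full vector.

def gappy_pod_reconstruct(basis, snapshot, mask):
    """basis: (n,k) POD modes; snapshot: (n,) with junk where mask is False."""
    coeffs, *_ = np.linalg.lstsq(basis[mask], snapshot[mask], rcond=None)
    return basis @ coeffs

rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((8, 2)))  # orthonormal 2-mode basis
true = basis @ np.array([1.5, -0.5])                  # snapshot in span(basis)

mask = np.ones(8, dtype=bool)
mask[[2, 5]] = False            # two entries missing
corrupted = true.copy()
corrupted[~mask] = 0.0          # junk values at the gaps

recon = gappy_pod_reconstruct(basis, corrupted, mask)
print(np.allclose(recon, true))
```

Because six observed equations determine two coefficients and the snapshot lies in the basis span, the least-squares fit is exact here; with noisy or out-of-span data the reconstruction is a projection instead.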
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
NASA Astrophysics Data System (ADS)
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-01
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-14
Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems
Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong
2014-01-01
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address issues using a set of numbers in the real unit interval rather than a single specific number. However, few reliable operations for INSs exist, and INS aggregation operators and decision making methods are likewise scarce. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method. PMID:24695916
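For a flavor of how such aggregation operators work, here is a sketch of a weighted averaging operator for single-valued neutrosophic numbers (T, I, F), using the commonly cited form T = 1 − Π(1−Tⱼ)^wⱼ, I = Π Iⱼ^wⱼ, F = Π Fⱼ^wⱼ. Interval-valued operators apply the same idea to interval endpoints; the paper's exact definitions may differ, so treat this as illustrative:

```python
# Sketch of a weighted averaging aggregation for single-valued neutrosophic
# numbers (T, I, F), following the commonly used form
#   T = 1 - prod((1 - Tj)**wj),  I = prod(Ij**wj),  F = prod(Fj**wj).
# This illustrates the general idea, not a specific paper's operator.

def snn_weighted_average(numbers, weights):
    t_comp, i_agg, f_agg = 1.0, 1.0, 1.0
    for (tj, ij, fj), wj in zip(numbers, weights):
        t_comp *= (1.0 - tj) ** wj   # accumulate complement of truth
        i_agg *= ij ** wj            # geometric mean of indeterminacy
        f_agg *= fj ** wj            # geometric mean of falsity
    return (1.0 - t_comp, i_agg, f_agg)

# Two criteria values with weights 0.6 and 0.4:
a = (0.7, 0.2, 0.1)
b = (0.5, 0.4, 0.3)
agg = snn_weighted_average([a, b], [0.6, 0.4])
print(tuple(round(x, 4) for x in agg))
```

Each aggregated component stays inside [0, 1], and the truth component lands between the two input truth degrees, weighted toward the heavier criterion.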
Interval neutrosophic sets and their application in multicriteria decision making problems.
Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong
2014-01-01
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
Moving frames and prolongation algebras
NASA Technical Reports Server (NTRS)
Estabrook, F. B.
1982-01-01
Differential ideals generated by sets of 2-forms which can be written with constant coefficients in a canonical basis of 1-forms are considered. By setting up a Cartan-Ehresmann connection, in a fiber bundle over a base space in which the 2-forms live, one finds an incomplete Lie algebra of vector fields in the fibers. Conversely, given this algebra (a prolongation algebra), one can derive the differential ideal. The two constructs are thus dual, and analysis of either derives properties of both. Such systems arise in the classical differential geometry of moving frames. Examples of this are discussed, together with examples arising more recently: the Korteweg-de Vries and Harrison-Ernst systems.
Incomplete Gröbner basis as a preconditioner for polynomial systems
NASA Astrophysics Data System (ADS)
Sun, Yang; Tao, Yu-Hui; Bai, Feng-Shan
2009-04-01
Preconditioning plays a critical role in the numerical methods for large and sparse linear systems, and the same is true for nonlinear algebraic systems. In this paper the incomplete Gröbner basis (IGB) is proposed as a preconditioner of homotopy methods for polynomial systems of equations; it transforms a deficient system into a system with the same finite solutions but smaller degree. The reduced system can thus be solved faster. Numerical results show the efficiency of the preconditioner.
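The underlying idea, that a Gröbner-type reduction preserves the solution set while simplifying the system's structure, can be seen on a toy example: the system {x² + y² − 1 = 0, x − y = 0} has lex Gröbner basis (x > y) {x − y, 2y² − 1}, which is triangular and can be solved one variable at a time. The check below verifies numerically that the reduced system's solutions satisfy the original one:

```python
import numpy as np

# Toy illustration of Gröbner-basis preconditioning: the system
#   {x^2 + y^2 - 1 = 0, x - y = 0}
# reduces (lex order, x > y) to the triangular system
#   {x - y = 0, 2*y^2 - 1 = 0},
# which has the same solutions but is solvable one variable at a time.

def original_system(x, y):
    return (x**2 + y**2 - 1.0, x - y)

# Solve the reduced triangular system directly:
ys = [np.sqrt(0.5), -np.sqrt(0.5)]   # roots of 2*y^2 - 1 = 0
solutions = [(y, y) for y in ys]     # back-substitute x = y

# Every solution of the reduced system satisfies the original system:
residuals = [f for s in solutions for f in original_system(*s)]
print(all(abs(f) < 1e-12 for f in residuals))
```

In the homotopy-method setting described above, the payoff is that path tracking on the reduced system follows fewer and better-behaved paths.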
Optimizing Balanced Incomplete Block Designs for Educational Assessments
ERIC Educational Resources Information Center
van der Linden, Wim J.; Veldkamp, Bernard P.; Carlson, James E.
2004-01-01
A popular design in large-scale educational assessments as well as any other type of survey is the balanced incomplete block design. The design is based on an item pool split into a set of blocks of items that are assigned to sets of "assessment booklets." This article shows how the problem of calculating an optimal balanced incomplete block…
Investigation of materials for inert electrodes in aluminum electrodeposition cells
NASA Astrophysics Data System (ADS)
Haggerty, J. S.; Sadoway, D. R.
1987-09-01
Work was divided into two major efforts. The first was the growth and characterization of specimens; the second was Hall cell performance testing. Both cathode and anode materials were the subject of investigation. Preparation of specimens included growth of single crystals and synthesis of ultra-high-purity powders. Special attention was paid to ferrites, as they were considered to be the most promising anode materials. Ferrite anode corrosion rates were studied and the electrical conductivities of a set of copper-manganese ferrites were measured. Float-zone, pendant-drop cryolite experiments were undertaken because unsatisfactory choices of candidate materials were being made on the basis of a flawed set of selection criteria applied to an incomplete and sometimes inaccurate data base. This experiment was constructed to determine whether the apparatus used for float-zone crystal growth could be adapted to study a variety of important melts and their interactions with candidate inert anode materials, an effort driven by our perception that the basis for prior selection of candidate materials was inadequate. Results are presented.
Should genes with missing data be excluded from phylogenetic analyses?
Jiang, Wei; Chen, Si-Yun; Wang, Hong; Li, De-Zhu; Wiens, John J
2014-11-01
Phylogeneticists often design their studies to maximize the number of genes included but minimize the overall amount of missing data. However, few studies have addressed the costs and benefits of adding characters with missing data, especially for likelihood analyses of multiple loci. In this paper, we address this topic using two empirical data sets (in yeast and plants) with well-resolved phylogenies. We introduce varying amounts of missing data into varying numbers of genes and test whether the benefits of excluding genes with missing data outweigh the costs of excluding the non-missing data that are associated with them. We also test if there is a proportion of missing data in the incomplete genes at which they cease to be beneficial or harmful, and whether missing data consistently bias branch length estimates. Our results indicate that adding incomplete genes generally increases the accuracy of phylogenetic analyses relative to excluding them, especially when there is a high proportion of incomplete genes in the overall dataset (and thus few complete genes). Detailed analyses suggest that adding incomplete genes is especially helpful for resolving poorly supported nodes. Given that we find that excluding genes with missing data often decreases accuracy relative to including these genes (and that decreases are generally of greater magnitude than increases), there is little basis for assuming that excluding these genes is necessarily the safer or more conservative approach. We also find no evidence that missing data consistently bias branch length estimates. Copyright © 2014 Elsevier Inc. All rights reserved.
Monte Carlo explicitly correlated second-order many-body perturbation theory
NASA Astrophysics Data System (ADS)
Johnson, Cole M.; Doran, Alexander E.; Zhang, Jinmei; Valeev, Edward F.; Hirata, So
2016-10-01
A stochastic algorithm is proposed and implemented that computes a basis-set-incompleteness (F12) correction to an ab initio second-order many-body perturbation energy as a short sum of 6- to 15-dimensional integrals of Gaussian-type orbitals, an explicit function of the electron-electron distance (geminal), and its associated excitation amplitudes held fixed at the values suggested by Ten-no. The integrals are directly evaluated (without a resolution-of-the-identity approximation or an auxiliary basis set) by the Metropolis Monte Carlo method. Applications of this method to 17 molecular correlation energies and 12 gas-phase reaction energies reveal that both the nonvariational and variational formulas for the correction give reliable correlation energies (98% or higher) and reaction energies (within 2 kJ mol^-1 with a smaller statistical uncertainty) near the complete-basis-set limits by using just the aug-cc-pVDZ basis set. The nonvariational formula is found to be 2-10 times less expensive to evaluate than the variational one, though the latter yields energies that are bounded from below and is, therefore, slightly but systematically more accurate for energy differences. Being capable of using virtually any geminal form, the method confirms the best overall performance of the Slater-type geminal among 6 forms satisfying the same cusp conditions. Not having to precompute lower-dimensional integrals analytically, to store them on disk, or to transform them in a nonscalable dense-matrix-multiplication algorithm, the method scales favorably with both system size and computer size; the cost increases only as O(n^4) with the number of orbitals (n), and its parallel efficiency reaches 99.9% of the ideal case on going from 16 to 4096 computer processors.
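The Metropolis Monte Carlo integration at the heart of the method can be illustrated in one dimension. This sketch has nothing to do with the actual F12 integrals; it only shows the estimator idea: sample x from p(x) ∝ exp(−x²/2) with a random-walk Metropolis chain and average x², whose exact expectation under a standard normal is 1:

```python
import numpy as np

# Minimal sketch of Metropolis Monte Carlo integration: estimate <x^2>
# under the (unnormalized) density p(x) ∝ exp(-x^2/2); exact answer is 1.

rng = np.random.default_rng(42)

def log_p(x):
    return -0.5 * x * x

x = 0.0
samples = []
for step in range(200_000):
    prop = x + rng.uniform(-1.0, 1.0)                 # symmetric proposal
    if np.log(rng.uniform()) < log_p(prop) - log_p(x):
        x = prop                                      # accept the move
    samples.append(x * x)                             # record the estimator

estimate = np.mean(samples[10_000:])                  # discard burn-in
print(round(estimate, 2))
```

Note that only ratios of p appear, so the normalization constant is never needed, which is exactly what makes Metropolis sampling attractive for high-dimensional integrals.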
A statistical approach to identify, monitor, and manage incomplete curated data sets.
Howe, Douglas G
2018-04-02
Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete, and how incomplete they are, remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation, helps avoid flawed hypothesis generation, and can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed, and one such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incomplete curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model; the remaining 7387 genes were used to test it. Of the 7387 tested genes, 122 were identified as missing expression annotations on the basis of residuals below the model's lower 95% confidence interval, and 165 on the basis of residuals above the upper interval. The model had precision of 0.97 and recall of 0.71 at the lower 95% confidence interval, and precision of 0.76 and recall of 0.73 at the upper 95% confidence interval.
This method can be used to identify data sets that are incompletely curated, as demonstrated using the gene expression data set from ZFIN. This information can help both database resources and data consumers gauge when it may be useful to look further for published data to augment the existing expertly curated information.
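The residual-based flagging described above can be sketched with ordinary least squares. The single feature, the synthetic numbers, and the simple |residual| > 1.96·s threshold are illustrative assumptions, not ZFIN's actual model, which used several additional predictors:

```python
import numpy as np

def flag_outliers(X, y, z=1.96):
    """Fit OLS y ~ X and flag points whose residuals fall outside an
    approximate 95% interval (|r| > z * residual standard deviation)."""
    A = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s = resid.std(ddof=A.shape[1])
    low = resid < -z * s    # fewer experiments than predicted: likely under-curated
    high = resid > z * s
    return low, high

# toy data: expression experiments roughly proportional to publication count,
# with one gene deliberately under-annotated
rng = np.random.default_rng(0)
pubs = rng.integers(1, 50, size=200).astype(float)
exps = 0.8 * pubs + rng.normal(0, 1, size=200)
exps[0] = 0.8 * pubs[0] - 15.0   # the under-curated gene
low, high = flag_outliers(pubs.reshape(-1, 1), exps)
```

Genes flagged in `low` would be candidates for further literature curation.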
An ab initio study of the C3(+) cation using multireference methods
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Martin, J. M. L.; Francois, J. P.; Gijbels, R.
1991-01-01
The energy difference between the linear 2 sigma(sup +, sub u) and cyclic 2B(sub 2) structures of C3(+) has been investigated using large (5s3p2d1f) basis sets and multireference electron correlation treatments, including the complete active space self-consistent field (CASSCF), multireference configuration interaction (MRCI), and averaged coupled-pair functional (ACPF) methods, as well as the single-reference quadratic configuration interaction (QCISD(T)) method. Our best estimate, including a correction for basis set incompleteness, is that the linear form lies above the cyclic form by 5.2(+1.5 to -1.0) kcal/mol. The 2 sigma(sup +, sub u) state is probably not a transition state, but a local minimum. Reliable computation of the cyclic/linear energy difference in C3(+) is extremely demanding of the electron correlation treatment used: of the single-reference methods previously considered, CCSD(T) and QCISD(T) perform best. The MRCI + Q(0.01)/(4s2p1d) energy separation of 1.68 kcal/mol should provide a comparison standard for other electron correlation methods applied to this system.
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
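Dempster's rule of combination, used above to fuse evidence from parallel diagnostic models, can be sketched in a few lines. The fault hypotheses and mass values below are invented for illustration and do not come from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to masses) using Dempster's rule, renormalizing by 1 - K
    where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y   # intersections that are empty are conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

# two diagnostic models, each assigning belief over hypothetical fault sets
F = frozenset
m1 = {F({"valve"}): 0.6, F({"valve", "sensor"}): 0.4}
m2 = {F({"sensor"}): 0.5, F({"valve", "sensor"}): 0.5}
m = dempster_combine(m1, m2)
```

The combined masses sum to one, with the conflicting mass redistributed among the compatible hypotheses.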
Systematic theoretical study of non-nuclear electron density maxima in some diatomic molecules.
Terrabuio, Luiz A; Teodoro, Tiago Q; Rachid, Marina G; Haiduke, Roberto L A
2013-10-10
First, exploratory calculations were performed to investigate the presence of non-nuclear maxima (NNMs) in ground-state electron densities of homonuclear diatomic molecules from hydrogen up to calcium at their equilibrium geometries. In a second stage, only for the cases in which these features were detected, a rigorous analysis was carried out with several combinations of theoretical methods and basis sets in order to ensure that they are not merely calculation artifacts. Our best results indicate that Li2, B2, C2, and P2 possess true NNMs. An NNM was found for Na2 with the largest basis sets, but it disappeared at the experimental geometry because the optimized bond lengths are significantly inaccurate in this case (deviations of 0.10 Å). Two such maxima are also observed in Si2 with CCSD and large basis sets, but they are no longer detected once core-valence correlation or multiconfigurational wave functions are taken into account. Therefore, the NNMs in Si2 can be considered unphysical features arising from an incomplete treatment of electron correlation. Finally, we show that an NNM is encountered in LiNa, representing, to our knowledge, the first discovery of such an electron density maximum in a heteronuclear diatomic system at its equilibrium geometry. Some results for LiNa, obtained at varied internuclear distances, suggest that molecular electric moments, such as the dipole and quadrupole, are sensitive to the presence of NNMs.
Is HO3 minimum cis or trans? An analytic full-dimensional ab initio isomerization path.
Varandas, A J C
2011-05-28
The minimum energy path for isomerization of HO(3) has been explored in detail using accurate high-level ab initio methods and techniques for extrapolation to the complete basis set limit. In agreement with other reports, the best estimates from both valence-only and all-electron single-reference methods utilized here predict the minimum of the cis-HO(3) isomer to be deeper than the trans-HO(3) one. They also show that the energy varies by less than about 1 kcal mol(-1) over the full isomerization path. A similar result is found from valence-only multireference configuration interaction calculations with the size-extensive Davidson correction and a correlation consistent triple-zeta basis, which predict the energy difference between the two isomers to be only Δ = -0.1 kcal mol(-1). However, single-point multireference calculations carried out at the optimum triple-zeta geometry with basis sets of the correlation consistent family and cardinal numbers up to X = 6 lead, upon a dual-level extrapolation to the complete basis set limit, to Δ = (0.12 ± 0.05) kcal mol(-1). In turn, extrapolations with the all-electron single-reference coupled-cluster method including the perturbative triples correction yield values of Δ = -0.19 and -0.03 kcal mol(-1) when done from triple-quadruple and quadruple-quintuple zeta pairs with two basis sets of increasing quality, namely cc-pVXZ and aug-cc-pVXZ. Yet, if a value of 0.25 kcal mol(-1) accounting for the effect of triple and perturbative quadruple excitations with the VTZ basis set is added, one obtains a coupled-cluster estimate of Δ = (0.14 ± 0.08) kcal mol(-1). It is then shown for the first time from systematic ab initio calculations that the trans-HO(3) isomer is more stable than the cis one, in agreement with the available experimental evidence.
Inclusion of the best reported zero-point energy difference (0.382 kcal mol(-1)) from multireference configuration interaction calculations further enhances the relative stability to ΔE(ZPE) = (0.51 ± 0.08) kcal mol(-1). A scheme is also suggested to model the full-dimensional isomerization potential-energy surface using a quadratic expansion that is parametrically represented by a Fourier analysis in the torsion angle. The method, illustrated at the raw and complete basis-set-limit coupled-cluster levels, can provide a valuable tool for a future analysis of the available (thus far incomplete) experimental rovibrational data. This journal is © the Owner Societies 2011
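As a hedged illustration of the kind of two-point complete-basis-set extrapolation discussed above, the sketch below uses the common X**-3 formula for correlation energies; this is a standard textbook form and not necessarily the exact dual-level scheme applied in the paper:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point CBS extrapolation assuming E(X) = E_CBS + A * X**-3,
    where X is the cardinal number of the correlation-consistent basis."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# synthetic check: energies generated exactly from the assumed form,
# so the extrapolation should recover e_inf (values are illustrative)
e_inf, a = -76.35, 0.9
e_q = e_inf + a / 4**3   # quadruple-zeta energy
e_5 = e_inf + a / 5**3   # quintuple-zeta energy
e_cbs = cbs_two_point(e_5, 5, e_q, 4)
```

When the data genuinely follow the X**-3 decay, the quadruple-quintuple pair reproduces the basis-set limit exactly.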
Incompletely characterized incidental renal masses: emerging data support conservative management.
Silverman, Stuart G; Israel, Gary M; Trinh, Quoc-Dien
2015-04-01
With imaging, most incidental renal masses can be diagnosed promptly and with confidence as being either benign or malignant. For those that cannot, management recommendations can be devised on the basis of a thorough evaluation of imaging features. However, most renal masses are either too small to characterize completely or are detected initially in imaging examinations that are not designed to evaluate them fully. Such masses are considered incompletely characterized. On the basis of current published guidelines, many of these masses warrant additional imaging. However, while the diagnosis of renal cancer at a curable stage remains the first priority, there is an additional need to reduce unnecessary healthcare costs and radiation exposure. As such, emerging data now support forgoing additional imaging for many incompletely characterized renal masses. These data include the low risk of progression to metastases or death for small renal masses that have undergone active surveillance (including biopsy-proven cancers) and a better understanding of how specific imaging features can be used to diagnose their origins. These developments support (a) avoiding imaging entirely for incompletely characterized renal masses that are highly likely to be benign cysts and (b) delaying further imaging of small solid masses in selected patients. Although more evidence-based data are needed and comprehensive management algorithms have yet to be defined, these recommendations are medically appropriate and practical, while limiting the imaging of many incompletely characterized incidental renal masses.
Locally indistinguishable orthogonal product bases in arbitrary bipartite quantum system
Xu, Guang-Bao; Yang, Ying-Hui; Wen, Qiao-Yan; Qin, Su-Juan; Gao, Fei
2016-01-01
An unextendible product basis (UPB) is an incomplete basis whose members cannot be perfectly distinguished by local operations and classical communication. However, very little is known about incomplete and locally indistinguishable product bases that are not UPBs. In this paper, we first construct a series of orthogonal product bases that are completable but not locally distinguishable in a general m ⊗ n (m ≥ 3 and n ≥ 3) quantum system. In particular, we give the smallest number reported so far of locally indistinguishable states of a completable orthogonal product basis in arbitrary quantum systems. Furthermore, we construct a series of small and locally indistinguishable orthogonal product bases in m ⊗ n (m ≥ 3 and n ≥ 3). All the results lead to a better understanding of the structures of locally indistinguishable product bases in arbitrary bipartite quantum systems. PMID:27503634
Link Prediction in Criminal Networks: A Tool for Criminal Intelligence Analysis
Berlusconi, Giulia; Calderoni, Francesco; Parolini, Nicola; Verani, Marco; Piccardi, Carlo
2016-01-01
The problem of link prediction has recently received increasing attention from scholars in network science. In social network analysis, one of its aims is to recover missing links, namely connections among actors which are likely to exist but have not been reported because data are incomplete or subject to various types of uncertainty. In the field of criminal investigations, problems of incomplete information are encountered almost by definition, given the obvious anti-detection strategies set up by criminals and the limited investigative resources. In this paper, we work on a specific dataset obtained from a real investigation, and we propose a strategy to identify missing links in a criminal network on the basis of the topological analysis of the links classified as marginal, i.e. removed during the investigation procedure. The main assumption is that missing links should have opposite features with respect to marginal ones. Measures of node similarity turn out to provide the best characterization in this sense. The inspection of the judicial source documents confirms that the predicted links, in most instances, do relate actors with large likelihood of co-participation in illicit activities. PMID:27104948
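A minimal sketch of node-similarity scoring for missing-link prediction, assuming a Jaccard measure on neighbour sets; this is one of several standard similarity measures, and the toy network below is invented, not the paper's investigation data:

```python
def jaccard_scores(adj):
    """Score each absent link by the Jaccard similarity of the two nodes'
    neighbour sets; high-scoring pairs are candidate missing links."""
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # already linked, nothing to predict
            nu, nv = adj[u], adj[v]
            union = nu | nv
            scores[(u, v)] = len(nu & nv) / len(union) if union else 0.0
    return scores

# toy network: a and b share both neighbours but have no recorded link,
# so they receive the maximum similarity score
adj = {"a": {"c", "d"}, "b": {"c", "d"}, "c": {"a", "b"}, "d": {"a", "b"}}
scores = jaccard_scores(adj)
```

Ranking the absent pairs by score then gives a shortlist of links to check against the source documents.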
Lin, Hui; Wang, Zhou-Jing
2017-09-17
Low-carbon tourism plays an important role in carbon emission reduction and environmental protection. Low-carbon tourism destination selection often involves multiple conflicting and incommensurate attributes or criteria and can be modelled as a multi-attribute decision-making problem. This paper develops a framework to solve multi-attribute group decision-making problems, where attribute evaluation values are provided as linguistic terms and the attribute weight information is incomplete. In order to obtain a group risk preference captured by a linguistic term set with triangular fuzzy semantic information, a nonlinear programming model is established on the basis of individual risk preferences. We first convert individual linguistic-term-based decision matrices to their respective triangular fuzzy decision matrices, which are then aggregated into a group triangular fuzzy decision matrix. Based on this group decision matrix and the incomplete attribute weight information, a linear program is developed to find an optimal attribute weight vector. A detailed procedure is devised for tackling linguistic multi-attribute group decision making problems. A low-carbon tourism destination selection case study is offered to illustrate how to use the developed group decision-making model in practice.
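A minimal sketch of the linguistic-to-triangular-fuzzy conversion and group aggregation step described above. The five-term scale and the component-wise arithmetic-mean aggregation are assumptions for illustration, not the paper's exact semantics or operators:

```python
# map linguistic terms to triangular fuzzy numbers (l, m, u) on [0, 1];
# this particular scale is a hypothetical choice for the example
TERMS = {
    "poor": (0.0, 0.0, 0.25),
    "fair": (0.0, 0.25, 0.5),
    "good": (0.25, 0.5, 0.75),
    "very good": (0.5, 0.75, 1.0),
    "excellent": (0.75, 1.0, 1.0),
}

def aggregate(judgements):
    """Aggregate individual linguistic judgements into one group
    triangular fuzzy number by the component-wise mean."""
    tfns = [TERMS[t] for t in judgements]
    n = len(tfns)
    return tuple(sum(c[i] for c in tfns) / n for i in range(3))

# three decision makers rate one destination on one attribute
group = aggregate(["good", "very good", "good"])
```

The aggregated (l, m, u) triple then forms one entry of the group decision matrix fed to the weight-optimization program.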
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Clift, Simon S.; Tang, Wei-Pai
1994-01-01
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.
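Since the abstract builds on RCM, here is a compact pure-Python sketch of reverse Cuthill-McKee ordering: breadth-first search from a minimum-degree node, neighbours visited in increasing-degree order, and the resulting ordering reversed. The adjacency-dict representation is an assumption for the example; production codes would use a sparse-matrix implementation such as SciPy's:

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering over an adjacency dict
    {node: set_of_neighbours}; handles disconnected components."""
    order, seen = [], set()
    for start in sorted(adj, key=lambda n: len(adj[n])):  # low degree first
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in sorted(adj[u] - seen, key=lambda n: len(adj[n])):
                seen.add(v)
                queue.append(v)
    return order[::-1]  # reversing tends to reduce fill-in further

# a simple path graph: RCM recovers the path order (reversed)
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "e"}, "e": {"d"}}
ordering = rcm_order(adj)
```

On banded problems this ordering keeps matrix entries near the diagonal, which is what makes the subsequent incomplete factorization retain more of the true factors.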
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2016-02-09
Previously, we applied basic group theory and related concepts to scales of measurement of clinical disease states and clinical findings (including laboratory data). To gain a more concrete comprehension, we here apply the concept of matrix representation, which was not explicitly exploited in our previous work. Starting with a set of orthonormal vectors, called the basis, an operator Rj (an N-tuple patient disease state at the j-th session) was expressed as a set of stratified vectors representing plural operations on individual components, so as to satisfy the group matrix representation. The stratified vectors containing individual unit operations were combined into one-dimensional square matrices [Rj]s. The [Rj]s meet the matrix representation of a group (ring) as a K-algebra. Using the same-sized matrix of stratified vectors, we can also express changes in the plural set of [Rj]s. The method is demonstrated on simple examples. Despite the incompleteness of our model, the group matrix representation of stratified vectors offers a formal mathematical approach to clinical medicine, aligning it with other branches of natural science.
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Safety assessment for In-service Pressure Bending Pipe Containing Incomplete Penetration Defects
NASA Astrophysics Data System (ADS)
Wang, M.; Tang, P.; Xia, J. F.; Ling, Z. W.; Cai, G. Y.
2017-12-01
Incomplete penetration is a common defect in the welded joints of pressure pipes. However, the safety classification of pressure pipes containing incomplete penetration defects under current periodic inspection regulations is rather conservative. To reduce unnecessary repairs of incomplete penetration defects, a scientific and applicable safety assessment method for pressure pipes is needed. In this paper, a stress analysis model of the pipe system was established for in-service pressure bending pipes containing incomplete penetration defects. A local finite element model was set up to analyze the stress distribution at the defect location and to perform stress linearization. The applicability of two assessment methods, the simplified assessment and the U-factor assessment, to incomplete penetration defects located in pressure bending pipes was then analyzed. The results can provide technical support for the safety assessment of complex pipelines in the future.
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2003-07-01
Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied.
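The dual dropping rule that the incomplete assembly borrows from ILUT-style factorizations, a fixed number of nonzeros kept per row plus a drop tolerance, can be sketched as follows. The dict-of-columns row representation is an assumption for illustration, not the paper's data structure:

```python
def sparsify_row(row, p, tol):
    """Dual dropping: discard entries with magnitude below tol, then keep
    only the p largest-magnitude entries of the row (cf. ILUT)."""
    kept = {j: v for j, v in row.items() if abs(v) >= tol}
    if len(kept) > p:
        top = sorted(kept, key=lambda j: -abs(kept[j]))[:p]
        kept = {j: kept[j] for j in top}
    return kept

# a sparse row: column index -> value
row = {0: 4.0, 1: 0.001, 2: -2.5, 3: 0.3, 4: -0.02}
sparse = sparsify_row(row, p=2, tol=0.01)
```

Applying this rule during assembly rather than factorization is what bounds the storage of the assembled Jacobian itself.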
From plane waves to local Gaussians for the simulation of correlated periodic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de
2016-08-28
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.
Inheritance of evolved resistance to a novel herbicide (pyroxasulfone).
Busi, Roberto; Gaines, Todd A; Vila-Aiub, Martin M; Powles, Stephen B
2014-03-01
Agricultural weeds have rapidly adapted to intensive herbicide selection, and resistance to herbicides has evolved within ecological timescales. Yet the genetic basis of broad-spectrum generalist herbicide resistance is largely unknown. This study aims to determine the genetic control of the non-target-site herbicide resistance trait(s) that rapidly evolved under recurrent selection with the novel lipid biosynthesis inhibitor pyroxasulfone in Lolium rigidum. The phenotypic segregation of pyroxasulfone resistance in parental, F1, and back-cross (BC) families was assessed in plants exposed to a gradient of pyroxasulfone doses. The inheritance of resistance to chemically dissimilar herbicides (cross-resistance) was also evaluated. Evolved resistance to the novel selective agent (pyroxasulfone) is explained by Mendelian segregation of one semi-dominant allele selected to higher frequency in the progeny by repeated herbicide exposure. In BC families, cross-resistance is conferred by an incompletely dominant single major locus. This study confirms that resistance can rapidly evolve to any novel selective herbicide under continuous and repeated use. The results imply that combining herbicide options (rotation, mixtures, or combinations) to exploit incomplete dominance can provide acceptable control of broad-spectrum generalist resistance-endowing monogenic traits. Herbicide diversity within a set of integrated management tactics can be one important component in reducing herbicide selection intensity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries
ERIC Educational Resources Information Center
Gottschall, Amanda C.; West, Stephen G.; Enders, Craig K.
2012-01-01
Behavioral science researchers routinely use scale scores that sum or average a set of questionnaire items to address their substantive questions. A researcher applying multiple imputation to incomplete questionnaire data can either impute the incomplete items prior to computing scale scores or impute the scale scores directly from other scale…
50 CFR 679.6 - Exempted fisheries.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...
50 CFR 679.6 - Exempted fisheries.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...
50 CFR 679.6 - Exempted fisheries.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...
50 CFR 679.6 - Exempted fisheries.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...
50 CFR 679.6 - Exempted fisheries.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (iv) Experimental design (e.g., sampling procedures, the data and samples to be collected, and... Exempted fisheries. (a) General. For limited experimental purposes, the Regional Administrator may... basis of incomplete information or design flaws, the applicant will be provided an opportunity to...
ERIC Educational Resources Information Center
Gold, Michael S.; Bentler, Peter M.; Kim, Kevin H.
2003-01-01
This article describes a Monte Carlo study of 2 methods for treating incomplete nonnormal data. Skewed, kurtotic data sets conforming to a single structured model, but varying in sample size, percentage of data missing, and missing-data mechanism, were produced. An asymptotically distribution-free available-case (ADFAC) method and structured-model…
Lemieux, P M; Lee, C W; Ryan, J V
2000-12-01
Emissions of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDDs/Fs) from incinerators and other stationary combustion sources are of environmental concern because of the toxicity of certain PCDD/F congeners. Measurement of trace levels of PCDDs/Fs in combustor emissions is not a trivial matter. Development of one or more simple, easy-to-measure, reliable indicators of stack PCDD/F concentrations not only would enable incinerator operators to economically optimize system performance with respect to PCDD/F emissions, but could also provide a potential technique for demonstrating compliance status on a more frequent basis. This paper focuses on one approach to empirically estimate PCDD/F emissions using easy-to-measure volatile organic C2 chlorinated alkene precursors coupled with flue gas cleaning parameters. Three data sets from pilot-scale incineration experiments were examined for correlations between C2 chlorinated alkenes and PCDDs/Fs. Each data set contained one or more C2 chloroalkenes that were able to account for a statistically significant fraction of the variance in PCDD/F emissions. Variations in the vinyl chloride concentrations were able to account for the variations in the PCDD/F concentrations strongly in two of the three data sets and weakly in one of the data sets.
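The precursor screening described above reduces to computing correlation coefficients between measured series and testing how much variance the precursor explains. A minimal Pearson-r sketch with synthetic numbers (the values below are illustrative, not the paper's pilot-scale data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# synthetic stack measurements: vinyl chloride vs. PCDD/F concentration,
# constructed to be nearly proportional
vinyl_chloride = [0.1, 0.4, 0.9, 1.3, 2.0]
pcdd_f = [0.05, 0.21, 0.44, 0.66, 1.01]
r = pearson_r(vinyl_chloride, pcdd_f)
```

The squared coefficient r**2 is the fraction of PCDD/F variance accounted for by the precursor, the quantity the study examines for statistical significance.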
Complete genome assemblies and methylome characterization in infectious diseases
USDA-ARS?s Scientific Manuscript database
Understanding the genetic basis of infectious diseases is a critical component to effective treatments. Because of the rapid evolution of bacterial strains and frequent horizontal transfer of DNA between them, resequencing of new isolates against known reference strains often provides an incomplete ...
Rough Set Approach to Incomplete Multiscale Information System
Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu
2014-01-01
A multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering the unknown values that are seen everywhere in real-world applications, the incomplete multiscale information system is investigated for the first time. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived at different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms.
Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building on the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that the extrema of the known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between that positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well suited to open set recognition problems in which samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared with standard radial basis function kernel machines, as well as with PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM, and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
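The abating-probability idea behind the CAP model described above can be sketched in a few lines. This is a toy illustration, not the dissertation's fitted model: the one-dimensional feature space, the Weibull scale and shape values, and the function names are all assumptions chosen for clarity.

```python
import math

def weibull_survival(d, scale, shape):
    """Weibull tail probability: the EVT-motivated abating function."""
    return math.exp(-((d / scale) ** shape))

def cap_probability(x, known_positives, scale=2.0, shape=1.5):
    """Compact Abating Probability sketch: membership probability decreases
    (abates) as x moves away from the nearest known positive training point."""
    d = min(abs(x - p) for p in known_positives)
    return weibull_survival(d, scale, shape)

positives = [0.0, 0.5, 1.0]
p_near = cap_probability(0.6, positives)    # close to known data: high probability
p_far = cap_probability(10.0, positives)    # deep in open space: probability abates
```

A test point near the training data keeps a membership probability close to one, while a point far into open space is assigned almost none, which is exactly the behavior that lets such a model reject unknown classes.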
Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes
2018-01-01
an example where the signal is non-sparse in the standard basis, but sparse in the discrete cosine basis. The top plot shows the signal from the...previous example, now used as sparse discrete cosine transform (DCT) coefficients. The next plot shows the non-sparse signal in the standard...Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207–1223. 3. Donoho DL
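The record above is an excerpt, but its underlying point is easy to reproduce: a signal with only a couple of nonzero DCT coefficients is dense in the standard (sample) basis. A minimal sketch, with the signal length and coefficient values chosen arbitrarily:

```python
import math

def idct(coeffs, n_samples):
    """Synthesize samples from orthonormal DCT coefficients (inverse DCT-II)."""
    N = n_samples
    out = []
    for n in range(N):
        s = coeffs[0] / math.sqrt(N)
        for k in range(1, len(coeffs)):
            s += math.sqrt(2.0 / N) * coeffs[k] * math.cos(math.pi * (n + 0.5) * k / N)
        out.append(s)
    return out

coeffs = [0.0] * 32
coeffs[3], coeffs[7] = 5.0, 2.0    # two nonzeros: sparse in the cosine basis
signal = idct(coeffs, 32)
# ...but the synthesized signal is dense in the standard basis:
dense_count = sum(1 for v in signal if abs(v) > 1e-9)
```

Compressive sensing exploits precisely this mismatch: recovery algorithms look for the representation that is sparse, even when the raw samples are not.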
ERIC Educational Resources Information Center
To, Son Thanh
2012-01-01
"Belief state" refers to the set of possible world states satisfying the agent's (usually imperfect) knowledge. The use of belief state allows the agent to reason about the world with incomplete information, by considering each possible state in the belief state individually, in the same way as if it had perfect knowledge. However, the…
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2014-01-01
This research note contributes to the discussion of methods that can be used to identify useful auxiliary variables for analyses of incomplete data sets. A latent variable approach is discussed, which is helpful in finding auxiliary variables with the property that if included in subsequent maximum likelihood analyses they may enhance considerably…
Uncertainties in scaling factors for ab initio vibrational zero-point energies
NASA Astrophysics Data System (ADS)
Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger
2009-03-01
Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
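The propagation described in this abstract is first-order GUM arithmetic: the scaled ZPE is c times the harmonic ZPE, so the standard uncertainty contributed by the scaling factor alone is |ZPE_harm| times u(c). A sketch using the B3LYP/6-31G(d) factor quoted above (the 50 kcal/mol harmonic ZPE is a made-up input for illustration):

```python
def scaled_zpe(zpe_harmonic, scale, u_scale):
    """Scale a harmonic ZPE and propagate the scaling-factor uncertainty
    with the first-order GUM rule u(y) = |dy/dc| * u(c)."""
    zpe = scale * zpe_harmonic
    u_zpe = abs(zpe_harmonic) * u_scale
    return zpe, u_zpe

# Factor for B3LYP/6-31G(d) from the abstract: 0.9757 +/- 0.0224.
zpe, u_zpe = scaled_zpe(50.0, 0.9757, 0.0224)   # hypothetical 50 kcal/mol harmonic ZPE
```

Note how large the propagated uncertainty is relative to the correction itself, which is the abstract's central point: the scaling factor has only about two significant digits.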
Zeidler-Erdely, Patti C; Calhoun, William J; Ameredes, Bill T; Clark, Melissa P; Deye, Gregory J; Baron, Paul; Jones, William; Blake, Terri; Castranova, Vincent
2006-01-01
Background Synthetic vitreous fibers (SVFs) are inorganic noncrystalline materials widely used in residential and industrial settings for insulation, filtration, and reinforcement purposes. SVFs conventionally include three major categories: fibrous glass, rock/slag/stone (mineral) wool, and ceramic fibers. Previous in vitro studies from our laboratory demonstrated length-dependent cytotoxic effects of glass fibers on rat alveolar macrophages which were possibly associated with incomplete phagocytosis of fibers ≥ 17 μm in length. The purpose of this study was to examine the influence of fiber length on primary human alveolar macrophages, which are larger in diameter than rat macrophages, using length-classified Manville Code 100 glass fibers (8, 10, 16, and 20 μm). It was hypothesized that complete engulfment of fibers by human alveolar macrophages could decrease fiber cytotoxicity; i.e. shorter fibers that can be completely engulfed might not be as cytotoxic as longer fibers. Human alveolar macrophages, obtained by segmental bronchoalveolar lavage of healthy, non-smoking volunteers, were treated with three different concentrations (determined by fiber number) of the sized fibers in vitro. Cytotoxicity was assessed by monitoring cytosolic lactate dehydrogenase release and loss of function as indicated by a decrease in zymosan-stimulated chemiluminescence. Results Microscopic analysis indicated that human alveolar macrophages completely engulfed glass fibers of the 20 μm length. All fiber length fractions tested exhibited equal cytotoxicity on a per fiber basis, i.e. increasing lactate dehydrogenase and decreasing chemiluminescence in the same concentration-dependent fashion. Conclusion The data suggest that due to the larger diameter of human alveolar macrophages, compared to rat alveolar macrophages, complete phagocytosis of longer fibers can occur with the human cells. 
Neither incomplete phagocytosis nor length-dependent toxicity was observed in fiber-exposed human macrophage cultures. In contrast, rat macrophages exhibited both incomplete phagocytosis of long fibers and length-dependent toxicity. The results of the human and rat cell studies suggest that incomplete engulfment may enhance cytotoxicity of fiber glass. However, the possibility should not be ruled out that differences between human versus rat macrophages other than cell diameter could account for differences in fiber effects. PMID:16569233
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision-making tasks of different natures, e.g., in economics, safety engineering, ecology, and biology, are based on vague, sparse, partially inconsistent, and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem in realistic decision-making tasks is the incompleteness of the data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, mainly from decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g., that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e., when the decision tree topology is the only information available. In practice, however, isolated information items, e.g., some vaguely known (fuzzy) probabilities, are usually available, meaning that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles the topology-related heuristic and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
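The heuristic that a longer decision tree sub-path is less probable can be sketched as a prior over the tree's leaves. This shows only the bare heuristic, with an assumed 1/length weighting; the paper's actual reconciliation with known fuzzy probabilities uses fuzzy linear programming, which is not reproduced here.

```python
def path_length_priors(path_lengths):
    """Heuristic leaf priors: weight each leaf by 1/(sub-path length) and
    normalize, so longer sub-paths receive less probability."""
    weights = [1.0 / length for length in path_lengths]
    total = sum(weights)
    return [w / total for w in weights]

priors = path_length_priors([1, 2, 4])   # three leaves at depths 1, 2 and 4
```

Under total ignorance this is the only information the topology offers; any isolated fuzzy probability would then constrain and reshape these heuristic values.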
NASA Astrophysics Data System (ADS)
Shepherd, James J.; López Ríos, Pablo; Needs, Richard J.; Drummond, Neil D.; Mohr, Jennifer A.-F.; Booth, George H.; Grüneis, Andreas; Kresse, Georg; Alavi, Ali
2013-03-01
Full configuration interaction quantum Monte Carlo [1] (FCIQMC) and its initiator adaptation [2] allow for exact solutions to the Schrödinger equation to be obtained within a finite-basis wavefunction ansatz. In this talk, we explore an application of FCIQMC to the homogeneous electron gas (HEG). In particular, we use these exact finite-basis energies to compare with approximate quantum chemical calculations from the VASP code [3]. After removing the basis set incompleteness error by extrapolation [4,5], we compare our energies with state-of-the-art diffusion Monte Carlo calculations from the CASINO package [6]. Using a combined approach of the two quantum Monte Carlo methods, we present the highest-accuracy thermodynamic (infinite-particle) limit energies for the HEG achieved to date. [1] G. H. Booth, A. Thom, and A. Alavi, J. Chem. Phys. 131, 054106 (2009). [2] D. Cleland, G. H. Booth, and A. Alavi, J. Chem. Phys. 132, 041103 (2010). [3] www.vasp.at (2012). [4] J. J. Shepherd, A. Grüneis, G. H. Booth, G. Kresse, and A. Alavi, Phys. Rev. B 86, 035111 (2012). [5] J. J. Shepherd, G. H. Booth, and A. Alavi, J. Chem. Phys. 136, 244101 (2012). [6] R. Needs, M. Towler, N. Drummond, and P. L. Ríos, J. Phys.: Condens. Matter 22, 023201 (2010).
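The extrapolation step (refs. [4] and [5] above) removes the basis set incompleteness error by assuming the correlation energy approaches its complete-basis value as an inverse power of the basis size. A two-point sketch under the illustrative form E(M) = E_inf + a/M; the specific functional form and the toy numbers are assumptions, not the published fits:

```python
def extrapolate_cbs(m1, e1, m2, e2):
    """Two-point complete-basis-set estimate assuming E(M) = E_inf + a / M,
    where M counts basis functions (an illustrative 1/M form)."""
    a = (e1 - e2) / (1.0 / m1 - 1.0 / m2)
    return e1 - a / m1

e_cbs = extrapolate_cbs(100, -1.00, 200, -1.05)   # toy correlation energies (hartree)
```

Fitting the two finite-basis points fixes both unknowns, and the intercept E_inf is the basis-set-limit estimate compared against diffusion Monte Carlo in the talk.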
Paradox as a Therapeutic Technique: A Review
ERIC Educational Resources Information Center
Soper, Patricia H.; L'Abate, Luciano
1977-01-01
The increasing use of paradoxical messages and injunctions in marital and familial therapies is reviewed. On the basis of this review, the theoretical, empirical, and clinical grounds for this practice remain incomplete and questionable, and the need for empirical research in this area is still great. (Author)
Classifying with confidence from incomplete information.
Parrish, Nathan; Anderson, Hyrum S.; Gupta, Maya R.; ...
2013-12-01
We consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample are collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth are needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability: the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data. We propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution depends on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels in a set of experiments on time-series data sets, where the goal is to classify the time-series as early as possible while still guaranteeing that the reliability threshold is met.
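The reliability notion in this abstract can be sketched by sampling plausible completions of the missing features: the fraction of completions that agree on a label estimates the probability that the incomplete-data label matches the complete-data label. In this toy sketch the sign-of-sum classifier and the sampled completions are invented for illustration; the paper itself models the complete data as a random variable conditioned on the observed data rather than enumerating completions.

```python
def classify_if_reliable(partial, completions, classify, threshold=0.9):
    """Classify incomplete data only when reliable: label each plausible
    completion, take the majority label, and return it only if its vote
    share (the reliability estimate) meets the threshold."""
    labels = [classify(partial + extra) for extra in completions]
    majority = max(set(labels), key=labels.count)
    reliability = labels.count(majority) / len(labels)
    return (majority if reliability >= threshold else None), reliability

sign_of_sum = lambda xs: 1 if sum(xs) >= 0 else -1
label, rel = classify_if_reliable([2.0], [[-0.5], [0.3], [1.0], [-3.0]],
                                  sign_of_sum, threshold=0.7)
```

Returning None below the threshold captures the paper's practical goal: wait for more data rather than commit to an unreliable label.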
NASA Technical Reports Server (NTRS)
Morris, Richard V.; Golden, D. C.; Bell, J. F., III; Lauer, H. V., Jr.; Adams, J. B.
1992-01-01
The study of palagonitic soils is an active area of research in martian geoscience because the spectral and magnetic properties of a subset are spectral and/or magnetic analogues of martian bright regions. An understanding of the composition, distribution, and mineralogy of ferric-bearing phases for palagonitic soils forms, through spectral and magnetic data, a basis for inferring the nature of ferric-bearing phases on Mars. Progress has been made in this area, but the data set is incomplete, especially with respect to the nature of pigmenting phases. The purpose of this study is to identify the nature of the pigment for Hawaiian palagonitic soil PN-9 by using extraction procedures to selectively remove iron oxide phases. This soil was collected at the same locale as samples Hawaii 34 and VOL02. All three soils are good spectral analogues for martian bright regions.
Rello-Varona, Santiago; Herrero-Martín, David; López-Alemany, Roser; Muñoz-Pinedo, Cristina; Tirado, Oscar M
2015-03-15
During the last decades, knowledge of the cell death mechanisms involved in anticancer therapy has grown exponentially. However, in many studies, cell death is still described in an incomplete manner. The frequent use of indirect proliferation assays, unspecific probes, or bulk analyses too often leads to misunderstandings regarding cell death events. There is a trend to focus on the molecular or genetic regulation of cell demise without a proper characterization of the phenotype under study. Sometimes, cancer researchers can feel overwhelmed or confused when faced with such a corpus of detailed insights, nomenclature rules, and debates about the accuracy of a particular probe or assay. On the basis of the information available, we propose a simple guide to distinguishing forms of cell death in experimental settings using cancer cell lines. ©2015 American Association for Cancer Research.
Trainees' perceptions of practitioner competence during patient transfer.
Grierson, Lawrence; Dubrowski, Adam; So, Steph; Kistner, Nicole; Carnahan, Heather
2012-01-01
Technical and communicative skills are both important features for one's perception of practitioner competence. This research examines how trainees' perceptions of practitioner competence change as they view health care practitioners who vary in their technical and communicative skill proficiencies. Occupational therapy students watched standardized encounters of a practitioner performing a patient transfer in combinations of low and high technical and communicative proficiency and then reported their perceptions of practitioner competence. The reports indicate that technical and communicative skills have independently identifiable impacts on the perceptions of practitioner competency, but technical proficiency has a special impact on the students' perceptions of practitioner communicative competence. The results are discussed with respect to the way in which students may evaluate their own competence on the basis of either technical or communicative skill. The issue of how this may lead trainees to dedicate their independent learning efforts to an incomplete set of features needed for the development of practitioner competency is raised.
A relativistic coupled-cluster interaction potential and rovibrational constants for the xenon dimer
NASA Astrophysics Data System (ADS)
Jerabek, Paul; Smits, Odile; Pahl, Elke; Schwerdtfeger, Peter
2018-01-01
An accurate potential energy curve has been derived for the xenon dimer using state-of-the-art relativistic coupled-cluster theory up to quadruple excitations accounting for both basis set superposition and incompleteness errors. The data obtained is fitted to a computationally efficient extended Lennard-Jones potential form and to a modified Tang-Toennies potential function treating the short- and long-range part separately. The vibrational spectrum of Xe2 obtained from a numerical solution of the rovibrational Schrödinger equation and subsequently derived spectroscopic constants are in excellent agreement with experimental values. We further present solid-state calculations for xenon using a static many-body expansion up to fourth-order in the xenon interaction potential including dynamic effects within the Einstein approximation. Again we find very good agreement with the experimental (face-centred cubic) lattice constant and cohesive energy.
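The extended Lennard-Jones form mentioned in this abstract is a sum of inverse even powers of the internuclear distance. A minimal evaluator, assuming exponents n = 6, 8, ... and made-up coefficients; the published Xe2 potential has many terms fitted to the coupled-cluster data and is not reproduced here.

```python
def extended_lj(r, coeffs):
    """Extended Lennard-Jones potential V(r) = sum_n c_n / r**n, with the
    exponents running over even powers n = 6, 8, 10, ... (illustrative form)."""
    powers = range(6, 6 + 2 * len(coeffs), 2)
    return sum(c / r ** n for n, c in zip(powers, coeffs))

v = extended_lj(4.0, [-100.0, 2.0e4])   # attractive C6 term plus a repulsive higher term
```

The appeal of such a form is computational: evaluating a short sum of inverse powers is cheap enough for the solid-state many-body expansion the abstract goes on to describe.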
Sensor-based activity recognition using extended belief rule-based inference methodology.
Calzada, A; Liu, J; Nugent, C D; Wang, H; Martinez, L
2014-01-01
The recently developed extended belief rule-based inference methodology (RIMER+) recognizes the need to model the different types of information and uncertainty that usually coexist in real environments. A home setting with sensors located in different rooms and on different appliances is a particularly relevant example of such an environment, and it brings a range of challenges for sensor-based activity recognition. Although RIMER+ has been designed as a generic decision model that could be applied in a wide range of situations, this paper discusses how the methodology can be adapted to recognize human activities using binary sensors within smart environments. In an evaluation against other state-of-the-art classifiers, RIMER+ proved competitive in terms of accuracy, efficiency, and applicability, especially in situations of incomplete input data, which demonstrates the potential of the methodology and provides a basis for further research on the topic.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Interpolating the data therefore always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
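The role of an a priori assumption as a regularizer can be seen in the simplest possible setting: a scalar observation y ≈ a·x with a small coefficient a. This scalar Tikhonov sketch (names and numbers invented, with the prior standing in for the a priori tidal assumption) shows how the prior term tames the unstable unregularized estimate:

```python
def tikhonov_scalar(a, y, prior, alpha):
    """Minimizer of (y - a*x)**2 + alpha*(x - prior)**2: scalar Tikhonov
    regularization; alpha sets the weight of the a priori assumption."""
    return (a * y + alpha * prior) / (a * a + alpha)

x_ill = tikhonov_scalar(0.1, 1.0, 0.0, 0.0)   # alpha = 0: ill-posed, estimate blows up
x_reg = tikhonov_scalar(0.1, 1.0, 0.0, 1.0)   # regularized: pulled toward the prior
```

With alpha = 0 the noisy observation is amplified by 1/a; any positive alpha restores a stable, prior-consistent solution, which is the common structure the review identifies across the assimilation schemes.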
School Finance Reform: Factors that Mediate Legal Initiatives.
ERIC Educational Resources Information Center
Sweetland, Scott R.
2000-01-01
Although the Ohio Supreme Court announced its unconstitutionality verdict 3 years ago, litigation and outcomes are incomplete. Due to legislative and referenda failures, implementation has reverted to the judiciary branch. An effective solution may be to address school-finance reform on a case-by-case basis. (Contains 35 references.) (MLH)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-15
... Concerning Household Goods Carriers Requiring Shippers To Sign Blank or Incomplete Documents AGENCY: Federal.... Basis for This Notice The Agency has received numerous consumer complaints concerning household goods.... The FMCSA is issuing this regulatory guidance to eliminate any ambiguity or confusion concerning the...
12 CFR Supplement I to Part 202 - Official Staff Interpretations
Code of Federal Regulations, 2013 CFR
2013-01-01
... an application on the basis of incompleteness.) 2(g) Business credit. 1. Definition. The test for deciding whether a transaction qualifies as business credit is one of primary purpose. For example, an open... becomes available. 4. Effects test and disparate treatment. An empirically derived, demonstrably and...
12 CFR Supplement I to Part 202 - Official Staff Interpretations
Code of Federal Regulations, 2012 CFR
2012-01-01
... an application on the basis of incompleteness.) 2(g) Business credit. 1. Definition. The test for deciding whether a transaction qualifies as business credit is one of primary purpose. For example, an open... becomes available. 4. Effects test and disparate treatment. An empirically derived, demonstrably and...
12 CFR Supplement I to Part 202 - Official Staff Interpretations
Code of Federal Regulations, 2011 CFR
2011-01-01
... an application on the basis of incompleteness.) 2(g) Business credit. 1. Definition. The test for deciding whether a transaction qualifies as business credit is one of primary purpose. For example, an open... becomes available. 4. Effects test and disparate treatment. An empirically derived, demonstrably and...
12 CFR Supplement I to Part 202 - Official Staff Interpretations
Code of Federal Regulations, 2014 CFR
2014-01-01
... an application on the basis of incompleteness.) 2(g) Business credit. 1. Definition. The test for deciding whether a transaction qualifies as business credit is one of primary purpose. For example, an open... becomes available. 4. Effects test and disparate treatment. An empirically derived, demonstrably and...
Barriers to Specialty Care and Specialty Referral Completion in the Community Health Center Setting
Zuckerman, Katharine E.; Perrin, James M.; Hobrecker, Karin; Donelan, Karen
2013-01-01
Objective To assess the frequency of barriers to specialty care and to assess which barriers are associated with an incomplete specialty referral (not attending a specialty visit when referred by a primary care provider) among children seen in community health centers. Study design Two months after their child’s specialty referral, 341 parents completed telephone surveys assessing whether a specialty visit was completed and whether they experienced any of 10 barriers to care. Family/community barriers included difficulty leaving work, obtaining childcare, obtaining transportation, and inadequate insurance. Health care system barriers included getting appointments quickly, understanding doctors and nurses, communicating with doctors’ offices, locating offices, accessing interpreters, and inconvenient office hours. We calculated barrier frequency and total barriers experienced. Using logistic regression, we assessed which barriers were associated with incomplete referral, and whether experiencing ≥4 barriers was associated with incomplete referral. Results A total of 22.9% of families experienced incomplete referral. 42.0% of families encountered 1 or more barriers. The most frequent barriers were difficulty leaving work, obtaining childcare, and obtaining transportation. On multivariate analysis, difficulty getting appointments quickly, difficulty finding doctors’ offices, and inconvenient office hours were associated with incomplete referral. Families experiencing ≥4 barriers were more likely than those experiencing ≤3 barriers to have incomplete referral. Conclusion Barriers to specialty care were common and associated with incomplete referral. Families experiencing many barriers had greater risk of incomplete referral. Improving family/community factors may increase satisfaction with specialty care; however, improving health system factors may be the best way to reduce incomplete referrals. PMID:22929162
Community-based outbreaks of tuberculosis.
Raffalli, J; Sepkowitz, K A; Armstrong, D
1996-05-27
Numerous recent reports have detailed outbreaks of tuberculosis in hospitals and other congregate settings. The characteristics of such settings, including high concentrations of infectious patients and immunocompromised hosts, the potential for sustained daily contact for weeks and often months, and improper precautions taken for protection, make them well suited for tuberculosis transmission. However, community-based outbreaks, which are the source of much public concern, have not been reviewed since 1964, when 109 community outbreaks were examined. Since few of the characteristics of institutional settings are present in the community, the lessons learned may not be applicable to community-based outbreaks. Furthermore, recent studies with analysis by restriction fragment length polymorphisms have documented unexpectedly high rates of primary disease in certain urban communities, suggesting that our understanding of community-based transmission may be incomplete. We reviewed all reported community-based outbreaks of tuberculosis occurring in the last 30 years to assess the basis of our current understanding of community-based transmission. More than 70 outbreaks were identified, with schools being the most common site. In most, a delay in diagnosis, sustained contact with the index case, inadequate ventilation, or overcrowding was contributory. We conclude that community-based outbreaks of tuberculosis continue to occur and that well-established risks contribute to most outbreaks. Many outbreaks can be prevented or limited by attention to basic infection control principles.
Adversarial risk analysis with incomplete information: a level-k approach.
Rothschild, Casey; McLay, Laura; Guikema, Seth
2012-07-01
This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
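Level-k reasoning itself is simple to sketch: a level-0 player randomizes uniformly, and a level-k player best-responds to a level-(k-1) opponent. A toy two-site defend-attack sketch follows; the payoff matrices and the tie-breaking rule are invented, and the article's partial revelation of the defender's countermeasures is omitted.

```python
def best_response(payoffs, opponent_dist):
    """Index of the action maximizing expected payoff; payoffs[a][b] is the
    payoff for own action a against opponent action b."""
    n = len(payoffs)
    scores = [sum(payoffs[a][b] * opponent_dist[b] for b in range(n))
              for a in range(n)]
    return max(range(n), key=lambda a: scores[a])

def level_k_dist(payoffs_self, payoffs_opp, k):
    """Level-0 is uniform; level k puts all mass on the best response to the
    opponent's level-(k-1) distribution (roles swap at each recursion)."""
    n = len(payoffs_self)
    if k == 0:
        return [1.0 / n] * n
    opp = level_k_dist(payoffs_opp, payoffs_self, k - 1)
    choice = best_response(payoffs_self, opp)
    return [1.0 if a == choice else 0.0 for a in range(n)]

defender = [[1, 0], [0, 1]]     # defender scores by guarding the attacked site
attacker = [[0, 1], [1, 0]]     # attacker scores by hitting the unguarded site
attack_dist = level_k_dist(attacker, defender, 2)   # level-2 attacker's play
```

The level-2 attacker anticipates the level-1 defender's deterministic choice and hits the other site, illustrating how bounded strategic depth, rather than full equilibrium, drives the predictions.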
Newborns' Mooney-Face Perception
ERIC Educational Resources Information Center
Leo, Irene; Simion, Francesca
2009-01-01
The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…
A Culturally Sensitive Analysis of Culture in the Context of Context: When Is Enough Enough?
ERIC Educational Resources Information Center
Kahn, Peter H., Jr.
Cultural context is not the sole source of human knowledge. Postmodern theory, in both its deconstructionist and affirmative approaches, offers an incomplete basis by which to study race, class, and gender, and undermines ethical interaction. Deconstructionism calls for the abandonment of generalizable research findings, asserting that the concept…
Knowledge of the Debate Critic-Judge.
ERIC Educational Resources Information Center
Phillips, Neil
Arguing that any discussion of debate theory is incomplete without at least some analysis or review of paradigm theory, this paper begins by analogizing the arguments over paradigms to a battle ground over control of the activity. The analysis then shifts to an examination of Thomas Kuhn's sociological theory as a basis for the argument that the…
Effect of projectile on incomplete fusion reactions at low energies
NASA Astrophysics Data System (ADS)
Sharma, Vijay R.; Shuaib, Mohd.; Yadav, Abhishek; Singh, Pushpendra P.; Sharma, Manoj K.; Kumar, R.; Singh, Devendra P.; Singh, B. P.; Muralithar, S.; Singh, R. P.; Bhowmik, R. K.; Prasad, R.
2017-11-01
The present work deals with experimental studies of incomplete fusion (ICF) reaction dynamics at energies as low as ≈ 4 - 7 MeV/A. Excitation functions populated via complete fusion and/or incomplete fusion processes in the 12C+175Lu and 13C+169Tm systems have been measured and analyzed within the framework of the PACE4 code. Comparison of the excitation function measurements for different projectile-target combinations suggests the existence of ICF even slightly above barrier energies, where complete fusion (CF) is supposed to be the sole contributor, and further demonstrates a strong dependence of ICF on projectile structure. The incomplete fusion strength functions for the 12C+175Lu and 13C+169Tm systems are analyzed as a function of various physical parameters at a constant vrel ≈ 0.053c. It has been found that the one-neutron-excess projectile 13C (as compared to 12C) results in a smaller incomplete fusion contribution due to its relatively large negative α-Q-value; hence, the α-Q-value seems to be a reliable parameter for understanding ICF dynamics at low energies. In order to explore the reaction modes on the basis of their entry-state spin population, the spin distribution of residues populated via CF and/or ICF in the 16O+159Tb system has been measured using the particle-γ coincidence technique. CF-α and ICF-α channels have been identified from backward (B) and forward (F) α-gated γ spectra, respectively. Reaction-dependent decay patterns have been observed in different α-emitting channels. The CF channels are found to be fed over a broad spin range; however, ICF-α channels were observed only for high-spin states. Further, the existence of incomplete fusion at low bombarding energies indicates the possibility of populating high-spin states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Kensaku, E-mail: moriken@md.tsukuba.ac.jp; Saida, Tsukasa; Shibuya, Yoko
Purpose: To compare the status of uterine and ovarian arteries after uterine artery embolization (UAE) in patients with incomplete and complete fibroid infarction via unenhanced 3D time-of-flight magnetic resonance (MR) angiography. Materials and Methods: Thirty-five consecutive women (mean age 43 years; range 26-52 years) with symptomatic uterine fibroids underwent UAE and MR imaging before and within 2 months after UAE. The patients were divided into incomplete and complete fibroid infarction groups on the basis of the postprocedural gadolinium-enhanced MR imaging findings. Two independent observers reviewed unenhanced MR angiography before and after UAE to determine bilateral uterine and ovarian arterial flow scores. The total arterial flow scores were calculated by summing the scores of the 4 arteries. All scores were compared with the Mann-Whitney test. Results: Fourteen and 21 patients were assigned to the incomplete and complete fibroid infarction groups, respectively. The total arterial flow score in the incomplete fibroid infarction group was significantly greater than that in the complete fibroid infarction group (P = 0.019 and P = 0.038 for observers 1 and 2, respectively). In 3 patients, additional therapy was recommended for insufficient fibroid infarction. In 1 of the 3 patients, bilateral ovarian arteries were invisible before UAE but seemed enlarged after UAE. Conclusion: The total arterial flow from bilateral uterine and ovarian arteries in patients with incomplete fibroid infarction is less well reduced than in those with complete fibroid infarction. Postprocedural MR angiography provides useful information to estimate the cause of insufficient fibroid infarction in individual cases.
Platelet function analysis with two different doses of aspirin.
Aydinalp, Alp; Atar, Ilyas; Altin, Cihan; Gülmez, Oykü; Atar, Asli; Açikel, Sadik; Bozbaş, Hüseyin; Yildirir, Aylin; Müderrisoğlu, Haldun
2010-06-01
We aimed to compare the level of platelet inhibition using the platelet function analyzer (PFA)-100 in patients receiving low and medium doses of aspirin. On a prospective basis, 159 cardiology outpatients (83 men, 76 women; mean age 60.9 ± 9.9 years) taking 100 mg/day or 300 mg/day aspirin for at least the previous 15 days were included. Of these, 79 patients (49.7%) were on 100 mg and 80 patients (50.3%) were on 300 mg aspirin treatment. Blood samples were collected between 09:30 and 11:00 hours in the morning. Platelet reactivity was measured with the PFA-100 system. Incomplete platelet inhibition was defined as a normal collagen/epinephrine closure time (< 165 sec) despite aspirin treatment. Baseline clinical and laboratory characteristics of the patient groups taking 100 mg or 300 mg aspirin were similar. The overall prevalence of incomplete platelet inhibition was 22% (35 patients). The prevalence of incomplete platelet inhibition was significantly higher in patients treated with 100 mg of aspirin (n = 24/79, 30.4%) than in those treated with 300 mg of aspirin (n = 11/80, 13.8%) (p = 0.013). In univariate analysis, female sex (p = 0.002) and aspirin dose (p = 0.013) were significantly correlated with incomplete platelet inhibition. In multivariate analysis, female sex (OR: 0.99; 95% CI 0.9913-0.9994; p = 0.025) and aspirin dose (OR: 3.38; 95% CI 1.4774-7.7469; p = 0.003) were found to be independent predictors of incomplete platelet inhibition. Our findings suggest that treatment with higher doses of aspirin can reduce incomplete platelet inhibition, especially in female patients.
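The dose effect reported above can be checked with a crude (unadjusted) odds ratio computed directly from the stated counts; note that the abstract's OR of 3.38 comes from a multivariate model, so this back-of-the-envelope figure differs from it. A sketch in Python:

```python
import math

# Counts as reported in the abstract: incomplete inhibition / group size
a, n1 = 24, 79   # on 100 mg/day aspirin
b, n2 = 11, 80   # on 300 mg/day aspirin

# Crude odds ratio for incomplete inhibition, 100 mg vs 300 mg
odds_100 = a / (n1 - a)
odds_300 = b / (n2 - b)
crude_or = odds_100 / odds_300

# Woolf 95% confidence interval on the log-odds-ratio scale
se_log_or = math.sqrt(1 / a + 1 / (n1 - a) + 1 / b + 1 / (n2 - b))
ci_lo = math.exp(math.log(crude_or) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(crude_or) + 1.96 * se_log_or)
```

The crude OR comes out near 2.7, in the same direction as (but smaller than) the adjusted multivariate estimate.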
Insecurity of Wireless Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheldon, Frederick T; Weber, John Mark; Yoo, Seong-Moo
Wireless is a powerful core technology enabling our global digital infrastructure. Wi-Fi networks are susceptible to attacks on Wired Equivalency Privacy (WEP), Wi-Fi Protected Access (WPA), and WPA2. These attack signatures can be profiled into a system that defends against such attacks on the basis of their inherent characteristics. Wi-Fi is the standard protocol for wireless networks, used extensively in US critical infrastructures. Since the WEP security protocol was broken, the WPA protocol has been considered the secure alternative compatible with hardware developed for WEP. However, in November 2008, researchers developed an attack on WPA, allowing forgery of Address Resolution Protocol (ARP) packets. Subsequent enhancements have enabled ARP poisoning, cryptosystem denial of service, and man-in-the-middle attacks. Open source systems and methods (OSSM) have long been used to secure networks against such attacks. This article reviews OSSMs and the results of experimental attacks on WPA. These experiments re-created current attacks in a laboratory setting, recording both wired and wireless traffic. The article discusses methods of intrusion detection and prevention in the context of cyber-physical protection of critical Internet infrastructure. The basis for this research is a specialized (and undoubtedly incomplete) taxonomy of Wi-Fi attacks and their adaptations to existing countermeasures and protocol revisions. Ultimately, this article aims to provide a clearer picture of how and why wireless protection protocols and encryption must achieve a more scientific basis for detecting and preventing such attacks.
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. However, these applications become worthless if the Boolean information system contains missing data due to error, security, or mishandling. Few studies have focused on handling partially incomplete soft sets, and none of them achieves a high accuracy rate in predicting missing data. Among previous approaches, the data filling approach for incomplete soft sets (DFIS) has shown the best performance; in reviewing DFIS, however, accuracy remains its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of associations among parameters in the soft set. Experimental results on a small dataset, four UCI benchmark datasets, and the causality workbench lung cancer (LUCAP2) dataset show that ADFIS achieves better accuracy than DFIS.
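The idea of filling missing Boolean entries from associations among parameters can be illustrated with a toy sketch. This is a simplified stand-in for ADFIS, not the published algorithm: the table, the agree/disagree scoring rule, and the fill policy are invented for illustration.

```python
def fill_missing(table):
    """Fill None entries of a Boolean table in place, predicting each missing
    value from the other column most strongly associated with its column."""
    rows, cols = len(table), len(table[0])
    for j in range(cols):
        for i in range(rows):
            if table[i][j] is not None:
                continue
            best_col, best_score, best_sign = None, -1, 1
            for k in range(cols):
                if k == j or table[i][k] is None:
                    continue
                agree = disagree = 0
                for r in range(rows):
                    if table[r][j] is None or table[r][k] is None:
                        continue
                    if table[r][j] == table[r][k]:
                        agree += 1
                    else:
                        disagree += 1
                score = max(agree, disagree)   # strength of the association
                if score > best_score:
                    best_col, best_score = k, score
                    best_sign = 1 if agree >= disagree else -1
            if best_col is not None:
                v = table[i][best_col]
                # copy the predictor's value, or its complement for a
                # negatively associated predictor
                table[i][j] = v if best_sign == 1 else 1 - v
    return table
```

For example, a column that has always agreed with another column gets the missing entry copied from it, while a column that has always disagreed gets the complement.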
A bird's eye view: the cognitive strategies of experts interpreting seismic profiles
NASA Astrophysics Data System (ADS)
Bond, C. E.; Butler, R.
2012-12-01
Geoscience is perhaps unique in its reliance on incomplete datasets and in building knowledge from their interpretation. This interpretative basis for the science is fundamental at all levels, from the creation of a geological map to the interpretation of remotely sensed data. To better teach and understand the uncertainties in dealing with incomplete data, we need to understand the strategies individual practitioners deploy that make them effective interpreters. The nature of interpretation is such that the interpreter needs to use their cognitive ability in the analysis of the data to propose a sensible solution that is consistent not only with the original data but also with other knowledge and understanding. In a series of experiments, Bond et al. (2007, 2008, 2011, 2012) investigated the strategies and pitfalls of expert and non-expert interpretation of seismic images. These studies focused on large numbers of participants to provide a statistically sound basis for analysis of the results. The outcome of these experiments showed that techniques and strategies are more important than expert knowledge per se in developing successful interpretations. Experts are successful because of their application of these techniques. In a new set of experiments we have focused on a small number of experts to determine how they use their cognitive and reasoning skills in the interpretation of 2D seismic profiles. Live video and practitioner commentary were used to track the evolving interpretation and to gain insight into their decision processes. The outputs of the study allow us to create an educational resource of expert interpretation through online video footage and commentary, with associated further interpretation and analysis of the techniques and strategies employed.
This resource will be of use to undergraduate, post-graduate, industry and academic professionals seeking to improve their seismic interpretation skills, develop reasoning strategies for dealing with incomplete datasets, and for assessing the uncertainty in these interpretations. Bond, C.E. et al. (2012). 'What makes an expert effective at interpreting seismic images?' Geology, 40, 75-78. Bond, C. E. et al. (2011). 'When there isn't a right answer: interpretation and reasoning, key skills for 21st century geoscience'. International Journal of Science Education, 33, 629-652. Bond, C. E. et al. (2008). 'Structural models: Optimizing risk analysis by understanding conceptual uncertainty'. First Break, 26, 65-71. Bond, C. E. et al., (2007). 'What do you think this is?: "Conceptual uncertainty" In geoscience interpretation'. GSA Today, 17, 4-10.
Tsai, Christopher C; Tsai, Sarai H; Zeng-Treitler, Qing; Liang, Bryan A
2007-10-11
The quality of user-generated health information on consumer health social networking websites has not been studied. We collected a set of postings related to Diabetes Mellitus Type I from three such sites and classified them based on accuracy, error type, and clinical significance of error. We found 48% of postings contained medical content, and 54% of these were either incomplete or contained errors. About 85% of the incomplete and erroneous messages were potentially clinically significant.
Induction of belief decision trees from data
NASA Astrophysics Data System (ADS)
AbuDahab, Khalil; Xu, Dong-ling; Keane, John
2012-09-01
In this paper, a method for acquiring belief rule-bases by inductive inference from data is described and evaluated. Existing methods extract traditional rules inductively from data, with consequents that are believed to be either 100% true or 100% false. Belief rules can capture uncertain or incomplete knowledge using uncertain belief degrees in consequents. Instead of using single-valued consequents, each belief rule deals with a set of collectively exhaustive and mutually exclusive consequents. The proposed method extracts belief rules from data which contain uncertain or incomplete knowledge.
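The contrast between single-valued and belief consequents can be made concrete with a minimal data structure; the field names and example rules below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class BeliefRule:
    """A rule whose consequent is a belief distribution over a collectively
    exhaustive, mutually exclusive set of outcomes."""
    antecedents: dict   # attribute -> required value
    beliefs: dict       # consequent -> belief degree in [0, 1]

    def __post_init__(self):
        total = sum(self.beliefs.values())
        # degrees may sum to less than 1 when knowledge is incomplete
        assert 0.0 <= total <= 1.0 + 1e-9

# a traditional rule is the special case with all belief on one consequent
crisp = BeliefRule({"outlook": "sunny"}, {"play": 1.0})
# a belief rule can leave 0.1 of the belief unassigned (incomplete knowledge)
uncertain = BeliefRule({"outlook": "rainy"}, {"play": 0.7, "no_play": 0.2})
```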
Basis sets for the calculation of core-electron binding energies
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-05-01
Core-electron binding energies (CEBEs) computed within a Δ self-consistent field approach require large basis sets to achieve convergence with respect to the basis set limit. It is shown that supplementing a basis set with basis functions from the corresponding basis set for the element with the next highest nuclear charge (Z + 1) provides basis sets that give CEBEs close to the basis set limit. This simple procedure provides relatively small basis sets that are well suited for calculations where the description of a core-ionised state is important, such as time-dependent density functional theory calculations of X-ray emission spectroscopy.
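The (Z + 1) supplementation procedure described above can be sketched as a simple merge of basis-function lists. The registry below and its exponents are purely illustrative placeholders, not values from any published basis set.

```python
# Hypothetical basis registry: element -> list of (shell, exponent) functions.
BASIS = {
    "C": [("s", 71.6168), ("s", 13.0450), ("p", 2.9412)],
    "N": [("s", 99.1062), ("s", 18.0523), ("p", 4.0779)],
}

def z_plus_one_basis(element, next_element, basis=BASIS):
    """Supplement an element's basis with the basis functions of the element
    with the next highest nuclear charge (Z + 1), skipping exact duplicates."""
    combined = list(basis[element])
    for fn in basis[next_element]:
        if fn not in combined:
            combined.append(fn)
    return combined
```

The combined set describes the core-ionised state better because the (Z + 1) functions are adapted to the more contracted core that a core hole produces.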
Dieterich, Johannes M; Werner, Hans-Joachim; Mata, Ricardo A; Metz, Sebastian; Thiel, Walter
2010-01-21
Energy and free energy barriers for acetaldehyde conversion in aldehyde oxidoreductase are determined for three reaction pathways using quantum mechanical/molecular mechanical (QM/MM) calculations on the solvated enzyme. Ab initio single-point QM/MM energies are obtained at the stationary points optimized at the DFT(B3LYP)/MM level. These ab initio calculations employ local correlation treatments [LMP2 and LCCSD(T0)] in combination with augmented triple- and quadruple-zeta basis sets, and the final coupled cluster results include MP2-based corrections for basis set incompleteness and for the domain approximation. Free energy perturbation (FEP) theory is used to generate free energy profiles at the DFT(B3LYP)/MM level for the most important reaction steps by sampling along the corresponding reaction paths using molecular dynamics. The ab initio and FEP QM/MM results are combined to derive improved estimates of the free energy barriers, which differ from the corresponding DFT(B3LYP)/MM energy barriers by about 3 kcal mol(-1). The present results confirm the qualitative mechanistic conclusions from a previous DFT(B3LYP)/MM study. Most favorable is a three-step Lewis base catalyzed mechanism with an initial proton transfer from the cofactor to the Glu869 residue, a subsequent nucleophilic attack that yields a tetrahedral intermediate (IM2), and a final rate-limiting hydride transfer. The competing metal center activated pathway has the same final step but needs to overcome a higher barrier in the initial step on the route to IM2. The concerted mechanism has the highest free energy barrier and can be ruled out. While confirming the qualitative mechanistic scenario proposed previously on the basis of DFT(B3LYP)/MM energy profiles, the present ab initio and FEP QM/MM calculations provide corrections to the barriers that are important when aiming at high accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papajak, Ewa; Truhlar, Donald G.
We present sets of convergent, partially augmented basis set levels corresponding to subsets of the augmented “aug-cc-pV(n+d)Z” basis sets of Dunning and co-workers. We show that for many molecular properties a basis set fully augmented with diffuse functions is computationally expensive and almost always unnecessary. On the other hand, unaugmented cc-pV(n+d)Z basis sets are insufficient for many properties that require diffuse functions. Therefore, we propose using intermediate basis sets. We developed an efficient strategy for partial augmentation, and in this article, we test it and validate it. Sequentially deleting diffuse basis functions from the “aug” basis sets yields the “jul”, “jun”, “may”, “apr”, etc. basis sets. Tests of these basis sets for Møller-Plesset second-order perturbation theory (MP2) show the advantages of using these partially augmented basis sets and allow us to recommend which basis sets offer the best accuracy for a given number of basis functions for calculations on large systems. Similar truncations in the diffuse space can be performed for the aug-cc-pVxZ, aug-cc-pCVxZ, etc. basis sets.
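The sequential-deletion naming scheme can be sketched as follows. The diffuse-shell list and the rule "each step deletes the highest remaining angular-momentum diffuse shell" are simplifying assumptions for illustration; the actual prescription also distinguishes hydrogen from heavy-atom shells.

```python
# Illustrative "calendar" truncation of an augmented basis set.
AUG_DIFFUSE = {"aug-cc-pVTZ": ["s", "p", "d", "f"]}

def truncate_diffuse(name, steps):
    """Name and remaining diffuse shells after `steps` deletions from an
    'aug' basis set."""
    months = ["aug", "jul", "jun", "may", "apr"]
    shells = AUG_DIFFUSE[name][: len(AUG_DIFFUSE[name]) - steps]
    return name.replace("aug", months[steps], 1), shells
```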
Do Social Conditions Affect Capuchin Monkeys' (Cebus apella) Choices in a Quantity Judgment Task?
Beran, Michael J; Perdue, Bonnie M; Parrish, Audrey E; Evans, Theodore A
2012-01-01
Beran et al. (2012) reported that capuchin monkeys closely matched the performance of humans in a quantity judgment test in which information was incomplete but a judgment still had to be made. In each test session, subjects first made quantity judgments between two known options. Then, they made choices where only one option was visible. Both humans and capuchin monkeys were guided by past outcomes, as they shifted from selecting a known option to selecting an unknown option at the point at which the known option went from being more than the average rate of return to less than the average rate of return from earlier choices in the test session. Here, we expanded this assessment of what guides quantity judgment choice behavior in the face of incomplete information to include manipulations to the unselected quantity. We manipulated the unchosen set in two ways: first, we showed the monkeys what they did not get (the unchosen set), anticipating that "losses" would weigh heavily on subsequent trials in which the same known quantity was presented. Second, we sometimes gave the unchosen set to another monkey, anticipating that this social manipulation might influence the risk-taking responses of the focal monkey when faced with incomplete information. However, neither manipulation caused difficulty for the monkeys who instead continued to use the rational strategy of choosing known sets when they were as large as or larger than the average rate of return in the session, and choosing the unknown (riskier) set when the known set was not sufficiently large. As in past experiments, this was true across a variety of daily ranges of quantities, indicating that monkeys were not using some absolute quantity as a threshold for selecting (or not) the known set, but instead continued to use the daily average rate of return to determine when to choose the known versus the unknown quantity.
Smith, J; Kiupel, M; Farrelly, J; Cohen, R; Olmsted, G; Kirpensteijn, J; Brocks, B; Post, G
2017-03-01
Grade II mast cell tumours (MCT) are tumours with variable biologic behaviour. Multiple factors have been associated with outcome, including proliferation markers. The purpose of this study was to determine whether the extent of surgical excision affects recurrence rate in dogs with grade II MCT with low proliferation activity, determined by Ki67 and argyrophilic nucleolar organising regions (AgNOR). Eighty-six dogs with cutaneous MCT were evaluated. All dogs underwent surgical excision of MCT with a low Ki67 index and low combined AgNOR×Ki67 (Ag67) values. Twenty-three (27%) dogs developed local or distant recurrence during the follow-up period. Of these dogs, six (7%) had local recurrence; one had complete and five had incomplete histologic margins. This difference in recurrence rates between dogs with complete and incomplete histologic margins was not significant. On the basis of this study, ancillary therapy may not be necessary for patients with incompletely excised grade II MCT with low proliferation activity. © 2015 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
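The two-point A + B/L^α extrapolation and the MP2-based additivity correction can be sketched in a few lines. This is a generic illustration, not the authors' code; the usage in the test assumes exact 1/L³ convergence of the synthetic energies.

```python
def two_point_extrapolation(e_lo, e_hi, l_lo, l_hi, alpha=3.0):
    """CBS-limit estimate from two energies obeying E(L) = E_CBS + B / L**alpha,
    where L is the cardinal number of the basis set (2 for DZ, 3 for TZ, ...)."""
    p_lo, p_hi = l_lo ** -alpha, l_hi ** -alpha
    # solving the two linear equations for E_CBS eliminates B
    return (e_hi * p_lo - e_lo * p_hi) / (p_lo - p_hi)

def mp2_additivity(e_ccsd_small, e_mp2_small, e_mp2_big):
    """CCSD/CBS estimate: small-basis CCSD plus an MP2 basis-set correction."""
    return e_ccsd_small + (e_mp2_big - e_mp2_small)
```

For a system-dependent scheme, the exponent alpha would itself be fitted from cheaper MP2 energies rather than fixed globally.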
[The conditions and mode of life of teenagers in a big city].
Gadzhiev, R S; Ramazanov, R S
2004-01-01
A total of 1646 teenagers were questioned (including students of schools, technical schools, and high schools located in the city of Makhachkala, Republic of Dagestan) for the purpose of working out combined programs for a healthy mode of life and for the prevention of drug addiction and toxicomania among youth. According to the study results, teenagers living in incomplete families are more prone to bad habits than those living in two-parent families. Over 95% of parents said they did not spend their free time with their children; 12% of boys and 3% of girls prefer to visit discos in their free time, places where psychoactive drugs can easily be bought; and 22% of teenagers have a history of minor offences, which means they could potentially use psychoactive drugs and lead an asocial life. Finally, a set of measures for promoting a healthy mode of life and preventing drug addiction and toxicomania among teenagers was worked out on the basis of the study results.
Yang, Deming; Xu, Zhenming
2011-09-15
Crushing and separating technology is widely used in the recycling of waste printed circuit boards (PCBs). An automatic production line for recycling waste PCBs without negative environmental impact has been applied at industrial scale. The cyclic grinding and classification system for crushed waste PCB particles is the most important part of the automatic production line and determines its overall efficiency. In this paper, a model for computing the process of the system was established using the matrix analysis method. The results showed good agreement between the simulation model and the actual production line, and that the system is robust to disturbances. This model can provide a basis for automatic process control of a waste PCB production line. With this model, many engineering problems can be reduced, such as insufficient dissociation of metals and nonmetals, particle over-pulverizing, incomplete comminution, material plugging, and equipment overheating. Copyright © 2011 Elsevier B.V. All rights reserved.
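A matrix mass-balance model of a closed grinding-classification circuit of the kind described can be sketched as follows. The breakage and selectivity values are invented for illustration, and the paper's actual model may differ in structure.

```python
import numpy as np

# Hypothetical 3-size-class closed circuit: B is a lower-triangular breakage
# matrix (coarse -> fine) whose columns sum to 1 (mass conservation), and S is
# a diagonal classifier selectivity (fraction of each class returned to the mill).
B = np.array([[0.3, 0.0, 0.0],
              [0.5, 0.6, 0.0],
              [0.2, 0.4, 1.0]])
S = np.diag([0.9, 0.5, 0.0])      # coarse particles are mostly recycled
f = np.array([1.0, 0.0, 0.0])     # fresh feed, all in the coarsest class

# Mill output m satisfies m = B (f + S m); solving the linear system gives
# the steady state: m = (I - B S)^(-1) B f.
m = np.linalg.solve(np.eye(3) - B @ S, B @ f)
p = (np.eye(3) - S) @ m           # classifier product leaving the circuit
```

Because B conserves mass, the total product flow equals the fresh feed at steady state, which is a useful sanity check on any such model.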
Attention to Attributes and Objects in Working Memory
Cowan, Nelson; Blume, Christopher L.; Saults, J. Scott
2013-01-01
It has been debated on the basis of change-detection procedures whether visual working memory is limited by the number of objects, task-relevant attributes within those objects, or bindings between attributes. This debate, however, has been hampered by several limitations, including the use of conditions that vary between studies and the absence of appropriate mathematical models to estimate the number of items in working memory in different stimulus conditions. We re-examined working memory limits in two experiments with a wide array of conditions involving color and shape attributes, relying on a set of new models to fit various stimulus situations. In Experiment 2, a new procedure allowed identical retrieval conditions across different conditions of attention at encoding. The results show that multiple attributes compete for attention, but that retaining the binding between attributes is accomplished only by retaining the attributes themselves. We propose a theoretical account in which a fixed object capacity limit contains within it the possibility of the incomplete retention of object attributes, depending on the direction of attention. PMID:22905929
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-23
... present a disproportionate risk to children. The basis for this belief is that the Agency used the DRAS...: ``Under civil and criminal penalty of law for the making or submission of false or fraudulent statements... determined by EPA in its sole discretion to be false, inaccurate or incomplete, and upon conveyance of this...
Myopia: Prevalence and Progression
1989-01-01
accommodative or pseudomyopia) of low degree (Banerjee, 1933; Borghi and Rouse, 1985; Bothman, 1931; Conrad, 1874; Ebenholtz, 1986a; Hynes, 1956) ... at entrance by the end of the summer vacation period. These studies suggest pseudomyopia (i.e., incomplete relaxation of accommodation) as the basis ... a continuous state of accommodation (spasm or tonic accommodation). This condition (pseudomyopia) might be expected to be limited by the amplitude of
Image-processing algorithms for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Incompletely closed glumes, germ, and disease are three characteristics of hybrid rice seed. Image-processing algorithms developed to detect these seed characteristics are presented in this paper. The rice seed used for this study comprised five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou, and IIyou. The algorithms were evaluated on a 5×600 image set, a 4×400 image set, and a further 5×600 image set, respectively. The image sets included black-background images, white-background images, and images of both sides of the rice seed. Results show that the algorithm for inspecting seeds with incompletely closed glumes, based on the Radon transform, achieved an accuracy of 96% for normal seeds, 92% for seeds with fine fissures, and 87% for seeds with unclosed glumes; the algorithm for inspecting germinated seeds on the panicle, based on PCA and an ANN, achieved an average accuracy of 98% for normal seeds and 88% for germinated seeds on the panicle; and the algorithm for inspecting diseased seeds, based on color features, achieved an accuracy of 92% for normal and healthy seeds, 95% for spot-diseased seeds, and 83% for severely diseased seeds.
Yu, Liang; Wang, Bingbo; Ma, Xiaoke; Gao, Lin
2016-12-23
Extracting drug-disease correlations is crucial in unveiling disease mechanisms, as well as in discovering new indications of available drugs, or drug repositioning. Both the interactome and the knowledge of disease-associated and drug-associated genes remain incomplete. We present a new method to predict the associations between drugs and diseases. Our method is based on a module distance, which was originally proposed to calculate distances between modules in the incomplete human interactome. We first map all the disease genes and drug genes to a combined protein interaction network. Then, based on the module distance, we calculate the distances between drug gene sets and disease gene sets, and take these distances as the relationships of drug-disease pairs. We also filter out possible false-positive drug-disease correlations by p-value. Finally, we validate the top 100 drug-disease associations related to six drugs in the predicted results. The overlap of our predicted correlations with those reported in the Comparative Toxicogenomics Database (CTD) and the literature, and their enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, demonstrates that our approach can not only effectively identify new drug indications but also provide new insight into drug-disease discovery.
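A network distance between a drug's gene set and a disease's gene set can be sketched with plain breadth-first search. The symmetrised-mean averaging rule used here is a simplified stand-in for the module distance cited in the abstract, not its exact definition.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` in an unweighted graph given as a dict
    mapping node -> list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def module_distance(adj, genes_a, genes_b):
    """Symmetrised mean of each gene's shortest distance to the other set;
    genes with no path to the other set are skipped (incomplete interactome)."""
    def one_way(src, dst):
        out = []
        for g in src:
            d = bfs_distances(adj, g)
            reachable = [d[t] for t in dst if t in d]
            if reachable:
                out.append(min(reachable))
        return out
    vals = one_way(genes_a, genes_b) + one_way(genes_b, genes_a)
    return sum(vals) / len(vals) if vals else float("inf")
```

In practice one would compare each observed distance with a null distribution from random gene sets of the same size to obtain the p-value filter mentioned above.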
Lorenz, Gödel and Penrose: new perspectives on determinism and causality in fundamental physics
NASA Astrophysics Data System (ADS)
Palmer, T. N.
2014-07-01
Despite being known for his pioneering work on chaotic unpredictability, the key discovery at the core of meteorologist Ed Lorenz's work is the link between space-time calculus and state-space fractal geometry. Indeed, properties of Lorenz's fractal invariant set relate space-time calculus to deep areas of mathematics such as Gödel's Incompleteness Theorem. Could such properties also provide new perspectives on deep unsolved issues in fundamental physics? Recent developments in cosmology motivate what is referred to as the 'cosmological invariant set postulate': that the universe U can be considered a deterministic dynamical system evolving on a causal measure-zero fractal invariant set I_U in its state space. Symbolic representations of I_U are constructed explicitly based on permutation representations of quaternions. The resulting 'invariant set theory' provides some new perspectives on determinism and causality in fundamental physics. For example, while the cosmological invariant set appears to have a rich enough structure to allow a description of (quantum) probability, its measure-zero character ensures it is sparse enough to prevent invariant set theory being constrained by the Bell inequality (consistent with a partial violation of the so-called measurement independence postulate). The primacy of geometry as embodied in the proposed theory extends the principles underpinning general relativity. As a result, the physical basis for contemporary programmes which apply standard field quantisation to some putative gravitational lagrangian is questioned. Consistent with Penrose's suggestion of a deterministic but non-computable theory of fundamental physics, an alternative 'gravitational theory of the quantum' is proposed based on the geometry of I_U, with new perspectives on the problem of black-hole information loss and potential observational consequences for the dark universe.
Ecker, Willi; Kupfer, Jochen; Gönner, Sascha
2014-01-01
This paper examines the contribution of incompleteness/'not just right experiences' (NJREs) to an understanding of the relationship between obsessive-compulsive disorder (OCD) and obsessive-compulsive personality traits (OCPTs). It investigates the association of specific OCD symptom dimensions with OCPTs, conceptualized as continuous phenomena that are also observable below the diagnostic threshold. As empirical findings and clinical observation suggest that incompleteness feelings/NJREs may play a significant affective and motivational role for certain OCD subtypes, but also for patients with accentuated OCPTs, we hypothesized that OCPTs are selectively linked with incompleteness-associated OCD symptom dimensions (ordering, checking, hoarding and counting). Moreover, we assumed that this selective relationship cannot be demonstrated any more after statistical control of incompleteness, whereas it is preserved after statistical control of anxiety, depression, pathological worry and harm avoidance. Results from a study with a large clinical sample (n = 185) partially support these hypotheses and suggest that NJREs may be an important connecting link between specific OCD symptom dimensions, in particular ordering and checking, and accentuated OCPTs. Obsessive-compulsive personality traits (OCPTs) are positively related to obsessive-compulsive disorder symptom dimensions (ordering, checking, hoarding and counting) hypothesized or found to be associated with incompleteness/'not just right experiences' (NJREs), but not to washing and obsessions. This positive relationship, which is strongest for ordering and checking, is eliminated when NJREs are statistically controlled. Ordering, checking and accentuated OCPTs may share NJREs as a common affective-motivational underpinning.Dysfunctional behaviour patterns of people with accentuated OCPTs or obsessive-compulsive personality disorder (OCPD) may be viewed as efforts to avoid or reduce subjectively intolerable NJREs. 
On the basis of such a conceptualization of OCPD as an emotional disorder, a novel treatment approach for OCPD focusing on habituation to NJREs could be developed. Copyright © 2013 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Bo; Kowalski, Karol
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of a single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. Accuracy tests have been performed using ground- and excited-state formulations of the coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
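The compound CD+SVD compression described in this abstract can be illustrated on a small symmetric positive semi-definite matrix standing in for the (folded) two-electron integral matrix. The sketch below assumes a generic pivoted incomplete Cholesky with a diagonal-threshold stopping rule, followed by a truncated SVD of the Cholesky factor; matrix sizes and thresholds are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-10, max_rank=None):
    """Incomplete (pivoted) Cholesky of a symmetric PSD matrix M.
    Returns a factor L with M ~ L @ L.T, stopping when the largest
    remaining diagonal element falls below tol."""
    n = M.shape[0]
    max_rank = max_rank or n
    d = np.diag(M).astype(float).copy()   # residual diagonal
    L = np.zeros((n, max_rank))
    rank = 0
    for k in range(max_rank):
        p = int(np.argmax(d))             # pivot: largest residual diagonal
        if d[p] < tol:
            break
        L[:, k] = (M[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
        rank += 1
    return L[:, :rank]

# Small stand-in for the folded two-electron integral matrix (rank 6, PSD):
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 6))
M = A @ A.T

Lc = pivoted_cholesky(M, tol=1e-10)                  # step 1: pivoted CD
U, s, _ = np.linalg.svd(Lc, full_matrices=False)     # step 2: truncated SVD
keep = s > 1e-8 * s[0]
B = U[:, keep] * s[keep]                             # final low-rank vectors
err = np.linalg.norm(M - B @ B.T)
```

With a low-rank input, the Cholesky factor already truncates at the true rank and the SVD step only compacts (and could further truncate) the representation.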
2017-09-12
NASA Astrophysics Data System (ADS)
Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam
2017-09-01
Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems following a fractional calculus modeling. However, few (if any) techniques can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet-based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique even in cases of noise-corrupted and incomplete data.
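The L1-norm recovery step can be illustrated with a generic sparse-recovery sketch. The paper's wavelet-domain procedure is not reproduced here; instead, iterative soft-thresholding (ISTA) is used as a simple stand-in solver for the L1-regularized problem, with a random sensing matrix playing the role of the incomplete measurements.

```python
import numpy as np

def ista(A, b, lam=1e-3, steps=5000):
    """Iterative soft-thresholding for min 0.5*||A x - b||^2 + lam*||x||_1,
    a simple proxy for the compressive-sensing L1 minimization step."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - b) / L     # gradient step on the quadratic part
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Hypothetical setup: 40 incomplete measurements of an 80-coefficient
# signal that is 5-sparse (e.g. a few active wavelet coefficients).
rng = np.random.default_rng(1)
n, m, k = 80, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = ista(A, b)
```

For sufficiently sparse coefficient vectors and incoherent measurements, the L1 solution recovers the signal despite the underdetermined system.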
Correlation consistent basis sets for the atoms In–Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahler, Andrew; Wilson, Angela K., E-mail: akwilson@unt.edu
In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the underestimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
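The percentile bootstrap underlying the proposed interval methods can be sketched generically. The example below computes a bootstrap confidence interval for a simple proportion; the data and statistic are hypothetical placeholders, not the paper's incomplete-table estimators.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in range(len(data))]
        reps.append(stat(sample))
    reps.sort()
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical binary data: 74 "successes" out of 185 observations
data = [1] * 74 + [0] * 111
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean, seed=42)
```

The same resampling skeleton applies to any statistic, including the maximum likelihood estimates produced by a Fisher scoring step.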
ERIC Educational Resources Information Center
Fellini, Laetitia; Florian, Cedrick; Courtey, Julie; Roullet, Pascal
2009-01-01
Pattern completion is the ability to retrieve complete information on the basis of incomplete retrieval cues. Although it has been demonstrated that this cognitive capacity depends on the NMDA receptors (NMDA-Rs) of the hippocampal CA3 region, the role played by these glutamatergic receptors in the pattern completion process has not yet been…
Group prioritisation with unknown expert weights in incomplete linguistic context
NASA Astrophysics Data System (ADS)
Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan
2017-09-01
In this paper, we study a group prioritisation problem in situations when the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregating process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature of group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can solve incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is conducted to validate the effectiveness and robustness of the RE approach.
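As a rough illustration of entropy-based expert weighting, the classical entropy-weight method is sketched below; it is a generic stand-in and not the paper's proximity- or similarity-entropy indicators. The idea is that experts whose scores discriminate more between alternatives (lower Shannon entropy) receive larger weights.

```python
import math

def entropy_weights(matrix):
    """Classical entropy-weight method.
    matrix: rows = alternatives, columns = experts (non-negative scores).
    Returns normalized weights; more discriminating columns weigh more."""
    m = len(matrix)
    k = 1.0 / math.log(m)                 # normalizes entropy to [0, 1]
    divergences = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [c / total for c in col]
        e = -k * sum(p * math.log(p) for p in probs if p > 0)
        divergences.append(1.0 - e)       # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical scores of 3 alternatives by 2 experts:
w = entropy_weights([[0.2, 0.5],
                     [0.3, 0.4],
                     [0.5, 0.1]])
```

Here the second expert's scores spread more across alternatives, so that expert receives the larger weight.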
Physical basis of destruction of concrete and other building materials
NASA Astrophysics Data System (ADS)
Suleymanova, L. A.; Pogorelova, I. A.; Kirilenko, S. V.; Suleymanov, K. A.
2018-03-01
The article presents the authors' scientifically grounded views on the physical essence of the destruction process of concrete and other materials. It is shown that the mechanism of destruction of materials is similar in essence under mechanical, thermal, physical-chemical and combined influences, and that Newton's third law lies at its basis. In all cases destruction consists in the decompaction of structures and the loosening of internal bonds in materials, leading to further damage to their integrity and their division into separate loosely-bound (full destruction) or mutually unbound (incomplete destruction) elements, depending on the kind of external influence and the perfection of the material's structure.
Self-reports of induced abortion: an empathetic setting can improve the quality of data.
Rasch, V; Muhammad, H; Urassa, E; Bergström, S
2000-01-01
OBJECTIVES: This study estimated the proportion of incomplete abortions that are induced in hospital-based settings in Tanzania. METHODS: A cross-sectional questionnaire study was conducted in 2 phases at 3 hospitals in Tanzania. Phase 1 included 302 patients with a diagnosis of incomplete abortion, and phase 2 included 823 such patients. RESULTS: In phase 1, in which cases were classified by clinical criteria and information from the patient, 3.9% to 16.1% of the cases were classified as induced abortion. In phase 2, in which the structured interview was changed to an empathetic dialogue and previously used clinical criteria were omitted, 30.9% to 60.0% of the cases were classified as induced abortion. CONCLUSIONS: An empathetic dialogue improves the quality of data collected among women with induced abortion. PMID:10897196
NASA Astrophysics Data System (ADS)
Behzadi, Naghi; Ahansaz, Bahram
2018-04-01
We propose a mechanism for quantum state transfer (QST) over a binary tree spin network on the basis of incomplete collapsing measurements. To this aim, we initially perform a weak measurement (WM) on the central qubit of the binary tree network, where the state of concern has been prepared. After the time evolution of the whole system, a quantum measurement reversal (QMR) is performed on a chosen target qubit. By taking the optimal value for the strength of the QMR, it is shown that the QST quality from the sending qubit to any typical target qubit on the binary tree is considerably improved in terms of the WM strength. We also show how high-quality entanglement distribution over the binary tree network is achievable using this approach.
Structural interpretation of seismic data and inherent uncertainties
NASA Astrophysics Data System (ADS)
Bond, Clare
2013-04-01
Geoscience is perhaps unique in its reliance on incomplete datasets and building knowledge from their interpretation. This interpretation basis for the science is fundamental at all levels, from the creation of a geological map to the interpretation of remotely sensed data. To teach, and to better understand, the uncertainties in dealing with incomplete data, we need to understand the strategies individual practitioners deploy that make them effective interpreters. The nature of interpretation is such that the interpreter needs to use their cognitive ability in the analysis of the data to propose a sensible solution in their final output that is consistent not only with the original data but also with other knowledge and understanding. In a series of experiments, Bond et al. (2007, 2008, 2011, 2012) investigated the strategies and pitfalls of expert and non-expert interpretation of seismic images. These studies focused on large numbers of participants to provide a statistically sound basis for analysis of the results. The outcome of these experiments showed that a wide variety of conceptual models were applied to single seismic datasets, highlighting not only spatial variations in fault placement, but also whether interpreters thought faults existed at all or agreed on their sense of movement. Further, statistical analysis suggests that the strategies an interpreter employs are more important than expert knowledge per se in developing successful interpretations. Experts are successful because of their application of these techniques. A new set of experiments focuses on a small number of experts to determine how they use their cognitive and reasoning skills in the interpretation of 2D seismic profiles. Live video and practitioner commentary were used to track the evolving interpretation and to gain insight on their decision processes.
The outputs of the study allow us to create an educational resource of expert interpretation through online video footage and commentary with associated further interpretation and analysis of the techniques and strategies employed. This resource will be of use to undergraduate, post-graduate, industry and academic professionals seeking to improve their seismic interpretation skills, develop reasoning strategies for dealing with incomplete datasets, and for assessing the uncertainty in these interpretations. Bond, C.E. et al. (2012). 'What makes an expert effective at interpreting seismic images?' Geology, 40, 75-78. Bond, C. E. et al. (2011). 'When there isn't a right answer: interpretation and reasoning, key skills for 21st century geoscience'. International Journal of Science Education, 33, 629-652. Bond, C. E. et al. (2008). 'Structural models: Optimizing risk analysis by understanding conceptual uncertainty'. First Break, 26, 65-71. Bond, C. E. et al., (2007). 'What do you think this is?: "Conceptual uncertainty" In geoscience interpretation'. GSA Today, 17, 4-10.
Estimation of Blood Flow Rates in Large Microvascular Networks
Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.
2012-01-01
Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
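The indeterminacy-resolution step can be sketched as an equality-constrained least-squares problem: minimize the deviation of flows from target values subject to flow conservation at the nodes, solved here via the KKT system. The four-segment network and target flows below are hypothetical toy values, not the rat mesentery data.

```python
import numpy as np

# Toy network: 4 segments, 2 interior nodes; conservation reads A @ q = 0
# (row: flow into node minus flow out of node).
A = np.array([[1.0, -1.0, -1.0,  0.0],    # node 1: q1 splits into q2, q3
              [0.0,  1.0,  1.0, -1.0]])   # node 2: q2, q3 merge into q4
b = np.zeros(2)

# Target flows, e.g. derived from typical wall shear stress / pressure values
# (deliberately inconsistent with conservation, as incomplete data would be):
q_target = np.array([2.0, 1.2, 0.5, 1.9])

# minimize ||q - q_target||^2  subject to  A q = b, via the KKT system:
#   [ I  A^T ] [ q ]   [ q_target ]
#   [ A   0  ] [ y ] = [    b     ]
n, m = A.shape[1], A.shape[0]
K = np.block([[np.eye(n), A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([q_target, b])
q = np.linalg.solve(K, rhs)[:n]
```

The solution distributes the inconsistency of the targets across the segments while restoring exact mass conservation.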
Curated eutherian third party data gene data sets.
Premzl, Marko
2016-03-01
The freely available eutherian genomic sequence data sets have advanced the scientific field of genomics. Of note, future revisions of gene data sets were expected, due to the incompleteness of public eutherian genomic sequence assemblies and potential genomic sequence errors. The eutherian comparative genomic analysis protocol was proposed as guidance in protecting against potential genomic sequence errors in public eutherian genomic sequences. The protocol was applicable in updates of 7 major eutherian gene data sets, including 812 complete coding sequences deposited in the European Nucleotide Archive as curated third party data gene data sets.
Construction of energy-stable Galerkin reduced order models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan
2013-05-01
This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product.
The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
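The POD-plus-Galerkin-projection pipeline described in the report can be sketched in a few lines: the reduced basis is taken as the leading left singular vectors of a snapshot matrix, and an LTI operator is projected onto it. The snapshots and operator below are random placeholders, and the plain Euclidean inner product is used rather than the report's symmetry/Lyapunov inner products.

```python
import numpy as np

# Snapshot matrix: columns are state vectors collected from a (toy) simulation
rng = np.random.default_rng(2)
n, n_snap, r = 50, 20, 4
S = rng.standard_normal((n, n_snap))

# Step 1: reduced basis = leading left singular vectors of the snapshots (POD)
U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                       # n x r orthonormal reduced basis

# Step 2: Galerkin projection of an LTI system  dx/dt = A x  onto the basis:
# reduced dynamics  da/dt = (Phi^T A Phi) a,  with  x ~ Phi a
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_r = Phi.T @ A @ Phi                # r x r reduced operator
```

Replacing the Euclidean inner product with a weighted one (e.g. the symmetry inner product) amounts to inserting the weighting matrix between the factors of the projection.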
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
Optimization of selected molecular orbitals in group basis sets.
Ferenczy, György G; Adams, William H
2009-04-07
We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows an even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^(-4) hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced base valence orbitals at reduced computational costs. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals.
An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^(-5) hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms, and thus they require solving reduced-dimension secular equations. The dimensions are not expected to increase with increasing system size, and thus the local basis equation may find use in linear scaling electronic structure calculations.
Arney, Jennifer; Rafalovich, Adam
2007-01-01
The researchers collected a data set of consumer-directed print advertisements for antidepressant medications from three female-directed magazines, three male-directed magazines, and four common readership magazines published between 1997 and 2003. They evaluated these data for advertising techniques that enable drug advertisements to function as agents of medicalization. The investigators discuss the use of incomplete syllogisms in drug advertisements and identify strategies that might lead readers to frame personal physical and/or emotional conditions medically. Key features in advertisements function as the particular and general premises of a syllogism, and the concluding premise--that the reader has a mood disorder--is unarticulated but implied. The researchers examine the implications of incomplete syllogisms in advertisements and suggest that their use might lead readers to redefine their physical and/or emotional problems to fit medical models of mental distress.
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Lin, Y. K.; Zhu, Li-Ping; Fang, Jian-Jie; Cai, G. Q.
1994-01-01
This report supplements a previous report of the same title submitted in June, 1992. It summarizes additional analytical techniques which have been developed for predicting the response of linear and nonlinear structures to noise excitations generated by large propulsion power plants. The report is divided into nine chapters. The first two deal with incomplete knowledge of boundary conditions of engineering structures. The incomplete knowledge is characterized by a convex set, and its diagnosis is formulated as a multi-hypothesis discrete decision-making algorithm with attendant criteria of adaptive termination.
Accurate Methods for Large Molecular Systems (Preprint)
2009-01-06
tensor, EFP calculations are basis set dependent. The smallest recommended basis set is 6-31++G(d,p). The dependence of the computational cost of…and second order perturbation theory (MP2) levels with the 6-31G(d,p) basis set. Additional SFM tests are presented for a small set of alpha-helices using the 6-31++G(d,p) basis set. The larger 6-311++G(3df,2p) basis set is employed for creating all EFPs used for non-bonded interactions, since
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
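Complete-basis-set extrapolation, referenced throughout these abstracts, is commonly done with a two-point inverse-cube formula (a Helgaker-style form; note the paper above uses a Schwenke-style parameterized formula instead, which differs). A minimal sketch with illustrative, made-up correlation energies:

```python
def cbs_two_point(e_small, e_large, n_small, n_large):
    """Two-point inverse-cube extrapolation of correlation energies.
    Assumes E(n) = E_CBS + A / n^3 for cardinal numbers n (e.g. T=3, Q=4)
    and solves the two equations for E_CBS."""
    c_s, c_l = n_small ** 3, n_large ** 3
    return (c_l * e_large - c_s * e_small) / (c_l - c_s)

# Illustrative correlation energies (hartree) at triple- and quadruple-zeta:
e_cbs = cbs_two_point(-0.27450, -0.28100, 3, 4)
```

The extrapolated energy lies below the larger-basis value, reflecting the slow inverse-cube decay of the basis set incompleteness error.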
NASA Astrophysics Data System (ADS)
Chmela, Jiří; Harding, Michael E.
2018-06-01
Optimised auxiliary basis sets for the lanthanide atoms (Ce to Lu) for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP) are reported. These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods such as second-order perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.
NASA Astrophysics Data System (ADS)
Holman, I.; Rey Vicario, D.
2016-12-01
Improving community preparedness for climate change can be supported by developing resilience to past events, focused on those changes of particular relevance (such as floods and droughts). However, communities' perceptions of impacts and risk can be influenced by an incomplete appreciation of historical baseline climate variability. This can arise from a number of factors, including an individual's age, access to long-term data records and the availability of local knowledge. For example, the most significant recent drought in the UK occurred in 1976/77, but does it represent the worst drought that did occur (or could have occurred) without climate change? We focus on the east of England, where most irrigated agriculture is located and where many of the local farmers interviewed were either not in business then or have an incomplete memory of the impacts of the drought. This paper describes a comparison of an annual agroclimatic indicator closely linked to irrigation demand (maximum Potential Soil Moisture Deficit) calculated from three sources of long-term observational and simulated historical weather data with recent data. These long-term datasets include gridded measured/calculated datasets of precipitation and reference evapotranspiration; a dynamically downscaled 20th Century Re-analysis dataset; and two Regional Climate Model ensemble datasets (FutureFlows and the MaRIUS event set), which each provide between 110 and 3000 years of baseline weather. The comparison shows that the long-term datasets provide a wider characterisation of current climate variability and affect the perception of current drought frequency and severity. The paper will show that using a more comprehensive understanding of current climate variability and drought risk as a basis for adapting irrigated systems to droughts can provide substantially increased resilience to (uncertain) climate change.
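A minimal sketch of the agroclimatic indicator, assuming the common bookkeeping definition of potential soil moisture deficit (running deficit = previous deficit + potential evapotranspiration - precipitation, floored at zero); the monthly series below is invented for illustration and is not one of the paper's datasets:

```python
def max_psmd(precip, pet):
    """Annual maximum potential soil moisture deficit (mm).
    The running deficit grows when PET exceeds precipitation and is
    floored at zero (a wet-period surplus cannot be banked)."""
    deficit, peak = 0.0, 0.0
    for p, e in zip(precip, pet):
        deficit = max(0.0, deficit + e - p)
        peak = max(peak, deficit)
    return peak

# Hypothetical monthly series (mm): wet winter, dry summer
precip = [60, 50, 45, 40, 35, 30, 25, 30, 40, 55, 60, 65]
pet    = [10, 15, 30, 50, 80, 95, 100, 85, 55, 25, 12, 8]
peak = max_psmd(precip, pet)
```

Applied year by year to a long weather record, the annual maxima give the drought-frequency statistics the comparison in the abstract is built on.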
Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.
Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland
2009-06-09
The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground-state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation-consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained, with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
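The threshold-controlled hierarchy described above can be sketched with a plain pivoted (incomplete) Cholesky routine. This is an illustrative sketch applied to a generic symmetric positive semidefinite matrix, not to actual electron repulsion integrals, and the function name is an assumption:

```python
import numpy as np

def pivoted_cholesky(M, tol):
    """Incomplete pivoted Cholesky factorization of a symmetric positive
    semidefinite matrix M.  Vectors are added until the largest remaining
    diagonal element of the residual drops below tol, so decreasing tol
    yields a hierarchy of increasingly accurate low-rank approximations
    M ~ L @ L.T, analogous to the role Cholesky vectors play when deriving
    auxiliary basis sets from the electron repulsion integrals."""
    R = np.array(M, dtype=float)          # residual matrix
    n = R.shape[0]
    cols = []
    for _ in range(n):                    # at most n Cholesky vectors
        d = np.diag(R)
        p = int(np.argmax(d))
        if d[p] <= tol:                   # decomposition threshold reached
            break
        vec = R[:, p] / np.sqrt(d[p])
        cols.append(vec)
        R = R - np.outer(vec, vec)
    return np.column_stack(cols) if cols else np.zeros((n, 0))
```

For a positive semidefinite residual with all diagonal entries below `tol`, every off-diagonal entry is bounded by `tol` as well, so the approximation error is controlled directly by the threshold.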
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst-case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation, owing to the statistical complexity of finding a radiated power probability density function.
Dattilio, Frank M; Edwards, David J A; Fishman, Daniel B
2010-12-01
This article addresses the long-standing divide between researchers and practitioners in the field of psychotherapy, regarding what really works in treatment and the extent to which interventions should be governed by outcomes generated in a "laboratory atmosphere." This alienation has its roots in a positivist paradigm, which is epistemologically incomplete because it fails to provide for context-based practical knowledge. In other fields of evaluation research, it has been superseded by a mixed methods paradigm, which embraces pragmatism and multiplicity. On the basis of this paradigm, we propose and illustrate new scientific standards for research on the evaluation of psychotherapeutic treatments. These include the requirement that projects should comprise several parallel studies that involve randomized controlled trials, qualitative examinations of the implementation of treatment programs, and systematic case studies. The uniqueness of this article is that it contributes a guideline for involving a set of complementary publications, including a review that offers an overall synthesis of the findings from different methodological approaches. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
SATAYATHUM, SUDTIDA A.; MUCHIRI, ERIC M.; OUMA, JOHN H.; WHALEN, CHRISTOPHER C.; KING, CHARLES H.
2010-01-01
Urinary schistosomiasis remains a significant burden for Africa and the Middle East. Success of regional control strategies will depend, in part, on what influence local environmental and behavioral factors have on individual risk for primary infection and/or reinfection. Based on experience in a multi-year (1984–1992), school-based Schistosoma haematobium control program in Coast Province, Kenya, we examined risk for infection outcomes as a function of age, sex, pretreatment morbidity, treatment regimen, water contact, and residence location, with the use of life tables and Cox proportional-hazards analysis. After adjustment, location of residence, age less than 12 years, pretreatment hematuria, and incomplete treatment were the significant independent predictors of infection, whereas sex and frequency of water contact were not. We conclude that local physical features and age-related factors play a predominant role in S. haematobium transmission in this setting. In large population-based control programs, treatment allocation strategies may need to be tailored to local conditions on a village-by-village basis. PMID:16837713
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
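The traditional 'n-1' baseline that the regression tools are compared against attributes, within each genotypic cluster of n cases, one case to reactivation (the index case) and the remaining n-1 to recent transmission. A minimal sketch of that baseline follows; the paper's regression tools themselves are not reproduced here, and the function name is illustrative:

```python
def n_minus_1_estimate(cluster_sizes):
    """Traditional 'n-1' estimate of the recent-transmission proportion.
    cluster_sizes: sizes of genotypic clusters among observed TB cases
    (size 1 means a unique, unclustered genotype).  Within each cluster
    of n cases, n - 1 cases are attributed to recent transmission."""
    total = sum(cluster_sizes)
    recent = sum(n - 1 for n in cluster_sizes)
    return recent / total
```

For example, with clusters of sizes 4, 2, 1, 1, 1, 1 (10 cases, 4 of them "secondary"), the estimate is 0.4. As the abstract notes, incomplete sampling fragments clusters and biases this quantity downward, which is the motivation for the regression-based corrections.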
Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linkins, A.E.
1992-01-01
Since this was the final year of the project, principal activities were directed toward either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Tuomas P., E-mail: tuomas.rossi@alumni.aalto.fi; Sakko, Arto; Puska, Martti J.
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond.
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2017-06-01
With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.
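The Boys-Bernardi counterpoise reference against which gCP and DFT-C are evaluated amounts to simple energy bookkeeping: each monomer is re-evaluated in the full dimer basis (with the partner's functions present as "ghosts"). A sketch with illustrative toy energies in hartree, not values from the study:

```python
def bsse_estimate(e_a_mono, e_a_in_dimer_basis, e_b_mono, e_b_in_dimer_basis):
    """Basis set superposition error: the artificial stabilization each
    monomer gains by borrowing the partner's (ghost) basis functions."""
    return (e_a_mono - e_a_in_dimer_basis) + (e_b_mono - e_b_in_dimer_basis)

def cp_interaction_energy(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy: the dimer
    energy minus monomer energies evaluated in the full dimer basis."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

# Toy numbers (hartree), purely illustrative:
e_ab, e_a, e_b = -2.05, -1.00, -1.00          # dimer and monomer-basis energies
e_a_db, e_b_db = -1.01, -1.02                 # monomers in the dimer basis
raw = e_ab - e_a - e_b                        # uncorrected interaction energy
cp = cp_interaction_energy(e_ab, e_a_db, e_b_db)
```

The counterpoise-corrected value equals the raw interaction energy plus the BSSE, which is the identity the empirical gCP and DFT-C corrections aim to reproduce at negligible cost.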
Seismicity map of the State of Illinois
Stover, C.W.; Reagor, B.G.; Algermissen, S.T.
1979-01-01
The earthquake data shown on this map and listed in table 1 were originally used in preparing the Seismic Risk Studies in the United States (Algermissen, 1969) and have been recompiled and updated through 1977. The data have been reexamined and intensities assigned where none had been assigned before, on the basis of available data. Other intensity values were updated from new and additional data sources that were not available at the time of the original compilation. Some epicenters were relocated on the basis of new information. The data shown in table 1 are estimates of the most accurate epicenter, magnitude, and intensity of each earthquake, on the basis of historical and current information. Some of the aftershocks from large earthquakes are listed, but the listings are incomplete in many instances, especially for earthquakes that occurred before seismic instruments were in widespread use.
Jørgensen, Vivien; Roaldsen, Kirsti Skavberg
2016-01-01
Objective: Explore and describe experiences and perceptions of falls, risk of falling, and fall-related consequences in individuals with incomplete spinal cord injury (SCI) who are still walking. Design: A qualitative interview study applying interpretive content analysis with an inductive approach. Setting: Specialized rehabilitation hospital. Subjects: A purposeful sample of 15 individuals (10 men), 23 to 78 years old, 2-34 years post injury with chronic incomplete traumatic SCI, and walking ⩾75% of time for mobility needs. Methods: Individual, semi-structured face-to-face interviews were recorded, condensed, and coded to find themes and subthemes. Results: One overarching theme was revealed: “Falling challenges identity and self-image as normal” which comprised two main themes “Walking with incomplete SCI involves minimizing fall risk and fall-related concerns without compromising identity as normal” and “Walking with incomplete SCI implies willingness to increase fall risk in order to maintain identity as normal”. Informants were aware of their increased fall risk and took precautions, but willingly exposed themselves to risky situations when important to self-identity. All informants expressed some conditional fall-related concerns, and a few experienced concerns limiting activity and participation. Conclusion: Ambulatory individuals with incomplete SCI considered falls to be a part of life. However, falls interfered with the informants’ identities and self-images as normal, healthy, and well-functioning. A few expressed dysfunctional concerns about falling, and interventions should target these. PMID:27170274
Lu, Timothy Tehua; Lao, Oscar; Nothnagel, Michael; Junge, Olaf; Freitag-Wolf, Sandra; Caliebe, Amke; Balascakova, Miroslava; Bertranpetit, Jaume; Bindoff, Laurence Albert; Comas, David; Holmlund, Gunilla; Kouvatsi, Anastasia; Macek, Milan; Mollet, Isabelle; Nielsen, Finn; Parson, Walther; Palo, Jukka; Ploski, Rafal; Sajantila, Antti; Tagliabracci, Adriano; Gether, Ulrik; Werge, Thomas; Rivadeneira, Fernando; Hofman, Albert; Uitterlinden, André Gerardus; Gieger, Christian; Wichmann, Heinz-Erich; Ruether, Andreas; Schreiber, Stefan; Becker, Christian; Nürnberg, Peter; Nelson, Matthew Roberts; Kayser, Manfred; Krawczak, Michael
2009-07-01
Genetic matching potentially provides a means to alleviate the effects of incomplete Mendelian randomization in population-based gene-disease association studies. We therefore evaluated the genetic-matched pair study design on the basis of genome-wide SNP data (309,790 markers; Affymetrix GeneChip Human Mapping 500K Array) from 2457 individuals, sampled at 23 different recruitment sites across Europe. Using pair-wise identity-by-state (IBS) as a matching criterion, we tried to derive a subset of markers that would allow identification of the best overall matching (BOM) partner for a given individual, based on the IBS status for the subset alone. However, our results suggest that, by following this approach, the prediction accuracy is only notably improved by the first 20 markers selected, and increases proportionally to the marker number thereafter. Furthermore, in a considerable proportion of cases (76.0%), the BOM of a given individual, based on the complete marker set, came from a different recruitment site than the individual itself. A second marker set, specifically selected for ancestry sensitivity using singular value decomposition, performed even more poorly and was no more capable of predicting the BOM than randomly chosen subsets. This leads us to conclude that, at least in Europe, the utility of the genetic-matched pair study design depends critically on the availability of comprehensive genotype information for both cases and controls.
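Pairwise IBS matching of the kind used above can be sketched as follows, assuming genotypes coded as minor-allele counts (0, 1, 2); the function names are illustrative, not from the study:

```python
import numpy as np

def mean_ibs(g1, g2):
    """Mean identity-by-state between two individuals over a set of SNPs.
    At a single SNP the pair shares 2 - |g1 - g2| alleles, so the mean
    lies in [0, 2]; it is rescaled here to [0, 1]."""
    g1 = np.asarray(g1, dtype=float)
    g2 = np.asarray(g2, dtype=float)
    return float(np.mean(2.0 - np.abs(g1 - g2)) / 2.0)

def best_overall_match(idx, genotypes):
    """Index of the best overall matching (BOM) partner of individual
    `idx`: the other individual with the highest mean IBS."""
    scores = [(mean_ibs(genotypes[idx], g), j)
              for j, g in enumerate(genotypes) if j != idx]
    return max(scores)[1]
```

The study's negative finding concerns prediction: identifying the BOM from a small marker subset, rather than the full genome-wide IBS computed as above, did not work well beyond the first ~20 selected markers.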
Increasing patient safety and efficiency in transfusion therapy using formal process definitions.
Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L
2007-01-01
The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.
Analyzing and synthesizing phylogenies using tree alignment graphs.
Smith, Stephen A; Brown, Joseph W; Hinchliff, Cody E
2013-01-01
Phylogenetic trees are used to analyze and visualize evolution. However, trees can be imperfect datatypes when summarizing multiple trees. This is especially problematic when accommodating for biological phenomena such as horizontal gene transfer, incomplete lineage sorting, and hybridization, as well as topological conflict between datasets. Additionally, researchers may want to combine information from sets of trees that have partially overlapping taxon sets. To address the problem of analyzing sets of trees with conflicting relationships and partially overlapping taxon sets, we introduce methods for aligning, synthesizing and analyzing rooted phylogenetic trees within a graph, called a tree alignment graph (TAG). The TAG can be queried and analyzed to explore uncertainty and conflict. It can also be synthesized to construct trees, presenting an alternative to supertrees approaches. We demonstrate these methods with two empirical datasets. In order to explore uncertainty, we constructed a TAG of the bootstrap trees from the Angiosperm Tree of Life project. Analysis of the resulting graph demonstrates that areas of the dataset that are unresolved in majority-rule consensus tree analyses can be understood in more detail within the context of a graph structure, using measures incorporating node degree and adjacency support. As an exercise in synthesis (i.e., summarization of a TAG constructed from the alignment trees), we also construct a TAG consisting of the taxonomy and source trees from a recent comprehensive bird study. We synthesized this graph into a tree that can be reconstructed in a repeatable fashion and where the underlying source information can be updated. The methods presented here are tractable for large scale analyses and serve as a basis for an alternative to consensus tree and supertree methods. Furthermore, the exploration of these graphs can expose structures and patterns within the dataset that are otherwise difficult to observe.
Constantinou, Anthony Costa; Yet, Barbaros; Fenton, Norman; Neil, Martin; Marsh, William
2016-01-01
Inspired by real-world examples from the forensic medical sciences domain, we seek to determine whether a decision about an interventional action could be subject to amendments on the basis of some incomplete information within the model, and whether it would be worthwhile for the decision maker to seek further information prior to suggesting a decision. The method is based on the underlying principle of Value of Information to enhance decision analysis in interventional and counterfactual Bayesian networks. The method is applied to two real-world Bayesian network models (previously developed for decision support in forensic medical sciences) to examine the average gain in terms of both Value of Information (average relative gain ranging from 11.45% to 59.91%) and decision making (potential amendments in decision making ranging from 0% to 86.8%). We have shown how the method becomes useful for decision makers, not only when decision making is subject to amendments on the basis of some unknown risk factors, but also when it is not. Knowing that a decision outcome is independent of one or more unknown risk factors saves us from the trouble of seeking information about the particular set of risk factors. Further, we have also extended the assessment of this implication to the counterfactual case and demonstrated how answers about interventional actions are expected to change when some unknown factors become known, and how useful this becomes in forensic medical science. Copyright © 2015 Elsevier B.V. All rights reserved.
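The underlying Value of Information principle, in its simplest form (a discrete unknown and perfect information), compares the prior-optimal expected utility with the expectation of the per-state optimum. A generic sketch of that textbook quantity, not the paper's Bayesian network computation:

```python
def evpi(p_x, utility):
    """Expected value of perfect information for a discrete unknown X.
    p_x[x] is the prior probability of state x; utility[a][x] is the
    payoff of action a when X = x.
    EVPI = E_x[max_a U(a, x)] - max_a E_x[U(a, x)]."""
    actions = list(utility)
    states = list(p_x)
    best_prior = max(sum(p_x[x] * utility[a][x] for x in states)
                     for a in actions)
    with_info = sum(p_x[x] * max(utility[a][x] for a in actions)
                    for x in states)
    return with_info - best_prior
```

When EVPI is zero, the optimal action is the same in every state, which mirrors the paper's observation that knowing a decision is independent of an unknown risk factor removes any reason to seek information about it.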
Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.
Saller, Maximilian A C; Habershon, Scott
2017-07-11
Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
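Matching Pursuit, used above to periodically minimize the basis, greedily selects the dictionary element with the largest overlap with the current residual and subtracts its contribution. A minimal real-valued sketch assuming unit-norm dictionary columns; this is illustrative, not the authors' implementation for Gaussian wavepackets:

```python
import numpy as np

def matching_pursuit(target, dictionary, max_terms, tol):
    """Greedy Matching Pursuit: approximate `target` as a sparse linear
    combination of the (unit-norm) columns of `dictionary`, stopping when
    the residual norm falls below tol or max_terms terms are used.
    Returns {column index: coefficient} and the final residual."""
    residual = np.array(target, dtype=float)
    coeffs = {}
    for _ in range(max_terms):
        if np.linalg.norm(residual) < tol:
            break
        overlaps = dictionary.T @ residual       # projections onto columns
        k = int(np.argmax(np.abs(overlaps)))     # best-matching element
        c = float(overlaps[k])
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = residual - c * dictionary[:, k]
    return coeffs, residual
```

The sparsity of the result is what keeps a trajectory-generated basis from growing without bound: only the elements actually needed to represent the current wave function survive each pruning step.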
Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.
Mitin, Alexander V
2013-09-05
The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int J Quantum Chem, 2007, 107, 3028; Int J Quantum Chem, 2009, 109, 1158) are proposed. The performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets were examined in molecular calculations carried out with the density functional theory (DFT) method using the B3LYP hybrid functional, second-order Møller-Plesset perturbation theory (MP2), and the quadratic configuration interaction method with single and double substitutions, and were compared with those for the known 6-31G basis sets as well as with the similar 641 and 6-311G basis sets with and without polarization functions. The results show that the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets perform better than the known 6-31G, 6-31G(d,p), and 6-31G(2df,p) basis sets. These improvements are mainly achieved through better approximations of the different electrons belonging to the different atomic shells in the modified basis sets. Applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
do Nascimento, Antonio Rogério Bezerra; Farias, Juliano Ricardo; Bernardi, Daniel; Horikoshi, Renato Jun; Omoto, Celso
2016-04-01
An understanding of the genetic basis of insect resistance to insecticides is important for the establishment of insect resistance management (IRM) strategies. In this study we evaluated the inheritance pattern of resistance to the chitin synthesis inhibitor lufenuron in Spodoptera frugiperda. The LC50 values (95% CI) were 0.23 µg lufenuron mL(-1) water (ppm) (0.18-0.28) for the susceptible strain (SUS) and 210.6 µg mL(-1) (175.90-258.10) for the lufenuron-resistant strain (LUF-R), based on a diet-overlay bioassay. The resistance ratio was ≈ 915-fold. The LC50 values for reciprocal crosses were 4.89 µg mL(-1) (3.79-5.97) for female LUF-R × male SUS and 5.74 µg mL(-1) (4.70-6.91) for female SUS × male LUF-R, indicating that the inheritance of S. frugiperda resistance to lufenuron is an autosomal, incompletely recessive trait. Backcrosses of the progeny of the reciprocal crosses with the parental LUF-R strain showed a polygenic effect. The estimated minimum number of independent segregations was approximately 11.02, indicating that resistance to lufenuron is associated with multiple genes in S. frugiperda. Based on genetic crosses, the inheritance pattern of lufenuron resistance in S. frugiperda was autosomal, incompletely recessive, and polygenic. Implications of this finding for IRM are discussed in this paper. © 2015 Society of Chemical Industry.
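The resistance ratio and the dominance classification can be checked from the reported LC50 values. The dominance statistic below follows Stone's log-dose formulation (D = -1 is completely recessive, D = +1 completely dominant), which is a standard choice in such studies but is not named in the abstract, so treat it as an assumption:

```python
import math

def resistance_ratio(lc50_resistant, lc50_susceptible):
    """Fold-resistance of the resistant strain relative to the susceptible."""
    return lc50_resistant / lc50_susceptible

def degree_of_dominance(lc50_r, lc50_s, lc50_f1):
    """Stone's degree of dominance on the log-dose scale:
    D = (2*log X_F1 - log X_R - log X_S) / (log X_R - log X_S)."""
    lr, ls, lf1 = (math.log10(x) for x in (lc50_r, lc50_s, lc50_f1))
    return (2 * lf1 - lr - ls) / (lr - ls)

# LC50 values reported in the abstract (ug lufenuron per mL):
rr = resistance_ratio(210.6, 0.23)
# F1 LC50 taken as the mean of the two reciprocal crosses:
d = degree_of_dominance(210.6, 0.23, (4.89 + 5.74) / 2)
```

With these inputs the ratio reproduces the ≈915-fold figure, and D falls between -1 and 0, consistent with the incompletely recessive classification.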
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms that synthesize a given Boolean response variable in a sparse disjunctive normal form and a sparse generalized algebraic normal form of the variables from the observation data, respectively, and we evaluate their performance numerically.
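A greedy synthesis of a sparse disjunctive normal form from Boolean data can be sketched as a covering procedure: grow one conjunction at a time by adding the literal that best separates remaining positive rows from negative ones, then discard the positives that term covers and repeat. This is an illustrative toy, not a reproduction of the paper's two algorithms:

```python
from itertools import product

def covers(term, row):
    """A term is a conjunction stored as {variable index: required value}."""
    return all(row[i] == v for i, v in term.items())

def greedy_dnf(rows, labels, max_lits=4):
    """Greedy sparse-DNF synthesis sketch for fully observed Boolean data."""
    positives = [r for r, y in zip(rows, labels) if y]
    negatives = [r for r, y in zip(rows, labels) if not y]
    dnf = []
    while positives:
        term, pos, neg = {}, positives, negatives
        while neg and len(term) < max_lits:
            best = None
            for i in range(len(rows[0])):
                if i in term:
                    continue
                for v in (0, 1):
                    p = sum(1 for r in pos if r[i] == v)  # positives kept
                    n = sum(1 for r in neg if r[i] == v)  # negatives kept
                    if p and (best is None or p - n > best[0]):
                        best = (p - n, i, v)
            if best is None:
                break
            _, i, v = best
            term[i] = v
            pos = [r for r in pos if r[i] == v]
            neg = [r for r in neg if r[i] == v]
        covered = [r for r in positives if covers(term, r)]
        if not covered:                  # safety: no further progress
            break
        dnf.append(term)
        positives = [r for r in positives if r not in covered]
    return dnf
```

On fully specified data such a search recovers an exact covering formula; the paper's point is that on incompletely specified (or random) data the same greedy machinery will also produce compact spurious rules, which is precisely the competition it sets out to measure.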
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
NASA Astrophysics Data System (ADS)
Perov, N. I.
1985-02-01
A physical-geometrical method was developed for computing the orbits of earth satellites on the basis of an inadequate number of angular observations (N ≤ 3). Specifically, a new method has been developed for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (α_k, δ_k; k = 1, 2). The first section gives procedures for determining the topocentric distance to an AES on the basis of one optical observation. This is followed by a description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors; this method is designated the R_2 iterations method.
“The Mind Is Its Own Place”: Amelioration of Claustrophobia in Semantic Dementia
Clark, Camilla N.; Downey, Laura E.; Golden, Hannah L.; Fletcher, Phillip D.; Cifelli, Alberto; Warren, Jason D.
2014-01-01
Phobias are among the few intensely fearful experiences we regularly have in our everyday lives, yet the brain basis of phobic responses remains incompletely understood. Here we describe the case of a 71-year-old patient with a typical clinicoanatomical syndrome of semantic dementia led by selective (predominantly right-sided) temporal lobe atrophy, who showed striking amelioration of previously disabling claustrophobia following onset of her cognitive syndrome. We interpret our patient's newfound fearlessness as an interaction of damaged limbic and autonomic responsivity with loss of the cognitive meaning of previously threatening situations. This case has implications for our understanding of brain network disintegration in semantic dementia and the neurocognitive basis of phobias more generally. PMID:24825962
Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.
Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads
2018-06-27
We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp² (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.
Social Interactions under Incomplete Information: Games, Equilibria, and Expectations
NASA Astrophysics Data System (ADS)
Yang, Chao
My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performance, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which are unique when the parameters are within a reasonable range, by the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the misspecified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects.
The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents' behaviors are subject to some binding restrictions. In an empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work on tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions, such as that agents with the same observable characteristics play the same strategy. This chapter relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has long been an open question. By the use of differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and be approximated by a finite set.
As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments about two types of binary choice models, it is found that assuming equilibrium uniqueness can bring in estimation biases when the true value of interaction intensity is large and there are multiple equilibria in the data generating process.
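The contraction-mapping argument behind equilibrium uniqueness can be illustrated with a linear-in-means sketch. The network matrix W, the linear best-response form, and the parameter β below are illustrative assumptions for the example, not the dissertation's model:

```python
import numpy as np

def equilibrium_expectations(W, x, beta, tol=1e-12, max_iter=10_000):
    """Iterate the best-response map m -> beta * W @ m + x.
    When |beta| * ||W||_inf < 1 this map is a contraction, so by the
    Banach fixed-point theorem the vector of equilibrium conditional
    expectations exists and is unique."""
    m = np.zeros_like(x, dtype=float)
    for _ in range(max_iter):
        m_new = beta * (W @ m) + x
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    raise RuntimeError("fixed-point iteration did not converge")
```

With a row-stochastic W, uniqueness holds whenever |β| < 1; the multiple-equilibria setting of the third chapter corresponds to parameter values where this contraction condition fails.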
On effective and optical resolutions of diffraction data sets.
Urzhumtseva, Ludmila; Klaholz, Bruno; Urzhumtsev, Alexandre
2013-10-01
In macromolecular X-ray crystallography, diffraction data sets are traditionally characterized by the highest resolution dhigh of the reflections that they contain. This measure is sensitive to individual reflections and does not account for possible data incompleteness and anisotropy; it therefore does not describe the data well. A physically relevant and robust measure that provides a universal way to define the `actual' effective resolution deff of a data set is introduced. This measure is based on the accurate calculation of the minimum distance between two immobile point scatterers resolved as separate peaks in the Fourier map calculated with a given set of reflections. It is applicable to any data set, whether complete or incomplete. It also allows characterization of the anisotropy of diffraction data sets in which deff strongly depends on the direction. Describing mathematical objects, the effective resolution deff characterizes the `geometry' of the set of measured reflections and is independent of the diffraction intensities. At the same time, the diffraction intensities reflect the composition of the structure from physical entities: the atoms. The minimum distance for the atoms typical of a given structure is a measure that is different from and complementary to deff; it is also a characteristic that is complementary to conventional measures of the data-set quality. Following previously introduced terms, this value is called the optical resolution, dopt. The optical resolution as defined here describes the separation of the atomic images in the `ideal' crystallographic Fourier map that would be calculated if the exact phases were known. The effective and optical resolutions, as formally introduced in this work, are of general interest, giving a common `ruler' for all kinds of crystallographic diffraction data sets.
Derivation of a formula for the resonance integral for a nonorthogonal basis set
Yim, Yung-Chang; Eyring, Henry
1981-01-01
In a self-consistent field calculation, a formula for the off-diagonal matrix elements of the core Hamiltonian is derived for a nonorthogonal basis set by a polyatomic approach. A set of parameters is then introduced for the repulsion integral formula of Mataga-Nishimoto to fit the experimental data. The matrix elements computed for the nonorthogonal basis set in the π-electron approximation are transformed to those for an orthogonal basis set by the Löwdin symmetrical orthogonalization. PMID:16593009
Classification and data acquisition with incomplete data
NASA Astrophysics Data System (ADS)
Williams, David P.
In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed in certain regions. The limitations of single-sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform (e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data.
We also address the closely related problem of active data acquisition, which develops a strategy to acquire missing features and labels that will most benefit the classification task. We first address the general problem of classification with incomplete data, maintaining the view that all data (i.e., information) is valuable. We employ a logistic regression framework within which we formulate a supervised classification algorithm for incomplete data. This principled, yet flexible, framework permits several interesting extensions that allow all available data to be utilized. One extension incorporates labeling error, which permits the usage of potentially imperfectly labeled data in learning a classifier. A second major extension converts the proposed algorithm to a semi-supervised approach by utilizing unlabeled data via graph-based regularization. Finally, the classification algorithm is extended to the case in which (image) data---from which features are extracted---are available from multiple resolutions. Taken together, this family of incomplete-data classification algorithms exploits all available data in a principled manner by avoiding explicit imputation. Instead, missing data is integrated out analytically with the aid of an estimated conditional density function (conditioned on the observed features). This feat is accomplished by invoking only mild assumptions. We also address the problem of active data acquisition by determining which missing data should be acquired to most improve performance. Specifically, we examine this data acquisition task when the data to be acquired can be either labels or features. The proposed approach is based on a criterion that accounts for the expected benefit of the acquisition. This approach, which is applicable for any general missing data problem, exploits the incomplete-data classification framework introduced in the first part of this dissertation. 
This data acquisition approach allows for the acquisition of both labels and features. Moreover, several types of feature acquisition are permitted, including the acquisition of individual or multiple features for individual or multiple data points, which may be either labeled or unlabeled. Furthermore, if different types of data acquisition are feasible for a given application, the algorithm will automatically determine the most beneficial type of data to acquire. Experimental results on both benchmark machine learning data sets and real (i.e., measured) remote-sensing data demonstrate the advantages of the proposed incomplete-data classification and active data acquisition algorithms.
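The "integrate the missing data out analytically" step can be sketched with a Monte Carlo stand-in. Everything here is an illustrative assumption: a Gaussian conditional density for the missing feature (standing in for the estimated conditional density described above) and sampling in place of analytic integration.

```python
import numpy as np

def predict_proba_missing(w, b, x_obs, miss_idx, cond_mean, cond_std,
                          n_draws=4000, seed=0):
    """Approximate p(y=1 | observed features) for a logistic-regression
    classifier by averaging the logistic likelihood over draws from an
    assumed Gaussian conditional density of the missing feature,
    rather than plugging in a single imputed value."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(cond_mean, cond_std, size=n_draws)
    x = np.tile(np.asarray(x_obs, dtype=float), (n_draws, 1))
    x[:, miss_idx] = draws                      # fill the missing column
    logits = x @ np.asarray(w, dtype=float) + b
    return float(np.mean(1.0 / (1.0 + np.exp(-logits))))
```

Averaging the probability over the conditional density, instead of imputing one value and classifying once, is what lets all partially observed points contribute without explicit imputation.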
Mackie, Iain D; DiLabio, Gino A
2011-10-07
The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise- and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise- and non-counterpoise-corrected values than by using either method alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)/aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)/aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies.
With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Application of this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of the naphthalene dimer is predicted by the composite approach to be 9% greater than previously reported in the literature. The ability of some recently developed dispersion-corrected density functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics.
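The counterpoise averaging and two-point extrapolation described above can be sketched numerically. The X⁻³ form is the standard two-point extrapolation for correlation energies; the function names and the assumption that averaging precedes extrapolation are illustrative choices for this sketch.

```python
def cbs_two_point(e_x, e_y, x=3, y=4):
    """Two-point X^-3 extrapolation of correlation(-dominated) energies
    from basis sets with cardinal numbers x (e.g. T=3) and y (e.g. Q=4):
    E_CBS = (y^3 E_y - x^3 E_x) / (y^3 - x^3)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

def averaged_binding_energy(cp_tz, cp_qz, nocp_tz, nocp_qz):
    """Average the counterpoise (cp) and non-counterpoise (nocp) binding
    energies at each basis set level (aug-cc-pVTZ, aug-cc-pVQZ), then
    extrapolate the averages to the basis set limit."""
    avg_tz = 0.5 * (cp_tz + nocp_tz)
    avg_qz = 0.5 * (cp_qz + nocp_qz)
    return cbs_two_point(avg_tz, avg_qz)
```

Since counterpoise correction typically underbinds and its absence overbinds at finite basis size, the average tends to sit closer to the limit at each cardinal number, which is the convergence behavior the study reports.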
Denison, Julie A.; Koole, Olivier; Tsui, Sharon; Menten, Joris; Torpey, Kwasi; van Praag, Eric; Mukadi, Ya Diul; Colebunders, Robert; Auld, Andrew F.; Agolory, Simon; Kaplan, Jonathan E.; Mulenga, Modest; Kwesigabo, Gideon P.; Wabwire-Mangen, Fred; Bangsberg, David R.
2016-01-01
Objectives To characterize antiretroviral therapy (ART) adherence across different programmes and examine the relationship between individual and programme characteristics and incomplete adherence among ART clients in sub-Saharan Africa. Design A cross-sectional study. Methods Systematically selected ART clients (≥18 years; on ART ≥6 months) attending 18 facilities in three countries (250 clients/facility) were interviewed. Client self-reports (3-day, 30-day, Case Index ≥48 consecutive hours of missed ART), healthcare provider estimates and the pharmacy medication possession ratio (MPR) were used to estimate ART adherence. Participants from two facilities per country underwent HIV RNA testing. Optimal adherence measures were selected on the basis of degree of association with concurrent HIV RNA dichotomized at less than or greater/equal to 1000 copies/ml. Multivariate regression analysis, adjusted for site-level clustering, assessed associations between incomplete adherence and individual and programme factors. Results A total of 4489 participants were included, of whom 1498 underwent HIV RNA testing. Nonadherence ranged from 3.2% missing at least 48 consecutive hours to 40.1% having an MPR of less than 90%. The percentage with HIV RNA at least 1000 copies/ml ranged from 7.2 to 17.2% across study sites (mean = 9.9%). Having at least 48 consecutive hours of missed ART was the adherence measure most strongly related to virologic failure. Factors significantly related to incomplete adherence included visiting a traditional healer, screening positive for alcohol abuse, experiencing more HIV symptoms, having an ART regimen without nevirapine and greater levels of internalized stigma. Conclusion Results support more in-depth investigations of the role of traditional healers, and the development of interventions to address alcohol abuse and internalized stigma among treatment-experienced adult ART patients. PMID:25686684
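The pharmacy medication possession ratio (MPR) used above as an adherence measure can be sketched as follows. The function signature and the capping at 1.0 are illustrative assumptions; the 90% cutoff mirrors the abstract's "MPR of less than 90%" criterion.

```python
def medication_possession_ratio(days_supplied, period_days):
    """MPR: fraction of the observation period covered by dispensed
    medication, commonly capped at 1.0 when refills overlap."""
    return min(sum(days_supplied) / period_days, 1.0)

def incomplete_adherence(days_supplied, period_days, cutoff=0.90):
    """Flag incomplete adherence using an MPR < 90% criterion."""
    return medication_possession_ratio(days_supplied, period_days) < cutoff
```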
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
The consequences of hospital autonomization in Colombia: a transaction cost economics analysis.
Castano, Ramon; Mills, Anne
2013-03-01
Granting autonomy to public hospitals in developing countries has been common over recent decades, and implies a shift from hierarchical to contract-based relationships with health authorities. Theory on transaction costs in contractual relationships suggests they stem from relationship-specific investments and contract incompleteness. Transaction cost economics argues that the parties involved in exchanges seek to reduce transaction costs. The objective of this research was to analyse the relationships observed between purchasers and the 22 public hospitals of the city of Bogota, Colombia, in order to understand the role of relationship-specific investments and contract incompleteness as sources of transaction costs, through a largely qualitative study. We found that contract-based relationships showed relevant transaction costs associated mainly with contract incompleteness, not with relationship-specific investments. Regarding relationships between insurers and local hospitals for primary care services, compulsory contracting regulations locked the parties into the contracts. For high-complexity services (e.g. inpatient care), no restrictions applied and relationships suggested transaction-cost-minimizing behaviour. Contract incompleteness was found to be a source of transaction costs in its own right. We conclude that transaction costs seemed to play a key role in contract-based relationships, and that contract incompleteness by itself appeared to be a source of transaction costs. The same findings are likely in other contexts because of difficulties in defining, observing and verifying the contracted products and the underlying information asymmetries. The role of compulsory contracting might be context-specific, although it is likely to emerge in other settings due to the safety-net role of public hospitals.
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
NASA Astrophysics Data System (ADS)
Martin, Jan M. L.; Sundermann, Andreas
2001-02-01
We propose large-core correlation-consistent (cc) pseudopotential basis sets for the heavy p-block elements Ga-Kr and In-Xe. The basis sets are of cc-pVTZ and cc-pVQZ quality, and have been optimized for use with the large-core (valence-electrons-only) Stuttgart-Dresden-Bonn (SDB) relativistic pseudopotentials. Validation calculations on a variety of third-row and fourth-row diatomics suggest them to be comparable in quality to the all-electron cc-pVTZ and cc-pVQZ basis sets for lighter elements. In particular, the SDB-cc-pVQZ basis set in conjunction with a core polarization potential (CPP) yields excellent agreement with experiment for compounds of the later heavy p-block elements. For accurate calculations on Ga (and, to a lesser extent, Ge) compounds, explicit treatment of 13 valence electrons appears to be desirable, while it seems inevitable for In compounds. For Ga and Ge, we propose correlation-consistent basis sets extended for (3d) correlation. For accurate calculations on organometallic complexes of interest to homogeneous catalysis, we recommend a combination of the standard cc-pVTZ basis set for first- and second-row elements, the presently derived SDB-cc-pVTZ basis set for heavier p-block elements, and, for transition metals, the small-core [6s5p3d] Stuttgart-Dresden basis set-relativistic effective core potential combination supplemented by (2f1g) functions with exponents given in the Appendix to the present paper.
Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG
NASA Astrophysics Data System (ADS)
Stoudenmire, Edwin Miles; White, Steven
We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with those of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.
Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman
2008-04-24
We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and of the B97-1, OLYP, and TPSS functionals with the 6-31G and 6-31G* basis sets. Only the hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in an inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, the effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with the small 6-31G and 6-31G* basis sets. The deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased with the basis set in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions.
Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results insignificantly for iron(II) porphyrin coordinated with imidazole. Poor performance of a "locally dense" basis set with a large number of basis functions on the Fe center was observed in calculation of quintet-triplet gaps. Our results lead to a series of suggestions for density functional theory calculations of quintet-triplet energy gaps in ferrohemes with a single axial imidazole; these suggestions are potentially applicable for other transition-metal complexes.
Concept Learning and Heuristic Classification in Weak-Theory Domains
1990-03-01
[Figure residue from an audiology decision-tree example removed.] Annual review of computer science. Machine Learning, 4, 1990 (to appear). [18] R. T. Duran. Concept learning with incomplete data sets. Master's thesis.
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2011 CFR
2011-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2012 CFR
2012-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2014 CFR
2014-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2013 CFR
2013-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
Black carbon (BC), light absorbing particles emitted primarily from incomplete combustion, is operationally defined through a variety of instrumental measurements rather than with a universal definition set forth by the research or regulatory communities. To examine the consiste...
Efficiently Ranking Hypotheses in Machine Learning
NASA Technical Reports Server (NTRS)
Chien, Steve
1997-01-01
This paper considers the problem of learning the ranking of a set of alternatives based upon incomplete information (e.g. a limited number of observations). At each decision cycle, the system can output a complete ordering on the hypotheses or decide to gather additional information (e.g. observation) at some cost.
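The decision cycle described in this abstract can be sketched as a loop that outputs a complete ordering once adjacent estimates are statistically separated, and otherwise pays to gather more observations. This is an illustrative toy under assumed details (Gaussian samplers, a z-score separation test, unit observation cost), not the paper's actual algorithm.

```python
import random
import statistics

def rank_or_observe(samplers, init_obs=5, max_cost=200, z=2.0):
    """Toy cost-bounded ranking under incomplete information: keep a
    running mean per hypothesis, and buy extra observations (unit cost)
    only for adjacent pairs whose confidence intervals still overlap."""
    data = [[s() for _ in range(init_obs)] for s in samplers]
    cost = init_obs * len(samplers)
    while True:
        means = [statistics.mean(d) for d in data]
        order = sorted(range(len(samplers)), key=lambda i: -means[i])
        ambiguous = None
        for a, b in zip(order, order[1:]):
            half_a = z * statistics.stdev(data[a]) / len(data[a]) ** 0.5
            half_b = z * statistics.stdev(data[b]) / len(data[b]) ** 0.5
            if means[a] - half_a < means[b] + half_b:  # intervals overlap
                ambiguous = (a, b)
                break
        if ambiguous is None or cost >= max_cost:
            return order, cost        # confident ordering, or budget spent
        for i in ambiguous:           # gather one more observation each
            data[i].append(samplers[i]())
            cost += 1
```

With well-separated alternatives the initial observations already suffice; with close ones, the loop spends its budget only on the ambiguous pair.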
Localized basis sets for unbound electrons in nanoelectronics.
Soriano, D; Jacob, D; Palacios, J J
2008-02-21
It is shown how unbound electron wave functions can be expanded in suitably chosen localized basis sets for any desired range of energies. In particular, we focus on the use of Gaussian basis sets, commonly used in first-principles codes. The possible usefulness of these basis sets in a first-principles description of field emission or scanning tunneling microscopy at large bias is illustrated by studying a simpler related phenomenon: The lifetime of an electron in a H atom subjected to a strong electric field.
Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran
2014-03-01
Calculation of the burden of diseases and risk factors is crucial for setting priorities in health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate the incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, first, we identify all possible data sources. Then, we calculate the incompleteness of child and adult mortality separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information from covariates such as wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression that covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, the Generalized Growth Balance and Synthetic Extinct Generation methods, and a hybrid of the two, are used. Afterwards, we combine the incompleteness estimates from the three methods using GPR, and apply the result to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges to accurate measurement of mortality rates.
The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of health status of a population.
Near Hartree-Fock quality GTO basis sets for the second-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry
1987-01-01
Energy optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms and for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground state basis sets to describe the negative ions, and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.
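The near-constant ratio of successive exponents noted in this abstract is the idea behind even-tempered expansions, where the exponents form a geometric series. A minimal sketch (the alpha and beta values below are illustrative, not the paper's optimized parameters):

```python
def even_tempered(alpha, beta, n):
    """Even-tempered Gaussian exponents: a geometric series
    zeta_k = alpha * beta**k, so every ratio zeta_{k+1}/zeta_k
    equals beta by construction."""
    return [alpha * beta ** k for k in range(n)]

# e.g. a hypothetical 17-term s-type set
exponents = even_tempered(0.05, 2.5, 17)
ratios = [b / a for a, b in zip(exponents, exponents[1:])]
```

Fixing the ratio reduces a 17-parameter optimization to two parameters, which is exactly why a nearly invariant ratio provides good starting estimates for other basis set optimizations.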
Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.
Würflinger, T; Gamper, I; Aach, T; Sechi, A S
2011-01-01
Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Orellana, Diego A.; Salas, Alberto A.; Solarz, Pablo F.; Medina Ruiz, Luis; Rotger, Viviana I.
2016-04-01
The production of clinical information about each patient is constantly increasing, and it is noteworthy that the information is created in different formats and at diverse points of care, resulting in fragmented, incomplete, inaccurate and isolated health information. The use of health information technology has been promoted as having a decisive impact on improving the efficiency, cost-effectiveness, quality and safety of medical care delivery. However, in developing countries the utilization of health information technology is insufficient and lacking in standards, among other problems. In the present work we evaluate the framework EHRGen, based on the openEHR standard, as a means to achieve the generation and availability of patient-centered information. The framework has been evaluated through the tools provided for end users, that is, without the intervention of computer experts. It makes it easier to adopt the openEHR ideas and provides an open-source basis with a set of services, although some limitations in its current state work against interoperability and usability. Nevertheless, despite the described limitations with respect to usability and semantic interoperability, EHRGen is, at least regionally, a considerable step toward EHR adoption and interoperability, and as such it should be supported by academic and administrative institutions.
M13 multiple stellar populations seen with the eyes of Strömgren photometry
NASA Astrophysics Data System (ADS)
Savino, A.; Massari, D.; Bragaglia, A.; Dalessandro, E.; Tolstoy, E.
2018-03-01
We present a photometric study of M13 multiple stellar populations over a wide field of view, covering approximately 6.5 half-light radii, using archival Isaac Newton Telescope observations to build an accurate multiband Strömgren catalogue. The use of the Strömgren index c_y permits us to separate the multiple populations of M13 on the basis of their position on the red giant branch. The comparison with medium and high resolution spectroscopic analysis confirms the robustness of our selection criterion. To determine the radial distribution of stars in M13, we complemented our data set with Hubble Space Telescope observations of the cluster core, to compensate for the effect of incompleteness affecting the most crowded regions. From the analysis of the radial distributions, we do not find any significant evidence of spatial segregation. Some residuals may be present in the external regions where we observe only a small number of stars. This finding is compatible with the short dynamical time-scale of M13 and represents, to date, one of the few examples of fully spatially mixed multiple populations in a massive globular cluster.
Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2017-03-14
The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br.
Combination of large and small basis sets in electronic structure calculations on large systems
NASA Astrophysics Data System (ADS)
Røeggen, Inge; Gao, Bin
2018-04-01
Two basis sets, a large and a small one, are associated with each nucleus of the system. Each atom has its own separate one-electron basis comprising the large basis set of the atom in question and the small basis sets for the partner atoms in the complex. The perturbed atoms in molecules and solids model is at the core of the approach since it allows for the definition of perturbed atoms in a system. It is argued that this basis set approach should be particularly useful for periodic systems. Test calculations are performed on one-dimensional arrays of H and Li atoms. The ground-state energy per atom in the linear H array is determined versus bond length.
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge for learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information of the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among the sources. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose incomplete multisource transfer learning through two directions of knowledge transfer, i.e., cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This practice compensates for any missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for missing data from one source to another. In this way, both the marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
A novel multisensor traffic state assessment system based on incomplete data.
Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang
2014-01-01
A novel multisensor system for traffic state assessment with incomplete data is presented. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as a preprocessing step. Then, a new method based on historical data is proposed to fuse and recover the incomplete data. Exploiting the spatial complementarity of the data from the probe vehicle detectors and the fixed detectors, a fusion model of space matching is presented to estimate the mean travel speed of the road. Finally, the traffic flow data, including flow, speed, and occupancy rate, detected between the Beijing Deshengmen bridge and the Drum Tower bridge, are fused to assess the traffic state of the road using a fusion decision model based on rough sets and cloud models. The accuracy of the experimental results reaches more than 98%, and the results are in accordance with the actual road traffic state. This system is effective for assessing traffic state and is suitable for urban intelligent transportation systems.
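History-based recovery of incomplete detector data, as described in this abstract, can be illustrated with a toy imputation routine that blends neighbouring valid readings with a historical profile for the same time slots. This is a simplified stand-in for the paper's fusion model, and the blending weight w is hypothetical.

```python
def recover(series, history, w=0.5):
    """Fill missing (None) readings: average the nearest valid
    neighbours in the series, then blend with the historical value
    for the same slot.  Falls back to history alone if no valid
    neighbour exists."""
    out = []
    for i, v in enumerate(series):
        if v is not None:
            out.append(float(v))
            continue
        prev = next((series[j] for j in range(i - 1, -1, -1)
                     if series[j] is not None), None)
        nxt = next((series[j] for j in range(i + 1, len(series))
                    if series[j] is not None), None)
        neighbours = [x for x in (prev, nxt) if x is not None]
        if neighbours:
            est = sum(neighbours) / len(neighbours)
            out.append(w * est + (1 - w) * history[i])
        else:
            out.append(float(history[i]))
    return out
```

For example, a gap between readings of 10 and 14 with a historical value of 13 is filled with 0.5·12 + 0.5·13 = 12.5.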
Wen, Dingqiao; Yu, Yun; Hahn, Matthew W.; Nakhleh, Luay
2016-01-01
The role of hybridization and subsequent introgression has been demonstrated in an increasing number of species. Recently, Fontaine et al. (Science, 347, 2015, 1258524) conducted a phylogenomic analysis of six members of the Anopheles gambiae species complex. Their analysis revealed a reticulate evolutionary history and pointed to extensive introgression on all four autosomal arms. The study further highlighted the complex evolutionary signals that the co-occurrence of incomplete lineage sorting (ILS) and introgression can give rise to in phylogenomic analyses. While tree-based methodologies were used in the study, phylogenetic networks provide a more natural model to capture reticulate evolutionary histories. In this work, we reanalyse the Anopheles data using a recently devised framework that combines the multispecies coalescent with phylogenetic networks. This framework allows us to capture ILS and introgression simultaneously, and forms the basis for statistical methods for inferring reticulate evolutionary histories. The new analysis reveals a phylogenetic network with multiple hybridization events, some of which differ from those reported in the original study. To elucidate the extent and patterns of introgression across the genome, we devise a new method that quantifies the use of reticulation branches in the phylogenetic network by each genomic region. Applying the method to the mosquito data set reveals the evolutionary history of all the chromosomes. This study highlights the utility of ‘network thinking’ and the new insights it can uncover, in particular in phylogenomic analyses of large data sets with extensive gene tree incongruence. PMID:26808290
Effects of incomplete mixing on reactive transport in flows through heterogeneous porous media
NASA Astrophysics Data System (ADS)
Wright, Elise E.; Richter, David H.; Bolster, Diogo
2017-11-01
The phenomenon of incomplete mixing reduces bulk effective reaction rates in reactive transport. Many existing models do not account for these effects, resulting in the overestimation of reaction rates in laboratory and field settings. To date, most studies on incomplete mixing have focused on diffusive systems; here, we extend these to explore the role that flow heterogeneity has on incomplete mixing. To do this, we examine reactive transport using a Lagrangian reactive particle tracking algorithm in two-dimensional idealized heterogeneous porous media. Contingent on the nondimensional Péclet and Damköhler numbers in the system, it was found that near well-mixed behavior could be observed at late times in the heterogeneous flow field simulations. We look at three common flow deformation metrics that describe the enhancement of mixing in the flow due to velocity gradients: the Okubo-Weiss parameter (θ), the largest eigenvalue of the Cauchy-Green strain tensor (λ_C), and the finite-time Lyapunov exponent (Λ). Strong mixing regions in the heterogeneous flow field identified by these metrics were found to correspond to regions with higher numbers of reactions, but the infrequency of these regions compared to the large numbers of reactions occurring elsewhere in the domain imply that these strong mixing regions are insufficient in explaining the observed near well-mixed behavior. Since it was found that reactive transport in these heterogeneous flows could overcome the effects of incomplete mixing, we also search for a closure for the mean concentration. The conservative quantity ū², where u = C_A − C_B, was found to predict the late-time scaling of the mean concentration, i.e., C̄_i ~ ū².
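As a point of reference for the well-mixed limit the simulations approach, the irreversible bimolecular reaction A + B → C with equal initial concentrations obeys dC/dt = −kC², whose closed form C(t) = C₀/(1 + kC₀t) can be checked against a simple explicit-Euler integration. The values of k and C₀ below are illustrative; incomplete mixing appears precisely as a departure from this t⁻¹ late-time decay.

```python
def well_mixed_decay(c0, k, t_final, steps=100_000):
    """Explicit-Euler integration of the well-mixed rate law
    dC/dt = -k * C**2 for equal A and B concentrations."""
    dt = t_final / steps
    c = c0
    for _ in range(steps):
        c -= k * c * c * dt
    return c

def analytic(c0, k, t):
    """Closed-form well-mixed solution: C(t) = C0 / (1 + k*C0*t)."""
    return c0 / (1.0 + k * c0 * t)
```

At late times the analytic solution decays like 1/(kt), independent of C₀, which is the bulk behavior that incomplete-mixing models correct.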
Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory
NASA Technical Reports Server (NTRS)
Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.
1990-01-01
New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
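For a dimer, the simplest counterpoise scheme underlying the corrections discussed here (the Boys-Bernardi function counterpoise, of which the site-site and Valiron-Mayer schemes are many-body generalizations) evaluates each monomer in the full dimer basis. A sketch with hypothetical energies in hartree, purely to show the bookkeeping:

```python
def cp_interaction_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """Counterpoise-corrected dimer interaction energy: each monomer
    energy is computed in the full dimer basis, so the basis set
    superposition error cancels by construction."""
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

def bsse_estimate(e_a_monomer_basis, e_a_dimer_basis,
                  e_b_monomer_basis, e_b_dimer_basis):
    """BSSE: the artificial lowering of each monomer's energy on
    moving from its own basis to the dimer basis."""
    return ((e_a_monomer_basis - e_a_dimer_basis)
            + (e_b_monomer_basis - e_b_dimer_basis))
```

For an n-subunit cluster the Valiron-Mayer scheme requires monomer calculations in many composite bases, which is the cost the approximate scheme of this paper is designed to avoid.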
High quality Gaussian basis sets for fourth-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry; Faegri, Knut, Jr.
1992-01-01
Energy optimized Gaussian basis sets of triple-zeta quality for the atoms Rb-Xe have been derived. Two series of basis sets are developed: (24s 16p 10d) and (26s 16p 10d) sets which were expanded to 13d and 19p functions as the 4d and 5p shells become occupied. For the atoms lighter than Cd, the (24s 16p 10d) sets with triple-zeta valence distributions are higher in energy than the corresponding double-zeta distribution. To ensure a triple-zeta distribution and a global energy minimum, the (26s 16p 10d) sets were derived. Total atomic energies from the largest basis sets are between 198 and 284 μE_h above the numerical Hartree-Fock energies.
Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers
Wang, Jun; Luo, Ray
2009-01-01
CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion, at very similar rates. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271
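The iteration structure of the preconditioned conjugate-gradient solvers compared in this study can be sketched on a 1D finite-difference Laplacian. For brevity this toy uses a Jacobi (diagonal) preconditioner in place of the modified incomplete Cholesky factorization from the paper; only the choice of preconditioner differs, the CG iteration itself is standard.

```python
def cg_poisson_1d(rhs, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient on the 1D finite-difference
    Laplacian (tridiagonal [-1, 2, -1]), with a Jacobi (diagonal)
    preconditioner as a simple stand-in for incomplete Cholesky."""
    n = len(rhs)

    def matvec(v):
        return [2 * v[i]
                - (v[i - 1] if i > 0 else 0.0)
                - (v[i + 1] if i < n - 1 else 0.0)
                for i in range(n)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    x = [0.0] * n
    r = list(rhs)                  # r = b - A x with x = 0
    z = [ri / 2.0 for ri in r]     # apply M^-1 (diagonal of A is 2)
    p = list(z)
    rz = dot(r, z)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rz / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [ri / 2.0 for ri in r]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The cost per iteration is one matrix-vector product plus a preconditioner solve, so a tighter convergence criterion raises CPU time roughly linearly in the number of iterations, consistent with the scaling reported above.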
MalaCards: an integrated compendium for diseases and their annotation
Rappaport, Noa; Nativ, Noam; Stelzer, Gil; Twik, Michal; Guan-Golan, Yaron; Iny Stein, Tsippi; Bahir, Iris; Belinky, Frida; Morrey, C. Paul; Safran, Marilyn; Lancet, Doron
2013-01-01
Comprehensive disease classification, integration and annotation are crucial for biomedical discovery. At present, disease compilation is incomplete, heterogeneous and often lacking systematic inquiry mechanisms. We introduce MalaCards, an integrated database of human maladies and their annotations, modeled on the architecture and strategy of the GeneCards database of human genes. MalaCards mines and merges 44 data sources to generate a computerized card for each of 16 919 human diseases. Each MalaCard contains disease-specific prioritized annotations, as well as inter-disease connections, empowered by the GeneCards relational database, its searches and GeneDecks set analyses. First, we generate a disease list from 15 ranked sources, using disease-name unification heuristics. Next, we use four schemes to populate MalaCards sections: (i) directly interrogating disease resources, to establish integrated disease names, synonyms, summaries, drugs/therapeutics, clinical features, genetic tests and anatomical context; (ii) searching GeneCards for related publications, and for associated genes with corresponding relevance scores; (iii) analyzing disease-associated gene sets in GeneDecks to yield affiliated pathways, phenotypes, compounds and GO terms, sorted by a composite relevance score and presented with GeneCards links; and (iv) searching within MalaCards itself, e.g. for additional related diseases and anatomical context. The latter forms the basis for the construction of a disease network, based on shared MalaCards annotations, embodying associations based on etiology, clinical features and clinical conditions. This broadly disposed network has a power-law degree distribution, suggesting that this might be an inherent property of such networks. Work in progress includes hierarchical malady classification, ontological mapping and disease set analyses, striving to make MalaCards an even more effective tool for biomedical research. 
Database URL: http://www.malacards.org/ PMID:23584832
Skolem and pessimism about proof in mathematics.
Cohen, Paul J
2005-10-15
Attitudes towards formalization and proof have gone through large swings during the last 150 years. We sketch the development from Frege's first formalization, to the debates over intuitionism and other schools, through Hilbert's program and the decisive blow of the Gödel Incompleteness Theorem. A critical role is played by the Skolem-Löwenheim Theorem, which showed that no first-order axiom system can characterize a unique infinite model. Skolem himself regarded this as a body blow to the belief that mathematics can be reliably founded only on formal axiomatic systems. In a remarkably prescient paper, he even sketches the possibility of interesting new models for set theory itself, something later realized by the method of forcing. This is in contrast to Hilbert's belief that mathematics could resolve all its questions. We discuss the role of new axioms for set theory, questions in set theory itself, and their relevance for number theory. We then look in detail at what the methods of the predicate calculus, i.e. mathematical reasoning, really entail. The conclusion is that there is no reasonable basis for Hilbert's assumption. The vast majority of questions even in elementary number theory, of reasonable complexity, are beyond the reach of any such reasoning. Of course this cannot be proved and we present only plausibility arguments. The great success of mathematics comes from considering 'natural problems', those which are related to previous work and offer a good chance of being solved. The great glories of human reasoning, beginning with the Greek discovery of geometry, are in no way diminished by this pessimistic view. We end by wishing good health to present-day mathematics and the mathematics of many centuries to come.
Design prediction for long term stress rupture service of composite pressure vessels
NASA Technical Reports Server (NTRS)
Robinson, Ernest Y.
1992-01-01
Extensive stress rupture studies on glass composites and Kevlar composites were conducted by the Lawrence Radiation Laboratory beginning in the late 1960s and extending to about 8 years in some cases. Some of the data from these studies published over the years were incomplete or were tainted by spurious failures, such as grip slippage. Updated data sets were defined for both fiberglass and Kevlar composite strand test specimens. These updated data are analyzed in this report by a convenient form of the bivariate Weibull distribution, to establish a consistent set of design prediction charts that may be used as a conservative basis for predicting the stress rupture life of composite pressure vessels. The updated glass composite data exhibit an invariant Weibull modulus with lifetime. The data are analyzed in terms of homologous service load (referenced to the observed median strength). The equations relating life, homologous load, and probability are given, and corresponding design prediction charts are presented. A similar approach is taken for Kevlar composites, where the updated strand data do show a turndown tendency at long life accompanied by a corresponding change (increase) of the Weibull modulus. The turndown characteristic is not present in stress rupture test data of Kevlar pressure vessels. A modification of the stress rupture equations is presented to incorporate a latent, but limited, strength drop, and design prediction charts are presented that incorporate such behavior. The methods presented utilize Cartesian plots of the probability distributions (which are a more natural display for the design engineer), based on median normalized data that are independent of statistical parameters and are readily defined for any set of test data.
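The life-load-probability relation behind such design charts can be sketched with a two-parameter Weibull life distribution whose characteristic life follows a power law in the homologous load. All parameter values below are hypothetical placeholders, not the report's fitted values.

```python
import math

def stress_rupture_survival(t, load, t_ref=1.0, n_exp=20.0, m=1.5):
    """P(survive to time t) at a given homologous load (0 < load <= 1):
    Weibull in life with shape m (an invariant modulus, as noted for
    the glass data), and characteristic life scaling as a power law
    load**(-n_exp) of the homologous load.  Illustrative parameters."""
    t_char = t_ref * load ** (-n_exp)   # higher load -> shorter life
    return math.exp(-(t / t_char) ** m)
```

A design prediction chart is essentially this survival surface sampled over a grid of homologous loads and lifetimes and plotted on Cartesian probability axes.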
Setting practical conservation priorities for birds in the Western Andes of Colombia.
Ocampo-Peñuela, Natalia; Pimm, Stuart L
2014-10-01
We aspired to set conservation priorities in ways that lead to direct conservation actions. Very large-scale strategic mapping leads to familiar conservation priorities exemplified by biodiversity hotspots. In contrast, tactical conservation actions unfold on much smaller geographical extents and they need to reflect the habitat loss and fragmentation that have sharply restricted where species now live. Our aspirations for direct, practical actions were demanding. First, we identified the global, strategic conservation priorities and then downscaled to practical local actions within the selected priorities. In doing this, we recognized the limitations of incomplete information. We started such a process in Colombia and used the results presented here to implement reforestation of degraded land to prevent the isolation of a large area of cloud forest. We used existing range maps of 171 bird species to identify priority conservation areas that would conserve the greatest number of species at risk in Colombia. By at-risk species, we mean those that are endemic and have small ranges. The Western Andes had the highest concentrations of such species, 100 in total, but the lowest densities of national parks. We then adjusted the priorities for this region by refining these species' ranges, selecting only areas of suitable elevation and remaining habitat. The estimated ranges of these species shrank by 18-100% after accounting for habitat and suitable elevation. Setting conservation priorities on the basis of currently available range maps excluded priority areas in the Western Andes and, by extension, likely elsewhere and for other taxa. By incorporating detailed maps of remaining natural habitats, we made practical recommendations for conservation actions. One recommendation was to restore forest connections to a patch of cloud forest about to become isolated from the main Andes. © 2014 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and the actual dynamics. However, existing parametrizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle, the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion, or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors, and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. The adaptive approximation based controller achieved excellent tracking error minimization to the pre-specified level, while the baseline dynamic inversion controller failed to meet this performance specification.
The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller which is tuned to compensate for control surface failures fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.
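The grow-and-prune idea in this abstract can be sketched in miniature. The following toy Python class is an invented illustration, not the authors' algorithm: the class name, thresholds, learning rate, and the sin(x) stand-in for the inversion error are all assumptions, and a plain gradient-style weight update replaces the Lyapunov-derived training law.

```python
import math

class ToySORBFN:
    """Gaussian RBF approximator that grows a center where the current
    error is large and prunes centers whose weights stay negligible."""

    def __init__(self, width=0.5, add_tol=0.2, prune_tol=1e-3, lr=0.3):
        self.centers, self.weights = [], []
        self.width, self.add_tol = width, add_tol
        self.prune_tol, self.lr = prune_tol, lr

    def output(self, x):
        return sum(w * math.exp(-((x - c) / self.width) ** 2)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, error):
        if abs(error) > self.add_tol:          # grow: new center at the input
            self.centers.append(x)
            self.weights.append(0.0)
        for i, c in enumerate(self.centers):   # gradient-style weight update
            phi = math.exp(-((x - c) / self.width) ** 2)
            self.weights[i] += self.lr * error * phi
        kept = [(c, w) for c, w in zip(self.centers, self.weights)
                if abs(w) > self.prune_tol]
        if kept:                               # prune near-dead centers
            self.centers = [c for c, _ in kept]
            self.weights = [w for _, w in kept]

net = ToySORBFN()
xs = [i * 0.25 for i in range(-8, 9)]          # training inputs on [-2, 2]
for _ in range(80):                            # sin(x) mimics an inversion error
    for x in xs:
        net.update(x, math.sin(x) - net.output(x))

residual = max(abs(math.sin(x) - net.output(x)) for x in xs)
```

The network self-organizes: it starts empty, places centers only where the error demands them, and discards centers whose contribution has decayed, which is the size-control behavior the abstract describes.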
Relativistic well-tempered Gaussian basis sets for helium through mercury
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, S.; Matsuoka, O.
1989-10-01
Exponent parameters of the nonrelativistically optimized well-tempered Gaussian basis sets of Huzinaga and Klobukowski have been employed for Dirac-Fock-Roothaan calculations without reoptimization. For the light atoms He (atomic number Z = 2) through Rh (Z = 45), the number of exponent parameters used is the same as in the nonrelativistic basis sets; for the heavier atoms Pd (Z = 46) through Hg (Z = 80), two 2p (and three 3d) Gaussian basis functions have been augmented. The scheme of kinetic energy balance and the uniformly charged sphere model of atomic nuclei have been adopted. The quality of the calculated basis sets is close to the Dirac-Fock limit.
NASA Astrophysics Data System (ADS)
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
Performance of a number of density functional methods commonly used in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP, and X3LYP), together with the Hartree-Fock (HF) method, has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP, and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P, and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, obtained from a Fourier transform into momentum space of single point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties, and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions of this orbital occur in the small momentum region (i.e. large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different exchange-correlation (Vxc) functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP. 
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.
Plumley, Joshua A.; Dannenberg, J. J.
2011-01-01
We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer, with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-DFT molecular orbital calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise corrected PES. The calculated ΔEs with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, due to error compensation, the smaller basis sets gave the best results (in comparison to experimental and high level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. Since many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: 1) D95(d,p) with B3LYP, B97D, M06 or MPWB1K; 2) 6-311G(d,p) with B3LYP; 3) D95++(d,p) with B3LYP, B97D or MPWB1K; 4) 6-311++G(d,p) with B3LYP or B97D; and 5) aug-cc-pVDZ with M05-2X, M06-2X or X3LYP. PMID:21328398
Plumley, Joshua A; Dannenberg, J J
2011-06-01
We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1K; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
On the asteroidal jet-stream Flora A
NASA Technical Reports Server (NTRS)
Klacka, Jozef
1992-01-01
The problems of the virtual existence of Flora 1, separated from the rest of the Flora family, and of the jet-stream Flora A (Alfven 1969) are discussed in connection with observational selection effects. It is shown that observational selection effects operate as a whole and can be important in an incomplete observational data set.
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationships and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
Partial and Incomplete Voices: The Political and Three Early Childhood Teachers' Learning
ERIC Educational Resources Information Center
Henderson, Linda
2014-01-01
The early childhood-school relationship is reported as having points of separation and difference. In particular, early childhood teachers located in a school setting report experiencing a push-down effect. This paper reports on a participatory action research project involving three early childhood teachers working within an independent school.…
On Testability of Missing Data Mechanisms in Incomplete Data Sets
ERIC Educational Resources Information Center
Raykov, Tenko
2011-01-01
This article is concerned with the question of whether the missing data mechanism routinely referred to as missing completely at random (MCAR) is statistically examinable via a test for lack of distributional differences between groups with observed and missing data, and related consequences. A discussion is initially provided, from a formal logic…
Communication: Listening and Responding. Affective 4.0.
ERIC Educational Resources Information Center
Borgers, Sherry B., Comp.; Ward, G. Robert, Comp.
This module is designed to provide practice in listening effectively and in responding to messages sent by another. The module is divided into two sets of activities, the first being the formation of a triad enabling the student to investigate the following: do you listen, listening and the unrelated response, incomplete listening, listening for…
Alternative models of recreational off-highway vehicle site demand
Jeffrey Englin; Thomas Holmes; Rebecca Niell
2006-01-01
Off-highway vehicle use is a controversial recreation activity because it is incompatible with most other activities and is extremely hard on natural ecosystems. This study estimates utility theoretic incomplete demand systems for four off-highway vehicle sites. Since two sets of restrictions are equally consistent with...
NASA Astrophysics Data System (ADS)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-01
We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates of the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
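The counterpoise recipe behind the BSSE correction discussed above is simple to state in code. The sketch below is an illustration only: the function names are invented and all energies are placeholder values, not results from this study.

```python
# Counterpoise (CP) correction for basis set superposition error (BSSE):
# each monomer's energy is recomputed in the full dimer basis (with "ghost"
# functions on the partner), so the binding energy is not artificially
# deepened by one monomer borrowing the partner's basis functions.

HARTREE_TO_KCAL = 627.5095   # conversion factor, hartree -> kcal/mol

def uncorrected_binding_energy(e_dimer, e_a_own_basis, e_b_own_basis):
    """Raw binding energy: monomers evaluated in their own basis sets."""
    return e_dimer - e_a_own_basis - e_b_own_basis

def cp_binding_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """BSSE-corrected binding energy: monomers in the full dimer basis."""
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

# Hypothetical energies (hartree) for a symmetric dimer of identical monomers.
e_dimer = -378.700
e_mono_own = -189.340        # monomer in its own basis
e_mono_ghost = -189.342      # monomer in the dimer basis (lower: BSSE)

de_raw = uncorrected_binding_energy(e_dimer, e_mono_own, e_mono_own)
de_cp = cp_binding_energy(e_dimer, e_mono_ghost, e_mono_ghost)
de_cp_kcal = de_cp * HARTREE_TO_KCAL

# The CP-corrected binding is weaker (less negative) than the raw value;
# as the abstract notes, both estimates approach the same CBS limit.
```

Because the ghost-basis monomer energies are lower, the CP-corrected binding energy is always less negative than the raw one, and the gap between the two shrinks as the basis set grows, which is the behavior the abstract uses to validate the correction.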
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin
2013-08-22
For a set of eight equilibrium intermolecular complexes, it is discovered in this paper that the basis set limit (BSL) cannot be reached by aug-cc-pV5Z for three of the Minnesota density functionals: M06-L, M06-HF, and M11-L. In addition, the M06 and M11 functionals exhibit substantial, but less severe, difficulties in reaching the BSL. By using successively finer grids, it is demonstrated that this issue is not related to the numerical integration of the exchange-correlation functional. In addition, it is shown that the difficulty in reaching the BSL is not a direct consequence of the structure of the augmented functions in Dunning's basis sets, since modified augmentation yields similar results. By using a very large custom basis set, the BSL appears to be reached for the HF dimer for all of the functionals. As a result, it is concluded that the difficulties faced by several of the Minnesota density functionals are related to an interplay between the form of these functionals and the structure of standard basis sets. It is speculated that the difficulty in reaching the basis set limit is related to the magnitude of the inhomogeneity correction factor (ICF) of the exchange functional. A simple modification of the M06-L exchange functional that systematically reduces the basis set superposition error (BSSE) for the HF dimer in the aug-cc-pVQZ basis set is presented, further supporting the speculation that the difficulty in reaching the BSL is caused by the magnitude of the exchange functional ICF. In conclusion, the BSSE is plotted with respect to the internuclear distance of the neon dimer for two of the examined functionals.
"Antelope": a hybrid-logic model checker for branching-time Boolean GRN analysis
2011-01-01
Background In Thomas' formalism for modeling gene regulatory networks (GRNs), branching time, where a state can have more than one possible future, plays a prominent role. By representing a certain degree of unpredictability, branching time can model several important phenomena, such as (a) asynchrony, (b) incompletely specified behavior, and (c) interaction with the environment. Introducing more than one possible future for a state, however, creates a difficulty for ordinary simulators, because infinitely many paths may appear, limiting ordinary simulators to statistical conclusions. Model checkers for branching time, by contrast, are able to prove properties in the presence of infinitely many paths. Results We have developed Antelope ("Analysis of Networks through TEmporal-LOgic sPEcifications", http://turing.iimas.unam.mx:8080/AntelopeWEB/), a model checker for analyzing and constructing Boolean GRNs. Currently, software systems for Boolean GRNs use branching time almost exclusively for asynchrony. Antelope, by contrast, also uses branching time for incompletely specified behavior and environment interaction. We show the usefulness of modeling these two phenomena in the development of a Boolean GRN of the Arabidopsis thaliana root stem cell niche. There are two obstacles to a direct approach when applying model checking to Boolean GRN analysis. First, ordinary model checkers normally only verify whether or not a given set of model states has a given property. In comparison, a model checker for Boolean GRNs is preferable if it reports the set of states having a desired property. Second, for efficiency, the expressiveness of many model checkers is limited, resulting in the inability to express some interesting properties of Boolean GRNs. 
Antelope tries to overcome these two drawbacks: Apart from reporting the set of all states having a given property, our model checker can express, at the expense of efficiency, some properties that ordinary model checkers (e.g., NuSMV) cannot. This additional expressiveness is achieved by employing a logic extending the standard Computation-Tree Logic (CTL) with hybrid-logic operators. Conclusions We illustrate the advantages of Antelope when (a) modeling incomplete networks and environment interaction, (b) exhibiting the set of all states having a given property, and (c) representing Boolean GRN properties with hybrid CTL. PMID:22192526
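Branching time arising from asynchrony, as described in the Background, can be made concrete with a toy two-gene Boolean network. The sketch below is an invented example (not the Arabidopsis model from the paper): updating one gene at a time gives a single state several possible successors, which is exactly why single-path simulators cannot enumerate all behaviors.

```python
# Asynchronous semantics of a Boolean GRN: from a given state, each gene
# whose rule disagrees with its current value yields one possible successor.

def async_successors(state, rules):
    """All states reachable by updating exactly one gene."""
    succs = set()
    for i, rule in enumerate(rules):
        new_val = rule(state)
        if new_val != state[i]:
            succs.add(state[:i] + (new_val,) + state[i + 1:])
    return succs or {state}   # a fixed point is its own sole successor

# Mutual repression: each gene is ON iff the other is OFF.
rules = [lambda s: int(not s[1]),   # gene 0
         lambda s: int(not s[0])]   # gene 1

# From (1, 1) the asynchronous semantics branches into two futures,
# each of which happens to be a stable steady state of the network.
branching = async_successors((1, 1), rules)
```

A model checker explores all branches of this successor relation and can report, for example, the full set of states from which a given steady state is reachable, which is the set-reporting capability the Results section emphasizes.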
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. 
In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
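The ordering difference between Val-MI and MI-Val can be illustrated with a toy sketch, assuming single mean imputation as a stand-in for multiple imputation and one bootstrap draw as a stand-in for internal validation. The function names and data below are invented, not the authors' code.

```python
import random
import statistics

def impute_mean(rows):
    """Single mean imputation of the predictor column (toy stand-in for MI)."""
    observed = [x for x, _ in rows if x is not None]
    if not observed:            # nothing observed in this part: leave as-is
        return list(rows)
    mean = statistics.mean(observed)
    return [(x if x is not None else mean, y) for x, y in rows]

def bootstrap_split(rows, seed=0):
    """One bootstrap draw: in-bag rows train the model, out-of-bag rows test it."""
    rng = random.Random(seed)
    idx = [rng.randrange(len(rows)) for _ in rows]
    in_bag = [rows[i] for i in idx]
    out_of_bag = [rows[i] for i in range(len(rows)) if i not in set(idx)]
    return in_bag, out_of_bag

# (predictor, outcome) rows; None marks a missing predictor value.
data = [(1.0, 0), (None, 1), (2.5, 0), (3.0, 1), (None, 0), (4.2, 1)]

# Val-MI: split first, then impute training and test parts separately,
# so the imputation model never sees the held-out rows.
train, test = bootstrap_split(data)
train_imp, test_imp = impute_mean(train), impute_mean(test)

# MI-Val: impute the full data set first, then split; the test rows (and,
# unless the outcome is omitted as in MI(-y)-Val, the outcome itself) have
# already leaked into the imputation - the source of the optimistic bias.
full_imp = impute_mean(data)
train_leaky, test_leaky = bootstrap_split(full_imp)
```

In a real analysis the mean imputation would be replaced by M multiply imputed data sets and the single bootstrap draw by many resamples, but the leakage argument is unchanged: only the Val-MI ordering keeps the test part out of the imputation model.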
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Aprà, Edoardo; Xantheas, Sotiris S.
We establish a new estimate for the binding energy between two benzene molecules in the parallel-displaced (PD) conformation by systematically converging (i) the intra- and intermolecular geometry at the minimum, (ii) the expansion of the orbital basis set, and (iii) the level of electron correlation. The calculations were performed at the second-order Møller-Plesset perturbation (MP2) and the coupled cluster including singles, doubles, and a perturbative estimate of triples replacement [CCSD(T)] levels of electronic structure theory. At both levels of theory, by including results corrected for basis set superposition error (BSSE), we have estimated the complete basis set (CBS) limit by employing the family of Dunning's correlation-consistent polarized valence basis sets. The largest MP2 calculation was performed with the cc-pV6Z basis set (2772 basis functions), whereas the largest CCSD(T) calculation was with the cc-pV5Z basis set (1752 basis functions). The cluster geometries were optimized with basis sets up to quadruple-ζ quality, observing that both their intra- and intermolecular parts have practically converged with the triple-ζ quality sets. The use of converged geometries was found to play an important role in obtaining accurate estimates for the CBS limits. Our results demonstrate that the binding energies with the families of the plain (cc-pVnZ) and augmented (aug-cc-pVnZ) sets converge [within <0.01 kcal/mol for MP2 and <0.15 kcal/mol for CCSD(T)] to the same CBS limit. In addition, the average of the uncorrected and BSSE-corrected binding energies was found to converge to the same CBS limit much faster than either of its two constituents (the uncorrected or BSSE-corrected binding energies). 
Because the family of augmented basis sets (especially the larger sets) causes serious linear dependency problems, the plain basis sets (for which no linear dependencies were found) are deemed a more efficient and straightforward path to an accurate CBS limit. We considered extrapolations of the uncorrected (ΔE) and BSSE-corrected (ΔE_CP) binding energies, their average value (ΔE_ave), as well as the average of the latter over the plain and augmented sets (ΔẼ_ave), with the cardinal number of the basis set n. Our best estimate of the CCSD(T)/CBS limit for the π-π binding energy in the PD benzene dimer is De = -2.65 ± 0.02 kcal/mol. The best CCSD(T)/cc-pV5Z calculated value is -2.62 kcal/mol, just 0.03 kcal/mol away from the CBS limit. For comparison, the MP2/CBS limit estimate is -5.00 ± 0.01 kcal/mol, demonstrating a 90% overbinding with respect to CCSD(T). Finally, the spin-component-scaled (SCS) MP2 variant was found to closely reproduce the CCSD(T) results for each basis set, while scaled opposite spin (SOS) MP2 yielded results that are too low when compared to CCSD(T).
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
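Extrapolation procedures of the general kind compared here can be sketched as follows. The specific forms below (a three-point exponential with a fitted alpha and a two-point 1/n³ formula) are standard textbook choices assumed for illustration, and the sample energies are invented, not values from this study.

```python
import math

def cbs_exponential(e3, e4, e5):
    """Three-point fit of E(n) = E_CBS + A*exp(-alpha*n) for n = 3, 4, 5.
    Returns (E_CBS, alpha). The consecutive differences form a geometric
    series with ratio exp(-alpha), which gives a closed-form solution."""
    r = (e4 - e5) / (e3 - e4)            # equals exp(-alpha)
    alpha = -math.log(r)
    e_cbs = e5 - (e4 - e5) * r / (1.0 - r)
    return e_cbs, alpha

def cbs_inverse_cube(e4, e5):
    """Two-point 1/n^3 extrapolation using the n = 4 and n = 5 energies."""
    n1, n2 = 4, 5
    return (n2**3 * e5 - n1**3 * e4) / (n2**3 - n1**3)

# Hypothetical atomization energies (kcal/mol) for cardinal numbers 3, 4, 5.
e_tz, e_qz, e_5z = 123.0, 125.0, 125.5

e_cbs_exp, alpha = cbs_exponential(e_tz, e_qz, e_5z)
e_cbs_cube = cbs_inverse_cube(e_qz, e_5z)
```

With these invented numbers the fitted alpha is ln 4 ≈ 1.39; by the abstract's heuristic (alpha smaller than 3 or larger than 4.5), such a value would flag the basis set series as one for which the extrapolation may not be reliable.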
ERIC Educational Resources Information Center
Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.
2007-01-01
The analysis explains the basis set superposition error (BSSE) and the fragment relaxation involved in calculating interaction energies using various first-principles theories. Correlating the fragments and increasing the size of the basis set can help decrease the BSSE to a great extent.
Energy Efficient and QoS sensitive Routing Protocol for Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Saeed Tanoli, Tariq; Khalid Khan, Muhammad
2013-12-01
Efficient routing is an important part of wireless ad hoc networks. Since ad hoc networks have limited resources, there are many constraints, on bandwidth, battery consumption, processing cycles, and so on. Reliability is also necessary, since there is no allowance for invalid or incomplete information (and expired data is useless). Various protocols perform routing by considering one parameter while ignoring the others. In this paper we present a protocol that finds routes on the basis of the bandwidth, energy, and mobility of the nodes participating in the communication.
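The abstract does not specify how the three parameters are combined, so the sketch below assumes a simple composite score: a route's usable bandwidth is its bottleneck link, its lifetime is bounded by its lowest-energy node, and high node mobility is penalized. The weights, function name, and candidate routes are invented for illustration.

```python
def route_score(route, w_bw=0.5, w_energy=0.3, w_mobility=0.2):
    """route: list of (link_bandwidth, residual_energy, mobility) per hop,
    each value normalized to [0, 1]. Higher score = preferred route."""
    bottleneck_bw = min(hop[0] for hop in route)      # weakest link limits QoS
    min_energy = min(hop[1] for hop in route)         # weakest node limits lifetime
    avg_mobility = sum(hop[2] for hop in route) / len(route)
    return w_bw * bottleneck_bw + w_energy * min_energy - w_mobility * avg_mobility

# Two candidate routes between the same endpoints.
route_a = [(0.9, 0.8, 0.1), (0.7, 0.9, 0.2)]   # decent bandwidth, stable nodes
route_b = [(1.0, 0.2, 0.6), (0.9, 0.9, 0.7)]   # faster links, but weak mobile nodes

best = max([route_a, route_b], key=route_score)
```

Under this invented weighting, route_a wins despite route_b's higher raw bandwidth, reflecting the abstract's point that reliability (energy and mobility) must be weighed alongside bandwidth rather than ignored.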
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
NASA Astrophysics Data System (ADS)
Hariri, Maia; Wan, Justin W. L.
2012-02-01
Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
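The 2D-in-3D statistic described above, with region means computed per slice from the projected contour rather than globally over the volume, can be sketched as follows. The tiny synthetic volume, in which the cell "disappears" in the second slice, is an invented illustration, not the authors' implementation.

```python
def slice_means(volume, inside):
    """Per-slice mean intensity inside and outside the projected 2D contour,
    the region statistics used by the Chan-Vese-style energy on each slice."""
    means = []
    for img, mask in zip(volume, inside):
        ins, outs = [], []
        for row_v, row_m in zip(img, mask):
            for v, m in zip(row_v, row_m):
                (ins if m else outs).append(v)
        c_in = sum(ins) / len(ins) if ins else 0.0
        c_out = sum(outs) / len(outs) if outs else 0.0
        means.append((c_in, c_out))
    return means

# Slice 0 shows a bright cell in a 2x2 region; in slice 1 the cell has
# "disappeared" into uniform background, as in a broken cell path.
volume = [
    [[1.0 if 1 <= r <= 2 and 1 <= c <= 2 else 0.1 for c in range(4)]
     for r in range(4)],
    [[0.1 for _ in range(4)] for _ in range(4)],
]
contour = [[1 <= r <= 2 and 1 <= c <= 2 for c in range(4)] for r in range(4)]
inside = [contour, contour]        # same projected contour on every slice

means = slice_means(volume, inside)
```

On slice 0 there is strong inside/outside contrast driving the 2D segmentation; on slice 1 the contrast vanishes, so only the minimum-mean-curvature term of the 3D level-set surface can carry the contour across the gap, which is the bridging mechanism the abstract describes.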
The effect of diffuse basis functions on valence bond structural weights
NASA Astrophysics Data System (ADS)
Galbraith, John Morrison; James, Andrew M.; Nemes, Coleen T.
2014-03-01
Structural weights and bond dissociation energies have been determined for H-F, H-X, and F-X molecules (-X = -OH, -NH2, and -CH3) at the valence bond self-consistent field (VBSCF) and breathing orbital valence bond (BOVB) levels of theory with the aug-cc-pVDZ and 6-31++G(d,p) basis sets. At the BOVB level, the aug-cc-pVDZ basis set yields a counterintuitive ordering of ionic structural weights when the initial heavy atom s-type basis functions are included. For H-F, H-OH, and F-X, the ordering follows chemical intuition when these basis functions are not included. These counterintuitive weights are shown to be a result of the diffuse polarisation function on one VB fragment being spatially located, in part, on the other VB fragment. Except in the case of F-CH3, this problem is corrected with the 6-31++G(d,p) basis set. The initial heavy atom s-type functions are shown to make an important contribution to the VB orbitals and bond dissociation energies and, therefore, should not be excluded. It is recommended to not use diffuse basis sets in valence bond calculations unless absolutely necessary. If diffuse basis sets are needed, the 6-31++G(d,p) basis set should be used with caution and the structural weights checked against VBSCF values which have been shown to follow the expected ordering in all cases.
NASA Astrophysics Data System (ADS)
Park, H.; Han, C.; Gould, A.; Udalski, A.; Sumi, T.; Fouqué, P.; Choi, J.-Y.; Christie, G.; Depoy, D. L.; Dong, Subo; Gaudi, B. S.; Hwang, K.-H.; Jung, Y. K.; Kavka, A.; Lee, C.-U.; Monard, L. A. G.; Natusch, T.; Ngan, H.; Pogge, R. W.; Shin, I.-G.; Yee, J. C.; μFUN Collaboration; Szymański, M. K.; Kubiak, M.; Soszyński, I.; Pietrzyński, G.; Poleski, R.; Ulaczyk, K.; Pietrukowicz, P.; Kozłowski, S.; Skowron, J.; Wyrzykowski, Ł.; OGLE Collaboration; Abe, F.; Bennett, D. P.; Bond, I. A.; Botzler, C. S.; Chote, P.; Freeman, M.; Fukui, A.; Fukunaga, D.; Harris, P.; Itow, Y.; Koshimoto, N.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Namba, S.; Ohnishi, K.; Rattenbury, N. J.; Saito, To.; Sullivan, D. J.; Sweatman, W. L.; Suzuki, D.; Tristram, P. J.; Wada, K.; Yamai, N.; Yock, P. C. M.; Yonehara, A.; MOA Collaboration
2014-05-01
Characterizing a microlensing planet is done by modeling an observed lensing light curve. In this process, one is often confronted with different sets of lensing parameters that produce similar light curves, causing difficulties in uniquely interpreting the lens system; understanding the causes of the different types of degeneracy is therefore important. In this work, we show that incomplete coverage of a planetary perturbation can result in degenerate solutions even for events where the planetary signal is detected with a high level of statistical significance. We demonstrate the degeneracy for the observed event OGLE-2012-BLG-0455/MOA-2012-BLG-206. The peak of this high-magnification event (Amax ~ 400) exhibits very strong deviation from a point-lens model, with Δχ² ≳ 4000 for data sets with a total of 6963 measurements. From detailed modeling of the light curve, we find that the deviation can be explained by four distinct solutions, i.e., two very different sets of solutions, each with a twofold degeneracy. While the twofold (so-called close/wide) degeneracy is well understood, the degeneracy between the radically different solutions was not previously known. The model light curves of this degeneracy differ substantially in the parts that were not covered by observation, indicating that the degeneracy is caused by the incomplete coverage of the perturbation. It is expected that the frequency of this degeneracy will be greatly reduced with the improvement of current lensing survey and follow-up experiments and the advent of new surveys.
Real-Time Data Collection Using Text Messaging in a Primary Care Clinic.
Rai, Manisha; Moniz, Michelle H; Blaszczak, Julie; Richardson, Caroline R; Chang, Tammy
2017-12-01
The use of text messaging is nearly ubiquitous and represents a promising method of collecting data from diverse populations. The purpose of this study was to assess the feasibility and acceptability of text message surveys in a clinical setting and to describe key lessons to minimize attrition. We obtained a convenience sample of individuals who entered the waiting room of a low-income, primary care clinic. Participants were asked to answer between 17 and 30 survey questions on a variety of health-related topics, including both open- and closed-ended questions. Descriptive statistics were used to characterize the participants and determine the response rates. Bivariate analyses were used to identify predictors of incomplete surveys. Our convenience sample consisted of 461 individuals. Of those who attempted the survey, 80% (370/461) completed it in full. The mean age of respondents was 35.4 years (standard deviation = 12.4). Respondents were predominantly non-Hispanic black (42%) or non-Hispanic white (41%), female (75%), and with at least some college education (70%). Of those who completed the survey, 84% (312/370) reported willingness to do another text message survey. Those with incomplete surveys answered a median of nine questions before stopping. Smartphone users were less likely to leave the survey incomplete compared with non-smartphone users (p = 0.004). Text-message surveys are a feasible and acceptable method to collect real-time data among low-income, clinic-based populations. Offering participants a setting for immediate survey completion, minimizing survey length, simplifying questions, and allowing "free text" responses for all questions may optimize response rates.
On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide
NASA Astrophysics Data System (ADS)
Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.
Ab initio quantum chemical calculations have been used to study the differences in energy between two gas-phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and the stabilization of the conformer. The energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.
Risk factor assessment of endoscopically removed malignant colorectal polyps.
Netzer, P; Forster, C; Biral, R; Ruchti, C; Neuweiler, J; Stauffer, E; Schönegg, R; Maurer, C; Hüsler, J; Halter, F; Schmassmann, A
1998-11-01
Malignant colorectal polyps are defined as endoscopically removed polyps with cancerous tissue that has invaded the submucosa. Various histological criteria exist for managing these patients. To determine the significance of histological findings in patients with malignant polyps, five pathologists reviewed the specimens of 85 patients initially diagnosed with malignant polyps. High-risk malignant polyps were defined as having at least one of the following: incomplete polypectomy, a margin not clearly cancer-free, lymphatic or venous invasion, or grade III carcinoma. Adverse outcome was defined as residual cancer in a resection specimen or local or metastatic recurrence in the follow-up period (mean 67 months). Malignant polyps were confirmed in 70 cases. Among the 32 low-risk malignant polyps, no adverse outcomes occurred; 16 (42%) of the 38 patients with high-risk polyps had adverse outcomes (p<0.001). The independent adverse risk factors were incomplete polypectomy and a resected margin not clearly cancer-free; all other risk factors were associated with adverse outcome only in combination. As no patients with low-risk malignant polyps had adverse outcomes, polypectomy alone seems sufficient in these cases. In the high-risk group, surgery is recommended when either of the two independent risk factors, incomplete polypectomy or a resection margin not clearly cancer-free, is present, or when there is a combination of other risk factors. As lymphatic or venous invasion and grade III cancer were not associated with adverse outcomes when present as the sole risk factor, operations in such cases should be assessed individually on the basis of surgical risk.
On the optimization of Gaussian basis sets
NASA Astrophysics Data System (ADS)
Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.
2003-01-01
A new procedure for the optimization of the exponents, α_j, of Gaussian basis functions, Y_lm(ϑ,φ) r^l exp(-α_j r²), is proposed and evaluated. Direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expanding the logarithms of the exponents in the orthonormal Legendre polynomials, P_k, of the index, j,

ln α_j = Σ_{k=0}^{k_max} A_k P_k( (2j − 2)/(N_prim − 1) − 1 ),

yields a new set of well-conditioned parameters, A_k, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set (k_max = 1) to a fully optimized basis set (k_max = N_prim − 1). The error relative to the exact numerical self-consistent-field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
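The expansion above is easy to evaluate directly: given the coefficients A_k, each exponent follows from summing Legendre polynomials at the scaled index. The sketch below is a minimal illustration of the parameterization, not the paper's optimization code, and the coefficient values are arbitrary.

```python
import math

def legendre(k, x):
    # Three-term recurrence for the Legendre polynomial P_k(x)
    if k == 0:
        return 1.0
    if k == 1:
        return x
    p0, p1 = 1.0, x
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def exponents(coeffs, nprim):
    # ln(alpha_j) = sum_k A_k * P_k((2j - 2)/(N_prim - 1) - 1), j = 1..N_prim
    alphas = []
    for j in range(1, nprim + 1):
        x = (2 * j - 2) / (nprim - 1) - 1.0
        ln_a = sum(a * legendre(k, x) for k, a in enumerate(coeffs))
        alphas.append(math.exp(ln_a))
    return alphas

# k_max = 1 (two coefficients, values arbitrary) reproduces the
# even-tempered limit: log-exponents linear in j, so ratios are constant.
ev = exponents([2.0, 3.0], 6)
ratios = [ev[j + 1] / ev[j] for j in range(5)]
```

With k_max = 1 the log-exponents are linear in the index, so successive exponent ratios are constant, which is the even-tempered starting point of the optimization sequence described in the abstract.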
Zhu, Wuming; Trickey, S B
2017-12-28
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon an analysis of the underlying physics of atoms in B fields that allows identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine-tuning of the fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on the B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include an adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, over a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mhartree for multi-electron atoms. The relative errors are similar for different atoms and ions over a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, signaling the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods can seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily judged. The application of midbond orbitals is used to determine a general method for calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy, and efficiency, we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Møller-Plesset (MP) perturbation theory as an efficient electron-correlation method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of previous theoretical studies. The added efficiency of extending the basis sets by conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. Interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular alternative correlation methods. We find that MP theory appears to be superior to coupled-cluster (CC) theory in speed, while being more stable than B3LYP, a widely used density functional theory (DFT) method. Application of our general method yields excellent results for the midbond basis sets.
Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
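The counterpoise scheme referenced above amounts to simple bookkeeping: the interaction energy is the dimer energy minus each monomer's energy computed in the full dimer basis (ghost functions included), so the basis set superposition error largely cancels. A minimal sketch with illustrative, not computed, energies in hartree:

```python
def cp_interaction_energy(e_dimer, e_mono_a_ghost, e_mono_b_ghost):
    """Boys-Bernardi full counterpoise correction: both monomer energies
    are evaluated in the complete dimer basis (with ghost functions on the
    partner's sites), so basis set superposition error cancels."""
    return e_dimer - e_mono_a_ghost - e_mono_b_ghost

# Illustrative values in hartree (hypothetical, not results from the thesis)
e_int = cp_interaction_energy(-2.3468, -1.1732, -1.1733)  # approx -3e-4 Eh
```

The same three-energy bookkeeping applies whatever correlation method (MP, CC, DFT) produces the individual energies.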
Timing of silicone stent removal in patients with post-tuberculosis bronchial stenosis
Eom, Jung Seop; Kim, Hojoong; Park, Hye Yun; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O. Jung
2013-01-01
CONTEXT: In patients with post-tuberculosis bronchial stenosis (PTBS), the severity of bronchial stenosis affects the restenosis rate after the silicone stent is removed. In PTBS patients with incomplete bronchial obstruction, who had a favorable prognosis, the timing of stent removal to ensure airway patency is not clear. AIMS: We evaluated the time for silicone stent removal in patients with incomplete PTBS. SETTINGS AND DESIGN: A retrospective study examined PTBS patients who underwent stenting and removal of a silicone stent. METHODS: Incomplete bronchial stenosis was defined as PTBS other than total bronchial obstruction, which had a luminal opening at the stenotic segment on bronchoscopic intervention. The duration of stenting was defined as the interval from stent insertion to removal. The study included 44 PTBS patients and the patients were grouped at intervals of 6 months according to the duration of stenting. RESULTS: Patients stented for more than 12 months had a significantly lower restenosis rate than those stented for less than 12 months (4% vs. 35%, P = 0.009). Multiple logistic regression revealed an association between stenting for more than 12 months and a low restenosis rate (odds ratio 12.095; 95% confidence interval 1.097-133.377). Moreover, no restenosis was observed in PTBS patients when the stent was placed more than 14 months previously. CONCLUSIONS: In patients with incomplete PTBS, stent placement for longer than 12 months reduced restenosis after stent removal. PMID:24250736
Metheny, Leland; Eid, Saada; Lingas, Karen; Ofir, Racheli; Pinzur, Lena; Meyerson, Howard; Lazarus, Hillard M.; Huang, Alex Y.
2018-01-01
Late-term complications of hematopoietic cell transplantation (HCT) are numerous and include incomplete engraftment. One possible mechanism of incomplete engraftment after HCT is cytokine-mediated suppression or dysfunction of the bone marrow microenvironment. Mesenchymal stromal cells (MSCs) elaborate cytokines that nurture or stimulate the marrow microenvironment by several mechanisms. We hypothesize that the administration of exogenous MSCs may modulate the bone marrow milieu and improve peripheral blood count recovery in the setting of incomplete engraftment. In the current study, we demonstrated that posttransplant intramuscular administration of human placenta-derived mesenchymal-like adherent stromal cells [PLacental eXpanded (PLX)-R18] harvested from a three-dimensional in vitro culture system improved posttransplant engraftment of the human immune compartment in an immune-deficient murine transplantation model. As measured by the percentage of CD45+ cell recovery, we observed improvement in the peripheral blood counts at weeks 6 (8.4 vs. 24.1%, p < 0.001) and 8 (7.3 vs. 13.1%, p < 0.05) and in the bone marrow at week 8 (28 vs. 40.0%, p < 0.01) in the PLX-R18 cohort. As measured by the percentage of CD19+ cell recovery, there was improvement at weeks 6 (12.6 vs. 3.8%) and 8 (10.1 vs. 4.1%). These results suggest that PLX-R18 may have a therapeutic role in improving incomplete engraftment after HCT. PMID:29520362
Basis set limit and systematic errors in local-orbital based all-electron DFT
NASA Astrophysics Data System (ADS)
Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias
2006-03-01
With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAOs) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while offering noticeably higher accuracy if required: minimal-sized, effective tight-binding-like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on an equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using ≤ 50 basis functions per atom in all cases. We also find a clear correlation between the errors arising from underconverged basis sets and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990); ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculations of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
Benchmark of Ab Initio Bethe-Salpeter Equation Approach with Numeric Atom-Centered Orbitals
NASA Astrophysics Data System (ADS)
Liu, Chi; Kloppenburg, Jan; Kanai, Yosuke; Blum, Volker
The Bethe-Salpeter equation (BSE) approach based on the GW approximation has been shown to be successful in predicting the optical spectra of solids and, recently, of small molecules as well. We here present an all-electron implementation of the BSE using numeric atom-centered orbital (NAO) basis sets. In this work, we benchmark the BSE as implemented in FHI-aims for the low-lying excitation energies of a set of small organic molecules, the well-known Thiel set. The difference between our implementation (using an analytic continuation of the GW self-energy on the real axis) and results generated by a fully frequency-dependent GW treatment on the real axis is on the order of 0.07 eV for the benchmark molecular set. We study the convergence behavior toward the complete basis set limit for excitation spectra, using a group of valence-correlation-consistent NAO basis sets (NAO-VCC-nZ) as well as standard NAO basis sets for ground-state DFT with extended augmentation functions (NAO+aug). The BSE results and convergence behavior are compared to linear-response time-dependent DFT, for which excellent numerical convergence is shown with the NAO+aug basis sets.
ERIC Educational Resources Information Center
Parcover, Jason; Mays, Sally; McCarthy, Amy
2015-01-01
The mental health needs of college students are placing increasing demands on counseling center resources, and traditional outreach efforts may be outdated or incomplete. The public health model provides an approach for reaching more students, decreasing stigma, and addressing mental health concerns before they reach crisis levels. Implementing a…
Fire potential rating for wildland fuelbeds using the Fuel Characteristic Classification System.
David V. Sandberg; Cynthia L. Riccardi; Mark D. Schaff
2007-01-01
The Fuel Characteristic Classification System (FCCS) is a systematic catalog of inherent physical properties of wildland fuelbeds that allows land managers, policymakers, and scientists to build and calculate fuel characteristics with complete or incomplete information. The FCCS is equipped with a set of equations to calculate the potential of any real-world or...
Sex-oriented stable matchings of the marriage problem with correlated and incomplete information
NASA Astrophysics Data System (ADS)
Caldarelli, Guido; Capocci, Andrea; Laureti, Paolo
2001-10-01
In the stable marriage problem two sets of agents must be paired according to mutual preferences, which may happen to conflict. We present two generalizations of its sex-oriented version, aiming to take into account correlations between the preferences of agents and costly information. Their effects are investigated both numerically and analytically.
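For readers unfamiliar with the underlying problem, the classic Gale-Shapley deferred-acceptance algorithm produces the stable matching that is optimal for the proposing side and pessimal for the other, which is exactly the asymmetry that makes "sex-oriented" versions of the problem interesting. A minimal sketch (the preference lists are illustrative, not taken from the paper):

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Returns the stable matching
    that is optimal for the proposing side (men) and pessimal for women."""
    free = list(men_prefs)                 # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    # Precompute each woman's ranking of the men (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]   # m's best not-yet-tried woman
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

# Illustrative 3x3 instance (hypothetical preferences)
men = {"a": ["X", "Y", "Z"], "b": ["Y", "X", "Z"], "c": ["X", "Z", "Y"]}
women = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"], "Z": ["c", "a", "b"]}
match = gale_shapley(men, women)
```

Running the same instance with women proposing generally yields a different stable matching, which is the source of the conflict of interest studied in sex-oriented formulations.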
The Inversion of Ionospheric/plasmaspheric Electron Density From GPS Beacon Observations
NASA Astrophysics Data System (ADS)
Zou, Y. H.; Xu, J. S.; Ma, S. Y.
Reconstructing the ionospheric/plasmaspheric electron density, Ne, from ground-based GPS beacon measurements is a space-time 4-D tomography problem. The mathematical foundation of such inversion is studied in this paper, and some simulation results of reconstruction for GPS network observations are presented. Assuming, reasonably, a power-law dependence of Ne on time with an index of 1-3 during one observational period of GPS (60-90 min), the 4-D inversion in consideration reduces to a 3-D cone-beam tomography with incomplete projections. To see clearly the effects of this incompleteness on the quality of the reconstruction in the 3-D case, we derive theoretically the formulae of 3-D parallel-beam tomography. After establishing the mathematical basis, we adopt a linear temporal dependence of Ne and voxel elemental functions to perform simulations of Ne reconstruction with the help of the IRI90 model. Reasonable time-dependent 3-D images of ionospheric/plasmaspheric electron density distributions are obtained when a proper layout of the GPS network is taken and variable resolutions in the vertical are allowed.
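Voxel-based tomographic inversion with incomplete projections of this kind is commonly attacked with row-action methods. As an illustration of the general idea (not the authors' algorithm), a basic Kaczmarz/ART sweep over the ray equations A x = b, where each row of A holds a ray's path lengths through the voxels and b holds the integrated (slant) measurements:

```python
def art(A, b, n_iter=50, relax=0.5):
    """Kaczmarz/ART: repeatedly project the current estimate onto each
    ray's hyperplane. For a consistent system this converges to a solution
    even when the projection geometry is incomplete."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            if norm2 == 0.0:
                continue                    # ray misses every voxel
            c = relax * (bi - dot) / norm2
            x = [xi + c * r for xi, r in zip(x, row)]
    return x

# Toy 2-voxel system with two rays (illustrative geometry, unit path lengths)
A = [[1.0, 1.0], [1.0, 0.0]]
b = [3.0, 1.0]          # consistent with voxel densities x = [1, 2]
x = art(A, b, n_iter=200, relax=1.0)
```

Real ionospheric implementations add regularization and a background model (the abstract uses IRI90) because the ray geometry leaves some voxels poorly constrained; this sketch omits both.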
NASA Astrophysics Data System (ADS)
Kurniati, D. R.; Rohman, I.
2018-05-01
This study aims to analyze the concepts and science process skills in the bomb calorimeter experiment as a basis for developing a virtual laboratory of the bomb calorimeter. The study employed the research and development (R&D) method to answer the proposed problems. This paper discusses the concept and process-skill analysis. The essential concepts and process skills associated with the bomb calorimeter are analyzed by optimizing the bomb calorimeter experiment. The concept analysis found seven fundamental concepts to be addressed in developing the virtual laboratory: internal energy, heat of combustion, perfect combustion, incomplete combustion, the calorimeter constant, the bomb calorimeter, and the Black principle. Since the concepts of the bomb calorimeter and of perfect and incomplete combustion represent the real situation and contain controllable variables, they are displayed in the virtual laboratory as simulations; the remaining concepts are presented as animations because they involve no variables to control. The process-skill analysis identified four notable skills to be developed: the ability to observe, to design experiments, to interpret, and to communicate.
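The calorimeter-constant and heat-of-combustion relations that such a simulation would be built on are simple arithmetic: calibrate the constant from a standard substance, then convert a measured temperature rise into a molar heat of combustion. A minimal sketch with illustrative, not measured, numbers:

```python
def calorimeter_constant(q_released, delta_t):
    # C_cal = q / dT: heat released by a calibration standard
    # (e.g. benzoic acid) divided by the observed temperature rise.
    return q_released / delta_t

def heat_of_combustion(c_cal, delta_t, moles):
    # Constant-volume combustion: molar internal-energy change is
    # negative for an exothermic reaction.
    return -c_cal * delta_t / moles

# Illustrative numbers (hypothetical): the calibration run releases
# 26.4 kJ and raises the bath temperature by 2.2 K.
c_cal = calorimeter_constant(26.4, 2.2)      # kJ/K
du = heat_of_combustion(c_cal, 3.0, 0.01)    # kJ/mol for a 0.01 mol sample
```

The Black-principle bookkeeping (heat lost by the reaction equals heat gained by the calorimeter assembly) is what justifies lumping bomb, water, and stirrer into the single constant C_cal.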
Klimm, Wojciech; Kade, Grzegorz; Spaleniak, Sebastian; Dubchak, Ivanna; Niemczyk, Stanisław
2014-07-01
Diagnosis of renal tubular disorders can often be difficult. An incomplete form of distal renal tubular acidosis (dRTA) in the course of Graves' disease was recognized de novo in a young woman hospitalized with a profound serum potassium deficiency complicated by cardiac arrest. A series of tests assessing the types and severity of the water-electrolyte, acid-base, and thyroid disorders was performed during a complex diagnostic work-up. During treatment of the acute phase of the disease, we intensified efforts to maintain basic life functions and to eliminate the profound water-electrolyte disturbances. In the second phase of treatment, we determined the underlying cause of the disease, recognized dRTA, and introduced specific long-term electrolyte and hormonal therapy. To confirm the diagnosis, an oral ammonium chloride loading test (Wrong-Davies test) was performed. After completion of the diagnostic and therapeutic process, the patient was placed under outpatient nephrological supervision. The basic drug for the therapy was sodium citrate. After a year of observation and continued treatment, we evaluated the therapeutic results as good and durable.
Intrarectal pressures and balloon expulsion related to evacuation proctography.
Halligan, S; Thomas, J; Bartram, C
1995-01-01
Seventy-four patients with constipation were examined by standard evacuation proctography and then attempted to expel a small, non-deformable rectal balloon connected to a pressure transducer to measure intrarectal pressure. Simultaneous imaging related the intrarectal position of the balloon to rectal deformity. Inability to expel the balloon was associated proctographically with prolonged evacuation, incomplete evacuation, reduced anal canal diameter, and acute anorectal angulation during evacuation. The presence and size of rectocoele or intussusception was unrelated to voiding of paste or balloon. An independent linear combination of pelvic floor descent and evacuation time on proctography correctly predicted maximum intrarectal pressure in 74% of cases. No patient with both prolonged evacuation and reduced pelvic floor descent on proctography could void the balloon, as maximum intrarectal pressure was reduced in this group. A prolonged evacuation time on proctography, in combination with reduced pelvic floor descent, suggests that a defecatory disorder may be caused by inability to raise intrarectal pressure. A diagnosis of anismus should not be made on proctography solely on the basis of incomplete/prolonged evacuation, as this may simply reflect inadequate straining. PMID:7672656
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Faegri, Knut, Jr.
1990-01-01
The paper investigates bounds failure in calculations using Gaussian basis sets for the solution of the one-electron Dirac equation for the 2p1/2 state of Hg(79+). It is shown that bounds failure indicates inadequacies in the basis set, both in terms of the exponent range and the number of functions. It is also shown that overrepresentation of the small component space may lead to unphysical results. It is concluded that it is important to use matched large and small component basis sets with an adequate size and exponent range.
Ab Initio and Analytic Intermolecular Potentials for Ar-CF₄
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vayner, Grigoriy; Alexeev, Yuri; Wang, Jiangping
2006-03-09
Ab initio calculations at the CCSD(T) level of theory are performed to characterize the Ar + CF₄ intermolecular potential. Extensive calculations, with and without a correction for basis set superposition error (BSSE), are performed with the cc-pVTZ basis set. Additional calculations are performed with other correlation consistent (cc) basis sets to extrapolate the Ar–CF₄ potential energy minimum to the complete basis set (CBS) limit. Both the size of the basis set and BSSE have substantial effects on the Ar + CF₄ potential. Calculations with the cc-pVTZ basis set and without a BSSE correction appear to give a good representation of the potential at the CBS limit with a BSSE correction. In addition, MP2 theory is found to give potential energies in very good agreement with those determined by the much higher level CCSD(T) theory. Two analytic potential energy functions were determined for Ar + CF₄ by fitting the cc-pVTZ calculations both with and without a BSSE correction. These analytic functions were written as a sum of two-body potentials, and excellent fits to the ab initio potentials were obtained by representing each two-body interaction as a Buckingham potential.
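The pairwise-sum form of such an analytic potential can be sketched as follows. The tetrahedral geometry and the Buckingham parameters (A, B, C) below are illustrative placeholders, not the fitted values from this work:

```python
import numpy as np

def buckingham(r, A, B, C):
    """Buckingham two-body potential V(r) = A*exp(-B*r) - C/r**6."""
    return A * np.exp(-B * r) - C / r**6

def pair_sum_potential(ar_pos, atoms, params):
    """Total Ar...CF4 interaction as a sum of Ar-atom two-body terms."""
    return sum(buckingham(np.linalg.norm(ar_pos - xyz), *params[el])
               for el, xyz in atoms)

# Illustrative tetrahedral CF4 geometry (C-F bond ~1.32 angstrom).
d = 1.32
s = d / np.sqrt(3.0)
cf4 = [("C", np.zeros(3)),
       ("F", np.array([ s,  s,  s])),
       ("F", np.array([ s, -s, -s])),
       ("F", np.array([-s,  s, -s])),
       ("F", np.array([-s, -s,  s]))]
# Placeholder (A, B, C) parameters, NOT the fitted values from this work.
params = {"C": (1000.0, 3.0, 50.0), "F": (800.0, 3.2, 30.0)}

v = pair_sum_potential(np.array([0.0, 0.0, 3.5]), cf4, params)
```

With real fitted parameters, the same pairwise sum evaluated on a grid of Ar positions reproduces the full intermolecular surface.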
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method with (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed ΣD(e) values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings the mean absolute error down to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirkov, Leonid; Makarewicz, Jan, E-mail: jama@amu.edu.pl
An ab initio intermolecular potential energy surface (PES) has been constructed for the benzene-krypton (BKr) van der Waals (vdW) complex. The interaction energy has been calculated at the coupled cluster level of theory with single, double, and perturbatively included triple excitations using different basis sets. As a result, a few analytical PESs of the complex have been determined. They allowed a prediction of the complex structure and its vibrational vdW states. The vibrational energy level pattern exhibits a distinct polyad structure. Comparison of the equilibrium structure, the dipole moment, and vibrational levels of BKr with their experimental counterparts has allowed us to design an optimal basis set composed of a small Dunning’s basis set for the benzene monomer, a larger effective core potential adapted basis set for Kr and additional midbond functions. Such a basis set yields vibrational energy levels that agree very well with the experimental ones as well as with those calculated from the available empirical PES derived from the microwave spectra of the BKr complex. The basis proposed can be applied to larger complexes including Kr because of a reasonable computational cost and accurate results.
Polarized atomic orbitals for self-consistent field electronic structure calculations
NASA Astrophysics Data System (ADS)
Lee, Michael S.; Head-Gordon, Martin
1997-12-01
We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear-combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of experimental double-beta (ββ) decay half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure, and decay data sets. The first-digit distribution trend for ββ-decay T1/2(2ν) values is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
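A first-digit check against Benford's law of the kind applied here can be sketched in a few lines (a generic implementation, not the authors' code):

```python
import math
from collections import Counter

def first_digit(x):
    """Leading nonzero decimal digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_expected():
    """Benford's law: P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    """Observed first-digit frequencies of a data set."""
    counts = Counter(first_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    return {d: counts.get(d, 0) / n for d in range(1, 10)}
```

Comparing `first_digit_freqs` of a half-life compilation with `benford_expected` (e.g. via a chi-square statistic) is the consistency check described in the abstract.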
Manoharan, Sujatha C; Ramakrishnan, Swaminathan
2009-10-01
In this work, prediction of forced expiratory volume in the pulmonary function test, carried out using spirometry and neural networks, is presented. The pulmonary function data were recorded from volunteers using a commercially available flow-volume spirometer with a standard acquisition protocol. Radial basis function (RBF) neural networks were used to predict forced expiratory volume in 1 s (FEV1) from the recorded flow-volume curves. The optimal centres of the hidden layer of the radial basis function were determined by the k-means clustering algorithm. The performance of the neural network model was evaluated by computing prediction error statistics (average value, standard deviation, root mean square) and their correlation with the true data for normal, restrictive, and obstructive cases. Results show that the adopted neural networks are capable of predicting FEV1 in both normal and abnormal cases. Prediction accuracy was higher for obstructive abnormality than for restrictive cases. This method of assessment appears useful for diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
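The RBF-network-with-k-means pipeline can be sketched as follows. This is a minimal stand-in: plain Lloyd's k-means, a toy 1-D regression target in place of the flow-volume-to-FEV1 mapping, and an RBF width chosen by hand, since none of these details are given in the abstract:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means to choose the RBF centres (an assumption;
    the abstract does not specify the k-means variant)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((X[:, None] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2))

# Toy 1-D stand-in for the flow-volume-to-FEV1 regression.
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])

centres = kmeans(X, k=10)
Phi = rbf_design(X, centres, width=0.1)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ weights
```

The output weights are fitted here by least squares, which is the standard training rule for an RBF network once the centres are fixed.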
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration method for 3D curves (composed of sets of 3D points). The method was evaluated on the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of the Coherent Point Drift (CPD) algorithm and Thin-Plate Spline (TPS) semilandmarks. CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data are incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
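The TPS machinery used both to generate T1 and to define T2 can be sketched as a standard thin-plate spline interpolator; the version below is a generic 2-D implementation with the usual r² log r kernel (the study itself works with 3-D centrelines, and the control points here are hypothetical):

```python
import numpy as np

def _kernel(d):
    # TPS kernel U(r) = r^2 log r, with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0, d**2 * np.log(d), 0.0)

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline that maps src control points onto dst."""
    n = len(src)
    K = _kernel(np.linalg.norm(src[:, None] - src[None], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]          # kernel weights, affine part

def tps_apply(pts, src, w, a):
    """Evaluate the fitted spline at arbitrary points."""
    U = _kernel(np.linalg.norm(pts[:, None] - src[None], axis=-1))
    return a[0] + pts @ a[1:] + U @ w

# Hypothetical control points and their deformed targets (a T1 analogue).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
dst = np.array([[0.0, 0.05], [1.1, 0.0], [0.0, 0.9], [1.05, 1.1], [0.55, 0.3]])
w, a = tps_fit(src, dst)
mapped = tps_apply(src, src, w, a)   # a TPS interpolates its control points
```

Measuring registration error then amounts to applying two such transforms (T1 and T2) to the same points and taking RMS and mean distances, as described above.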
Medical technology at home: safety-related items in technical documentation.
Hilbers, Ellen S M; de Vries, Claudette G J C A; Geertsma, Robert E
2013-01-01
This study aimed to investigate the technical documentation of manufacturers on issues of safe use of their devices in a home setting. Three categories of equipment were selected: infusion pumps, ventilators, and dialysis systems. Risk analyses, instructions for use, labels, and post-market surveillance procedures were requested from manufacturers. Additionally, manufacturers were asked to fill out a questionnaire on the collection of field experience, on incidents, and on training activities. Specific risks of device operation by lay users in a home setting were incompletely addressed in the risk analyses. A substantial number of user manuals were designed for professionals rather than for patients or lay carers. Risk analyses and user information often showed incomplete coherence. Post-market surveillance was mainly based on passive collection of field experiences. Manufacturers of infusion pumps, ventilators, and dialysis systems pay insufficient attention to the specific risks of use by lay persons in home settings. This conclusion is expected to apply to other medical equipment for treatment at home as well. Manufacturers of medical equipment for home use should pay more attention to use errors, lay use, and home-specific risks in design, risk analysis, and user information. Field experiences should be collected more actively. Coherence between risk analysis and user information should be improved. Notified bodies should address these aspects in their assessments. User manuals issued by institutions supervising a specific home therapy should be drawn up in consultation with the manufacturer.
Code of Federal Regulations, 2010 CFR
2010-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule basis...
Higher Order Bases in a 2D Hybrid BEM/FEM Formulation
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.
2002-01-01
The advantages of using higher order, interpolatory basis functions are examined in the analysis of transverse electric (TE) plane wave scattering by homogeneous, dielectric cylinders. A boundary-element/finite-element (BEM/FEM) hybrid formulation is employed in which the interior dielectric region is modeled with the vector Helmholtz equation, and a radiation boundary condition is supplied by an Electric Field Integral Equation (EFIE). An efficient method of handling the singular self-term arising in the EFIE is presented. The iterative solution of the partially dense system of equations is obtained using the Quasi-Minimal Residual (QMR) algorithm with an Incomplete LU Threshold (ILUT) preconditioner. Numerical results are shown for the case of an incident wave impinging upon a square dielectric cylinder. The convergence of the solution is shown versus the number of unknowns as a function of the completeness order of the basis functions.
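The iterative solver setup described here can be sketched with SciPy's `qmr` and `spilu` (incomplete LU with a drop threshold) standing in for the paper's solver; the tridiagonal test matrix below is a generic placeholder for the partially dense BEM/FEM system, which is not available:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic sparse, nonsymmetric test system standing in for the partially
# dense BEM/FEM matrix of the paper.
n = 300
A = sp.diags([-1.0, 4.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU with a drop threshold (ILUT-style) as the preconditioner.
ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10.0)
M1 = spla.LinearOperator((n, n),
                         matvec=lambda v: ilu.solve(v),
                         rmatvec=lambda v: ilu.solve(v, "T"))
identity = spla.LinearOperator((n, n), matvec=lambda v: v, rmatvec=lambda v: v)

# QMR needs transpose products, hence the rmatvec definitions above.
x, info = spla.qmr(A, b, M1=M1, M2=identity)  # info == 0 on convergence
```

The explicit `rmatvec` is the detail that distinguishes QMR from transpose-free solvers such as GMRES: both the operator and the preconditioner must support products with their transposes.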
Projected Hybrid Orbitals: A General QM/MM Method
2015-01-01
A projected hybrid orbital (PHO) method was described to model the covalent boundary in a hybrid quantum mechanical and molecular mechanical (QM/MM) system. The PHO approach can be used in ab initio wave function theory and in density functional theory with any basis set without introducing system-dependent parameters. In this method, a secondary basis set on the boundary atom is introduced to formulate a set of hybrid atomic orbitals. The primary basis set on the boundary atom used for the QM subsystem is projected onto the secondary basis to yield a representation that provides a good approximation to the electron-withdrawing power of the primary basis set to balance electronic interactions between QM and MM subsystems. The PHO method has been tested on a range of molecules and properties. Comparison with results obtained from QM calculations on the entire system shows that the present PHO method is a robust and balanced QM/MM scheme that preserves the structural and electronic properties of the QM region. PMID:25317748
Mulliken, John B; LaBrie, Richard A
2012-02-01
Repair of unilateral cleft lip requires three-dimensional craftsmanship and an understanding of four-dimensional changes. Ninety-nine children with unilateral complete or incomplete cleft lip were measured by direct anthropometry following rotation-advancement repair (intraoperatively) and again in childhood. Changes in heminasal width, labial height, and labial width were analyzed, and measures were compared depending on whether the cleft was incomplete/complete or involved the left/right side. Average heminasal width (sn-al) was set 1 mm less on the cleft side and measured only 0.7 mm less at 6 years. Labial height (sn-cphi) was slightly greater on the cleft side at repair and matched the noncleft side at follow-up. Vertical dimension (sbal-cphi) was slightly less at operation; the percent change was the same on both sides. Transverse labial width (cphi-ch) was set short on the cleft side and lengthened disproportionately, resulting in less than 1 mm difference at 6 years. All anthropometric dimensions grew less in complete cleft lips compared with incomplete forms; however, only labial height and width were significantly different. There were no disparities in nasolabial growth between left- and right-sided cleft lips. The cleft-side alar base drifts laterally and should be positioned slightly more medially and secured to the nasalis or periosteum. Growth in labial height lags and, therefore, the repaired side should be equal to or slightly greater than the normal side, particularly in a complete labial cleft. Transverse labial width grows more on the cleft side; thus, the lateral Cupid's bow peak point can be marked closer to the commissure to match the labial height on the noncleft side. Therapeutic, IV.
Tavakkoli, Anna; Law, Ryan J; Bedi, Aarti O; Prabhu, Anoop; Hiatt, Tadd; Anderson, Michelle A; Wamsteker, Erik J; Elmunzer, B Joseph; Piraka, Cyrus R; Scheiman, James M; Elta, Grace H; Kwon, Richard S
2017-09-01
Endoscopic experience is known to correlate with outcomes of endoscopic mucosal resection (EMR), particularly complete resection of the polyp tissue. Whether specialist endoscopists can protect against incomplete polypectomy in the setting of known risk factors for incomplete resection (IR) is unknown. We aimed to characterize how specialist endoscopists may help to mitigate the risk of IR of large sessile polyps. This is a retrospective cohort study of patients who underwent EMR at the University of Michigan from January 1, 2006, to November 15, 2015. The primary outcome was endoscopist-reported polyp tissue remaining at the end of the initial EMR attempt. Specialist endoscopists were defined as endoscopists who receive tertiary referrals for difficult colonoscopy cases and completed at least 20 EMR colonic polyp resections over the study period. A total of 257 patients with 269 polyps were included in the study. IR occurred in 40 (16%) cases. IR was associated with polyp size ≥ 40 mm [adjusted odds ratio (aOR) 3.31, 95% confidence interval (CI) 1.38-7.93], flat/laterally spreading polyps (aOR 2.61, 95% CI 1.24-5.48), and difficulty lifting the polyp (aOR 11.0, 95% CI 2.66-45.3). A specialist endoscopist performing the initial EMR was protective against IR, even in the setting of risk factors for IR (aOR 0.13, 95% CI 0.04-0.41). IR is associated with polyp size ≥ 40 mm, flat and/or laterally spreading polyps, and difficulty lifting the polyp. A specialist endoscopist initiating the EMR was protective of IR.
Shi, Cheng-Min; Yang, Ziheng
2018-01-01
Abstract The phylogenetic relationships among extant gibbon species remain unresolved despite numerous efforts using morphological, behavioral, and genetic data and the sequencing of whole genomes. A major challenge in reconstructing the gibbon phylogeny is the radiative speciation process, which resulted in extremely short internal branches in the species phylogeny and extensive incomplete lineage sorting, with extensive gene-tree heterogeneity across the genome. Here, we analyze two genomic-scale data sets, with ∼10,000 putative noncoding and exonic loci, respectively, to estimate the species tree for the major groups of gibbons. We used the Bayesian full-likelihood method bpp under the multispecies coalescent model, which naturally accommodates incomplete lineage sorting and uncertainties in the gene trees. For comparison, we included three heuristic coalescent-based methods (mp-est, SVDQuartets, and astral) as well as concatenation. From both data sets, we infer the phylogeny for the four extant gibbon genera to be (Hylobates, (Nomascus, (Hoolock, Symphalangus))). We used simulation guided by the real data to evaluate the accuracy of the methods used. Astral, while not as efficient as bpp, performed well in estimation of the species tree even in the presence of excessive incomplete lineage sorting. Concatenation, mp-est, and SVDQuartets were unreliable when the species tree contains very short internal branches. A likelihood ratio test of gene flow suggests a small amount of migration from Hylobates moloch to H. pileatus, while cross-genera migration is absent or rare. Our results highlight the utility of coalescent-based methods in addressing challenging species tree problems characterized by short internal branches and rampant gene tree-species tree discordance. PMID:29087487
Sensory stimulation augments the effects of massed practice training in persons with tetraplegia.
Beekhuizen, Kristina S; Field-Fote, Edelle C
2008-04-01
To compare functional changes and cortical neuroplasticity associated with hand and upper extremity use after massed (repetitive task-oriented practice) training, somatosensory stimulation, massed practice training combined with somatosensory stimulation, or no intervention, in persons with chronic incomplete tetraplegia. Participants were randomly assigned to 1 of 4 groups: massed practice training combined with somatosensory peripheral nerve stimulation (MP+SS), somatosensory peripheral nerve stimulation only (SS), massed practice training only (MP), and no intervention (control). University medical school setting. Twenty-four subjects with chronic incomplete tetraplegia. Intervention sessions were 2 hours per session, 5 days a week for 3 weeks. Massed practice training consisted of repetitive practice of functional tasks requiring skilled hand and upper-extremity use. Somatosensory stimulation consisted of median nerve stimulation with intensity set below motor threshold. Pre- and post-testing assessed changes in functional hand use (Jebsen-Taylor Hand Function Test), functional upper-extremity use (Wolf Motor Function Test), pinch grip strength (key pinch force), sensory function (monofilament testing), and changes in cortical excitation (motor evoked potential threshold). The 3 groups showed significant improvements in hand function after training. The MP+SS and SS groups had significant improvements in upper-extremity function and pinch strength compared with the control group, but only the MP+SS group had a significant change in sensory scores compared with the control group. The MP+SS and MP groups had greater change in threshold measures of cortical excitability. People with chronic incomplete tetraplegia obtain functional benefits from massed practice of task-oriented skills. Somatosensory stimulation appears to be a valuable adjunct to training programs designed to improve hand and upper-extremity function in these subjects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.; Han, C.; Choi, J.-Y.
2014-05-20
Characterizing a microlensing planet is done by modeling an observed lensing light curve. In this process, it is often found that solutions with different lensing parameters result in similar light curves, causing difficulties in uniquely interpreting the lens system; understanding the causes of the different types of degeneracy is therefore important. In this work, we show that incomplete coverage of a planetary perturbation can result in degenerate solutions even for events where the planetary signal is detected with a high level of statistical significance. We demonstrate the degeneracy for an actually observed event, OGLE-2012-BLG-0455/MOA-2012-BLG-206. The peak of this high-magnification event (A_max ∼ 400) exhibits very strong deviation from a point-lens model, with Δχ² ≳ 4000 for data sets with a total of 6963 measurements. From detailed modeling of the light curve, we find that the deviation can be explained by four distinct solutions, i.e., two very different sets of solutions, each with a twofold degeneracy. While the twofold (so-called close/wide) degeneracy is well understood, the degeneracy between the radically different solutions was not previously known. The model light curves of this degeneracy differ substantially in the parts that were not covered by observation, indicating that the degeneracy is caused by the incomplete coverage of the perturbation. It is expected that the frequency of the degeneracy introduced in this work will be greatly reduced with the improvement of the current lensing survey and follow-up experiments and the advent of new surveys.
A novel Gaussian-Sinc mixed basis set for electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.
2015-08-14
A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the “localized” and “delocalized” regions. A basis set for each region is combined to make a new basis methodology: a lattice of orthonormal sinc functions is used to represent the “delocalized” regions, and atom-centered Gaussian functions are used to represent the “localized” regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensionally separated methodology. Additionally, the Sinc basis is translationally invariant, which allows the Coulomb singularity to be placed anywhere, including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree–Fock energies for atoms up to neon, the diatomic systems H₂, O₂, and N₂, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.
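The "delocalized" part of the representation, a lattice of sinc functions, can be illustrated in one dimension (a generic sketch; the lattice spacing and test function are arbitrary choices, and the actual method works in three dimensions with Gaussian augmentation):

```python
import numpy as np

def sinc_basis(x, lattice, h):
    """Lattice sinc functions S_i(x) = sinc((x - x_i)/h), using numpy's
    normalised sinc; rows index evaluation points, columns lattice sites."""
    return np.sinc((x[:, None] - lattice[None, :]) / h)

h = 0.5
lattice = np.arange(-8.0, 8.0 + h, h)   # 1-D "delocalized" grid
f = lambda t: np.exp(-t**2)             # smooth, rapidly decaying test function

coeffs = f(lattice)                     # sinc interpolation: c_i = f(x_i)
x = np.linspace(-4.0, 4.0, 81)
approx = sinc_basis(x, lattice, h) @ coeffs
```

Because sinc vanishes at every nonzero lattice integer, the expansion coefficients are simply the function values at the lattice sites, which is what makes the lattice representation convenient.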
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J. Grant, E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu; Peterson, Kirk A., E-mail: grant.hill@sheffield.ac.uk, E-mail: kipeters@wsu.edu
New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of valence sp, as well as the outer-core d electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general the F12 results obtained with a n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.
Tran, V H Huynh; Gilbert, H; David, I
2017-01-01
With the development of automatic self-feeders, repeated measurements of feed intake are becoming easier in an increasing number of species. However, the corresponding BW are not always recorded, and these missing values complicate the longitudinal analysis of the feed conversion ratio (FCR). Our aim was to evaluate the impact of missing BW data on estimations of the genetic parameters of FCR and ways to improve the estimations. On the basis of the missing BW profile in French Large White pigs (male pigs weighed weekly, females and castrated males weighed monthly), we compared 2 different ways of predicting missing BW, one using a Gompertz model and one using linear interpolation. For the first part of the study, we used 17,398 weekly records of BW and feed intake recorded over 16 consecutive weeks in 1,222 growing male pigs. We performed a simulation study on this data set to mimic missing BW values according to the pattern of weekly proportions of incomplete BW data in females and castrated males. The FCR was then computed for each week using observed data (obser_FCR), data with missing BW (miss_FCR), data with BW predicted using a Gompertz model (Gomp_FCR), and data with BW predicted by linear interpolation (interp_FCR). Heritability (h²) was estimated, and the EBV was predicted for each repeated FCR using a random regression model. In the second part of the study, the full data set (males with their complete BW records, castrated males and females with missing BW) was analyzed using the same methods (miss_FCR, Gomp_FCR, and interp_FCR). Results of the simulation study showed that h² was overestimated in the case of missing BW and that predicting BW using linear interpolation provided a more accurate estimation of h² and of EBV than a Gompertz model. Over 100 simulations, the correlation between obser_EBV and interp_EBV, Gomp_EBV, and miss_EBV was 0.93 ± 0.02, 0.91 ± 0.01, and 0.79 ± 0.04, respectively.
The heritabilities obtained with the full data set were quite similar for miss_FCR, Gomp_FCR, and interp_FCR. In conclusion, when the proportion of missing BW is high, genetic parameters of FCR are not well estimated. In French Large White pigs, in the growing period extending from d 65 to 168, prediction of missing BW using a Gompertz growth model slightly improved the estimations, but the linear interpolation improved the estimation to a greater extent. This result is due to the linear rather than sigmoidal increase in BW over the study period.
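The simpler of the two imputation options compared here, linear interpolation of the missing BW records, can be sketched on a simulated Gompertz growth curve; the growth parameters and weighing pattern below are illustrative, not the French Large White estimates:

```python
import numpy as np

def gompertz(t, A, b, k):
    """Gompertz growth curve W(t) = A * exp(-b * exp(-k * t))."""
    return A * np.exp(-b * np.exp(-k * t))

# Simulated weekly BW for one animal; parameters are illustrative only.
weeks = np.arange(16, dtype=float)
true_bw = gompertz(weeks, A=200.0, b=3.5, k=0.15)

# Monthly weighing pattern: keep every 4th week plus the final week.
observed = np.zeros(len(weeks), dtype=bool)
observed[::4] = True
observed[-1] = True

# Linear interpolation of the missing weights (the better-performing
# option in this study; the Gompertz alternative would instead fit the
# curve above to the observed points with a nonlinear optimiser).
interp_bw = np.interp(weeks, weeks[observed], true_bw[observed])
```

Comparing `interp_bw` with `true_bw` on the masked weeks is the kind of imputation-error assessment that underlies the simulation study.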
Spontaneous Symmetry Breaking as a Basis of Particle Mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quigg, Chris; /Fermilab /CERN
2007-04-01
Electroweak theory joins electromagnetism with the weak force in a single quantum field theory, ascribing the two fundamental interactions--so different in their manifestations--to a common symmetry principle. How the electroweak gauge symmetry is hidden is one of the most urgent and challenging questions facing particle physics. The provisional answer incorporated in the ''standard model'' of particle physics was formulated in the 1960s by Higgs, by Brout & Englert, and by Guralnik, Hagen, & Kibble: The agent of electroweak symmetry breaking is an elementary scalar field whose self-interactions select a vacuum state in which the full electroweak symmetry is hidden, leaving a residual phase symmetry of electromagnetism. By analogy with the Meissner effect of the superconducting phase transition, the Higgs mechanism, as it is commonly known, confers masses on the weak force carriers W± and Z. It also opens the door to masses for the quarks and leptons, and shapes the world around us. It is a good story--though an incomplete story--and we do not know how much of the story is true. Experiments that explore the Fermi scale (the energy regime around 1 TeV) during the next decade will put the electroweak theory to decisive test, and may uncover new elements needed to construct a more satisfying completion of the electroweak theory. The aim of this article is to set the stage by reporting what we know and what we need to know, and to set some ''Big Questions'' that will guide our explorations.
Decryption with incomplete cyphertext and multiple-information encryption in phase space.
Xu, Xiaobin; Wu, Quanying; Liu, Jun; Situ, Guohai
2016-01-25
Recently, we have demonstrated that information encryption in phase space offers security enhancement over traditional encryption schemes operating in real space. However, there is an important issue with this technique: it increases the cost of data transmission and storage. To address this issue, here we investigate the problem of decryption using incomplete cyphertext. We show that the analytic solution under the traditional framework sets the lower limit of decryption performance. More importantly, we demonstrate that only a small amount of cyphertext is needed to recover the plaintext signal faithfully using compressive sensing, meaning that the amount of data to be transmitted and stored can be significantly reduced. This leads to multiple-information encryption, so that the system bandwidth can be used more effectively. We also provide an optical experimental result demonstrating plaintext recovery in phase space.
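Sparse recovery from a reduced set of measurements, the core of the compressive-sensing step, can be sketched with orthogonal matching pursuit as a simple stand-in for whatever reconstruction solver the authors used; the sensing matrix and sparsity level are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x
    from the underdetermined measurements y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 48, 3                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # "incomplete cyphertext": m < n
x_rec = omp(A, y, k)
```

The point mirrored from the abstract is that m < n measurements suffice when the plaintext is sparse in some basis, which is what lets a fraction of the cyphertext carry the full signal.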
Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies
Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung
2017-01-01
A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses, together with simulations guided by experimental data, to investigate different analysis methods' performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, the conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for a good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. PMID:27993672
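The FIR basis sets evaluated in this study make no assumption about the shape of the hemodynamic response: each regressor captures one post-stimulus time bin. As an illustration only (not the authors' code; the onsets, scan count, and function name are hypothetical), an FIR design matrix for a block design can be built like this:

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, order):
    """Build a finite impulse response (FIR) design matrix.

    Each of the `order` columns models the response amplitude at one
    post-stimulus time bin (in scan units), so no particular response
    shape is imposed on the data.
    """
    X = np.zeros((n_scans, order))
    for onset in onsets:
        for lag in range(order):
            t = onset + lag
            if t < n_scans:          # clip events near the end of the run
                X[t, lag] = 1.0
    return X

# Example: two stimulus onsets, 20 scans, an 8th-order FIR basis
X = fir_design_matrix(onsets=[2, 12], n_scans=20, order=8)
```

Fitting this matrix in a GLM yields one estimated amplitude per bin, which is what gives FIR bases their characterization flexibility at the cost of more parameters.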
Parameter Estimation in Rasch Models for Examinee-Selected Items
ERIC Educational Resources Information Center
Liu, Chen-Wei; Wang, Wen-Chung
2017-01-01
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Chemical name extraction based on automatic training data generation and rich feature set.
Yan, Su; Spangler, W Scott; Chen, Ying
2013-01-01
The automation of extracting chemical names from text has significant value for biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality data set with which to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, resembling many natural languages.
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scanning, verify that the proposed reconstruction method offers promising capabilities over conventional regularization.
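The structure of such a program (L1-sparse coefficients, data fidelity, image positivity) can be illustrated with a much-simplified proximal-gradient (ISTA) sketch. This is not the paper's method: a generic random sensing matrix stands in for the CT projection geometry and block-matching transform, and all names and parameters are invented for the example.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_nonneg(A, b, lam=0.05, n_iter=500):
    """Minimise ||Ax - b||^2 / 2 + lam * ||x||_1 subject to x >= 0 by
    proximal-gradient (ISTA) steps with a positivity projection."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # data-fidelity gradient
        x = np.maximum(soft_threshold(x - step * grad, step * lam), 0.0)
    return x

# Under-determined toy system standing in for an incomplete projection set
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, 2.0, 1.5]
x_rec = ista_nonneg(A, A @ x_true)
```

With 30 measurements of a 3-sparse, nonnegative 60-dimensional signal, the sparsity prior recovers the support despite the under-determined system, which is the essential point of incomplete-data reconstruction.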
NASA Astrophysics Data System (ADS)
Gavrishchaka, V. V.; Ganguli, S. B.
2001-12-01
Reliable forecasting of rare events in a complex dynamical system is a challenging problem that is important for many practical applications. Due to the nature of rare events, the data set available for constructing a statistical and/or machine learning model is often very limited and incomplete. Therefore many widely used approaches, including such robust algorithms as neural networks, can easily become inadequate for rare event prediction. Moreover, in many practical cases models with high-dimensional inputs are required. This limits applications of existing rare event modeling techniques (e.g., extreme value theory) that focus on univariate cases and are not easily extended to multivariate ones. The support vector machine (SVM) is a machine learning system that can provide optimal generalization using very limited and incomplete training data sets and can efficiently handle high-dimensional data. These features may allow SVM to be used to model rare events in some applications. We have applied an SVM-based system to the problems of large-amplitude substorm prediction and extreme event forecasting in stock and currency exchange markets. Encouraging preliminary results will be presented and other possible applications of the system will be discussed.
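A standard trick when training margin-based classifiers on rare events is to up-weight the minority class. The sketch below is not the authors' system: it is a minimal linear SVM trained by batch sub-gradient descent on a class-weighted hinge loss, with toy data and every parameter invented for illustration.

```python
import numpy as np

def train_weighted_svm(X, y, lam=0.01, lr=0.1, n_epochs=500, pos_weight=19.0):
    """Linear SVM via batch sub-gradient descent on a class-weighted
    hinge loss; the rare positive class is up-weighted. y must be +/-1."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    c = np.where(y > 0, pos_weight, 1.0)   # per-sample class weights
    for _ in range(n_epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0             # samples violating the margin
        coef = c * active * y              # hinge sub-gradient coefficients
        w -= lr * (lam * w - (coef @ X) / n)
        b -= lr * (-coef.sum() / n)
    return w, b

# Toy imbalanced data: 5% "rare events" with a shifted mean
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (190, 5)), rng.normal(3.0, 1.0, (10, 5))])
y = np.array([-1] * 190 + [1] * 10)
w, b = train_weighted_svm(X, y)
pred = np.sign(X @ w + b)
```

Here `pos_weight` is set to the class ratio (190/10), so missing a rare event costs as much as missing nineteen common ones; without the weighting the classifier could simply predict the majority class.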
Advancing Health Literacy Measurement: A Pathway to Better Health and Health System Performance
Pleasant, Andrew
2014-01-01
The concept of health literacy initially emerged and continues to gain strength as an approach to improving health status and the performance of health systems. Numerous studies clearly link low levels of education, literacy, and health literacy with poor health, poor health care utilization, increased barriers to care, and early death. However, theoretical understandings and methods of measuring the complex social construct of health literacy have experienced a continual evolution that remains incomplete. As a result, the seemingly most-cited definition of health literacy proposed in the now-decade-old Institute of Medicine report on health literacy is long overdue for updating. Such an effort should engage a broad and diverse set of health literacy researchers, practitioners, and members of the public in creating a definition that can earn broad consensus through validation testing in a rigorous scientific approach. That effort also could produce the basis for a new universally applicable measure of health literacy. Funders, health systems, and policymakers should reconsider their timid approach to health literacy. Although the field and corresponding evidence base are not perfect, health literacy—especially when combined with a focus on prevention and integrative health—is one of the most promising approaches to advancing public health. PMID:25491583
Dearman, Rebecca J; Betts, Catherine J; Farr, Craig; McLaughlin, James; Berdasco, Nancy; Wiench, Karin; Kimber, Ian
2007-10-01
No systematic experimental data are currently available on the skin sensitizing properties of acrylates that are of relevance in occupational settings. Limited information from previous guinea-pig tests or from the local lymph node assay (LLNA) is available; however, these data are incomplete and somewhat contradictory. For those reasons, we have examined 4 acrylates in the LLNA: butyl acrylate (BA), ethyl acrylate (EA), methyl acrylate (MA), and ethylhexyl acrylate (EHA). The LLNA data indicated that all 4 compounds have some potential to cause skin sensitization. In addition, the relative potencies of these acrylates were measured by derivation of EC3 values (the effective concentration of chemical required to induce a threefold increase in proliferation of draining lymph node cells compared with control values) from LLNA dose-response analyses. On the basis of one scheme for the categorization of skin sensitization potency, BA, EA, and MA were each classified as weak sensitizers. Using the same scheme, EHA was considered a moderate sensitizer. However, it must be emphasized that the EC3 value for this chemical of 9.7% is on the borderline between the moderate (<10%) and weak (>10%) categories. Thus, the judicious view is that all 4 chemicals possess relatively weak skin sensitizing potential.
NASA Astrophysics Data System (ADS)
Anderson, Thomas R.; Hessen, Dag O.; Mitra, Aditee; Mayor, Daniel J.; Yool, Andrew
2013-09-01
The performance of four contemporary formulations describing trophic transfer, which have strongly contrasting assumptions as regards the way that consumer growth is calculated as a function of food C:N ratio and in the fate of non-limiting substrates, was compared in two settings: a simple steady-state ecosystem model and a 3D biogeochemical general circulation model. Considerable variation was seen in predictions for primary production, transfer to higher trophic levels and export to the ocean interior. The physiological basis of the various assumptions underpinning the chosen formulations is open to question. Assumptions include Liebig-style limitation of growth, strict homeostasis in zooplankton biomass, and whether excess C and N are released by voiding in faecal pellets or via respiration/excretion post-absorption by the gut. Deciding upon the most appropriate means of formulating trophic transfer is not straightforward because, despite advances in ecological stoichiometry, the physiological mechanisms underlying these phenomena remain incompletely understood. Nevertheless, worrying inconsistencies are evident in the way in which fundamental transfer processes are justified and parameterised in the current generation of marine ecosystem models, manifested in the resulting simulations of ocean biogeochemistry. Our work highlights the need for modellers to revisit and appraise the equations and parameter values used to describe trophic transfer in marine ecosystem models.
Subduction Orogeny and the Late Cenozoic Evolution of the Mediterranean Arcs
NASA Astrophysics Data System (ADS)
Royden, Leigh; Faccenna, Claudio
2018-05-01
The Late Cenozoic tectonic evolution of the Mediterranean region, which is sandwiched between the converging African and European continents, is dominated by the process of subduction orogeny. Subduction orogeny occurs where localized subduction, driven by negative slab buoyancy, is more rapid than the convergence rate of the bounding plates; it is commonly developed in zones of early or incomplete continental collision. Subduction orogens can be distinguished from collisional orogens on the basis of driving mechanism, tectonic setting, and geologic expression. Three distinct Late Cenozoic subduction orogens can be identified in the Mediterranean region, making up the Western Mediterranean (Apennine, external Betic, Maghebride, Rif), Central Mediterranean (Carpathian), and Eastern Mediterranean (southern Dinaride, external Hellenide, external Tauride) Arcs. The Late Cenozoic evolution of these orogens, described in this article, is best understood in light of the processes that govern subduction orogeny and depends strongly on the buoyancy of the locally subducting lithosphere; it is thus strongly related to paleogeography. Because the slow (4–10 mm/yr) convergence rate between Africa and Eurasia has preserved the early collisional environment, and associated tectonism, for tens of millions of years, the Mediterranean region provides an excellent opportunity to elucidate the dynamic and kinematic processes of subduction orogeny and to better understand how these processes operate in other orogenic systems.
Weycker, Derek; Sofrygin, Oleg; Kemner, Jason E; Pelton, Stephen I; Oster, Gerry
2009-08-06
Using a probabilistic model of the clinical and economic burden of rotavirus gastroenteritis (RVGE), we estimated the expected impact of vaccinating a US birth cohort with Rotarix in lieu of RotaTeq. Assuming full vaccination of all children, use of Rotarix - rather than RotaTeq - was estimated to reduce the total number of RVGE events by 5% and associated costs by 8%. On an overall basis, Rotarix would reduce costs by $77.2 million (95% CI $71.5-$86.5). Similar reductions with Rotarix were estimated to occur under an assumption of incomplete immunization of children.
Manna, Debashree; Kesharwani, Manoj K; Sylvetsky, Nitai; Martin, Jan M L
2017-07-11
Benchmark ab initio energies for the BEGDB and WATER27 data sets have been re-examined at the MP2 and CCSD(T) levels with both conventional and explicitly correlated (F12) approaches. The basis set convergence of both conventional and explicitly correlated methods has been investigated in detail, both with and without counterpoise corrections. For the MP2 and CCSD-MP2 contributions, the explicitly correlated methods converge much more rapidly with basis set than their conventional counterparts. However, conventional, orbital-based calculations are preferred for the calculation of the (T) term, since it does not benefit from F12. For the CCSD-MP2 term, CCSD(F12*) converges somewhat faster with the basis set than CCSD-F12b. The performance of various DFT methods is also evaluated for the BEGDB data set, and results show that Head-Gordon's ωB97X-V and ωB97M-V functionals outperform all other DFT functionals. Counterpoise-corrected DSD-PBEP86 and raw DSD-PBEPBE-NL also perform well and are close to MP2 results. In the WATER27 data set, the anionic (deprotonated) water clusters exhibit unacceptably slow basis set convergence with the regular cc-pVnZ-F12 basis sets, which have diffuse s and p functions only. To overcome this, we have constructed modified basis sets, denoted aug-cc-pVnZ-F12 or aVnZ-F12, which have been augmented with diffuse functions on the higher angular momenta. The calculated final dissociation energies of the BEGDB and WATER27 data sets are available in the Supporting Information. Our best calculated dissociation energies can be reproduced through n-body expansion, provided one pushes to the basis set and electron correlation limit for the two-body term; for the three-body term, post-MP2 contributions (particularly CCSD-MP2) are important for capturing three-body dispersion effects. Terms beyond four-body can be adequately captured at the MP2-F12 level.
On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States
NASA Technical Reports Server (NTRS)
Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)
1996-01-01
Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm^-1), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.
Accurate Classification of RNA Structures Using Topological Fingerprints
Li, Kejie; Gribskov, Michael
2016-01-01
While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
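The comparison step described above (matching the set of subgraphs in one fingerprint against another) can be sketched in toy form with a Jaccard index over subgraph identifiers. The real XIOS pipeline is considerably more elaborate; the identifiers and function name below are hypothetical.

```python
def fingerprint_similarity(fp_a, fp_b):
    """Jaccard similarity between two topological fingerprints, each
    given as a collection of subgraph identifiers."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical fingerprints: the second structure is missing one stem,
# so some subgraphs disappear, yet the similarity remains high
full = {"g1", "g2", "g3", "g4", "g5", "g6"}
partial = {"g1", "g2", "g4", "g5"}
sim = fingerprint_similarity(full, partial)
```

Because the fingerprint is a set of local topological patterns rather than a single global description, dropping a stem removes only the subgraphs that contain it, which is why the method tolerates incomplete structures.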
Predictors of seizure freedom after incomplete resection in children.
Perry, M S; Dunoyer, C; Dean, P; Bhatia, S; Bavariya, A; Ragheb, J; Miller, I; Resnick, T; Jayakar, P; Duchowny, M
2010-10-19
Incomplete resection of the epileptogenic zone (EZ) is the most important predictor of poor outcome after resective surgery for intractable epilepsy. We analyzed the contribution of preoperative and perioperative variables including MRI and EEG data as predictors of seizure-free (SF) outcome after incomplete resection. We retrospectively reviewed patients <18 years of age with incomplete resection for epilepsy with 2 years of follow-up. Fourteen preoperative and perioperative variables were compared in SF and non-SF (NSF) patients. We compared lesional patients, categorized by reason for incompleteness, to lesional patients with complete resection. We analyzed for effect of complete EEG resection on SF outcome in patients with incompletely resected MRI lesions and vice versa. Eighty-three patients with incomplete resection were included with 41% becoming SF. Forty-eight lesional patients with complete resection were included. Thirty-eight percent (57/151) of patients with incomplete resection and 34% (47/138) with complete resection were excluded secondary to lack of follow-up or incomplete records. Contiguous MRI lesions were predictive of seizure freedom after incomplete resection. Fifty-seven percent of patients incomplete by MRI alone, 52% incomplete by EEG alone, and 24% incomplete by both became SF compared to 77% of patients with complete resection (p = 0.0005). Complete resection of the MRI- and EEG-defined EZ is the best predictor of seizure freedom, though patients incomplete by EEG or MRI alone have better outcome compared to patients incomplete by both. More than one-third of patients with incomplete resection become SF, with contiguous MRI lesions a predictor of SF outcome.
Perturbation corrections to Koopmans' theorem. V - A study with large basis sets
NASA Technical Reports Server (NTRS)
Chong, D. P.; Langhoff, S. R.
1982-01-01
The vertical ionization potentials of N2, F2 and H2O were calculated by perturbation corrections to Koopmans' theorem using six different basis sets. The largest set used includes several sets of polarization functions. Comparison is made with measured values and with results of computations using Green's functions.
A new basis set for molecular bending degrees of freedom.
Jutier, Laurent
2010-07-21
We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle in the range theta in [0:pi]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology is extended to complicated potential energy surfaces, such as quasilinearity or multiequilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved in the same way as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.
Binding and segmentation via a neural mass model trained with Hebbian and anti-Hebbian mechanisms.
Cona, Filippo; Zavaglia, Melissa; Ursino, Mauro
2012-04-01
Synchronization of neural activity in the gamma band, modulated by a slower theta rhythm, is assumed to play a significant role in binding and segmentation of multiple objects. In the present work, a recent neural mass model of a single cortical column is used to analyze the synaptic mechanisms which can warrant synchronization and desynchronization of cortical columns, during an autoassociation memory task. The model considers two distinct layers communicating via feedforward connections. The first layer receives the external input and works as an autoassociative network in the theta band, to recover a previously memorized object from incomplete information. The second realizes segmentation of different objects in the gamma band. To this end, units within both layers are connected with synapses trained on the basis of previous experience to store objects. The main model assumptions are: (i) recovery of incomplete objects is realized by excitatory synapses from pyramidal to pyramidal neurons in the same object; (ii) binding in the gamma range is realized by excitatory synapses from pyramidal neurons to fast inhibitory interneurons in the same object. These synapses (both at points i and ii) have a few ms dynamics and are trained with a Hebbian mechanism. (iii) Segmentation is realized with faster AMPA synapses, with rise times smaller than 1 ms, trained with an anti-Hebbian mechanism. Results show that the model, with the previous assumptions, can correctly reconstruct and segment three simultaneous objects, starting from incomplete knowledge. Segmentation of more objects is possible but requires an increased ratio between the theta and gamma periods.
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs, hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh with a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (~0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations).
It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
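Both two-point extrapolation forms mentioned above are simple closed-form expressions. A sketch with purely illustrative (not benchmark) correlation energies:

```python
def cbs_two_point(e_small, e_large, n_small, n_large):
    """Power-law (1/n^3) two-point extrapolation of correlation
    energies to the complete basis set (CBS) limit."""
    p_s, p_l = n_small ** 3, n_large ** 3
    return (e_large * p_l - e_small * p_s) / (p_l - p_s)

def cbs_schwenke(e_small, e_large, coeff):
    """Schwenke-style extrapolation with a fitted linear coefficient:
    E_CBS = E_large + coeff * (E_large - E_small)."""
    return e_large + coeff * (e_large - e_small)

# Illustrative triple/quadruple-zeta correlation energies in hartree
e_cbs = cbs_two_point(-0.300, -0.310, 3, 4)   # ~ -0.31730 Eh
```

For a given cardinal-number pair the two forms coincide when the Schwenke coefficient is chosen as n_small^3 / (n_large^3 - n_small^3) (27/37 for a T,Q pair), which is the sense in which they are "equivalent"; fitting the coefficient to reference data instead is what distinguishes the Schwenke-style approach.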
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.
2008-01-01
In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation to develop the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.
NASA Astrophysics Data System (ADS)
Goh, K. L.; Liew, S. C.; Hasegawa, B. H.
1997-12-01
Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on the more inhomogeneous 2D thorax phantom of the 3D MCAT phantom, and results on the accuracy of quantitation are presented here.
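The transformation step described above amounts to matching the synthesized attenuation coefficient at the two energy windows and solving a 2x2 linear system for the new-basis coefficients. A sketch with purely illustrative attenuation values (not real material data; the function name is hypothetical):

```python
import numpy as np

def transform_basis_coeffs(a, mu_old, mu_new):
    """Transform basis material decomposition coefficients `a` from an
    old basis to a new one by matching the synthesized attenuation
    coefficient at two energies.

    mu_old, mu_new: 2x2 arrays with mu[k, m] = attenuation of basis
    material m at energy window k.
    """
    rhs = mu_old @ a              # attenuation at the two energy windows
    return np.linalg.solve(mu_new, rhs)

# Illustrative values only: (acrylic, aluminium) -> (water-like, iodine-like)
mu_old = np.array([[0.25, 0.80],
                   [0.20, 0.50]])
mu_new = np.array([[0.24, 1.90],
                   [0.19, 1.10]])
b = transform_basis_coeffs(np.array([1.0, 0.3]), mu_old, mu_new)
```

By construction the new coefficients reproduce the same attenuation at both energy windows, which is exactly the constraint the two-window transformation imposes.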
Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies.
Liu, Jia; Duffy, Ben A; Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung
2017-02-15
A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses, together with simulations guided by experimental data, to investigate different analysis methods' performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, the conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for a good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of bootstrap test error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue with point set models generated by 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thereby achieved while the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
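A minimal version of thin-plate spline smoothing is sketched below. This is not the paper's algorithm: the bootstrap selection of the smoothing parameter and the k-nearest-neighbour projection are omitted, `lam` is fixed by hand, and the toy data are invented for the example.

```python
import numpy as np

def _tps_kernel(d):
    """Thin-plate spline radial kernel U(r) = r^2 log r (0 at r = 0)."""
    k = np.zeros_like(d)
    m = d > 0
    k[m] = d[m] ** 2 * np.log(d[m])
    return k

def tps_smooth(pts, vals, lam=0.0):
    """Fit a 2D smoothing thin-plate spline to scattered samples.

    lam > 0 trades fidelity for smoothness (the parameter a bootstrap
    test-error estimate would be used to select); lam = 0 interpolates.
    """
    n = len(pts)
    K = _tps_kernel(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])        # affine polynomial part
    A = np.block([[K + lam * np.eye(n), P],
                  [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    w, c = sol[:n], sol[n:]

    def f(q):
        k = _tps_kernel(np.linalg.norm(np.asarray(q) - pts, axis=-1))
        return k @ w + c[0] + c[1] * q[0] + c[2] * q[1]
    return f

# Noisy samples of a plane; strong smoothing should recover the plane
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (40, 2))
vals = 1 + 2 * pts[:, 0] + 3 * pts[:, 1] + rng.normal(0, 0.05, 40)
f = tps_smooth(pts, vals, lam=1.0)
```

Because the affine part is fitted exactly and the ridge term `lam` penalises only the bending weights, large `lam` drives the fit toward a least-squares plane, which is how the smoothing parameter controls denoising strength.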
NASA Astrophysics Data System (ADS)
van Hoeve, Miriam D.; Klobukowski, Mariusz
2018-03-01
Simulation of the electronic spectra of HRgF (Rg = Ar, Kr, Xe, Rn) was carried out using the time-dependent density functional method, with the CAM-B3LYP functional and several basis sets augmented with even-tempered diffuse functions. A full spectral assignment for the HRgF systems was done. The effect of the rare gas matrix on the HRgF (Rg = Ar and Kr) spectra was investigated, and it was found that the matrix blue-shifted the spectra. Scalar relativistic effects on the spectra were also studied: while the excitation energies of HArF and HKrF were insignificantly affected by relativistic effects, most of the excitation energies of HXeF and HRnF were red-shifted. Spin-orbit coupling was found to significantly affect excitation energies in HRnF. Analysis of the performance of the model core potential basis set relative to all-electron (AE) basis sets showed that the former increased computational efficiency and gave results similar to those obtained with the AE basis set.
Midbond basis functions for weakly bound complexes
NASA Astrophysics Data System (ADS)
Shaw, Robert A.; Hill, J. Grant
2018-06-01
Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.
Electromagnetic Fields Exposure Limits
2018-01-01
analysis, synthesis, integration and validation of knowledge derived through the scientific method. In NATO, S&T is addressed using different...Panel • NMSG NATO Modelling and Simulation Group • SAS System Analysis and Studies Panel • SCI Systems Concepts and Integration Panel • SET... integrity or morphology. They later also failed to find a lack of direct DNA damage in human blood (strand breaks, alkali-labile sites, and incomplete
Sociocultural Systems: The Next Step in Army Cultural Capability
2013-09-01
notion of cross-cultural effectiveness in military operations is incomplete if it does not include the concept of sociocultural systems. Discussions...evaluate them, influence them, and operate effectively within them. This research product is an anthology of chapters written by some of the best and...military, and police organizations. Army personnel require a diverse and sophisticated skill set to be effective , and Soldiers often must be
ERIC Educational Resources Information Center
Steinmetz, Jean-Paul; Brunner, Martin; Loarer, Even; Houssemand, Claude
2010-01-01
The Wisconsin Card Sorting Test (WCST) assesses executive and frontal lobe function and can be administered manually or by computer. Despite the widespread application of the 2 versions, the psychometric equivalence of their scores has rarely been evaluated and only a limited set of criteria has been considered. The present experimental study (N =…
ERIC Educational Resources Information Center
Wong, Ting-Hong
2012-01-01
Using the case of Chinese schools in post-Second World War Hong Kong, this paper explores the unintended consequences of an incomplete hegemonic project. After World War II, anti-imperialist pressures and rising educational demands in the local setting propelled the colonial authorities to be more active in providing and funding Chinese schools.…
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction are consistent with each other, indicating that extrapolation without such a correction provides a reliable scheme for avoiding the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict those of the fullerene dimer. Timing comparisons show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
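As a minimal sketch of the two ingredients this abstract compares — complete-basis-set extrapolation and the counterpoise correction — the following assumes the common two-point 1/n^3 extrapolation form for correlation energies; the numerical energies are invented for illustration and are not taken from the paper.

```python
def cbs_two_point(e_small, n_small, e_large, n_large):
    """Two-point extrapolation of correlation energies to the complete basis
    set (CBS) limit, assuming the standard E(n) = E_CBS + A / n**3 form."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)

def counterpoise_interaction(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy; all three
    energies are evaluated in the full dimer basis to cancel the BSSE."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

# Illustrative (made-up) MP2 correlation energies in hartree for cardinal
# numbers n = 3 (triple-zeta) and n = 4 (quadruple-zeta):
e_cbs = cbs_two_point(-0.2104, 3, -0.2235, 4)
```

The abstract's point is that, once the extrapolation is applied, the counterpoise step can be skipped: the extrapolated and counterpoise-corrected results converge to consistent values, at very different computational cost.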
Exact exchange-correlation potentials of singlet two-electron systems
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.
2017-10-01
We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. Finally, we apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking.
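The ABC ingredient of such hybrid methods can be illustrated, under strong simplifying assumptions, by a plain rejection sampler. This is not MECCA itself (no tree, no MCMC step); the rate, prior, summary statistic, and tolerance below are all invented for the sketch.

```python
import random
import statistics

random.seed(1)

# "Observed" trait data simulated from a Brownian-motion-like model with a
# true rate (variance accumulated per unit time) of 1.0 over time t = 5.
true_rate, t, n_tips = 1.0, 5.0, 200
observed = [random.gauss(0.0, (true_rate * t) ** 0.5) for _ in range(n_tips)]
s_obs = statistics.variance(observed)          # summary statistic

def simulate(rate):
    """Simulate trait data under a candidate rate and return its summary."""
    sim = [random.gauss(0.0, (rate * t) ** 0.5) for _ in range(n_tips)]
    return statistics.variance(sim)

# ABC rejection: draw rates from a uniform prior and keep those whose
# simulated summary falls within a tolerance of the observed summary.
accepted = []
while len(accepted) < 100:
    rate = random.uniform(0.1, 3.0)
    if abs(simulate(rate) - s_obs) < 0.5:
        accepted.append(rate)

posterior_mean = statistics.mean(accepted)     # should sit near the true rate
```

Because only a summary statistic is compared, no likelihood needs to be evaluated — the property that lets ABC handle incompletely sampled trees where likelihood-based approaches break down.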
Murphy, T F
1988-01-01
There are religious and philosophical versions of the thesis that AIDS is a punishment for homosexual behaviour. It is argued here that the religious version is seriously incomplete. Because of this incompleteness and because of the indeterminacies that ordinarily attend religious argumentation, it is concluded that the claim may be set aside as unconvincing. Homosexual behaviour is then judged for its morality against utilitarian, deontological, and natural law theories of ethics. It is argued that such behaviour involves no impediment to important moral goals and is not therefore immoral. Where natural law might be used to condemn homosexual behaviour, it is argued that the theory itself is not well established. Consequently there is a prima facie reason for rejecting the philosophical version of the punishment thesis. This conclusion is further supported by noting the lack of proportion between the purported immorality of homosexuality and a punishment as devastating as AIDS. PMID:3184138
A Coupled Approach for Structural Damage Detection with Incomplete Measurements
NASA Technical Reports Server (NTRS)
James, George; Cao, Timothy; Kaouk, Mo; Zimmerman, David
2013-01-01
This historical work couples model order reduction, damage detection, dynamic residual/mode shape expansion, and damage extent estimation to overcome the incomplete measurements problem by using an appropriate undamaged structural model. A contribution of this work is the development of a process to estimate the full dynamic residuals using the columns of a spring connectivity matrix obtained by disassembling the structural stiffness matrix. Another contribution is the extension of an eigenvector filtering procedure to produce full-order mode shapes that more closely match the measured active partition of the mode shapes using a set of modified Ritz vectors. The full dynamic residuals and full mode shapes are used as inputs to the minimum rank perturbation theory to provide an estimate of damage location and extent. The issues associated with this process are also discussed as drivers of near-term development activities to understand and improve this approach.
Correlation consistent basis sets for actinides. I. The Th and U atoms.
Peterson, Kirk A
2015-02-21
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2 value by ∼1 kcal/mol.
On the basis set convergence of electron–electron entanglement measures: helium-like systems
Hofer, Thomas S.
2013-01-01
A systematic investigation of three different electron–electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li+ and Be2+ using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do not, in general, show the expected strictly monotonic increase upon enlargement of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrate that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian type basis sets: while in the case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in the case of cationic systems such as Li+ and Be2+. In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of convergence, leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used. PMID:24790952
Orbital-Dependent Density Functionals for Chemical Catalysis
2014-10-17
noncollinear density functional theory to show that the low-spin state of Mn3 in a model of the oxygen-evolving complex of photosystem II avoids...DK, which denotes the cc-pV5Z-DK basis set for 3d metals and hydrogen and the ma-cc-pV5Z-DK basis set for oxygen) and to nonrelativistic all...cc-pV5Z basis set for oxygen). As compared to NCBS-DK results, all ECP calculations perform worse than def2-TZVP all-electron relativistic
Electric dipole moment of diatomic molecules by configuration interaction. IV.
NASA Technical Reports Server (NTRS)
Green, S.
1972-01-01
The theory of basis set dependence in configuration interaction calculations is discussed, taking into account a perturbation model which is valid for small changes in the self-consistent field orbitals. It is found that basis set corrections are essentially additive through first order. It is shown that an error found in a previously published dipole moment calculation by Green (1972) for the metastable first excited state of CO was indeed due to an inadequate basis set as claimed.
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.
2007-01-01
In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.
2008-01-01
In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.
Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland
2009-04-21
Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
NASA Astrophysics Data System (ADS)
Balabanov, Nikolai B.; Peterson, Kirk A.
2005-08-01
Sequences of basis sets that systematically converge towards the complete basis set (CBS) limit have been developed for the first-row transition metal elements Sc-Zn. Two families of basis sets, nonrelativistic and Douglas-Kroll-Hess (-DK) relativistic, are presented that range in quality from triple-ζ to quintuple-ζ. Separate sets are developed for the description of valence (3d4s) electron correlation (cc-pVnZ and cc-pVnZ-DK; n =T,Q, 5) and valence plus outer-core (3s3p3d4s) correlation (cc-pwCVnZ and cc-pwCVnZ-DK; n =T,Q, 5), as well as these sets augmented by additional diffuse functions for the description of negative ions and weak interactions (aug-cc-pVnZ and aug-cc-pVnZ-DK). Extensive benchmark calculations at the coupled cluster level of theory are presented for atomic excitation energies, ionization potentials, and electron affinities, as well as molecular calculations on selected hydrides (TiH, MnH, CuH) and other diatomics (TiF, Cu2). In addition to observing systematic convergence towards the CBS limits, both 3s3p electron correlation and scalar relativity are calculated to strongly impact many of the atomic and molecular properties investigated for these first-row transition metal species.
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.
2017-12-01
New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.
NASA Astrophysics Data System (ADS)
Frisch, Michael J.; Binkley, J. Stephen; Schaefer, Henry F., III
1984-08-01
The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol-1 of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
NASA Astrophysics Data System (ADS)
Romero, Angel H.
2017-10-01
The influence of the ring puckering angle on the multipole moments of sixteen four-membered heterocycles (1-16) was estimated theoretically using MP2 and different DFT methods in combination with the 6-31+G(d,p) basis set. For a more accurate evaluation, calculations at the CCSD/cc-pVDZ level, and with the MP2 and PBE1PBE methods in combination with the aug-cc-pVDZ and aug-cc-pVTZ basis sets, were performed on the planar geometries of 1-16. In general, the DFT and MP2 approaches provided an identical dependence of the electrical properties on the puckering angle for 1-16. Quantitatively, the quality of the level of theory and basis set significantly affects the predicted multipole moments, in particular for the heterocycles containing C=O and C=S bonds. Within the MP2 and PBE1PBE approximations, basis-set convergence of the dipole moment is reached with the aug-cc-pVTZ basis set, while the quadrupole and octupole moment computations require a basis set larger than aug-cc-pVTZ. On the other hand, the multipole moments showed a strong dependence on the molecular geometry and the nature of the carbon-heteroatom bonds. Specifically, the C-X bond determines the behavior of the μ(ϕ), θ(ϕ) and Ω(ϕ) functions, while the C=Y bond plays an important role in the magnitude of the studied properties.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
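The general setting — estimating model parameters by minimizing the misfit between model output and noisy, incomplete data — can be sketched as follows. Plain random search stands in for the proposed Swarm-based Chemical Reaction Optimization, and the decay model, its parameter values, and the noise level are hypothetical.

```python
import math
import random

random.seed(42)

# Noisy, incomplete observations of a hypothetical exponential-decay model
# y = a * exp(-k * t); all values here are illustrative assumptions.
true_a, true_k = 2.0, 0.7
times = [0.0, 0.5, 1.0, 2.0, 4.0]              # sparse (incomplete) sampling
obs = [true_a * math.exp(-true_k * t) + random.gauss(0.0, 0.02) for t in times]

def sse(params):
    """Sum of squared errors between model output and the noisy data."""
    a, k = params
    return sum((a * math.exp(-k * t) - y) ** 2 for t, y in zip(times, obs))

# Plain random search as a minimal stand-in for the swarm-based optimizer:
# propose parameters from broad bounds and keep the best-scoring candidate.
best, best_err = None, float("inf")
for _ in range(20000):
    cand = (random.uniform(0.0, 5.0), random.uniform(0.0, 3.0))
    err = sse(cand)
    if err < best_err:
        best, best_err = cand, err
```

The paper's contribution is a smarter search strategy over the same kind of objective: swarm-style moves recover good parameters with far fewer objective evaluations than blind sampling, which matters when each evaluation requires simulating a nonlinear biological model.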
Bell, Jill A.; Reed, Melissa A.; Consitt, Leslie A.; Martin, Ola J.; Haynie, Kimberly R.; Hulver, Matthew W.; Muoio, Deborah M.; Dohm, G. Lynis
2010-01-01
Context: Intracellular lipid partitioning toward storage and the incomplete oxidation of fatty acids (FA) have been linked to insulin resistance. Objective: To gain insight into how intracellular lipid metabolism is related to insulin signal transduction, we examined the effects of severe obesity, excess FA, and overexpression of the FA transporter, FA translocase (FAT)/CD36, in primary human skeletal myocytes. Design, Setting, and Patients: Insulin signal transduction, FA oxidation, and metabolism were measured in skeletal muscle cells harvested from lean and severely obese women. To emulate the obesity phenotype in our cell culture system, we incubated cells from lean individuals with excess FA or overexpressed FAT/CD36 using recombinant adenoviral technology. Results: Complete oxidation of FA was significantly reduced, whereas total lipid accumulation, FA esterification into lipid intermediates, and incomplete oxidation were up-regulated in the muscle cells of severely obese subjects. Insulin signal transduction was reduced in the muscle cells from severely obese subjects compared to lean controls. Incubation of muscle cells from lean subjects with lipids reduced insulin signal transduction and increased lipid storage and incomplete FA oxidation. CD36 overexpression increased FA transport capacity, but did not impair complete FA oxidation and insulin signal transduction in muscle cells from lean subjects. Conclusions: Cultured myocytes from severely obese women express perturbations in FA metabolism and insulin signaling reminiscent of those observed in vivo. The obesity phenotype can be recapitulated in muscle cells from lean subjects via exposure to excess lipid, but not by overexpressing the FAT/CD36 FA transporter. PMID:20427507
Papatheodorou, Irene; Ziehm, Matthias; Wieser, Daniela; Alic, Nazif; Partridge, Linda; Thornton, Janet M.
2012-01-01
A challenge of systems biology is to integrate incomplete knowledge on pathways with existing experimental data sets and relate these to measured phenotypes. Research on ageing often generates such incomplete data, creating difficulties in integrating RNA expression with information about biological processes and the phenotypes of ageing, including longevity. Here, we develop a logic-based method that employs Answer Set Programming, and use it to infer signalling effects of genetic perturbations, based on a model of the insulin signalling pathway. We apply our method to RNA expression data from Drosophila mutants in the insulin pathway that alter lifespan, in a foxo dependent fashion. We use this information to deduce how the pathway influences lifespan in the mutant animals. We also develop a method for inferring the largest common sub-paths within each of our signalling predictions. Our comparisons reveal consistent homeostatic mechanisms across both long- and short-lived mutants. The transcriptional changes observed in each mutation usually provide negative feedback to signalling predicted for that mutation. We also identify an S6K-mediated feedback in two long-lived mutants that suggests a crosstalk between these pathways in mutants of the insulin pathway, in vivo. By formulating the problem as a logic-based theory in a qualitative fashion, we are able to use the efficient search facilities of Answer Set Programming, allowing us to explore larger pathways, combine molecular changes with pathways and phenotype and infer effects on signalling in in vivo, whole-organism, mutants, where direct signalling stimulation assays are difficult to perform. Our methods are available in the web-service NetEffects: http://www.ebi.ac.uk/thornton-srv/software/NetEffects. PMID:23251396
Matriculation Research Report: Incomplete Grades; Data & Analysis.
ERIC Educational Resources Information Center
Gerda, Joe
The policy on incomplete grades at California's College of the Canyons states that incompletes may only be given under circumstances beyond students' control and that students must make arrangements with faculty prior to the end of the semester to clear the incomplete. Failure to complete an incomplete may result in an "F" grade. While…
Perlman, David C; Friedmann, Patricia; Horn, Leslie; Nugent, Anne; Schoeb, Veronika; Carey, Jeanne; Salomon, Nadim; Des Jarlais, Don C
2003-09-01
Syringe-exchange programs (SEPs) have proven to be valuable sites to conduct tuberculin skin testing among active injection drug users. Chest x-rays (CXRs) are needed to exclude active tuberculosis prior to initiating treatment for latent tuberculosis infection. Adherence of drug users to referral for off-site chest x-rays has been incomplete. Previous cost modeling demonstrated that a monetary incentive to promote adherence could be justified on the cost basis if it had even a modest effect on adherence. We compared adherence to referral for chest x-rays among injection drug users undergoing syringe exchange-based tuberculosis screening in New York City before and after the implementation of monetary incentives. From 1995 to 1998, there were 119 IDUs referred for CXRs based on tuberculin skin testing at the SEP. From 1999 to 2001, there were 58 IDUs referred for CXRs with a $25 incentive based on adherence. Adherence to CXR referral within 7 days was 46/58 (79%) among individuals who received the monetary incentive versus 17/119 (14%) prior to the implementation of the monetary incentive (P<.0001; odds ratio [OR]=23; 95% confidence interval [CI]=9.5-57). The median time to obtaining a CXR was significantly shorter among those given the incentive than among those referred without the incentive (2 vs. 11 days, P<.0001). In multivariate logistic regression analysis, use of the incentive was highly independently associated with increased adherence (OR=22.9; 95% CI=10-52). Monetary incentives are highly effective in increasing adherence to referral for screening CXRs to exclude active tuberculosis after syringe exchange-based tuberculin skin testing. Prior cost modeling demonstrated that monetary incentives could be justified on the cost basis if they had even a modest effect on adherence. The current data demonstrated that monetary incentives are highly effective at increasing adherence in this setting and therefore are justifiable on a cost basis. 
When health care interventions for drug users require referral off site, monetary incentives may be particularly valuable in promoting adherence.
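The unadjusted odds ratio reported above can be checked with a few lines of arithmetic, assuming the abstract's adherent/non-adherent counts form a standard 2x2 table (incentive vs. no incentive); the variable names below are illustrative only.

```python
# Quick check of the reported unadjusted odds ratio from the counts in
# the abstract: 46/58 adherent with the incentive vs. 17/119 without.

adh_inc, n_inc = 46, 58    # adherent with incentive, total with incentive
adh_no, n_no = 17, 119     # adherent without incentive, total without

odds_inc = adh_inc / (n_inc - adh_inc)  # 46/12
odds_no = adh_no / (n_no - adh_no)      # 17/102
odds_ratio = odds_inc / odds_no
print(round(odds_ratio, 1))  # → 23.0, matching the reported OR=23
```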
Teaching of transcendence in physics
NASA Astrophysics Data System (ADS)
Jaki, Stanley L.
1987-10-01
Efforts aimed at showing that modern physics points to a truly transcendental factor as the explanation of the universe should be welcomed by those who have urged the teaching of physics in a broad cultural context. Those efforts may profit from the following guidelines: avoid the antiontological basis of the Copenhagen interpretation of quantum mechanics; make much of the reality of the universe and its enormous degree of specificity as revealed by general relativity and the cosmic background radiation; exploit Gödel's incompleteness theorems against any grand unified theory proposed as if it were true a priori and necessarily; and realize that the design argument always presupposes the validity of the cosmological argument.
Border-ownership-dependent tilt aftereffect in incomplete figures
NASA Astrophysics Data System (ADS)
Sugihara, Tadashi; Tsuji, Yoshihisa; Sakai, Ko
2007-01-01
A recent physiological finding of neural coding for border ownership (BO) that defines the direction of a figure with respect to the border has provided a possible basis for figure-ground segregation. To explore the underlying neural mechanisms of BO, we investigated stimulus configurations that activate BO circuitry through psychophysical investigation of the BO-dependent tilt aftereffect (BO-TAE). Specifically, we examined robustness of the border ownership signal by determining whether the BO-TAE is observed when gestalt factors are broken. The results showed significant BO-TAEs even when a global shape was not explicitly given due to the ambiguity of the contour, suggesting a contour-independent mechanism for BO coding.
NASA Astrophysics Data System (ADS)
Bailey, Jon A.; Jang, Yong-Chull; Lee, Weonjong; Leem, Jaehoon
2018-03-01
The CKM matrix element |Vcb| can be extracted by combining data from experiments with lattice QCD results for the semileptonic form factors for the B̄ → D(*)ℓν̄ decays. The Oktay-Kronfeld (OK) action was designed to reduce heavy-quark discretization errors to below 1%, or through O(λ³) in HQET power counting. Here we describe recent progress on bottom-to-charm currents improved to the same order in HQET as the OK action, and correct formerly reported results of our matching calculations, in which the operator basis was incomplete.
Simmons, David
2011-01-01
This article explores the utility of ethnography in accounting for healers’ understandings of HIV/AIDS—and more generally sexually transmitted infections—and the planning of HIV/AIDS education interventions targeting healers in urban Zimbabwe. I argue that much of the information utilized for planning and implementing such programs is actually based on rapid research procedures (usually single-method survey-based approaches) that do not fully capture healers’ explanatory frameworks. This incomplete information then becomes authoritative knowledge about local ‘traditions' and forms the basis for the design and implementation of training programs. Such decontextualization may, in turn, affect program effectiveness. PMID:21343161
NASA Astrophysics Data System (ADS)
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
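As a concrete instance of the extrapolation schemes surveyed in this abstract, the widely used two-point inverse-cubic form for the correlation energy can be sketched as follows. The input energies are made-up placeholders, not data from the review, and `cbs_extrapolate` is an illustrative helper name.

```python
# Two-point basis-set extrapolation sketch for the correlation energy,
# using the common X^-3 (inverse-cubic) form. The energies below are
# hypothetical placeholders, not results from the review above.

def cbs_extrapolate(e_x, x, e_y, y):
    """E_CBS = (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3), cardinal numbers X < Y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical cc-pVTZ (X=3) and cc-pVQZ (X=4) correlation energies (hartree):
e_cbs = cbs_extrapolate(-0.300, 3, -0.320, 4)
print(round(e_cbs, 4))  # ≈ -0.3346
```

Single-level schemes of the kind the review highlights instead estimate the limit from one raw energy plus empirical parameters, trading some rigor for cost.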
Quantum mechanical reality according to Copenhagen 2.0
NASA Astrophysics Data System (ADS)
Din, Allan M.
2016-05-01
The long-standing conceptual controversies concerning the interpretation of nonrelativistic quantum mechanics are argued, on one hand, to be due to its incompleteness, as affirmed by Einstein. But on the other hand, it appears to be possible to complete it at least partially, as Bohr might have appreciated it, in the framework of its standard mathematical formalism with observables as appropriately defined self-adjoint operators. This completion of quantum mechanics is based on the requirement on laboratory physics to be effectively confined to a bounded space region and on the application of the von Neumann deficiency theorem to properly define a set of self-adjoint extensions of standard observables, e.g. the momenta and the Hamiltonian, in terms of certain isometries on the region boundary. This is formalized mathematically in the setting of a boundary ontology for the so-called Qbox in which the wave function acquires a supplementary dependence on a set of Additional Boundary Variables (ABV). It is argued that a certain geometric subset of the ABV parametrizing Quasi-Periodic Translational Isometries (QPTI) has a particular physical importance by allowing for the definition of an ontic wave function, which has the property of epitomizing the spatial wave function “collapse.” Concomitantly the standard wave function in an unbounded geometry is interpreted as an epistemic wave function, which together with the ontic QPTI wave function gives rise to the notion of two-wave duality, replacing the standard concept of wave-particle duality. More generally, this approach to quantum physics in a bounded geometry provides a novel analytical basis for a better understanding of several conceptual notions of quantum mechanics, including reality, nonlocality, entanglement and Heisenberg’s uncertainty relation. 
The scope of this analysis may be seen as a foundational update of the multiple versions 1.x of the Copenhagen interpretation of quantum mechanics, which is sufficiently incremental so as to be appropriately characterized as Copenhagen 2.0.
Code of Federal Regulations, 2013 CFR
2013-04-01
... chapter), which shall set forth the nature and effective date of the action taken and shall provide any...), within: (i) Ten days after any action is taken that renders inaccurate, or that causes to be incomplete, any information filed on the Execution Page of Form 1-N (§ 249.10 of this chapter), or amendment...
Code of Federal Regulations, 2014 CFR
2014-04-01
... chapter), which shall set forth the nature and effective date of the action taken and shall provide any...), within: (i) Ten days after any action is taken that renders inaccurate, or that causes to be incomplete, any information filed on the Execution Page of Form 1-N (§ 249.10 of this chapter), or amendment...
Learning in data-limited multimodal scenarios: Scandent decision forests and tree-based features.
Hor, Soheil; Moradi, Mehdi
2016-12-01
Incomplete and inconsistent datasets often pose difficulties in multimodal studies. We introduce the concept of scandent decision trees to tackle these difficulties. Scandent trees are decision trees that optimally mimic the partitioning of the data determined by another decision tree, and crucially, use only a subset of the feature set. We show how scandent trees can be used to enhance the performance of decision forests trained on a small number of multimodal samples when we have access to larger datasets with vastly incomplete feature sets. Additionally, we introduce the concept of tree-based feature transforms in the decision forest paradigm. When combined with scandent trees, the tree-based feature transforms enable us to train a classifier on a rich multimodal dataset, and use it to classify samples with only a subset of features of the training data. Using this methodology, we build a model trained on MRI and PET images of the ADNI dataset, and then test it on cases with only MRI data. We show that this is significantly more effective in staging of cognitive impairments compared to a similar decision forest model trained and tested on MRI only, or one that uses other kinds of feature transform applied to the MRI data. Copyright © 2016. Published by Elsevier B.V.
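The core scandent-tree idea, a secondary tree fit to mimic the partitioning of a primary tree while using only the features available at test time, can be sketched in miniature with single-threshold "stumps". The data, feature names, and `fit_stump` helper are hypothetical illustrations, not the paper's method or the ADNI data.

```python
# Toy sketch of the scandent-tree idea: a secondary stump (using only the
# feature available at test time) is fit to reproduce the partition of a
# primary stump that uses a feature missing at test time. All values are
# hypothetical illustrations.

def fit_stump(xs, labels):
    """Pick the threshold on xs that best separates the binary labels."""
    best = None
    for t in sorted(set(xs)):
        pred = [1 if x >= t else 0 for x in xs]
        acc = sum(p == l for p, l in zip(pred, labels)) / len(labels)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best[0]

# Multimodal training set: feature A (e.g. PET-derived), feature B (MRI-derived).
feat_a = [0.1, 0.2, 0.8, 0.9]
feat_b = [1.0, 1.2, 3.1, 3.3]

# Primary stump partitions on feature A; its leaf assignments become the
# targets that the "scandent" stump (using only feature B) must mimic.
t_a = fit_stump(feat_a, [0, 0, 1, 1])
partition = [1 if x >= t_a else 0 for x in feat_a]
t_b = fit_stump(feat_b, partition)

# At test time only feature B is available:
print([1 if x >= t_b else 0 for x in [1.1, 3.2]])  # mimics the primary split
```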
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function pair as a sum of ramps on a single atomic centre.
Michelessi, Manuele; Lucenteforte, Ersilia; Miele, Alba; Oddone, Francesco; Crescioli, Giada; Fameli, Valeria; Korevaar, Daniël A; Virgili, Gianni
2017-01-01
Research has shown a modest adherence of diagnostic test accuracy (DTA) studies in glaucoma to the Standards for Reporting of Diagnostic Accuracy Studies (STARD). We have applied the updated 30-item STARD 2015 checklist to a set of studies included in a Cochrane DTA systematic review of imaging tools for diagnosing manifest glaucoma. Three pairs of reviewers, including one senior reviewer who assessed all studies, independently checked the adherence of each study to STARD 2015. Adherence was analyzed on an individual-item basis. Logistic regression was used to evaluate the effect of publication year and impact factor on adherence. We included 106 DTA studies, published between 2003 and 2014 in journals with a median impact factor of 2.6. Overall adherence was 54.1% for 3,286 individual ratings across 31 items, with a mean of 16.8 (SD: 3.1; range 8-23) items per study. Large variability in adherence to reporting standards was detected across individual STARD 2015 items, ranging from 0 to 100%. Nine items (1: identification as diagnostic accuracy study in title/abstract; 6: eligibility criteria; 10: index test (a) and reference standard (b) definition; 12: cut-off definitions for index test (a) and reference standard (b); 14: estimation of diagnostic accuracy measures; 21a: severity spectrum of diseased; 23: cross-tabulation of the index and reference standard results) were adequately reported in more than 90% of the studies. Conversely, 10 items (3: scientific and clinical background of the index test; 11: rationale for the reference standard; 13b: blinding of index test results; 17: analyses of variability; 18: sample size calculation; 19: study flow diagram; 20: baseline characteristics of participants; 28: registration number and registry; 29: availability of study protocol; 30: sources of funding) were adequately reported in less than 30% of the studies.
Only four items showed a statistically significant improvement over time: missing data (16), baseline characteristics of participants (20), estimates of diagnostic accuracy (24) and sources of funding (30). Adherence to STARD 2015 among DTA studies in glaucoma research is incomplete, and only modestly increasing over time.
Risk factor assessment of endoscopically removed malignant colorectal polyps
Netzer, P; Forster, C; Biral, R; Ruchti, C; Neuweiler, J; Stauffer, E; Schonegg, R; Maurer, C; Husler, J; Halter, F; Schmassmann, A
1998-01-01
Background—Malignant colorectal polyps are defined as endoscopically removed polyps with cancerous tissue which has invaded the submucosa. Various histological criteria exist for managing these patients. Aims—To determine the significance of histological findings of patients with malignant polyps. Methods—Five pathologists reviewed the specimens of 85 patients initially diagnosed with malignant polyps. High risk malignant polyps were defined as having one of the following: incomplete polypectomy, a margin not clearly cancer-free, lymphatic or venous invasion, or grade III carcinoma. Adverse outcome was defined as residual cancer in a resection specimen and local or metastatic recurrence in the follow up period (mean 67 months). Results—Malignant polyps were confirmed in 70 cases. In the 32 low risk malignant polyps, no adverse outcomes occurred; 16 (42%) of the 38 patients with high risk polyps had adverse outcomes (p<0.001). Independent adverse risk factors were incomplete polypectomy and a resected margin not clearly cancer-free; all other risk factors were only associated with adverse outcome when in combination. Conclusion—As no patients with low risk malignant polyps had adverse outcomes, polypectomy alone seems sufficient for these cases. In the high risk group, surgery is recommended when either of the two independent risk factors, incomplete polypectomy or a resection margin not clearly cancer-free, is present or if there is a combination of other risk factors. As lymphatic or venous invasion or grade III cancer did not have an adverse outcome when the sole risk factor, operations in such cases should be individually assessed on the basis of surgical risk. Keywords: malignant polyps; colon cancer; colonoscopy; polypectomy; histology PMID:9824349
Quantum Mechanical Calculations of Monoxides of Silicon Carbide Molecules
2003-03-01
[Extracted table residue, only partially recoverable: tabulated final energies (hartree), electron affinities (eV, with and without zero-point energy correction), and zero-point energies for CO and O-C-Si species at various charge and multiplicity states in DZV basis sets.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, S.; Shinada, M.; Matsuoka, O.
1990-10-01
A systematic calculation of new relativistic Gaussian basis sets is reported. The new basis sets are similar to the previously reported ones (J. Chem. Phys. 91, 4193 (1989)), but, in the calculation, the Breit interaction has been explicitly included besides the Dirac-Coulomb Hamiltonian. They have been adopted for the calculation of the self-consistent field effect on the Breit interaction energies and are expected to be useful for studies of higher-order effects such as electron correlation and other quantum electrodynamical effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Harrison, Robert J.; Dixon, David A.
A parallel implementation of the spin-free one-electron Douglas-Kroll(-Hess) Hamiltonian (DKH) in NWChem is discussed. An efficient and accurate method to calculate DKH gradients is introduced. It is shown that the use of standard (non-relativistic) contracted basis sets can produce erroneous results for elements beyond the first-row elements. The generation of DKH-contracted cc-pVXZ (X = D, T, Q, 5) basis sets for H, He, B - Ne, Al - Ar, and Ga - Br is also discussed.
NASA Astrophysics Data System (ADS)
Sanchez, Marina; Provasi, Patricio F.; Aucar, Gustavo A.; Sauer, Stephan P. A.
Locally dense basis sets (
NASA Astrophysics Data System (ADS)
Delvaux, Damien; Mulumba, Jean-Luc; Sebagenzi, Mwene Ntabwoba Stanislas; Bondo, Silvanos Fiama; Kervyn, François; Havenith, Hans-Balder
2017-10-01
In the frame of the Belgian GeoRisCA multi-risk assessment project, which focuses on the Kivu and northern Tanganyika rift region in Central Africa, a new probabilistic seismic hazard assessment has been performed for the Kivu rift segment in the central part of the western branch of the East African rift system. As the geological and tectonic setting of this region is incompletely known, especially the part lying in the Democratic Republic of the Congo, we compiled homogeneous cross-border tectonic and neotectonic maps. The hazard assessment rests on a new earthquake catalogue, built from the ISC reviewed earthquake catalogue and supplemented by other local catalogues and new macroseismic epicenter data spanning 126 years, with 1068 events. The magnitudes have been homogenized to Mw and aftershocks removed. The final catalogue used for the seismic hazard assessment spans 60 years, from 1955 to 2015, with 359 events and a magnitude of completeness of 4.4. The seismotectonic zonation into 7 seismic source areas was done on the basis of the regional geological structure, neotectonic fault systems, basin architecture, and the distribution of thermal springs and earthquake epicenters. The Gutenberg-Richter seismic hazard parameters were determined by least-squares linear fitting and the maximum-likelihood method. Seismic hazard maps have been computed using existing attenuation laws with the Crisis 2012 software. We obtained higher PGA values (475-year return period) for the Kivu rift region than the previous estimates. They also vary laterally as a function of the tectonic setting, with the lowest values in the volcanically active Virunga - Rutshuru zone, the highest in the currently non-volcanic parts of Lake Kivu, the Rusizi valley, and the North Tanganyika rift zone, and intermediate values in the regions flanking the axial rift zone.
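One of the two Gutenberg-Richter estimation routes mentioned above, the maximum-likelihood method, has a compact closed form (the Aki-Utsu estimator, b = log10(e) / (M̄ − (Mc − Δm/2))). A sketch under that assumption follows; the synthetic catalogue is a placeholder constructed to follow b ≈ 1, not the Kivu data.

```python
# Aki-Utsu maximum-likelihood b-value sketch for a Gutenberg-Richter fit,
# log10 N(>=M) = a - b*M. The synthetic magnitudes below are placeholders,
# not the Kivu rift catalogue.
import math

def b_value_ml(mags, m_c, dm=0.1):
    """Aki-Utsu ML b-value for magnitudes >= completeness m_c, binned at dm."""
    sel = [m for m in mags if m >= m_c]
    mean_m = sum(sel) / len(sel)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2))

# Synthetic catalogue: event counts decaying tenfold per magnitude unit (b ~ 1),
# starting at the abstract's completeness magnitude of 4.4.
mags = [4.4 + 0.1 * i for i in range(20)
        for _ in range(int(10 ** (2 - 0.1 * i)))]
print(round(b_value_ml(mags, 4.4), 2))
```

The least-squares route instead regresses log-cumulative counts on magnitude; the ML form is preferred when counts near the completeness magnitude dominate.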
Chung, Heaseung Sophia; Murray, Christopher I; Venkatraman, Vidya; Crowgey, Erin L; Rainer, Peter P; Cole, Robert N; Bomgarden, Ryan D; Rogers, John C; Balkan, Wayne; Hare, Joshua M; Kass, David A; Van Eyk, Jennifer E
2015-10-23
S-nitrosylation (SNO), an oxidative post-translational modification of cysteine residues, responds to changes in the cardiac redox environment. The classic biotin-switch assay and its derivatives are the most common methods used for detecting SNO. In this approach, the labile SNO group is selectively replaced with a single stable tag. To date, a variety of thiol-reactive tags have been introduced. However, these methods have not produced a consistent data set, which suggests an incomplete capture by a single tag and potentially the presence of different cysteine subpopulations. Our aims were to investigate potential labeling bias in the existing single-tag methods for detecting SNO, to explore whether there are distinct cysteine subpopulations, and to develop a strategy that maximizes coverage of the SNO proteome. We obtained SNO-modified cysteine data sets for wild-type and S-nitrosoglutathione reductase knockout mouse hearts (S-nitrosoglutathione reductase is a negative regulator of S-nitrosoglutathione production) and nitric oxide-induced human embryonic kidney cells using 2 labeling reagents: the cysteine-reactive pyridyldithiol and iodoacetyl based tandem mass tags. Comparison revealed that <30% of the SNO-modified residues were detected by both tags, whereas the remaining SNO sites were only labeled by 1 reagent. Characterization of the 2 distinct subpopulations of SNO residues indicated that the pyridyldithiol reagent preferentially labels cysteine residues that are more basic and hydrophobic. On the basis of this observation, we proposed a parallel dual-labeling strategy followed by an optimized proteomics workflow. This enabled the profiling of 493 SNO sites in S-nitrosoglutathione reductase knockout hearts. Using a protocol comprising 2 tags for dual labeling maximizes overall detection of SNO by reducing the previously unrecognized labeling bias derived from different cysteine subpopulations. © 2015 American Heart Association, Inc.
Expert opinion on landslide susceptibility elicited by probabilistic inversion from scenario rankings
NASA Astrophysics Data System (ADS)
Lee, Katy; Dashwood, Claire; Lark, Murray
2016-04-01
For many natural hazards the opinion of experts, with experience in assessing susceptibility under different circumstances, is a valuable source of information on which to base risk assessments. This is particularly important where incomplete process understanding, and limited data, limit the scope to predict susceptibility by mechanistic or statistical modelling. The expert has a tacit model of a system, based on their understanding of processes and their field experience. This model may vary in quality, depending on the experience of the expert. There is considerable interest in how one may elicit expert understanding by a process which is transparent and robust, to provide a basis for decision support. One approach is to provide experts with a set of scenarios, and then to ask them to rank small overlapping subsets of these with respect to susceptibility. Methods of probabilistic inversion have been used to compute susceptibility scores for each scenario, implicit in the expert ranking. It is also possible to model these scores as functions of measurable properties of the scenarios. This approach has been used to assess susceptibility of animal populations to invasive diseases, to assess risk to vulnerable marine environments, and to assess the risk in hypothetical novel technologies for food production. We will present the results of a study in which a group of geologists with varying degrees of expertise in assessing landslide hazards were asked to rank sets of hypothetical simplified scenarios with respect to landslide susceptibility. We examine the consistency of their rankings and the importance of different properties of the scenarios in the tacit susceptibility model that their rankings implied. Our results suggest that this is a promising approach to the problem of how experts can communicate their tacit model of uncertain systems to those who want to make use of their expertise.
Near Hartree-Fock quality GTO basis sets for the first- and third-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry
1989-01-01
Energy-optimized Gaussian-type-orbital (GTO) basis sets of accuracy approaching that of numerical Hartree-Fock computations are compiled for the elements of the first and third rows of the periodic table. The methods employed in calculating the sets are explained; the applicability of the sets to electronic-structure calculations is discussed; and the results are presented in tables and briefly characterized.
Hahn, David K; RaghuVeer, Krishans; Ortiz, J V
2014-05-15
Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions via hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained via the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, obtained the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.
No need for external orthogonality in subsystem density-functional theory.
Unsleber, Jan P; Neugebauer, Johannes; Jacob, Christoph R
2016-08-03
Recent reports on the necessity of using externally orthogonal orbitals in subsystem density-functional theory (SDFT) [Annu. Rep. Comput. Chem., 8, 2012, 53; J. Phys. Chem. A, 118, 2014, 9182] are re-investigated. We show that in the basis-set limit, supermolecular Kohn-Sham-DFT (KS-DFT) densities can exactly be represented as a sum of subsystem densities, even if the subsystem orbitals are not externally orthogonal. This is illustrated using both an analytical example and in basis-set free numerical calculations for an atomic test case. We further show that even with finite basis sets, SDFT calculations using accurate reconstructed potentials can closely approach the supermolecular KS-DFT density, and that the deviations between SDFT and KS-DFT decrease as the basis-set limit is approached. Our results demonstrate that formally, there is no need to enforce external orthogonality in SDFT, even though this might be a useful strategy when developing projection-based DFT embedding schemes.
Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth
2015-10-13
We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.
Core-core and core-valence correlation
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be ¹S-¹P, C ³P-⁵S, and CH⁺ ¹Σ⁺-¹Π separations, and the CH⁺ spectroscopic constants, dipole moment, and ¹Σ⁺-¹Π transition dipole moment were studied. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated, and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.
Structural basis of the 9-fold symmetry of centrioles.
Kitagawa, Daiju; Vakonakis, Ioannis; Olieric, Natacha; Hilbert, Manuel; Keller, Debora; Olieric, Vincent; Bortfeld, Miriam; Erat, Michèle C; Flückiger, Isabelle; Gönczy, Pierre; Steinmetz, Michel O
2011-02-04
The centriole, and the related basal body, is an ancient organelle characterized by a universal 9-fold radial symmetry and is critical for generating cilia, flagella, and centrosomes. The mechanisms directing centriole formation are incompletely understood and represent a fundamental open question in biology. Here, we demonstrate that the centriolar protein SAS-6 forms rod-shaped homodimers that interact through their N-terminal domains to form oligomers. We establish that such oligomerization is essential for centriole formation in C. elegans and human cells. We further generate a structural model of the related protein Bld12p from C. reinhardtii, in which nine homodimers assemble into a ring from which nine coiled-coil rods radiate outward. Moreover, we demonstrate that recombinant Bld12p self-assembles into structures akin to the central hub of the cartwheel, which serves as a scaffold for centriole formation. Overall, our findings establish a structural basis for the universal 9-fold symmetry of centrioles. Copyright © 2011 Elsevier Inc. All rights reserved.
Klooster, D C W; de Louw, A J A; Aldenkamp, A P; Besseling, R M H; Mestrom, R M C; Carrette, S; Zinger, S; Bergmans, J W M; Mess, W H; Vonck, K; Carrette, E; Breuer, L E M; Bernas, A; Tijhuis, A G; Boon, P
2016-06-01
Neuromodulation is a field of science, medicine, and bioengineering that encompasses implantable and non-implantable technologies for the purpose of improving quality of life and functioning of humans. Brain neuromodulation involves different neurostimulation techniques: transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), vagus nerve stimulation (VNS), and deep brain stimulation (DBS), which are being used both to study their effects on cognitive brain functions and to treat neuropsychiatric disorders. The mechanisms of action of neurostimulation remain incompletely understood. Insight into the technical basis of neurostimulation might be a first step towards a more profound understanding of these mechanisms, which might lead to improved clinical outcome and therapeutic potential. This review provides an overview of the technical basis of neurostimulation focusing on the equipment, the present understanding of induced electric fields, and the stimulation protocols. The review is written from a technical perspective aimed at supporting the use of neurostimulation in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cystic fibrosis: myths, mistakes, and dogma.
Rubin, Bruce K
2014-03-01
As a student I recall being told that half of what we would learn in medical school would be proven to be wrong. The challenges were to identify the incorrect half and, often more challenging, be willing to give up our entrenched ideas. Myths have been defined as traditional concepts or practices with no basis in fact. A misunderstanding is a mistaken approach or incomplete knowledge that can be resolved with better evidence, while firmly established misunderstandings can become dogma: a point of view put forth as authoritative without basis in fact. In this paper, I explore a number of myths, mistakes, and dogma related to cystic fibrosis disease and care. Many of these are myths that have long been vanquished and even forgotten, while others are controversial. In the future, many things taken as either fact or "clinical experience" today will be proven wrong. Let us examine these myths with an open mind and willingness to change our beliefs when justified. Copyright © 2013 Elsevier Ltd. All rights reserved.
Numerical judgments by chimpanzees (Pan troglodytes) in a token economy.
Beran, Michael J; Evans, Theodore A; Hoyle, Daniel
2011-04-01
We presented four chimpanzees with a series of tasks that involved comparing two token sets or comparing a token set to a quantity of food. Selected tokens could be exchanged for food items on a one-to-one basis. Chimpanzees successfully selected the larger numerical set for comparisons of 1 to 5 items when both sets were visible and when sets were presented through one-by-one addition of tokens into two opaque containers. Two of four chimpanzees used the number of tokens and food items to guide responding in all conditions, rather than relying on token color, size, total amount, or duration of set presentation. These results demonstrate that judgments of simultaneous and sequential sets of stimuli are made by some chimpanzees on the basis of the numerousness of sets rather than other non-numerical dimensions. The tokens were treated as equivalent to food items on the basis of their numerousness, and the chimpanzees maximized reward by choosing the larger number of items in all situations.
Correlation consistent basis sets for actinides. I. The Th and U atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Kirk A., E-mail: kipeters@wsu.edu
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as for outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilize the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, at both the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThF{sub n} (n = 2-4), ThO{sub 2}, and UF{sub n} (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF{sub 4}, ThF{sub 3}, ThF{sub 2}, and ThO{sub 2} are all within their experimental uncertainties. Bond dissociation energies of ThF{sub 4} and ThF{sub 3}, as well as of UF{sub 6} and UF{sub 5}, were similarly accurate. The derived enthalpies of formation for these species also showed very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF{sub 4} and ThO{sub 2}.
The DKH3 atomization energy of ThO{sub 2} was calculated to be smaller than the DKH2 value by ∼1 kcal/mol.
Segmented all-electron Gaussian basis sets of double and triple zeta qualities for Fr, Ra, and Ac
NASA Astrophysics Data System (ADS)
Campos, C. T.; de Oliveira, A. Z.; Ferreira, I. B.; Jorge, F. E.; Martins, L. S. C.
2017-05-01
Segmented all-electron basis sets of valence double and triple zeta qualities plus polarization functions for the elements Fr, Ra, and Ac are generated using non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. The sets are augmented with diffuse functions in order to describe appropriately the electrons far from the nuclei. At the DKH-B3LYP level, first atomic ionization energies as well as bond lengths, dissociation energies, and polarizabilities of a sample of diatomics are calculated. Comparison with theoretical and experimental data available in the literature is carried out. It is verified that, despite the small sizes of the basis sets, they are still reliable.
NASA Astrophysics Data System (ADS)
Champagne, Benoît; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi
2005-03-01
The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations show that the linear and nonlinear responses of the radical cation necessitate a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8, whereas diffuse functions are compulsory for C5H7, in particular p diffuse functions. In addition to the 6-31G*+pd basis set, basis sets obtained by removing unnecessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality to more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the method of calculation, antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively.
For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples level, whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. Concerning the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality, while using spin-projected schemes fails for the charged system but performs nicely for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining an accurate polarizability, whereas for the second hyperpolarizability the gradient corrections are large.
Rational Density Functional Selection Using Game Theory.
McAnanama-Brereton, Suzanne; Waller, Mark P
2018-01-22
Theoretical chemistry has a paradox of choice due to the availability of a myriad of density functionals and basis sets. Traditionally, a particular density functional is chosen on the basis of the level of user expertise (i.e., subjective experiences). Herein we circumvent the user-centric selection procedure by describing a novel approach for objectively selecting a particular functional for a given application. We achieve this by employing game theory to identify optimal functional/basis set combinations. A three-player (accuracy, complexity, and similarity) game is devised, through which Nash equilibrium solutions can be obtained. This approach has the advantage that results can be systematically improved by enlarging the underlying knowledge base, and the deterministic selection procedure mathematically justifies the density functional and basis set selections.
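The selection procedure described above can be illustrated with a deliberately simplified sketch: a two-player bimatrix game (the paper's actual game has three players: accuracy, complexity, and similarity) in which one player scores functional/basis combinations by accuracy and the other by computational cheapness, and pure-strategy Nash equilibria are found by best-response enumeration. All functional and basis-set names and every payoff value below are hypothetical, chosen only to make the mechanics concrete.

```python
import itertools

# Hypothetical payoffs: rows index density functionals, columns index basis sets.
# Player "accuracy" prefers higher accuracy scores; player "complexity"
# prefers cheaper (higher cheapness score) choices. Values are made up.
functionals = ["B3LYP", "PBE", "M06-2X"]
basis_sets = ["def2-SVP", "def2-TZVP"]

accuracy = [[3, 5], [2, 4], [4, 6]]    # higher = more accurate
complexity = [[5, 3], [6, 4], [3, 1]]  # higher = cheaper

def pure_nash(pay_a, pay_b):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.

    A cell (i, j) is an equilibrium when row i is a best response to
    column j for player A and column j is a best response to row i for
    player B (no unilateral deviation improves either payoff).
    """
    rows, cols = len(pay_a), len(pay_a[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        best_row = all(pay_a[i][j] >= pay_a[k][j] for k in range(rows))
        best_col = all(pay_b[i][j] >= pay_b[i][l] for l in range(cols))
        if best_row and best_col:
            equilibria.append((functionals[i], basis_sets[j]))
    return equilibria

print(pure_nash(accuracy, complexity))
```

With these illustrative payoffs a single equilibrium survives: the combination that neither player can improve on by deviating alone, which is the deterministic-selection idea the abstract describes.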
Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting
NASA Astrophysics Data System (ADS)
Weatherford, Shawn A.
2011-12-01
Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of providing students with the experience of applying the same small set of fundamental principles to model a wide range of physical systems. However, there are significant instructional challenges for students building computer programs under limited time constraints, especially for students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities were successful at having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the exact same tasks and, further, had difficulty predicting the animation that would be produced by the example program after interpreting the program code. This study explores the interpretation and prediction tasks as part of an instructional sequence where students are asked to read and comprehend a functional but incomplete program. Rather than asking students to begin their computational tasks with modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines of code correspond to the algebraic form of fundamental physics principles or the calculation of forces which would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate the students' prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction.
This study also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their prediction solely on the information in the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build a comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction for the time-evolution of the real-world phenomena which the example program will eventually model.
This thesis shows how comprehension tasks identified by Palincsar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program which attempts to model a real-world phenomenon and identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.
Plasser, Felix; Mewes, Stefanie A; Dreuw, Andreas; González, Leticia
2017-11-14
High-level multireference computations on electronically excited and charged states of tetracene are performed, and the results are analyzed using an extensive wave function analysis toolbox that has been newly implemented in the Molcas program package. Aside from verifying the strong effect of dynamic correlation, this study reveals an unexpected critical influence of the atomic orbital basis set. It is shown that different polarized double-ζ basis sets produce significantly different results for energies, densities, and overall wave functions, with the best performance obtained for the atomic natural orbital (ANO) basis set by Pierloot et al. Strikingly, the ANO basis set not only reproduces the energies but also performs exceptionally well in terms of describing the diffuseness of the different states and of their attachment/detachment densities. This study, thus, not only underlines the fact that diffuse basis functions are needed for an accurate description of the electronic wave functions but also shows that, at least for the present example, it is enough to include them implicitly in the contraction scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, James P.; Fortman, Sarah M.; Neese, Christopher F.
2016-05-20
Because methyl formate (HCOOCH{sub 3}) is abundant in the interstellar medium and has a strong, complex spectrum, it is a major contributor to the list of identified astrophysical lines. Because of its spectral complexity, with many low lying torsional and vibrational states, the quantum mechanical (QM) analysis of its laboratory spectrum is challenging and thus incomplete. As a result it is assumed that methyl formate is also one of the major contributors to the lists of unassigned lines in astrophysical spectra. This paper provides a characterization, without the need for QM analysis, of the spectrum of methyl formate between 214.6 and 265.4 GHz for astrophysically significant temperatures. The experimental basis for this characterization is a set of 425 spectra, with absolute intensity calibration, recorded between 248 and 408 K. Analysis of these spectra makes possible the calculation of the Complete Experimental Spectrum of methyl formate as a function of temperature. Of the 7132 strongest lines reported in this paper, 2523 are in the QM catalogs. Intensity differences of 5%–10% from those calculated via QM models were also found. Results are provided in a frequency point-by-point catalog that is well suited for the simulation of overlapped spectra. The common astrophysical line frequency, line strength, and lower state energy catalog is also provided.
Kerr, I D; Sansom, M S
1997-01-01
Although there is a large body of site-directed mutagenesis data that identify the pore-lining sequence of the voltage-gated potassium channel, the structure of this region remains unknown. We have interpreted the available biochemical data as a set of topological and orientational restraints and employed these restraints to produce molecular models of the potassium channel pore region, H5. The H5 sequence has been modeled either as a tetramer of membrane-spanning beta-hairpins, thus producing an eight-stranded beta-barrel, or as a tetramer of incompletely membrane-spanning alpha-helical hairpins, thus producing an eight-staved alpha-helix bundle. In total, restraints-directed modeling has produced 40 different configurations of the beta-barrel model, each configuration comprising an ensemble of 20 structures, and 24 different configurations of the alpha-helix bundle model, each comprising an ensemble of 24 structures. Thus, over 1300 model structures for H5 have been generated. Configurations have been ranked on the basis of their predicted pore properties and on the extent of their agreement with the biochemical data. This ranking is employed to identify particular configurations of H5 that may be explored further as models of the pore-lining region of the voltage-gated potassium channel pore. PMID:9251779
Hydraulics calculation in drilling simulator
NASA Astrophysics Data System (ADS)
Malyugin, Aleksey A.; Kazunin, Dmitry V.
2018-05-01
The modeling of drilling hydraulics in the simulator system is discussed. This model is based on the previously developed quasi-steady model of an incompressible fluid flow. The model simulates the operation of all parts of the hydraulic drilling system. Based on the principles of creating a common hydraulic model, a set of new elements for well hydraulics was developed. It includes elements that correspond to the in-drillstring and annular space. There are elements controlling the inflow from the reservoir into the well and simulating the lift of gas along the annulus. New elements of the hydrosystem take into account the changing geometry of the well, losses in the bit, and characteristics of the fluids, including viscoplasticity. There is an opportunity to specify complications, the main ones being gas, oil, and water inflow. Correct behavior of the models in cases of complications makes it possible to practice various methods for their elimination. The coefficients of the model are adjusted on the basis of incomplete experimental data provided by operators of drilling platforms. At the end of the article the results of modeling the elimination of gas inflow by a continuous method are presented. The values displayed in the simulator (drill pipe pressure, annulus pressure, input and output flow rates) are in good agreement with the experimental data. This exercise took one hour, which is less than the time required on a real rig with the same configuration of equipment and well.
Michael G. Shelton
1995-01-01
Five forest floor weights (0, 10, 20, 30, and 40 Mg/ha), three forest floor compositions (pine, pine-hardwood, and hardwood), and two seed placements (forest floor and soil surface) were tested in a three-factorial, split-plot design with four incomplete, randomized blocks. The experiment was conducted in a nursery setting and used wooden frames to define 0.145-m
WHOI Hawaii Ocean Timeseries Station (WHOTS): WHOTS-6 2009 Mooring Turnaround Cruise Report
2010-01-01
pyranometers. This report describes the set-up on the ship, the procedures adopted, and some preliminary, and necessarily incomplete, results from...discrepancy in the net energy budget. The collection of recently calibrated pyranometers on this cruise, from two manufacturers and different...the bow tower. To complete the PSD air-sea flux system, pyranometers and pyrgeometers (Eppley and Kipp&Zonen) were mounted on top of a pole on the 03
2008-10-01
is theoretically similar to the concept of “partial or compact polarimetry”, yields comparable results to full or quadrature-polarized systems by...to the emerging “compact polarimetry” methodology [9]-[13] that exploits scattering system response to an incomplete set of input EM field components...a scattering operator or matrix. Although as theoretically discussed earlier, performance of such fully-polarized radar system (i.e., quadrature
On six-dimensional pseudo-Riemannian almost g.o. spaces
NASA Astrophysics Data System (ADS)
Dušek, Zdeněk; Kowalski, Oldřich
2007-09-01
We modify the "Kaplan example" (a six-dimensional nilpotent Lie group which is a Riemannian g.o. space) and we obtain two pseudo-Riemannian homogeneous spaces with noncompact isotropy group. These examples have the property that all geodesics are homogeneous up to a set of measure zero. We also show that the (incomplete) geodesic graphs are strongly discontinuous at the boundary, i.e., the limits along certain curves are always infinite.
Convergence of third order correlation energy in atoms and molecules.
Kahn, Kalju; Granovsky, Alex A; Noga, Jozef
2007-01-30
We have investigated the convergence of the third order correlation energy within the hierarchies of correlation consistent basis sets for helium, neon, and water, and for three stationary points of hydrogen peroxide. This analysis confirms that singlet pair energies converge much more slowly than triplet pair energies. In addition, singlet pair energies with the (aug)-cc-pVDZ and (aug)-cc-pVTZ basis sets do not follow a converging trend, and energies with three basis sets larger than aug-cc-pVTZ are generally required for reliable extrapolations of third order correlation energies, making explicitly correlated R12 calculations preferable.
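Extrapolations of correlation energies over correlation consistent basis sets conventionally assume the inverse-cubic form E(n) = E_CBS + A·n⁻³, where n is the cardinal number of the cc-pVnZ set. A minimal sketch of the standard two-point version of this scheme (the function name and the synthetic test energies below are illustrative assumptions, not values from the paper):

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming
    E(n) = E_CBS + A * n**-3.

    e_x, e_y: correlation energies in basis sets with cardinal numbers x, y.
    Eliminating A from the two model equations gives the closed form below.
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic check: energies generated from an exact inverse-cubic model
# (illustrative numbers in hartree, not from the paper).
e_cbs_true = -76.0
e_tz = e_cbs_true + 0.05 / 3**3   # "triple-zeta" energy
e_qz = e_cbs_true + 0.05 / 4**3   # "quadruple-zeta" energy
print(cbs_extrapolate(e_tz, e_qz, 3, 4))  # recovers -76.0 to numerical precision
```

The abstract's observation that singlet and triplet pair energies converge at different rates is one reason such pair energies are sometimes extrapolated separately before being summed.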
Sensitivity of the Properties of Ruthenium “Blue Dimer” to Method, Basis Set, and Continuum Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozkanlar, Abdullah; Clark, Aurora E.
2012-05-23
The ruthenium “blue dimer” [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been subject to numerous computational studies primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of the “blue dimer” using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within the polarizable and conductor-like polarizable continuum models has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of the density functional and continuum cavity model employed.
NASA Astrophysics Data System (ADS)
Beloy, Kyle; Derevianko, Andrei
2008-09-01
The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V.M. Shabaev et al., Phys. Rev. Lett. 93 (2004) 130405] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37 (1988) 307-315]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s-7s transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach. In addition, we present a strategy for optimizing the size of the basis sets by choosing a progressively smaller number of basis functions for increasingly higher partial waves. This strategy exploits the suppression of contributions of high partial waves to typical many-body correlation corrections.
Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A
2016-02-05
Hartree-Fock and density functional theory with the hybrid B3LYP and general gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne, and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections obtained at the CCSD(T) level with the same basis sets were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings, and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.
2010-12-01
One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. 
Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.
NASA Astrophysics Data System (ADS)
Srivastava, H. M.; Saxena, R. K.; Parmar, R. K.
2018-01-01
Our present investigation is inspired by the recent interesting extensions (by Srivastava et al. [35]) of a pair of the Mellin-Barnes type contour integral representations of their incomplete generalized hypergeometric functions pγq and pΓq by means of the incomplete gamma functions γ(s, x) and Γ(s, x). Here, in this sequel, we introduce a family of the relatively more general incomplete H-functions γ_{p,q}^{m,n}(z) and Γ_{p,q}^{m,n}(z), as well as their special cases such as the incomplete Fox-Wright generalized hypergeometric functions pΨq^(γ)[z] and pΨq^(Γ)[z]. The main object of this paper is to study and investigate several interesting properties of these incomplete H-functions, including (for example) decomposition and reduction formulas, derivative formulas, various integral transforms, computational representations, and so on. We apply some substantially general Riemann-Liouville and Weyl type fractional integral operators to each of these incomplete H-functions. We indicate the easily derivable extensions of the results presented here that hold for the corresponding incomplete H̄-functions as well. Potential applications of many of these incomplete special functions involving (for example) probability theory are also indicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvetsky, Nitai, E-mail: gershom@weizmann.ac.il; Martin, Jan M. L., E-mail: gershom@weizmann.ac.il; Peterson, Kirk A., E-mail: kipeters@wsu.edu
2016-06-07
In the context of high-accuracy computational thermochemistry, the valence coupled cluster with all singles and doubles (CCSD) correlation component of molecular atomization energies presents the most severe basis set convergence problem, followed by the (T) component. In the present paper, we make a detailed comparison, for an expanded version of the W4-11 thermochemistry benchmark, between, on the one hand, orbital-based CCSD/AV{5,6}Z + d and CCSD/ACV{5,6}Z extrapolation, and on the other hand CCSD-F12b calculations with cc-pVQZ-F12 and cc-pV5Z-F12 basis sets. This latter basis set, now available for H–He, B–Ne, and Al–Ar, is shown to be very close to the basis set limit. Apparent differences (which can reach 0.35 kcal/mol for systems like CCl4) between orbital-based and CCSD-F12b basis set limits disappear if basis sets with additional radial flexibility, such as ACV{5,6}Z, are used for the orbital calculation. Counterpoise calculations reveal that, while total atomization energies with V5Z-F12 basis sets are nearly free of BSSE, orbital calculations have significant BSSE even with AV(6 + d)Z basis sets, leading to non-negligible differences between raw and counterpoise-corrected extrapolated limits. This latter problem is greatly reduced by switching to ACV{5,6}Z core-valence basis sets, or simply by adding an additional zeta to just the valence orbitals. Previous reports that all-electron approaches like HEAT (high-accuracy extrapolated ab initio thermochemistry) lead to different CCSD(T) limits than “valence limit + CV correction” approaches like Feller-Peterson-Dixon and Weizmann-4 (W4) theory can be rationalized in terms of the greater radial flexibility of core-valence basis sets. For (T) corrections, conventional CCSD(T)/AV{Q,5}Z + d calculations are found to be superior to scaled or extrapolated CCSD(T)-F12b calculations of similar cost.
For a W4-F12 protocol, we recommend obtaining the Hartree-Fock and valence CCSD components from CCSD-F12b/cc-pV{Q,5}Z-F12 calculations, but the (T) component from conventional CCSD(T)/aug’-cc-pV{Q,5}Z + d calculations using Schwenke’s extrapolation; post-CCSD(T), core-valence, and relativistic corrections are to be obtained as in the original W4 theory. W4-F12 is found to agree slightly better than W4 with ATcT (Active Thermochemical Tables) data, at a substantial saving in computation time and especially I/O overhead. A W4-F12 calculation on benzene is presented as a proof of concept.
Cress, Jill J.; Riegle, Jodi L.
2007-01-01
According to the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC) approximately 60 percent of the data contained in the World Database on Protected Areas (WDPA) has missing or incomplete boundary information. As a result, global analyses based on the WDPA can be inaccurate, and professionals responsible for natural resource planning and priority setting must rely on incomplete geospatial data sets. To begin to address this problem the World Data Center for Biodiversity and Ecology, in cooperation with the U. S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC), the National Biological Information Infrastructure (NBII), the Global Earth Observation System, and the Inter-American Biodiversity Information Network (IABIN) sponsored a Protected Area (PA) workshop in Asuncion, Paraguay, in November 2007. The primary goal of this workshop was to train representatives from eight South American countries on the use of the Global Data Toolset (GDT) for reviewing and editing PA data. Use of the GDT will allow PA experts to compare their national data to other data sets, including non-governmental organization (NGO) and WCMC data, in order to highlight inaccuracies or gaps in the data, and then to apply any needed edits, especially in the delineation of the PA boundaries. In addition, familiarizing the participants with the web-enabled GDT will allow them to maintain and improve their data after the workshop. Once data edits have been completed the GDT will also allow the country authorities to perform any required review and validation processing. Once validated, the data can be used to update the global WDPA and IABIN databases, which will enhance analysis on global and regional levels.
NASA Astrophysics Data System (ADS)
Resende, Stella M.; De Almeida, Wagner B.; van Duijneveldt-van de Rijdt, Jeanne G. C. M.; van Duijneveldt, Frans B.
2001-08-01
Geometrical parameters for the equilibrium (MIN) and lowest saddle-point (TS) geometries of the C2H4⋯SO2 dimer, and the corresponding binding energies, were calculated at the Hartree-Fock and correlated levels of ab initio theory, using basis sets ranging from the D95(d,p) double-zeta basis set to the aug-cc-pVQZ correlation-consistent basis set. An assessment of the effect of the basis set superposition error (BSSE) on these results was made. The dissociation energy from the lowest vibrational state was estimated to be 705±100 cm-1 at the basis set limit, which is well within the range expected from experiment. The barrier to internal rotation was found to be 53±5 cm-1, slightly higher than the (revised) experimental result of 43 cm-1, probably due to zero-point vibrational effects. Our results clearly show that, in direct contrast with recent ideas, the BSSE correction affects the MIN and TS binding energies differentially and so has to be included in the calculation of small energy barriers such as that in the C2H4⋯SO2 dimer. Previous reports of positive MP2 frozen-core binding energies for this complex in the D95(d,p) basis are confirmed. The anomalies are shown to be an artifact arising from an incorrect removal of virtual orbitals by the default frozen-core option in the GAUSSIAN program.
The application and use of chemical space mapping to interpret crystallization screening results
Snell, Edward H.; Nagel, Ray M.; Wojtaszcyk, Ann; O’Neill, Hugh; Wolfley, Jennifer L.; Luft, Joseph R.
2008-01-01
Macromolecular crystallization screening is an empirical process. It often begins by setting up experiments with a number of chemically diverse cocktails designed to sample chemical space known to promote crystallization. Where a potential crystal is seen a refined screen is set up, optimizing around that condition. By using an incomplete factorial sampling of chemical space to formulate the cocktails and presenting the results graphically, it is possible to readily identify trends relevant to crystallization, coarsely sample the phase diagram and help guide the optimization process. In this paper, chemical space mapping is applied to both single macromolecules and to a diverse set of macromolecules in order to illustrate how visual information is more readily understood and assimilated than the same information presented textually. PMID:19018100
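The incomplete-factorial idea described above can be sketched in code: sample a subset of the full factorial grid of cocktail ingredients while keeping each factor level roughly equally represented. This is a minimal illustration only; the factors, levels, and greedy balancing rule below are hypothetical and are not taken from the screening protocol in the abstract.

```python
import itertools
import random

def incomplete_factorial(levels, n_cocktails, seed=0):
    """Sample n_cocktails points from the full factorial grid defined by
    `levels` (a dict of factor -> list of settings), greedily preferring
    combinations whose levels have been used least so far."""
    rng = random.Random(seed)
    factors = list(levels)
    full = [dict(zip(factors, combo))
            for combo in itertools.product(*(levels[f] for f in factors))]
    rng.shuffle(full)
    counts = {(f, v): 0 for f in factors for v in levels[f]}
    chosen = []
    for _ in range(n_cocktails):
        # Greedy step: pick the remaining combination whose levels are least used.
        best = min(full, key=lambda c: sum(counts[(f, c[f])] for f in factors))
        full.remove(best)
        chosen.append(best)
        for f in factors:
            counts[(f, best[f])] += 1
    return chosen

# Hypothetical screening factors, purely for illustration.
levels = {"precipitant": ["PEG", "salt", "MPD"],
          "pH": [5.0, 7.0, 9.0],
          "salt_mM": [100, 200]}
screen = incomplete_factorial(levels, 6)
```

Because the greedy step spreads picks across under-used levels, every precipitant and pH value appears in the six cocktails even though only 6 of the 18 full-factorial combinations are set up.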
A Review On Missing Value Estimation Using Imputation Algorithm
NASA Astrophysics Data System (ADS)
Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina
2017-09-01
The presence of missing values in a data set has always been a major problem for precise prediction. A method for imputing missing values needs to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed as countermeasures to the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on whether global or local information from the data set is exploited for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives readers a better understanding of trends in imputation methods.
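As a minimal illustration of the global-versus-local distinction drawn above, the sketch below contrasts mean imputation (which uses global column information) with a k-nearest-neighbour imputation (which uses local information from similar rows). The data and function names are invented for the example, not taken from any method surveyed in the review.

```python
import math

def mean_impute(rows, col):
    """Global imputation: replace missing entries (None) in column `col`
    with the mean of the observed values in that column."""
    observed = [r[col] for r in rows if r[col] is not None]
    mu = sum(observed) / len(observed)
    return [[mu if (c == col and v is None) else v
             for c, v in enumerate(r)] for r in rows]

def knn_impute(rows, col, k=2):
    """Local imputation: fill a missing entry with the mean of that column
    over the k nearest complete rows (Euclidean distance on other columns)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2
                             for i, (x, y) in enumerate(zip(a, b))
                             if i != col and x is not None and y is not None))
    out = [list(r) for r in rows]
    for r in out:
        if r[col] is None:
            donors = sorted((row for row in rows if row[col] is not None),
                            key=lambda row: dist(r, row))[:k]
            r[col] = sum(d[col] for d in donors) / len(donors)
    return out

# Toy data: the second row is missing its second value.
data = [[1.0, 2.0], [2.0, None], [3.0, 4.0], [10.0, 12.0]]
```

Here the global estimate is pulled toward the outlying row [10.0, 12.0], while the local (kNN) estimate uses only the two rows nearest to the incomplete one.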
Failsafe modes in incomplete minority game
NASA Astrophysics Data System (ADS)
Yao, Xiaobo; Wan, Shaolong; Chen, Wen
2009-09-01
We make a failsafe extension to the incomplete minority game model and give a brief analysis of how incompleteness affects system efficiency. Simulations show that limited incompleteness in strategies can improve the system efficiency. Among the three failsafe modes, the “Back-to-Best” mode brings the most significant improvement and maintains the system efficiency over a long range of incompleteness. A simple analytic formula shows a trend that matches the simulation results. The IMMG model is used to study the effect of the distribution of incompleteness, and we find that there is one junction point in each series of curves, at which the system efficiency is not influenced by that distribution. When the mean incompleteness p̄_I lies above the junction point, concentrating the incompleteness weakens the effect; on the other side of the junction point, concentration is helpful. When p_I is close to zero, agents using incomplete strategies have on average better profits than those using standard strategies, and the “Back-to-Best” agents have a wider range of p_I over which they win.
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2016-12-01
In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these objects are described by corresponding linear or nonlinear discrete-time recurrent vector relations, and the control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different criteria of functioning and are united by information and control connections defined in advance. We study the problem of optimizing the guaranteed result for program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final states of this system under incomplete information, together with a general scheme for its solution.
Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
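A toy sketch of the inverse-probability-weighting idea discussed above: observed responses are up-weighted by the reciprocal of their modelled probability of being observed (the Horvitz-Thompson form of the weighted estimating equation). All data and probabilities below are illustrative and are not taken from the cited trial.

```python
def ipw_mean(y, observed, pi):
    """Inverse-probability-weighted (Horvitz-Thompson) estimate of mean(Y)
    when some responses are missing. observed[i] is 1 if y[i] is seen,
    and pi[i] is the (modelled) probability of being observed."""
    n = len(y)
    return sum(yi / p for yi, r, p in zip(y, observed, pi) if r) / n

# Toy data: y[1] is missing; pi holds modelled response probabilities.
y        = [1.0, None, 3.0, 4.0]
observed = [1,   0,    1,   1]
pi       = [1.0, 0.5,  1.0, 1.0]
est = ipw_mean(y, observed, pi)
# Naive complete-case mean, for comparison with the weighted estimate:
cc = sum(v for v, r in zip(y, observed) if r) / sum(observed)
```

If the auxiliary model for pi is correctly specified, the weighted estimator is consistent, whereas the complete-case mean is generally biased under non-random missingness; the review's point is what happens when that auxiliary model is itself misspecified.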
Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure
Hill, Mary C.
1990-01-01
The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
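The preconditioned conjugate-gradient iteration compared above can be sketched as follows. For brevity this example uses a simple Jacobi (diagonal) preconditioner rather than the incomplete-Cholesky (ICCG/MICCG) or polynomial (POLCG) preconditioners discussed in the paper, but the iteration structure is the same; only the construction of M changes.

```python
def pcg(a, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradients for a symmetric positive-definite
    matrix `a` (list of lists). A Jacobi (diagonal) preconditioner stands in
    for the incomplete-Cholesky factorizations used in the text."""
    n = len(b)
    m_inv = [1.0 / a[i][i] for i in range(n)]      # M^-1 = diag(A)^-1
    x = [0.0] * n
    r = list(b)                                    # residual r = b - A x
    z = [m_inv[i] * r[i] for i in range(n)]        # preconditioned residual
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        ap = [sum(a[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [m_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small SPD test system resembling a 1-D finite-difference stencil.
a = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = pcg(a, b)   # exact solution of this system is [1, 1, 1]
```

In groundwater-flow codes the matrix is large and sparse, so the dense matrix-vector product above would be replaced by a stencil operation; the comparison in the paper is about how the choice of M trades storage against iteration count.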
Theory and experimental evidence of phonon domains and their roles in pre-martensitic phenomena
NASA Astrophysics Data System (ADS)
Jin, Yongmei M.; Wang, Yu U.; Ren, Yang
2015-12-01
Pre-martensitic phenomena, also called martensite precursor effects, have been known for decades yet remain outstanding issues. This paper addresses pre-martensitic phenomena from new theoretical and experimental perspectives. A statistical mechanics-based Grüneisen-type phonon theory is developed. On the basis of deformation-dependent incompletely softened low-energy phonons, the theory predicts a lattice instability and a pre-martensitic transition into elastic-phonon domains via 'phonon spinodal decomposition.' The phase transition lifts phonon degeneracy in the cubic crystal and has the nature of a phonon pseudo-Jahn-Teller lattice instability. The theory and notion of phonon domains consistently explain the ubiquitous pre-martensitic anomalies as natural consequences of incomplete phonon softening. The phonon domains are characterised by broken dynamic symmetry of lattice vibrations and deform through internal phonon relaxation in response to stress (a particular case of Le Chatelier's principle), leading to a previously unexplored domain phenomenon. Experimental evidence of phonon domains is obtained by in situ three-dimensional phonon diffuse scattering and Bragg reflection using high-energy synchrotron X-ray single-crystal diffraction, which reveals an exotic domain phenomenon fundamentally different from the usual ferroelastic domain-switching phenomenon. In light of the theory and experimental evidence of phonon domains and their roles in pre-martensitic phenomena, currently existing alternative opinions on martensitic precursor phenomena are revisited.
Stereochemical analysis of (+)-limonene using theoretical and experimental NMR and chiroptical data
NASA Astrophysics Data System (ADS)
Reinscheid, F.; Reinscheid, U. M.
2016-02-01
Using limonene as a test molecule, the successes and limitations of three chiroptical methods (optical rotatory dispersion (ORD), and electronic and vibrational circular dichroism, ECD and VCD) could be demonstrated. At quite low levels of theory (mpw1pw91/cc-pvdz, IEFPCM (integral equation formalism polarizable continuum model)) the experimental ORD values differ by less than 10 units from the calculated values. Modelling in the condensed phase still represents a challenge, so experimental NMR data were used to test for aggregation and solvent-solute interactions. After establishing a reasonable structural model, only the prediction of the ECD spectra showed a decisive dependence on the basis set: only augmented (in the case of Dunning's basis sets) or diffuse (in the case of Pople's basis sets) basis sets predicted the position and shape of the ECD bands correctly. Based on these results, we propose a procedure to assign the absolute configuration (AC) of an unknown compound using the comparison between experimental and calculated chiroptical data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noguchi, Yoshifumi, E-mail: y.noguchi@issp.u-tokyo.ac.jp; Hiyama, Miyabi; Akiyama, Hidefumi
2014-07-28
The optical properties of an isolated firefly luciferin anion are investigated using first-principles calculations, employing many-body perturbation theory to take into account the excitonic effect. The calculated photoabsorption spectra are compared with the results obtained using time-dependent density functional theory (TDDFT) employing localized atomic orbital (AO) basis sets, and with a recent experiment in vacuum. The present method reproduces well the line shape at the photon energies corresponding to the Rydberg and resonance excitations but overestimates the peak positions by about 0.5 eV. However, the TDDFT-calculated positions of some peaks are closer to those of the experiment. We also investigate the basis set dependency in describing the free-electron states above the vacuum level and the excitons involving transitions to the free-electron states, and conclude that AO-only basis sets are inaccurate for free-electron states and that the use of a plane-wave basis set is required.
NASA Technical Reports Server (NTRS)
Tredoux, Marian; Hart, Rodger J.; Lindsay, Nicholas M.; De Wit, Maarten J.; Armstrong, Richard A.
1989-01-01
This paper reports the results of new field observations and the geochemical analyses for the area of the Bon Accord (BA) (the Kaapvaal craton, South Africa) Ni-Fe deposit, with particular consideration given to the trace element, platinum-group element, and isotopic (Pb, Nd, and Os) compositions. On the basis of these data, an interpretation of BA is suggested, according to which the BA deposit is a siderophile-rich heterogeneity remaining in the deep mantle after a process of incomplete core formation. The implications of such a model for the study of core-mantle segregation and the geochemistry of the lowermost mantle are discussed.
The government of chronic poverty: from exclusion to citizenship?
Hickey, Sam
2010-01-01
Development trustees have increasingly sought to challenge chronic poverty by promoting citizenship amongst poor people, a move that frames citizenship formation as central to overcoming the exclusions and inequalities associated with uneven development. For sceptics, this move within inclusive neoliberalism is inevitably depoliticising and disempowering, and our cases do suggest that citizenship-based strategies rarely alter the underlying basis of poverty. However, our evidence also offers some support to those optimists who suggest that progressive moves towards poverty reduction and citizenship formation have become more rather than less likely at the current juncture. The promotion of citizenship emerges here as a significant but incomplete effort to challenge poverty that persists over time.
In search of a consensus model of the resting state of a voltage-sensing domain.
Vargas, Ernesto; Bezanilla, Francisco; Roux, Benoît
2011-12-08
Voltage-sensing domains (VSDs) undergo conformational changes in response to the membrane potential and are the critical structural modules responsible for the activation of voltage-gated channels. Structural information about the key conformational states underlying voltage activation is currently incomplete. Through the use of experimentally determined residue-residue interactions as structural constraints, we determine and refine a model of the Kv channel VSD in the resting conformation. The resulting structural model is in broad agreement with results that originate from various labs using different techniques, indicating the emergence of a consensus for the structural basis of voltage sensing. Copyright © 2011 Elsevier Inc. All rights reserved.
49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2014-10-01 2014-10-01 false Requirements for incomplete vehicle manufacturers...
49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2011-10-01 2011-10-01 false Requirements for incomplete vehicle manufacturers...
49 CFR 568.4 - Requirements for incomplete vehicle manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... manufacturing operation on the incomplete vehicle. (3) Identification of the incomplete vehicle(s) to which the document applies. The identification shall be by vehicle identification number (VIN) or groups of VINs to... 49 Transportation 6 2010-10-01 2010-10-01 false Requirements for incomplete vehicle manufacturers...
49 CFR 529.4 - Requirements for incomplete automobile manufacturers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 6 2012-10-01 2012-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...
49 CFR 529.4 - Requirements for incomplete automobile manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...
49 CFR 529.4 - Requirements for incomplete automobile manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...
49 CFR 529.4 - Requirements for incomplete automobile manufacturers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 6 2013-10-01 2013-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...
49 CFR 529.4 - Requirements for incomplete automobile manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 6 2011-10-01 2011-10-01 false Requirements for incomplete automobile... AUTOMOBILES § 529.4 Requirements for incomplete automobile manufacturers. (a) Except as provided in paragraph (c) of this section, §§ 529.5 and 529.6, each incomplete automobile manufacturer is considered, with...
Basis set study of classical rotor lattice dynamics.
Witkoskie, James B; Wu, Jianlan; Cao, Jianshu
2004-03-22
The reorientational relaxation of molecular systems is important in many phenomena and applications. In this paper, we explore the reorientational relaxation of a model Brownian rotor lattice system with short-range interactions in both the high and low temperature regimes. In this study, we use a basis set expansion to capture collective motions of the system. The single-particle basis set is used in the high temperature regime, while the spin-wave basis is used in the low temperature regime. The equations of motion derived in this approach are analogous to the generalized Langevin equation, but the equations render flexibility by allowing nonequilibrium initial conditions. This calculation shows that the choice of projection operators in the generalized Langevin equation (GLE) approach corresponds to defining a specific inner-product space, and this inner-product space should be chosen to reveal the important physics of the problem. The basis set approach corresponds to an inner product and projection operator that maintain the orthogonality of the spherical harmonics and provide a convenient platform for analyzing GLE expansions. The results compare favorably with numerical simulations, and the formalism is easily extended to more complex systems. © 2004 American Institute of Physics.
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlation-consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared to existing experimental and theoretical data. Relative energies were obtained with single-point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolating to the complete basis set (CBS) limit.
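Complete-basis-set extrapolation of the kind mentioned above is commonly done with a two-point inverse-cubic formula for the correlation energy (a Helgaker-type scheme); the abstract does not state which scheme was actually used, so the sketch below is a generic illustration with invented energies.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point inverse-cubic CBS extrapolation.

    Assumes the correlation energy converges as E(X) = E_CBS + A / X**3,
    where X is the basis-set cardinal number (T=3, Q=4, 5=5). Solving the
    two-point system for E_CBS eliminates the A / X**3 term exactly."""
    return (e_x * x**3 - e_y * y**3) / (x**3 - y**3)

# Hypothetical correlation energies (hartree) in quadruple- and
# quintuple-zeta basis sets -- illustrative numbers only.
e_qz, e_5z = -0.30210, -0.30355
e_cbs = cbs_two_point(e_5z, e_qz, 5, 4)
```

The extrapolated value lies slightly below the largest-basis result, reflecting the residual basis-set incompleteness that the inverse-cubic model removes.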
How to compute isomerization energies of organic molecules with quantum chemical methods.
Grimme, Stefan; Steinmetz, Marc; Korth, Martin
2007-03-16
The reaction energies for 34 typical organic isomerizations including oxygen and nitrogen heteroatoms are investigated with modern quantum chemical methods that have the perspective of also being applicable to large systems. The experimental reaction enthalpies are corrected for vibrational and thermal effects, and the thus derived "experimental" reaction energies are compared to corresponding theoretical data. A series of standard AO basis sets in combination with second-order perturbation theory (MP2, SCS-MP2), conventional density functionals (e.g., PBE, TPSS, B3-LYP, MPW1K, BMK), and new perturbative functionals (B2-PLYP, mPW2-PLYP) are tested. In three cases, obvious errors of the experimental values could be detected, and accurate coupled-cluster [CCSD(T)] reference values have been used instead. It is found that only triple-zeta quality AO basis sets provide results close enough to the basis set limit and that sets like the popular 6-31G(d) should be avoided in accurate work. Augmentation of small basis sets with diffuse functions has a notable effect in B3-LYP calculations that is attributed to intramolecular basis set superposition error and covers basic deficiencies of the functional. The new methods based on perturbation theory (SCS-MP2, X2-PLYP) are found to be clearly superior to many other approaches; that is, they provide mean absolute deviations of less than 1.2 kcal mol-1 and only a few (<10%) outliers. The best performance in the group of conventional functionals is found for the highly parametrized BMK hybrid meta-GGA. Contrary to accepted opinion, hybrid density functionals offer no real advantage over simple GGAs. For reasonably large AO basis sets, results of poor quality are obtained with the popular B3-LYP functional that cannot be recommended for thermochemical applications in organic chemistry. 
The results of this study are complementary to often used benchmarks based on atomization energies and should guide chemists in their search for accurate and efficient computational thermochemistry methods.
Setting conservation priorities.
Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P
2009-04-01
A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework the broad array of approaches for setting conservation priorities are reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be aided by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.
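One simple instance of the budget-constrained priority-setting problem described above is greedy selection of actions by benefit-cost ratio. The actions, benefits, and costs below are purely illustrative; the authors' framework is more general (dynamics, uncertainty, feedbacks), but this captures the core trade-off between finite budgets and multiple candidate investments.

```python
def prioritize(actions, budget):
    """Rank candidate conservation actions by expected benefit per unit
    cost and select greedily until the budget is exhausted -- a common
    heuristic for a budget-constrained selection (knapsack-like) problem."""
    ranked = sorted(actions, key=lambda a: a["benefit"] / a["cost"], reverse=True)
    chosen, spent = [], 0.0
    for a in ranked:
        if spent + a["cost"] <= budget:      # respect the budget constraint
            chosen.append(a["name"])
            spent += a["cost"]
    return chosen, spent

# Hypothetical actions with illustrative benefit and cost scores.
actions = [
    {"name": "invasive-species control", "benefit": 8.0,  "cost": 2.0},
    {"name": "land purchase",            "benefit": 10.0, "cost": 5.0},
    {"name": "fire management",          "benefit": 3.0,  "cost": 1.0},
]
chosen, spent = prioritize(actions, budget=6.0)
```

Note the greedy ratio rule is only a heuristic: the highest-benefit action ("land purchase") is skipped here because cheaper actions deliver more benefit per dollar, which is exactly the kind of trade-off the framework makes explicit.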
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Mads, E-mail: mads.brandbyge@nanotech.dtu.dk
2014-05-07
In a recent paper Reuter and Harrison [J. Chem. Phys. 139, 114104 (2013)] question the widely used mean-field electron transport theories, which employ nonorthogonal localized basis sets. They claim these can violate an “implicit decoupling assumption,” leading to wrong results for the current, different from what would be obtained by using an orthogonal basis and dividing surfaces defined in real space. We argue that this assumption need not be fulfilled to obtain exact results. We show that the current/transmission calculated by the standard Green's function method is independent of whether or not the chosen basis set is nonorthogonal, and that the current for a given basis set is consistent with divisions in real space. The ambiguity known from charge population analysis for nonorthogonal bases does not carry over to calculations of charge flux.
Spurious changes in the ISCCP dataset
NASA Technical Reports Server (NTRS)
Klein, Stephen A.; Hartmann, Dennis L.
1993-01-01
The International Satellite Cloud Climatology Project (ISCCP) data set for July 1983-December 1990 exhibits a long-term decrease in mean cloud optical depth and mean cloud-top temperature that is large compared to the mean for this period. It is suggested here that this decrease is an artifact of incomplete normalization of the visible channel on the successive polar orbiters employed as the ISCCP's calibration standard; more accurate calibration techniques are required for the establishment of long-term climate trends.
Evidence-based librarianship: what might we expect in the years ahead?
Eldredge, Jonathan D
2002-06-01
To predict the possible accomplishments of the Evidence-Based Librarianship (EBL) movement by the years 2005, 2010, 2015 and 2020. Predictive. The author draws upon recent events, relevant historical events and anecdotal accounts to detect evidence of predictable trends. The author develops a set of probable predictions for the development of EBL. Although incomplete evidence exists, some trends still seem discernible. By 2020, EBL will have become indistinguishable from mainstream health sciences librarianship/informatics practices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi
2006-06-01
This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method is demonstrated by its implementation in a mobile commerce system.
Wassif, Christopher A; Cross, Joanna L; Iben, James; Sanchez-Pulido, Luis; Cougnoux, Antony; Platt, Frances M; Ory, Daniel S; Ponting, Chris P; Bailey-Wilson, Joan E; Biesecker, Leslie G; Porter, Forbes D
2016-01-01
Niemann-Pick disease type C (NPC) is a recessive, neurodegenerative, lysosomal storage disease caused by mutations in either NPC1 or NPC2. The diagnosis is difficult and frequently delayed. Ascertainment is likely incomplete because of both these factors and because the full phenotypic spectrum may not have been fully delineated. Given the recent development of a blood-based diagnostic test and the development of potential therapies, understanding the incidence of NPC and defining at-risk patient populations are important. We evaluated data from four large, massively parallel exome sequencing data sets. Variant sequences were identified and classified as pathogenic or nonpathogenic based on a combination of literature review and bioinformatic analysis. This methodology provided an unbiased approach to determining the allele frequency. Our data suggest an incidence rate for NPC1 and NPC2 of 1/92,104 and 1/2,858,998, respectively. Evaluation of common NPC1 variants, however, suggests that there may be a late-onset NPC1 phenotype with a markedly higher incidence, on the order of 1/19,000-1/36,000. We determined a combined incidence of classical NPC of 1/89,229, or 1.12 affected patients per 100,000 conceptions, but predict incomplete ascertainment of a late-onset phenotype of NPC1. This finding strongly supports the need for increased screening of potential patients.
Some considerations about Gaussian basis sets for electric property calculations
NASA Astrophysics Data System (ADS)
Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.
Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of better describing polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, the electric dipole moment and static polarizability were evaluated for a sample of molecules. Comparisons were made with experimental data and with results obtained using a similar-size basis set whose diffuse functions were optimized for the ground-state energy of the anion.
ERIC Educational Resources Information Center
Lee, Liangshiu
2010-01-01
The basis sets for symmetry operations of d[superscript 1] to d[superscript 9] complexes in an octahedral field and the resulting terms are derived for the ground states and spin-allowed excited states. The basis sets are of fundamental importance in group theory. This work addresses such a fundamental issue, and the results are pedagogically…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhov, Dmitry V.; Shalashilin, Dmitrii V.; Glover, William J.
We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as “cloning,” in analogy to the “spawning” procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, “trains,” as a means of expanding the basis set at little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in the saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.
Treatment of Intravenous Leiomyomatosis with Cardiac Extension following Incomplete Resection.
Doyle, Mathew P; Li, Annette; Villanueva, Claudia I; Peeceeyen, Sheen C S; Cooper, Michael G; Hanel, Kevin C; Fermanis, Gary G; Robertson, Greg
2015-01-01
Aim. Intravenous leiomyomatosis (IVL) with cardiac extension (CE) is a rare variant of benign uterine leiomyoma. Incomplete resection has a recurrence rate of over 30%. Different hormonal treatments have been described following incomplete resection; however, no standard therapy currently exists. We review the literature for medical treatment options following incomplete resection of IVL with CE. Methods. Electronic databases were searched for all studies reporting IVL with CE. These studies were then searched for reports of patients with inoperable or incompletely resected disease and any further medical treatment. Our database was searched for patients who received medical therapy following incomplete resection of IVL with CE, and their results were included. Results. All studies were either case reports or case series. Five literature reviews confirm that surgery is the only treatment to achieve cure. The use of progesterone, estrogen modulation, gonadotropin-releasing hormone antagonism, and aromatase inhibition has been described following incomplete resection. Currently no studies have reviewed the outcomes of these treatments. Conclusions. Complete surgical resection is the only means of cure for IVL with CE, while multiple hormonal therapies have been used with varying results following incomplete resection. Aromatase inhibitors are the only reported treatment to prevent tumor progression or recurrence in patients with incompletely resected IVL with CE.
Are cosmological data sets consistent with each other within the Λ cold dark matter model?
NASA Astrophysics Data System (ADS)
Raveri, Marco
2016-02-01
We use a complete and rigorous statistical indicator to measure the level of concordance between cosmological data sets, without relying on the inspection of the marginal posterior distribution of some selected parameters. We apply this test to state of the art cosmological data sets, to assess their agreement within the Λ cold dark matter model. We find that there is a good level of concordance between all the experiments with one noticeable exception. There is substantial evidence of tension between the cosmic microwave background temperature and polarization measurements of the Planck satellite and the data from the CFHTLenS weak lensing survey even when applying ultraconservative cuts. These results robustly point toward the possibility of having unaccounted systematic effects in the data, an incomplete modeling of the cosmological predictions or hints toward new physical phenomena.
D'Agostino, Fabio; Vellone, Ercole; Tontini, Francesco; Zega, Maurizio; Alvaro, Rosaria
2012-01-01
The aim of a nursing data set is to provide useful information for assessing the level of care and the state of health of the population. Currently, both in Italy and in other countries, these data are incomplete owing to the lack of structured nursing documentation, making it indispensable to develop a Nursing Minimum Data Set (NMDS) that uses standard nursing language to evaluate care, costs, and health requirements. The aim of the project described here is to create a computer system using standard nursing terms, with dedicated software that supports the decision-making process and produces the relative documentation. This will make it possible to monitor nursing activity and costs and their impact on patients' health; adequate training and involvement of nursing staff will play a fundamental role.
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm, and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
It is hoped that this study will provide new insight into developing more accurate and reliable biological models from limited and low-quality experimental data. PMID:23593445
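The fitting task described above (adjust model parameters until the simulated output matches noisy experimental data) can be illustrated with a generic particle swarm optimizer. This is a minimal sketch, not the paper's Swarm-based Chemical Reaction Optimization; the exponential-decay model, noise level, and swarm settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):
    a, b = params
    return a * np.exp(-b * t)

# Synthetic noisy "experimental" data (hypothetical values).
t = np.linspace(0, 5, 40)
true = np.array([2.0, 0.7])
y = model(true, t) + rng.normal(0, 0.05, t.size)

def cost(p):
    return np.mean((model(p, t) - y) ** 2)   # mean-squared fitting error

# Minimal particle swarm optimizer: each particle tracks its personal best,
# and all particles are attracted toward the global best found so far.
n, dims, iters = 30, 2, 200
pos = rng.uniform(0.0, 3.0, (n, dims))
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(gbest)  # close to the true parameters used to generate the data
```

The hybrid methods discussed in the abstract replace this plain velocity update with more elaborate search moves, but the overall fit-to-data loop is the same.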
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
NASA Astrophysics Data System (ADS)
Boffi, Nicholas M.; Jain, Manish; Natan, Amir
2016-02-01
A real-space high-order finite-difference method is used to analyze the effect of spherical domain size on the Hartree-Fock (and density functional theory) virtual eigenstates. We show the domain-size dependence of both positive and negative virtual eigenvalues of the Hartree-Fock equations for small molecules. We demonstrate that the positive states behave like a particle in a spherical well and show how they approach zero. For the negative eigenstates, we show that large domains are needed to obtain the correct eigenvalues. We compare our results to those of Gaussian basis sets and draw some conclusions for real-space, basis-set, and plane-wave calculations.
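The particle-in-a-spherical-well behavior of the positive states can be made concrete. As a hedged illustration (not the paper's calculation): for a hard-wall spherical domain of radius R, the l = 0 eigenvalues in atomic units are E_n = (n*pi)^2 / (2*R^2), so they fall off as 1/R^2 and approach zero as the domain grows.

```python
import numpy as np

def sphere_eigs(R, nmax=3):
    """l = 0 eigenvalues (a.u.) of a particle in a hard-wall sphere of radius R."""
    n = np.arange(1, nmax + 1)
    return (n * np.pi) ** 2 / (2 * R ** 2)

# Doubling the domain radius divides every positive eigenvalue by four.
for R in (10.0, 20.0, 40.0):
    print(R, sphere_eigs(R))
```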
NASA Astrophysics Data System (ADS)
Martínez-Sánchez, Michael-Adán; Aquino, Norberto; Vargas, Rubicelia; Garza, Jorge
2017-12-01
The Schrödinger equation for a hydrogen atom confined by a dielectric continuum is solved exactly, and the solution suggests the appropriate basis set to be used when an atom is immersed in a dielectric continuum. The exact results show that this kind of confinement spreads the electron density, which is confirmed through the Shannon entropy. The basis set suggested by the exact results is similar to Slater-type orbitals, and it was applied to two-electron atoms, where the H- ion ejects one electron under moderate confinement, at distances much larger than those commonly used to generate cavities in solvent models.
Buryak, Ilya; Lokshtanov, Sergei; Vigasin, Andrey
2012-09-21
The present work aims at the ab initio characterization of the temperature variation of the integrated intensity of collision-induced absorption (CIA) in N2-H2(D2). Global fits of the potential energy surface (PES) and induced dipole moment surface (IDS) were made on the basis of CCSD(T) (coupled cluster with single, double, and perturbative triple excitations) calculations with aug-cc-pV(T,Q)Z basis sets. Basis set superposition error correction and extrapolation to the complete basis set (CBS) limit were applied to both the energy and the dipole moment. Classical second cross virial coefficient calculations accounting for the first quantum correction were employed to verify the quality of the obtained PES. The CIA temperature dependence was found to be in satisfactory agreement with available experimental data.
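CBS extrapolation from a (T,Q)Z pair of basis sets is commonly done with a two-point formula. The abstract does not specify the exact scheme, so the sketch below assumes the standard Helgaker-style X^-3 form for correlation energies, and the input energies are hypothetical.

```python
def cbs_two_point(e_x, e_y, x=3, y=4):
    """Two-point X**-3 extrapolation from cardinal numbers x (e.g. TZ) and y (e.g. QZ)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Hypothetical aug-cc-pVTZ / aug-cc-pVQZ correlation energies in Hartree.
e_tz, e_qz = -0.300, -0.310
print(cbs_two_point(e_tz, e_qz))  # extrapolated value lies below the QZ energy
```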
Normalization of relative and incomplete temporal expressions in clinical narratives.
Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem
2015-09-01
To improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives. We analyzed the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TIMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component. The annotation confirmed the hypotheses that we can simplify the RI-TIMEX normalization task using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge. Experiments with feature sets reveal some interesting findings; for example, the verbal tense feature does not inform anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX do. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect. We formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves a statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
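Once an anchor point and an anchor relation have been chosen by the classifiers, the final normalization step is deterministic: a relative expression such as "three days after admission" resolves to a concrete date. A hedged sketch of that last step only (the anchor dates and function names below are hypothetical, not the paper's system):

```python
from datetime import date, timedelta

# Hypothetical document-level anchor dates (in the paper these come from
# the clinical record, and the anchor/relation come from the classifiers).
ANCHORS = {"admission": date(2012, 6, 1), "discharge": date(2012, 6, 8)}

def normalize(anchor, relation, days):
    """Resolve a relative temporal expression to a calendar date."""
    delta = timedelta(days=days)
    if relation == "after":
        return ANCHORS[anchor] + delta
    if relation == "before":
        return ANCHORS[anchor] - delta
    return ANCHORS[anchor]  # "overlap": same day as the anchor

print(normalize("admission", "after", 3))  # 2012-06-04
```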
Fairchild, Mallika; Kim, Seung-Jae; Iarkov, Alex; Abbas, James J.; Jung, Ranu
2010-01-01
The long-term objective of this work is to understand the mechanisms by which electrical stimulation based movement therapies may harness neural plasticity to accelerate and enhance sensorimotor recovery after incomplete spinal cord injury (iSCI). An adaptive neuromuscular electrical stimulation (aNMES) paradigm was implemented in adult Long Evans rats with thoracic contusion injury (T8 vertebral level, 155±2 Kdyne). In lengthy sessions with lightly anesthetized animals, hip flexor and extensor muscles were stimulated using an aNMES control system in order to generate desired hip movements. The aNMES control system, which used a pattern generator/pattern shaper structure, adjusted pulse amplitude to modulate muscle force in order to control hip movement. An intermittent stimulation paradigm was used (5-cycles/set; 20-second rest between sets; 100 sets). In each cycle, hip rotation caused the foot plantar surface to contact a stationary brush for appropriately timed cutaneous input. Sessions were repeated over several days while the animals recovered from injury. Results indicated that aNMES automatically and reliably tracked the desired hip trajectory with low error and maintained range of motion with only gradual increase in stimulation during the long sessions. Intermittent aNMES thus accounted for the numerous factors that can influence the response to NMES: electrode stability, excitability of spinal neural circuitry, non-linear muscle recruitment, fatigue, spinal reflexes due to cutaneous input, and the endogenous recovery of the animals. This novel aNMES application in the iSCI rodent model can thus be used in chronic stimulation studies to investigate the mechanisms of neuroplasticity targeted by NMES-based repetitive movement therapy. PMID:20206164
Hazes, Bart
2014-02-28
Protein-coding DNA sequences and their corresponding amino acid sequences are routinely used to study relationships between sequence, structure, function, and evolution. The rapidly growing size of sequence databases increases the power of such comparative analyses, but it makes it more challenging to prepare high-quality sequence data sets with control over redundancy, quality, completeness, formatting, and labeling. Software tools exist for some individual steps in this process, but manual intervention remains a common and time-consuming necessity. CDSbank is a database that stores both the protein-coding DNA sequence (CDS) and the amino acid sequence for each protein annotated in GenBank. CDSbank also stores GenBank feature annotation, a flag to indicate incomplete 5' and 3' ends, full taxonomic data, and a heuristic to rank the scientific interest of each species. This rich information allows fully automated data set preparation with a level of sophistication that aims to meet or exceed manual processing. Defaults ensure ease of use for typical scenarios while allowing great flexibility when needed. Access is via a free web server at http://hazeslab.med.ualberta.ca/CDSbank/. CDSbank presents a user-friendly web server to download, filter, format, and name large sequence data sets. Common usage scenarios can be accessed via pre-programmed default choices, while optional sections give full control over the processing pipeline. Particular strengths are: the ability to extract protein-coding DNA sequences just as easily as amino acid sequences, full access to taxonomy for labeling and filtering, awareness of incomplete sequences, and the ability to take one protein sequence and extract all synonymous CDS or identical protein sequences in other species. Finally, CDSbank can also create labeled property files to, for instance, annotate or re-label phylogenetic trees.
A Study of Incomplete Abortion Following Medical Method of Abortion (MMA).
Pawde, Anuya A; Ambadkar, Arun; Chauhan, Anahita R
2016-08-01
Medical method of abortion (MMA) is a safe, efficient, and affordable method of abortion. However, incomplete abortion is a known side effect. To study incomplete abortion following medication abortion, compare it with spontaneous incomplete abortion, and examine referral practices and prescriptions in cases of incomplete abortion following MMA. A prospective observational study of 100 women with first trimester incomplete abortion, divided into two groups (spontaneous or following MMA), who were administered a questionnaire covering the onset of bleeding, treatment received, use of medications for abortion, and their prescription and administration. The two groups were compared using Fisher's exact test (SPSS 21.0 software). Thirty percent of incomplete abortions were seen following MMA; possible reasons include self-administration or prescription by unregistered practitioners, lack of examination, incorrect dosage and drugs, and lack of follow-up. Complications such as collapse, blood requirement, and fever were significantly more frequent in these patients than in the spontaneous abortion group. The side effects of incomplete abortion following MMA can be avoided by following standard guidelines. Self-medication, over-the-counter use, and prescription by unregistered doctors should be discouraged and reported, and the need for follow-up should be emphasized.
Estimate of true incomplete exchanges using fluorescence in situ hybridization with telomere probes
NASA Technical Reports Server (NTRS)
Wu, H.; George, K.; Yang, T. C.
1998-01-01
PURPOSE: To study the frequency of true incomplete exchanges in radiation-induced chromosome aberrations. MATERIALS AND METHODS: Human lymphocytes were exposed to 2 Gy and 5 Gy of gamma-rays. Chromosome aberrations were studied using the fluorescence in situ hybridization (FISH) technique with whole chromosome-specific probes, together with human telomere probes. Chromosomes 2 and 4 were chosen in the present study. RESULTS: The percentage of incomplete exchanges was 27% when telomere signals were not considered. After excluding false incomplete exchanges identified by the telomere signals, the percentage of incomplete exchanges decreased to 11%. Since telomere signals appear on about 82% of the telomeres, the percentage of true incomplete exchanges should be even lower and was estimated to be 3%. This percentage was similar for chromosomes 2 and 4 and for doses of both 2 Gy and 5 Gy. CONCLUSIONS: The percentage of true incomplete exchanges is significantly lower in gamma-irradiated human lymphocytes than the frequencies reported in the literature.
Fiori, Simone
2007-01-01
Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets even when the data in the two sets correspond in neither size nor ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Statistical modeling is also useful when the amount of available data is enough to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
The Impact of Missing Data on Species Tree Estimation.
Xi, Zhenxiang; Liu, Liang; Davis, Charles C
2016-03-01
Phylogeneticists are increasingly assembling genome-scale data sets that include hundreds of genes to resolve their focal clades. Although these data sets commonly include a moderate to high amount of missing data, there remains no consensus on their impact to species tree estimation. Here, using several simulated and empirical data sets, we assess the effects of missing data on species tree estimation under varying degrees of incomplete lineage sorting (ILS) and gene rate heterogeneity. We demonstrate that concatenation (RAxML), gene-tree-based coalescent (ASTRAL, MP-EST, and STAR), and supertree (matrix representation with parsimony [MRP]) methods perform reliably, so long as missing data are randomly distributed (by gene and/or by species) and that a sufficiently large number of genes are sampled. When data sets are indecisive sensu Sanderson et al. (2010. Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 10:155) and/or ILS is high, however, high amounts of missing data that are randomly distributed require exhaustive levels of gene sampling, likely exceeding most empirical studies to date. Moreover, missing data become especially problematic when they are nonrandomly distributed. We demonstrate that STAR produces inconsistent results when the amount of nonrandom missing data is high, regardless of the degree of ILS and gene rate heterogeneity. Similarly, concatenation methods using maximum likelihood can be misled by nonrandom missing data in the presence of gene rate heterogeneity, which becomes further exacerbated when combined with high ILS. In contrast, ASTRAL, MP-EST, and MRP are more robust under all of these scenarios. These results underscore the importance of understanding the influence of missing data in the phylogenomics era. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. 
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan
2015-06-21
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.
2014-06-17
[Figure residue: panels showing the Wigner distribution, the L-Wigner distribution, and the corresponding auto-correlation functions.] ...bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with
Finney, Christopher
2015-02-13
Banks-Leite et al. (Reports, 29 August 2014, p. 1041) conclude that a large-scale program to restore the Brazilian Atlantic Forest using payments for environmental services (PES) is economically feasible. They do not analyze transaction costs, which are quantified infrequently and incompletely in the literature. Transaction costs can exceed 20% of total project costs and should be included in future research. Copyright © 2015, American Association for the Advancement of Science.
Ngoc, Nguyen Thi Nhu; Shochet, Tara; Blum, Jennifer; Hai, Pham Thanh; Dung, Duong Lan; Nhan, Tran Thanh; Winikoff, Beverly
2013-05-22
Complications following spontaneous or induced abortion are a major cause of maternal morbidity. To manage these complications, post-abortion care (PAC) services should be readily available and easy to access. Standard PAC treatment includes surgical interventions that are highly effective but require surgical providers and medical centers that have the necessary space and equipment. Misoprostol has been shown to be an effective alternative to surgical evacuation and can be offered by lower level clinicians. This study sought to assess whether 400 mcg sublingual misoprostol could effectively evacuate the uterus after incomplete abortion and to confirm its applicability for use at lower level settings. All women presenting with incomplete abortion at one of three hospitals in Vietnam were enrolled. Providers were not asked to record if the abortion was spontaneous or induced. It is likely that all were spontaneous given the legal status and easy access to abortion services in Vietnam. Participants were given 400 mcg sublingual misoprostol and instructed to hold the pills under their tongue for 30 minutes and then swallow any remaining fragments. They were then asked to return one week later to confirm their clinical status. Study clinicians were instructed to confirm a complete expulsion clinically. All women were asked to complete a questionnaire regarding satisfaction with the treatment. Three hundred and two women were enrolled between September 2009 and May 2010. Almost all participants (96.3%) had successful completions using a single dose of 400 mcg misoprostol. The majority of women (87.2%) found the side effects to be tolerable or easily tolerable. Most women (84.3%) were satisfied or very satisfied with the treatment they received; only one was dissatisfied (0.3%). Nine out of ten women would select this method again and recommend it to a friend (91.0% and 90.0%, respectively). 
This study confirms that 400 mcg sublingual misoprostol effectively evacuates the uterus for most women experiencing incomplete abortion. The high levels of satisfaction and side effect tolerability also attest to the ease of use of this method. From these data and given the international consensus around the effectiveness of misoprostol for incomplete abortion care, it seems timely that use of the drug for this indication be widely expanded both throughout Vietnam and wherever access to abortion care is limited. ClinicalTrials.gov, NCT00670761.
Noël, A; Xiao, R; Perveen, Z; Zaman, H M; Rouse, R L; Paulsen, D B; Penn, A L
2016-02-24
Particulate matter (PM) is one of the six criteria pollutant classes for which National Ambient Air Quality Standards have been set by the United States Environmental Protection Agency. Exposures to PM have been correlated with increased cardio-pulmonary morbidity and mortality. Butadiene soot (BDS), generated from the incomplete combustion of 1,3-butadiene (BD), is both a model PM mixture and a real-life example of a petrochemical product of incomplete combustion. There are numerous events, including wildfires, accidents at refineries, and tank car explosions, that result in sub-acute exposure to high levels of airborne particles, with the people exposed facing serious health problems. These real-life events highlight the need to investigate the health effects induced by short-term exposure to elevated levels of PM, as well as to assess whether, and if so, how well these adverse effects are resolved over time. In the present study, we investigated the extent of recovery of mouse lungs 10 days after inhalation exposures to environmentally-relevant levels of BDS aerosols had ended. Female BALB/c mice exposed to either HEPA-filtered air or to BDS (5 mg/m³ in HEPA-filtered air, 4 h/day, 21 consecutive days) were sacrificed immediately, or 10 days after, the final BDS exposure. Bronchoalveolar lavage fluid (BALF) was collected for cytology and cytokine analysis. Lung proteins and RNA were extracted for protein and gene expression analysis. Lung histopathology evaluation also was performed. Sub-acute exposures of mice to hydrocarbon-rich ultrafine particles induced: (1) BALF neutrophil elevation; (2) lung mucosal inflammation; and (3) increased BALF IL-1β concentration; with all three outcomes returning to baseline levels 10 days post-exposure. 
In contrast, (4) lung connective tissue inflammation persisted 10 days post-exposure; (5) we detected time-dependent up-regulation of biotransformation and oxidative stress genes, with incomplete return to baseline levels; and (6) we observed persistent particle alveolar load following 10 days of recovery. These data show that 10 days after a 21-day exposure to 5 mg/m³ of BDS has ended, incomplete lung recovery promotes a pro-biotransformation, pro-oxidant, and pro-inflammatory milieu, which may be a starting point for potential long-term cardio-pulmonary effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
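The working formula behind such schemes can be sketched in a few lines. The sketch below is the generic two-point inverse-cube extrapolation with invented input energies; it is not the unified singlet- and triplet-pair (USTE) scheme with reassigned hierarchical numbers described in the abstract, only an illustration of how a pair of hierarchical numbers fixes the extrapolated limit:

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point inverse-cube extrapolation of the correlation energy:
    E_CBS = (x^3 E_x - y^3 E_y) / (x^3 - y^3).
    Generic textbook formula, shown only to illustrate how a pair of
    hierarchical numbers (x, y) determines the extrapolated limit."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical MP2 correlation energies (hartree) for a (d, t) pair:
e_dz, e_tz = -0.2135, -0.2405
e_cbs = cbs_extrapolate(e_dz, 2, e_tz, 3)
# The extrapolated value lies below both raw correlation energies.
```

Reassigning the hierarchical numbers, as the paper does, amounts to feeding different x and y into the same kind of formula so that small-basis pairs such as (d, t) extrapolate reliably.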
NASA Astrophysics Data System (ADS)
Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew
2006-07-01
Examination of the convergence of full valence complete active space self-consistent-field configuration interaction including all single and double excitation (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e-,8orb)-CISD/3ZaP calculations gives the Re, ωe, ωexe, Te, and De for these eight states with rms errors of 0.0006 Å, 4.43 cm-1, 0.35 cm-1, 0.063 eV, and 0.018 eV, respectively.
48 CFR 25.504-4 - Group award basis.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...
48 CFR 25.504-4 - Group award basis.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...
48 CFR 25.504-4 - Group award basis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American statute applies and the acquisition cannot be set aside for...
48 CFR 25.504-4 - Group award basis.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...
48 CFR 25.504-4 - Group award basis.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Group award basis. 25.504... SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504-4 Group award basis... a group basis. Assume the Buy American Act applies and the acquisition cannot be set aside for small...
NASA Astrophysics Data System (ADS)
Kruse, Holger; Grimme, Stefan
2012-04-01
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry; no input from the electronic wave function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method targets small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme in estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. 
The gCP-corrected B3LYP-D3/6-31G* model chemistry yields MAD = 0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD = 2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.
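An atom pair-wise correction of this kind can be sketched as follows. This is a deliberately simplified toy form with hypothetical parameters and a bare exponential distance dependence; the actual gCP potential additionally weights each pair by a Slater-overlap/virtual-orbital factor and uses the four parameters fitted to Boys-Bernardi counterpoise data:

```python
import math

def pairwise_bsse_correction(coords, e_miss, sigma=1.0, alpha=1.2):
    """Toy atom-pairwise BSSE correction in the spirit of gCP.

    e_miss[a] mimics an atomic basis-set-incompleteness energy; sigma and
    alpha stand in for fitted parameters.  Only the geometry enters, which
    is what makes this class of correction cheap for very large systems."""
    e = 0.0
    for a, ra in enumerate(coords):
        for b, rb in enumerate(coords):
            if a != b:
                e += e_miss[a] * math.exp(-alpha * math.dist(ra, rb))
    return sigma * e

# Two hypothetical atoms 1.0 bohr apart, each "missing" 0.01 hartree:
e_gcp = pairwise_bsse_correction([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                                 [0.01, 0.01])
```

Because the correction is a smooth function of interatomic distances only, its analytical gradient is straightforward, which is what makes the geometry optimizations mentioned above feasible.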
Ethanol production from food waste at high solids content with vacuum recovery technology.
Huang, Haibo; Qureshi, Nasib; Chen, Ming-Hsu; Liu, Wei; Singh, Vijay
2015-03-18
Ethanol production from food wastes not only solves environmental issues but also provides renewable biofuels. This study investigated the feasibility of producing ethanol from food wastes at high solids content (35%, w/w). A vacuum recovery system was developed and applied to remove ethanol from the fermentation broth to reduce yeast ethanol inhibition. A high concentration of ethanol (144 g/L) was produced by conventional fermentation of food waste without a vacuum recovery system. When vacuum recovery was applied to the fermentation process, the ethanol concentration in the fermentation broth was kept below 100 g/L, thus reducing yeast ethanol inhibition. At the end of the conventional fermentation, the residual glucose in the fermentation broth was 5.7 g/L, indicating incomplete utilization of glucose, while the vacuum fermentation allowed complete utilization of glucose. The ethanol yield for the vacuum fermentation was 358 g/kg of food waste (dry basis), higher than that for the conventional fermentation at 327 g/kg of food waste (dry basis).
Eoh, Hyungjin; Rhee, Kyu Y.
2014-01-01
Few mutations attenuate Mycobacterium tuberculosis (Mtb) more profoundly than deletion of its isocitrate lyases (ICLs). However, the basis for this attenuation remains incompletely defined. Mtb's ICLs are catalytically bifunctional isocitrate and methylisocitrate lyases required for growth on even- and odd-chain fatty acids. Here, we report that Mtb's ICLs are essential for survival on both acetate and propionate because of their methylisocitrate lyase (MCL) activity. Lack of MCL activity converts Mtb's methylcitrate cycle into a “dead end” pathway that sequesters tricarboxylic acid (TCA) cycle intermediates into methylcitrate cycle intermediates, depletes gluconeogenic precursors, and results in defects of membrane potential and intrabacterial pH. Activation of an alternative vitamin B12-dependent pathway of propionate metabolism led to selective corrections of TCA cycle activity, membrane potential, and intrabacterial pH that specifically restored survival, but not growth, of ICL-deficient Mtb metabolizing acetate or propionate. These results thus resolve the biochemical basis of essentiality for Mtb's ICLs and survival on fatty acids. PMID:24639517
Bilateral phacoemulsification and intraocular lens implantation in a great horned owl.
Carter, Renee T; Murphy, Christopher J; Stuhr, Charles M; Diehl, Kathryn A
2007-02-15
A great horned owl of estimated age < 1 year that was captured by wildlife rehabilitators was evaluated because of suspected cataracts. Nuclear and incomplete cortical cataracts were evident in both eyes. Ocular ultrasonography revealed no evidence of retinal detachment, and electroretinography revealed normal retinal function. For visual rehabilitation, cataract surgery was planned and intraocular lens design was determined on the basis of values obtained from the schematic eye, which is a mathematical model representing a normal eye for a species. Cataract surgery and intraocular lens placement were performed in both eyes. After surgery, refraction was within -0.75 diopters in the right eye and -0.25 diopters in the left eye. Visual rehabilitation was evident on the basis of improved tracking and feeding behavior, and the owl was eventually released into the wild. In raptors with substantial visual compromise, euthanasia or placement in a teaching facility is a typical outcome because release of such a bird is unacceptable. Successful intraocular lens implantation for visual rehabilitation and successful release into the wild are achievable.
Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F
2015-10-01
Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method, based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock (GWHF) equations. The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed, in contrast with the original GCHF method. The resulting atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets for molecular calculations; the maximum error relative to numerical values is only 0.788 mHartree, for indium. Test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree of the cc-pV5Z results are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good agreement with experimental and cc-pV5Z results. Most importantly, our generator coordinate basis sets required only a tiny fraction of the computational time of the B3LYP/cc-pV5Z calculations.
Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamini, Vittorino
2010-02-15
Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C{sup {infinity}} functions can be expressed as C{sup {infinity}} functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups E{sub 7} and E{sub 8}. In this paper the flat basic sets of invariant homogeneous polynomials of E{sub 7} and E{sub 8} and the corresponding P-matrices are determined explicitly. Using the results reported here, one is able to determine easily the P-matrices corresponding to any other integrity basis of E{sub 7} or E{sub 8}. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E{sub 7} and E{sub 8} relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E{sub 7} and E{sub 8} or one of the Lie groups E{sub 7} and E{sub 8} in their adjoint representations.
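The P-matrix construction itself is elementary: P_ab = grad(p_a) · grad(p_b), re-expressed in the invariants. A minimal sketch for the toy reflection group A1 × A1 acting on R² (two invariants, standing in for the much larger E{sub 7} and E{sub 8} bases treated in the paper):

```python
# P-matrix entries P_ab = grad(p_a) . grad(p_b) for the toy reflection
# group A1 x A1 on R^2 with integrity basis p1 = x^2, p2 = y^2.
def p_matrix(x, y):
    grads = [(2.0 * x, 0.0),   # grad p1
             (0.0, 2.0 * y)]   # grad p2
    return [[sum(ga * gb for ga, gb in zip(grads[a], grads[b]))
             for b in range(2)] for a in range(2)]

# In terms of the invariants, P = [[4*p1, 0], [0, 4*p2]]; it is positive
# semidefinite exactly where p1, p2 >= 0, i.e., on the orbit space.
P = p_matrix(1.5, 2.0)   # here p1 = 2.25, p2 = 4.0
```

The rank and positive-semidefiniteness conditions on P are what carve the orbit space V/G out of the space of invariant coordinates, and a flat basis is simply a distinguished choice of the p_a for which P takes an especially simple form.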
Spectroscopic properties of Arx-Zn and Arx-Ag+ (x = 1,2) van der Waals complexes
NASA Astrophysics Data System (ADS)
Oyedepo, Gbenga A.; Peterson, Charles; Schoendorff, George; Wilson, Angela K.
2013-03-01
Potential energy curves have been constructed using coupled cluster with singles, doubles, and perturbative triple excitations (CCSD(T)) in combination with all-electron and pseudopotential-based multiply augmented correlation consistent basis sets [m-aug-cc-pV(n + d)Z; m = singly, doubly, triply; n = D, T, Q, 5]. The effect of basis set superposition error on the spectroscopic properties of Ar-Zn, Ar2-Zn, Ar-Ag+, and Ar2-Ag+ van der Waals complexes was examined. The diffuse functions of the doubly and triply augmented basis sets have been constructed using the even-tempered expansion. The a posteriori counterpoise scheme of Boys and Bernardi and its generalized variant by Valiron and Mayer have been utilized to correct for basis set superposition error (BSSE) in the calculated spectroscopic properties for diatomic and triatomic species. It is found that, even at the extrapolated complete basis set limit for the energetic properties, the pseudopotential-based calculations still suffer from significant BSSE effects, unlike the all-electron basis sets. This indicates that the quality of the approximations used in the design of pseudopotentials can have a major impact on a seemingly valence-exclusive effect like BSSE. We confirm the experimentally determined equilibrium internuclear distance (re), binding energy (De), harmonic vibrational frequency (ωe), and C1Π ← X1Σ transition energy for ArZn and also predict the spectroscopic properties of the low-lying excited states of linear Ar2-Zn (X1Σg, 3Πg, 1Πg), Ar-Ag+ (X1Σ, 3Σ, 3Π, 3Δ, 1Σ, 1Π, 1Δ), and Ar2-Ag+ (X1Σg, 3Σg, 3Πg, 3Δg, 1Σg, 1Πg, 1Δg) complexes, using the CCSD(T) and MR-CISD + Q methods, to aid in their experimental characterization.
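The Boys-Bernardi counterpoise bookkeeping used for the diatomic species can be sketched in a few lines; the energies below are invented solely to show the sign conventions:

```python
def counterpoise_interaction(e_ab, e_a_ghost, e_b_ghost):
    """Boys-Bernardi counterpoise-corrected interaction energy: each
    monomer is computed in the full dimer basis (partner as ghost atoms)."""
    return e_ab - e_a_ghost - e_b_ghost

def bsse_estimate(e_a_own, e_a_ghost, e_b_own, e_b_ghost):
    """BSSE = stabilization each monomer gains by borrowing the partner's
    basis functions; positive by construction for variational methods."""
    return (e_a_own - e_a_ghost) + (e_b_own - e_b_ghost)

# Invented energies (hartree) purely to illustrate the arithmetic:
e_ab = -100.4821                          # dimer in dimer basis
e_a_ghost, e_a_own = -75.3105, -75.3089   # monomer A with/without ghosts
e_b_ghost, e_b_own = -25.1702, -25.1691   # monomer B with/without ghosts

de_cp = counterpoise_interaction(e_ab, e_a_ghost, e_b_ghost)
bsse = bsse_estimate(e_a_own, e_a_ghost, e_b_own, e_b_ghost)
```

The generalized Valiron-Mayer variant extends the same idea hierarchically to more than two fragments, which is what the triatomic Ar2-Zn and Ar2-Ag+ corrections require.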
NASA Astrophysics Data System (ADS)
Rees, S. J.; Jones, Bryan F.
1992-11-01
Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
A projection-free method for representing plane-wave DFT results in an atom-centered basis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunnington, Benjamin D.; Schmidt, J. R., E-mail: schmidt@chem.wisc.edu
2015-09-14
Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic level insight into bulk and surface structures. Yet, the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches.
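Strict orthonormality in a non-orthogonal atom-centered basis is conventionally enforced with a symmetric (Löwdin) orthogonalization. The sketch below uses a made-up 2×2 overlap matrix and is a generic illustration of that step, not necessarily the paper's exact procedure:

```python
import numpy as np

# Hypothetical overlap matrix of two non-orthogonal atom-centered functions
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Loewdin symmetric orthogonalization: X = S^(-1/2), built from the
# eigendecomposition of S (valid since S is symmetric positive definite)
w, V = np.linalg.eigh(S)
X = V @ np.diag(w ** -0.5) @ V.T

# The transformed functions are strictly orthonormal: X^T S X = I, and the
# symmetric choice perturbs the original functions as little as possible.
I_check = X.T @ S @ X
```

Any set of bands expressed in the Löwdin-orthogonalized functions is then orthonormal by construction, with no artificial mixing beyond what the overlap itself demands.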
Influence of incomplete fusion on complete fusion at energies above the Coulomb barrier
NASA Astrophysics Data System (ADS)
Shuaib, Mohd; Sharma, Vijay R.; Yadav, Abhishek; Sharma, Manoj Kumar; Singh, Pushpendra P.; Singh, Devendra P.; Kumar, R.; Singh, R. P.; Muralithar, S.; Singh, B. P.; Prasad, R.
2017-10-01
In the present work, excitation functions of several reaction residues in the system 19F+169Tm, populated via complete and incomplete fusion processes, have been measured using off-line γ-ray spectroscopy. The excitation functions have been analyzed within the framework of the statistical model code PACE4. The excitation functions of residues populated via xn and pxn channels are found to be in good agreement with those estimated by the theoretical model code, which confirms that these residues are produced solely via the complete fusion process. However, a significant enhancement over the theoretical predictions has been observed in the cross-sections of residues involving α-emitting channels. The observed enhancement has been attributed to incomplete fusion processes. In order to gain better insight into the onset and strength of incomplete fusion, the incomplete fusion strength function has been deduced. At present, no theoretical model is available that satisfactorily explains incomplete fusion reaction data at energies of ≈4-6 MeV/nucleon. In the present work, the influence of incomplete fusion on complete fusion in the 19F+169Tm system has also been studied. The measured cross-section data may also be important for the development of reactor technology. It has been found that the incomplete fusion strength function strongly depends on the α-Q value of the projectile, in good agreement with the existing literature data. The analysis strongly supports the projectile-dependent mass-asymmetry systematics. In order to study the influence of the Coulomb effect (Z_P Z_T) on incomplete fusion, the strength function deduced in the present work is compared with those of nearby projectile-target combinations. The incomplete fusion strength function is found to increase linearly with Z_P Z_T, indicating a strong influence of the Coulomb effect in incomplete fusion reactions.
NASA Astrophysics Data System (ADS)
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-12-01
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, prompting us to explore in this paper the performance of relatively low-level computational methods and to compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. 
At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviations from the computational reference of less than 1 kcal mol-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measurements, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-12-28
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydro-carbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311 (2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. 
At the DFT level, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviations from the computational reference of less than 1 kcal mol-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 values, when available. Finally, this comparison is extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, containing 10, 16, or 24 C atoms) and P (80 C atoms).
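The additivity scheme in step (3) can be sketched numerically. A minimal illustration with hypothetical interaction energies in hartree; the two-point X^-3 formula shown for the MP2/CBS extrapolation is a common convention, not necessarily the exact recipe used in this paper:

```python
def cbs_two_point(e_x, e_y, x, y):
    # Two-point X^-3 extrapolation of correlation energies to the
    # complete basis set (CBS) limit, from cardinal numbers x < y.
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

def ccsdt_cbs_estimate(e_mp2_cbs, e_ccsdt_avtz, e_mp2_avtz):
    # Additivity assumption: CBS-limit MP2 energy plus the higher-order
    # correlation correction CCSD(T) - MP2 evaluated in aug-cc-pVTZ.
    return e_mp2_cbs + (e_ccsdt_avtz - e_mp2_avtz)

# Hypothetical numbers (hartree), purely to show the arithmetic:
e_mp2_cbs = cbs_two_point(-0.300, -0.310, x=3, y=4)   # aVTZ/aVQZ pair
e_ref = ccsdt_cbs_estimate(-0.320, -0.305, -0.295)
```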
Zhao, Chunyu; Burge, James H
2007-12-24
Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
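The construction described, gradients of Zernike polynomials orthonormalized by Gram-Schmidt, can be sketched with a discrete inner product over points sampled in the unit disk. The three low-order terms, the Monte Carlo sampling, and the normalization convention below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Sample the unit disk
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (20000, 2))
pts = pts[(pts**2).sum(1) <= 1.0]
x, y = pts[:, 0], pts[:, 1]

# Gradients of a few low-order Zernike polynomials in Cartesian form:
# Z = x (tilt), Z = y (tilt), Z = 2(x^2 + y^2) - 1 (defocus)
grads = [
    np.stack([np.ones_like(x), np.zeros_like(x)], 1),  # grad of x
    np.stack([np.zeros_like(x), np.ones_like(x)], 1),  # grad of y
    np.stack([4 * x, 4 * y], 1),                       # grad of 2r^2 - 1
]

def inner(u, v):
    # discrete vector-field inner product (average over disk samples)
    return (u * v).sum() / len(u)

# Gram-Schmidt over the gradient fields
basis = []
for g in grads:
    for b in basis:
        g = g - inner(g, b) * b
    basis.append(g / np.sqrt(inner(g, g)))

# Gram matrix of the resulting vector basis (identity by construction)
G = np.array([[inner(a, b) for b in basis] for a in basis])
```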
Wilkins, Emma L; Morris, Michelle A; Radley, Duncan; Griffiths, Claire
2017-03-01
Geographic Information Systems (GIS) are widely used to measure retail food environments. However, the methods used are heterogeneous, limiting collation and interpretation of evidence. This problem is amplified by unclear and incomplete reporting of methods. This discussion (i) identifies common dimensions of methodological diversity across GIS-based food environment research (data sources, data extraction methods, food outlet construct definitions, geocoding methods, and access metrics), (ii) reviews the impact of different methodological choices, and (iii) highlights areas where reporting is insufficient. On the basis of this discussion, the Geo-FERN reporting checklist is proposed to support methodological reporting and interpretation. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Cardiorespiratory coupling in health and disease.
Garcia, Alfredo J; Koschnitzky, Jenna E; Dashevskiy, Tatiana; Ramirez, Jan-Marino
2013-04-01
Cardiac and respiratory activities are intricately linked, both functionally and anatomically, through highly overlapping brainstem networks controlling these autonomic physiologies that are essential for survival. Cardiorespiratory coupling (CRC) has many potential benefits, creating synergies that promote healthy physiology. However, when such coupling deteriorates, cardiorespiratory dysautonomia may ensue. Unfortunately, there is still an incomplete mechanistic understanding of both the normal and pathophysiological interactions that respectively give rise to CRC and cardiorespiratory dysautonomia. Moreover, there is also a need for better quantitative methods to assess CRC. This review addresses the current understanding of CRC by discussing: (1) the neurobiological basis of respiratory sinus arrhythmia (RSA); (2) various disease states involving cardiorespiratory dysautonomia; and (3) methodologies measuring heart rate variability and RSA. Copyright © 2013 Elsevier B.V. All rights reserved.
Corporate Social Responsibility in Aviation
NASA Technical Reports Server (NTRS)
Phillips, Edwin D.
2006-01-01
The dialog within aviation management education regarding ethics is incomplete without a discussion of corporate social responsibility (CSR). CSR research requires discussion involving: (a) the current emphasis on CSR in business in general and aviation specifically; (b) business and educational theory that provide a basis for aviation companies to engage in socially responsible actions; (c) techniques used by aviation and aerospace companies to fulfill this responsibility; and (d) a glimpse of teaching approaches used in university aviation management classes. The summary of this research suggests educators explain CSR theory and practice to students in industry and collegiate aviation management programs. Doing so extends the discussion of ethical behavior and matches the current high level of interest and activity within the aviation industry toward CSR.
NASA Astrophysics Data System (ADS)
Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove
2018-02-01
We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested, and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
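The nested fitting loop described, a linear least-squares fit for the expansion coefficients inside a nonlinear conjugate-gradient optimization of the basis-function parameters, can be sketched as follows. The one-dimensional Morse-type basis, the sample potential cut, and all numbers are hypothetical stand-ins, not the paper's systems:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D potential cut with Morse-like shape (arbitrary units)
r = np.linspace(0.8, 4.0, 60)
v_true = 0.2 * (1 - np.exp(-1.1 * (r - 1.4)))**2

def morse_basis(r, a, r0=1.4):
    # two Morse-type fit-basis functions y and y^2, y = 1 - exp(-a (r - r0))
    y = 1.0 - np.exp(-a * (r - r0))
    return np.stack([y, y**2], 1)

def residual_norm(a):
    # inner step: linear least-squares fit of the cut for a given
    # nonlinear parameter a; returns the squared fitting error
    B = morse_basis(r, a[0])
    c, *_ = np.linalg.lstsq(B, v_true, rcond=None)
    return np.sum((B @ c - v_true)**2)

# outer step: nonlinear conjugate gradient over the Morse exponent
res = minimize(residual_norm, x0=[0.5], method="CG")
```

Because the sample cut is itself a Morse curve with exponent 1.1, the optimizer should recover that value with a near-zero residual.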
Scheepers, P T; Bos, R P
1992-01-01
Since the use of diesel engines is still increasing, the contribution of their incomplete combustion products to air pollution is becoming ever more important. The presence of irritating and genotoxic substances among both the gas-phase and particulate-phase constituents is considered to have significant health implications. The quantity of soot particles and the particle-associated organics emitted from the tail pipe of a diesel-powered vehicle depend primarily on the engine type and combustion conditions, but also on fuel properties. The quantity of soot particles in the emissions is determined by the balance between the rate of formation and subsequent oxidation. Organics are adsorbed onto carbon cores in the cylinder, in the exhaust system, in the atmosphere, and even on the filter during sample collection. Diesel fuel contains polycyclic aromatic hydrocarbons (PAHs) and some alkyl derivatives. Both groups of compounds may survive the combustion process. PAHs are formed by the combustion of crankcase oil or may be resuspended from engine and/or exhaust deposits. The conversion of parent PAHs to oxygenated and nitrated PAHs in the combustion chamber or in the exhaust system is related to the vast amount of excess combustion air that is supplied to the engine and the high combustion temperature. Whether the occurrence of these derivatives is characteristic of the composition of diesel engine exhaust remains to be ascertained. After the emission of the particles, their properties may change because of atmospheric processes such as aging and resuspension. The particle-associated organics may also be subject to (photo)chemical conversions, or the components may change during sampling and analysis. Measurement of emissions of incomplete combustion products as determined on a chassis dynamometer provides knowledge of the chemical composition of the particle-associated organics.
This knowledge is useful as a basis for a toxicological evaluation of the health hazards of diesel engine emissions.
Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.
Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K
2011-01-01
Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.
NASA Astrophysics Data System (ADS)
Győrffy, Werner; Knizia, Gerald; Werner, Hans-Joachim
2017-12-01
We present the theory and algorithms for computing analytical energy gradients for explicitly correlated second-order Møller-Plesset perturbation theory (MP2-F12). The main difficulty in F12 gradient theory arises from the large number of two-electron integrals for which effective two-body density matrices and integral derivatives need to be calculated. For efficiency, the density fitting approximation is used for evaluating all two-electron integrals and their derivatives. The accuracies of various previously proposed MP2-F12 approximations [3C, 3C(HY1), 3*C(HY1), and 3*A] are demonstrated by computing equilibrium geometries for a set of molecules containing first- and second-row elements, using double-ζ to quintuple-ζ basis sets. Generally, the convergence of the bond lengths and angles with respect to the basis set size is strongly improved by the F12 treatment, and augmented triple-ζ basis sets are sufficient to closely approach the basis set limit. The results obtained with the different approximations differ only very slightly. This paper is the first step towards analytical gradients for coupled-cluster singles and doubles with perturbative treatment of triple excitations, which will be presented in the second part of this series.
The hearing-impaired child in the hearing society.
Burton, M H
1983-11-01
This paper sets out to describe a method of educating the hearing-impaired which has been operating successfully for the past 18 years. The underlying tenet of our approach is that considerable communicative skills can be developed in children who have marked hearing loss. Even if the child is profoundly deaf, he or she has some sensory input which can be used as the basis for training in language development. The attempt to make the most of the minimal hearing of the hearing-impaired child has proved successful in the vast majority of cases. The profoundly hearing-impaired child can learn to listen and to produce the spoken word. This is demonstrated by the use of video-tape. The interaction of teacher with child is heard, and the regional accent can be identified. The prosodic features of the speech are retained although articulation may be incomplete. Intelligibility of utterance is shown to be a combination of rhythm, stress, and intonation based on previously heard patterns rather than on perfectly articulated sounds. The social consequence of this approach is that the child is not relegated to a minority subculture where only the deaf can communicate with the deaf, but is allowed to enter into the world of normal relationships and expectations. Deaf children can be taught to listen and to use imperfectly heard patterns in order to interpret the meaning of language. This input of speech follows the natural language normally used by the child who is not deaf.
Genetic therapy for vein bypass graft disease: current perspectives.
Simosa, Hector F; Conte, Michael S
2004-01-01
Although continued progress in endovascular technology holds promise for less invasive approaches to arterial diseases, surgical bypass grafting remains the mainstay of therapy for patients with advanced coronary and peripheral ischemia. In the United States, nearly 400,000 coronary and 100,000 lower extremity bypass procedures are performed annually. The autogenous vein, particularly the greater saphenous vein, has proven to be a durable and versatile arterial substitute, with secondary patency rates at 5 years of 70 to 80% in the extremity. However, vein graft failure is a common occurrence that incurs significant morbidity and mortality, and, to date, pharmacologic approaches to prolong vein graft patency have produced limited results. Dramatic advances in genetics, coupled with a rapidly expanding knowledge of the molecular basis of vascular diseases, have set the stage for genetic interventions. The attraction of a genetic approach to vein graft failure is based on the notion that the tissue at risk is readily accessible to the clinician prior to the onset of the pathologic process and the premise that genetic reprogramming of cells in the wall of the vein can lead to an improved healing response. Although the pathophysiology of vein graft failure is incompletely understood, numerous relevant molecular targets have been elucidated. Interventions designed to influence cell proliferation, thrombosis, inflammation, and matrix remodeling at the genetic level have been described, and many have been tested in animal models. Both gene delivery and gene blockade strategies have been investigated, with the latter reaching the stage of advanced clinical trials.
Drug wastage contributes significantly to the cost of routine anesthesia care.
Weinger, M B
2001-11-01
To complement previous studies that employed indirect methods of measuring anesthesia drug waste. Prospective, blinded observational study. Operating rooms of a single university hospital. Anesthesia providers practicing in this setting who were completely unaware of the conduct of the study. All opened and unused or unusable intravenous (IV) anesthesia drugs left over at the end of each workday were collected over a randomly selected typical 2-week period. 166 weekday cases were performed. Thirty different drugs were represented in the 157 syringes and 139 ampoules collected. Opioid waste as well as opened vials that became outdated were counted in the tally. Based on actual hospital drug acquisition costs, $1,802 of drugs were wasted during this 2-week period ($300/OR), amounting to an average cost per case of $10.86. On a cost basis, six drugs accounted for three quarters of the total wastage: phenylephrine (20.8%), propofol (14.5%), vecuronium (12.2%), midazolam (11.4%), labetalol (9.1%), and ephedrine (8.6%). Because incompletely used syringes or vials that were discarded in the trash were not measured in this analysis, the results may underestimate the total cost of drug wastage at this institution by up to 40%. The results of this study are similar to those of previous studies that employed electronic record keeping techniques to calculate drug waste. Intravenous drugs that are prepared but unused may be a significant cost of intraoperative anesthesia care. Methods to reduce the amount of drug wasted are proposed.
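The study's headline figures are internally consistent, as a quick check shows (all numbers are taken directly from the abstract):

```python
# Figures reported for the randomly selected 2-week collection period
total_waste_usd = 1802.0   # acquisition cost of all wasted drugs
cases = 166                # weekday cases performed in the same period

cost_per_case = total_waste_usd / cases   # matches the reported $10.86/case

# Share of total wastage attributed to the six costliest drugs:
# phenylephrine, propofol, vecuronium, midazolam, labetalol, ephedrine
top_six_share = 20.8 + 14.5 + 12.2 + 11.4 + 9.1 + 8.6   # "three quarters"
```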
Lipid-induced metabolic dysfunction in skeletal muscle.
Muoio, Deborah M; Koves, Timothy R
2007-01-01
Insulin resistance is a hallmark of type 2 diabetes and commonly observed in other energy-stressed settings such as obesity, starvation, inactivity and ageing. Dyslipidaemia and 'lipotoxicity'--tissue accumulation of lipid metabolites-are increasingly recognized as important drivers of insulin resistant states. Mounting evidence suggests that lipid-induced metabolic dysfunction in skeletal muscle is mediated in large part by stress-activated serine kinases that interfere with insulin signal transduction. However, the metabolic and molecular events that connect lipid oversupply to stress kinase activation and glucose intolerance are as yet unclear. Application of transcriptomics and targeted mass spectrometry-based metabolomics tools has led to our finding that insulin resistance is a condition in which muscle mitochondria are persistently burdened with a heavy lipid load. As a result, high rates of beta-oxidation outpace metabolic flux through the TCA cycle, leading to accumulation of incompletely oxidized acyl-carnitine intermediates. In contrast, exercise training enhances mitochondrial performance, favouring tighter coupling between beta-oxidation and the TCA cycle, and concomitantly restores insulin sensitivity in animals fed a chronic high fat diet. The exercise-activated transcriptional co-activator, PGC1alpha, plays a key role in co-ordinating metabolic flux through these two intersecting metabolic pathways, and its suppression by overfeeding may contribute to obesity-associated mitochondrial dysfunction. Our emerging model predicts that muscle insulin resistance arises from mitochondrial lipid stress and a resultant disconnect between beta-oxidation and TCA cycle activity. Understanding this 'disconnect' and its molecular basis may lead to new therapeutic targets for combating metabolic disease.
Simplified DFT methods for consistent structures and energies of large systems
NASA Astrophysics Data System (ADS)
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.
A practical radial basis function equalizer.
Lee, J; Beach, C; Tepedelenlioglu, N
1999-01-01
A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without reduction in centers, and better than a conventional linear equalizer.
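The two-step center reduction can be sketched with synthetic data. The boundary-proximity criterion and the grouping radius below are illustrative guesses standing in for the authors' algorithm, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(1)
# Candidate centers: channel-state vectors labeled by transmitted symbol
centers = rng.normal(0, 1, (500, 2))
labels = np.sign(centers[:, 0] + 0.5 * centers[:, 1])

# Step 1: keep only centers close to the decision boundary, i.e. those
# whose nearest opposite-class neighbour lies within a small threshold
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
opposite = labels[:, None] != labels[None, :]
near_boundary = np.where(opposite, d, np.inf).min(axis=1) < 0.2
reduced = centers[near_boundary]

# Step 2: group the surviving centers and represent each group
# by its average position
def group_by_distance(pts, radius=0.3):
    groups, used = [], np.zeros(len(pts), bool)
    for i in range(len(pts)):
        if used[i]:
            continue
        member = np.linalg.norm(pts - pts[i], axis=1) < radius
        member &= ~used
        used |= member
        groups.append(pts[member].mean(axis=0))
    return np.array(groups)

final_centers = group_by_distance(reduced)
```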
Effects of unconventional breakup modes on incomplete fusion of weakly bound nuclei
NASA Astrophysics Data System (ADS)
Diaz-Torres, Alexis; Quraishi, Daanish
2018-02-01
The incomplete fusion dynamics of 6Li+209Bi collisions at energies above the Coulomb barrier is investigated. The classical dynamical model implemented in the platypus code is used to understand and quantify the impact of both 6Li resonance states and transfer-triggered breakup modes (involving short-lived projectile-like nuclei such as 8Be and 5Li) on the formation of incomplete fusion products. Model calculations explain the experimental incomplete-fusion excitation function fairly well, indicating that (i) delayed direct breakup of 6Li reduces the incomplete fusion cross sections and (ii) the neutron-stripping channel practically determines those cross sections.
NASA Astrophysics Data System (ADS)
Yang, Qi; Cao, Yue; Chen, Shiyin; Teng, Yue; Meng, Yanli; Wang, Gangcheng; Sun, Chunfang; Xue, Kang
2018-03-01
In this paper, we construct a new set of orthonormal topological basis states for six qubits with the topological single loop d = 2. By acting on the subspace, we get a new five-dimensional (5D) reduced matrix. In addition, it is shown that the Heisenberg XXX spin-1/2 chain of six qubits can be constructed from the Temperley-Lieb algebra (TLA) generator; both the energy ground state and the spin singlet states of the system can be described by this set of topological basis states.
Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method
NASA Astrophysics Data System (ADS)
Fedorov, Dmitri G.; Kitaura, Kazuo
2014-03-01
We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G**, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
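The core idea of the 1-vs-set machine, bounding the accepted region with a second plane so that open space far from the training data is rejected rather than blindly accepted, can be caricatured with two thresholds on linear scores. This is a drastic simplification of the paper's constrained-minimization formulation; the linear model, the data, and the percentile choice are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Scores of training positives under a fixed (hypothetical) linear model
w, b = np.array([1.0, -0.5]), 0.1
pos_train = rng.normal([2.0, -1.0], 0.3, (100, 2))
scores = pos_train @ w + b

# "1-vs-set"-style slab: accept only scores between two planes bracketing
# the known-class scores; an ordinary one-sided threshold would accept
# anything with a large score, however far from the training data it lies
lo, hi = np.percentile(scores, 1), np.percentile(scores, 99)

def predict(x):
    s = x @ w + b
    return (lo <= s) & (s <= hi)   # True = known class, False = rejected
```

A point near the training cluster is accepted, while a far-away point with an extreme score is rejected as unknown.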
Takegahara, Noriko; Kim, Hyunsoo; Mizuno, Hiroki; Sakaue-Sawano, Asako; Miyawaki, Atsushi; Tomura, Michio; Kanagawa, Osami; Ishii, Masaru; Choi, Yongwon
2016-02-12
Osteoclasts are specialized polyploid cells that resorb bone. Upon stimulation with receptor activator of nuclear factor-κB ligand (RANKL), myeloid precursors commit to becoming polyploid, largely via cell fusion. Polyploidization of osteoclasts is necessary for their bone-resorbing activity, but the mechanisms by which polyploidization is controlled remain to be determined. Here, we demonstrated that in addition to cell fusion, incomplete cytokinesis also plays a role in osteoclast polyploidization. In in vitro cultured osteoclasts derived from mice expressing the fluorescent ubiquitin-based cell cycle indicator (Fucci), RANKL induced polyploidy by incomplete cytokinesis as well as cell fusion. Polyploid cells generated by incomplete cytokinesis had the potential to subsequently undergo cell fusion. Nuclear polyploidy was also observed in osteoclasts in vivo, suggesting the involvement of incomplete cytokinesis in physiological polyploidization. Furthermore, RANKL-induced incomplete cytokinesis was reduced by inhibition of Akt, resulting in impaired multinucleated osteoclast formation. Taken together, these results reveal that RANKL-induced incomplete cytokinesis contributes to polyploidization of osteoclasts via Akt activation. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Priority setting for health in emerging markets.
Glassman, Amanda; Giedion, Ursula; McQueston, Kate
2013-05-01
The use of health technology assessment research in emerging economies is becoming an increasingly important tool for determining the uses of health spending. As low- and middle-income countries' gross domestic products have grown, the funding available for health has increased in tandem. There is growing evidence that comparative effectiveness research and cost-effectiveness can be used to improve health outcomes within a predefined financial space. The use of these evaluation tools, combined with a systematized process of priority setting, can help inform national and global health payers. This review of country institutions for health technology assessment illustrates two points: the efforts underway to use research to inform priorities are widespread and not confined to wealthier countries; and many countries' efforts to create evidence-based policy are incomplete, so more country-specific research will be needed. Further evidence shows that there is scope to reduce these gaps and opportunity to support better incorporation of data through better-defined priority-setting processes.
Estimation of the left ventricular shape and motion with a limited number of slices
NASA Astrophysics Data System (ADS)
Robert, Anne; Schmitt, Francis J. M.; Mousseaux, Elie
1996-04-01
In this paper, we describe a method for the reconstruction of the surface of the left ventricle from a set of lacunary data (that is, an incomplete, unevenly sampled, and unstructured data set). Global models, because they compress the properties of a surface into a small set of parameters, have a strong regularizing power and are therefore very well suited to lacunary data. Globally deformable superquadrics are particularly attractive because of their simplicity. This model can be fitted to the data using the Levenberg-Marquardt algorithm for non-linear optimization. However, the difficulties we experienced in obtaining temporally consistent solutions, as well as the intrinsic 4D character of the data, led us to generalize the classical 3D superquadric model to 4D. We present results on a 4D sequence from the Dynamic Spatial Reconstructor of the Mayo Clinic, and on a 4D MRI sequence.
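Fitting a global implicit model to lacunary data with Levenberg-Marquardt can be sketched in a few lines. For brevity this uses a sphere, the superquadric special case with unit exponents, and synthetic hemisphere-only samples standing in for an incomplete, unevenly sampled slice set; it is not the paper's 4D model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
# Lacunary data: points sampled unevenly from the upper hemisphere only
theta = rng.uniform(0, np.pi / 2, 200)      # polar angle, upper half
phi = rng.uniform(0, 2 * np.pi, 200)
c_true, r_true = np.array([1.0, -2.0, 0.5]), 3.0
pts = c_true + r_true * np.stack(
    [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),
     np.cos(theta)], 1)

def residuals(p):
    # implicit surface residual |x - c| - r for center c and radius r
    c, r = p[:3], p[3]
    return np.linalg.norm(pts - c, axis=1) - r

# Levenberg-Marquardt fit of the 4 global parameters
fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 1.0], method="lm")
```

Even though the data cover only half the surface, the small global parameter set regularizes the problem and the fit recovers the generating shape.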
Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun
2018-03-13
Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms, based on data from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairments such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type, and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE), and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (with Games-Howell multiple comparisons for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices against the designated gold standard measurements.
Sensor type, sensor location, activity characteristics and the population-specific condition all influence the validity of physical activity metrics estimated with standard proprietary algorithms. Implementing population-specific customized algorithms that account for the influences of sensor location, sensor type and activity characteristics when estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.
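The validation procedure described in the record above can be sketched as follows. All data, group sizes and bias values below are synthetic illustrations, not study values, and the Games-Howell post-hoc step is omitted (it is not in SciPy and would need a dedicated package):

```python
# Sketch of a Kruskal-Wallis validity check: compare two hypothetical
# device estimates against a gold-standard measure. Synthetic data only.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
gold = rng.normal(5.0, 1.0, 30)             # e.g. gold-standard EE values
device_a = gold + rng.normal(0.1, 0.5, 30)  # device close to the gold standard
device_b = gold + rng.normal(2.0, 0.5, 30)  # device with a systematic bias

h, p = kruskal(gold, device_a, device_b)    # nonparametric rank-based ANOVA
print(f"H = {h:.2f}, p = {p:.4g}")          # small p: at least one device disagrees
```

A small p-value only says that some group differs; locating which device disagrees with the gold standard is what the post-hoc multiple comparisons are for.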
Russell, Doug
2015-01-01
The sustained interdisciplinary debate about neovitalism between two Johns Hopkins University colleagues, philosopher Arthur O. Lovejoy and experimental geneticist H. S. Jennings, in the period 1911-1914, was the basis for their theoretical reconceptualization of scientific knowledge as contingent and necessarily incomplete in its account of nature. Their response to Hans Driesch's neovitalist concept of entelechy, and his challenge to the continuity between biology and the inorganic sciences, resulted in a historically significant articulation of genetics and philosophy. This study traces the debate's shift of problem-focus away from neovitalism's threat to the unity of science - "organic autonomy," as Lovejoy put it - and toward the potential for development of a nonmechanistic, nonrationalist theory of scientific knowledge. The result was a new pragmatist epistemology, based on Lovejoy's and Jennings's critiques of the inadequacy of pragmatism's account of scientific knowledge. The first intellectual move, drawing on naturalism and pragmatism, was based on a reinterpretation of science as organized experience. The second, sparked by Henri Bergson's theory of creative evolution, and drawing together elements of Dewey's and James's pragmatisms, produced a new account of the contingency and necessary incompleteness of scientific knowledge. Prompted by the neovitalists' mix of a priori concepts and, in Driesch's case, an adherence to empiricism, Lovejoy's and Jennings's developing pragmatist epistemologies of science explored the interrelation between rationalism and empiricism.
Feed-forward neural network model for hunger and satiety related VAS score prediction.
Krishnan, Shaji; Hendriks, Henk F J; Hartvigsen, Merete L; de Graaf, Albert A
2016-07-07
An artificial neural network approach was chosen to model the outcome of the complex signaling pathways in the gastro-intestinal tract and other peripheral organs that eventually produce the satiety feeling in the brain upon feeding. A multilayer feed-forward neural network was trained with sets of experimental data relating concentration-time courses of plasma satiety hormones to Visual Analog Scale (VAS) scores. The network successfully predicted VAS responses from sets of satiety hormone data obtained in experiments using different food compositions. The correlation coefficients for the predicted VAS responses for test sets having i) a full set of three satiety hormones, ii) a set of only two satiety hormones, and iii) a set of only one satiety hormone were 0.96, 0.96, and 0.89, respectively. The predicted VAS responses discriminated the satiety effects of high satiating food types from less satiating food types both in orally fed and ileal infused forms. From this application of artificial neural networks, one may conclude that neural network models are well suited to describing situations where behavior is complex and incompletely understood. However, training data sets that fit the experimental conditions need to be available.
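A minimal feed-forward network of the general kind described above can be sketched with NumPy alone. The architecture (one tanh hidden layer), the feature layout (three hormones at three time points) and the synthetic target are all illustrative assumptions, not the authors' trained model:

```python
# Toy multilayer feed-forward network: hormone concentration-time
# features in, a single satiety (VAS-like) score out. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 9))          # 3 hormones x 3 time points (assumed layout)
y = X @ rng.uniform(0.5, 1.5, 9) / 9     # synthetic satiety target

W1 = rng.normal(0, 0.5, (9, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # output layer weights

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    return (h @ W2 + b2).ravel()         # linear output: predicted score

lr = 0.1
for _ in range(2000):                    # plain batch gradient descent on MSE
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h**2)          # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

r = np.corrcoef(forward(X), y)[0, 1]     # training-set correlation
print(f"training correlation r = {r:.3f}")
```

The reported correlations of 0.89-0.96 in the abstract refer to held-out test sets; a real application would of course split the data rather than report the training fit as here.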
Guidance for Avoiding Incomplete Premanufacture Notices or Bona Fides in the New Chemicals Program
This page contains documents to help you avoid submitting an incomplete Premanufacture Notice or Bona Fide. The documents cover the chemical identity requirements and the common errors that result in incomplete submissions.
NASA Technical Reports Server (NTRS)
Wu, H.; George, K.; Yang, T. C.
1999-01-01
PURPOSE: To study the frequency of true incomplete exchanges induced by high-LET radiation. MATERIALS AND METHODS: Human lymphocytes were exposed to 1 GeV/u Fe ions (LET = 140 keV/μm). Chromosome aberrations were analysed by fluorescence in situ hybridization using a combination of whole-chromosome-specific probes and human telomere probes. Chromosomes 1, 3 and 4 were investigated. RESULTS: The percentage of incomplete exchanges was between 23 and 29% if telomere signals were not considered. The percentage decreased to approximately 10% after ruling out false incomplete exchanges containing telomere signals. The final estimation of true incomplete exchanges was <10%. CONCLUSION: Within a degree of uncertainty, the percentage of true incomplete exchanges in 1 GeV/u Fe ion-irradiated human lymphocytes was similar to that induced by gamma rays.
3D Printed Models of Cleft Palate Pathology for Surgical Education.
Lioufas, Peter A; Quayle, Michelle R; Leong, James C; McMenamin, Paul G
2016-09-01
To explore the potential viability and limitations of 3D printed models of children with cleft palate deformity. The advantages of 3D printed replicas of normal anatomical specimens have previously been described. The creation of 3D prints displaying patient-specific anatomical pathology for surgical planning and interventions is an emerging field. Here we explored the possibility of taking rare pediatric radiographic data sets to create 3D prints for surgical education. Magnetic resonance imaging data of 2 children (8 and 14 months) were segmented, colored, and anonymized, and stereolithographic files were prepared for 3D printing on multicolor plastic or powder 3D printers and on multimaterial 3D printers. Two models were deemed of sufficient quality and anatomical accuracy to print unamended. One data set was further manipulated digitally to artificially extend the length of the cleft. Thus, 3 models were printed: 1 incomplete soft-palate deformity, 1 incomplete anterior palate deformity, and 1 complete cleft palate. All had cleft lip deformity. The single-material 3D prints are of sufficient quality to accurately identify the nature and extent of the deformities. Multimaterial prints were subsequently created, which could be valuable in surgical training. Improvements in the quality and resolution of radiographic imaging, combined with the advent of multicolor multiproperty printer technology, will make it feasible in the near future to print 3D replicas in materials that mimic the mechanical properties and color of live human tissue, making them potentially suitable for surgical training.
Topology and incompleteness for 2+1-dimensional cosmological spacetimes
NASA Astrophysics Data System (ADS)
Fajman, David
2017-06-01
We study the long-time behavior of the Einstein flow coupled to matter on 2-dimensional surfaces. We consider massless matter models such as collisionless matter composed of massless particles, massless scalar fields and radiation fluids and show that the maximal globally hyperbolic development of homogeneous and isotropic initial data on the 2-sphere is geodesically incomplete in both time directions, i.e. the spacetime recollapses. This behavior also holds for open sets of initial data. In particular, we construct classes of recollapsing 2+1-dimensional spacetimes with spherical spatial topology which provide evidence for a closed universe recollapse conjecture for massless matter models in 2+1 dimensions. Furthermore, we construct solutions with toroidal and higher genus topology for the massless matter fields, which in both cases are future complete. The spacetimes with toroidal topology are 2+1-dimensional analogues of the Einstein-de Sitter model. In addition, we point out a general relation between the energy-momentum tensor and the Kretschmann scalar in 2+1 dimensions and use it to infer strong cosmic censorship for all these models. In view of this relation, we also recall corresponding models containing massive particles, constructed in a previous work, and determine the nature of their initial singularities. We conclude that the global structure of non-vacuum cosmological spacetimes in 2+1 dimensions is determined by the mass of particles and, in the homogeneous and isotropic setting studied here, verifies strong cosmic censorship.
[Several mechanisms of visual gnosis disorders in local brain lesions].
Meerson, Ia A
1981-01-01
The studies examined the peculiarities of visual image recognition by patients with local cerebral lesions under conditions of incomplete sets of image features, disjunction of the features, distortion of their spatial arrangement, and unusual spatial orientation of the image as a whole. It was found that elimination of even one essential feature sharply hampered recognition of the image both by healthy individuals (controls) and by patients with extraoccipital lesions, whereas elimination of several nonessential features only slowed the process. In contrast, the difficulty that patients with occipital lesions had in recognizing incomplete images was directly proportional to the number of eliminated features irrespective of their significance, i.e. these patients were unable to evaluate the hierarchy of the features. In these patients recognition proceeded by scanning individual features, with their accumulation and summation. Recognition of fragmentary, spatially distorted and unusually oriented images was found to be selectively affected in patients with parietal lobe lesions. Patients with occipital lesions recognized such images practically as well as ordinary ones.
NASA Astrophysics Data System (ADS)
Brenner, Howard
2011-10-01
Linear irreversible thermodynamic principles are used to demonstrate, by counterexample, the existence of a fundamental incompleteness in the basic pre-constitutive mass, momentum, and energy equations governing fluid mechanics and transport phenomena in continua. The demonstration is effected by addressing the elementary case of steady-state heat conduction (and transport processes in general) occurring in quiescent fluids. The counterexample questions the universal assumption of equality of the four physically different velocities entering into the basic pre-constitutive mass, momentum, and energy conservation equations. Explicitly, it is argued that such equality is an implicit constitutive assumption rather than an established empirical fact of unquestioned authority. Such equality, if indeed true, would require formal proof of its validity, currently absent from the literature. In fact, our counterexample shows the assumption of equality to be false. As the current set of pre-constitutive conservation equations appearing in textbooks are regarded as applicable both to continua and noncontinua (e.g., rarefied gases), our elementary counterexample negating belief in the equality of all four velocities impacts on all aspects of fluid mechanics and transport processes, continua and noncontinua alike.
Lemieux, Paul M; Ryan, Jeffrey V; Bass, Charles; Barat, Robert
1996-04-01
Experiments were performed on a 73 kW rotary kiln incinerator simulator equipped with a 73 kW secondary combustion chamber (SCC) to examine emissions of products of incomplete combustion (PICs) resulting from incineration of carbon tetrachloride (CCl4) and dichloromethane (CH2Cl2). Species were measured using an on-line gas chromatograph (GC) system capable of measuring concentrations of eight volatile organic compound (VOC) species in a near-real-time fashion. Samples were taken at several points within the SCC to generate species profiles with respect to system residence time. For the experiments, the afterburner on the SCC was operated at conditions ranging from fuel-rich to fuel-lean, while the kiln was operated at a constant set of conditions. Results indicate that combustion of CH2Cl2 produces higher levels of measured PICs than combustion of CCl4, particularly 1,2-dichlorobenzene and, to a lesser extent, monochlorobenzene. Benzene emissions were predominantly affected by the afterburner air/fuel ratio regardless of whether or not a surrogate waste was being fed.
Westerdahl, Christina; Bergenfelz, Anders; Isaksson, Anders; Wihl, Anders; Nerbrand, Christina; Valdemarsson, Stig
2006-09-01
To search for primary hyperaldosteronism (PHA) among previously known hypertensive patients in primary care, using the aldosterone/renin ratio (ARR), and to evaluate clinical and biochemical characteristics in patients with high or normal ratio. Patient survey study. The study population was recruited by written invitation among hypertensive patients in two primary care areas in Sweden. A total of 200 patients met the criteria and were included in the study. The ARR was calculated from serum aldosterone and plasma renin concentrations. The cut-off level for ARR was set to 100, as confirmed in 28 healthy subjects. Patients with increased ARR were considered for a confirmatory test, using the fludrocortisone suppression test. Of 200 patients, 50 patients had ARR > 100; 26 patients were further evaluated by fludrocortisone suppression test. Seventeen of these patients had an incomplete aldosterone inhibition. In total 17 of 200 evaluated patients (8.5%) had an incomplete suppression with fludrocortisone. This confirms previous reports on a high frequency of PHA. No significant biochemical or clinical differences were found among hypertensive patients with PHA compared with the whole sample.
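The screening rule used in the study above reduces to a simple ratio with a cut-off. A minimal sketch follows; function names and the example patient values are our own illustrative assumptions (the units must match whatever assay the laboratory uses):

```python
# Aldosterone/renin ratio (ARR) screen with the study's cut-off of 100.
def aldosterone_renin_ratio(s_aldosterone, p_renin):
    """ARR = serum aldosterone / plasma renin concentration."""
    if p_renin <= 0:
        raise ValueError("renin concentration must be positive")
    return s_aldosterone / p_renin

def flag_for_confirmatory_test(arr, cutoff=100.0):
    """ARR above the cut-off: consider fludrocortisone suppression testing."""
    return arr > cutoff

arr = aldosterone_renin_ratio(550.0, 4.0)    # hypothetical patient values
print(arr, flag_for_confirmatory_test(arr))  # 137.5 True
```

As the abstract notes, an elevated ARR is only a screen: the diagnosis of PHA rests on incomplete aldosterone suppression in the confirmatory fludrocortisone test.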
Teichman, Sam L; Maisel, Alan S; Storrow, Alan B
2015-03-01
Acute heart failure is a common condition associated with considerable morbidity, mortality, and cost. However, evidence-based data on treating heart failure in the acute setting are limited, and current individual treatment options have variable efficacy. The healthcare team must often individualize patient care in ways that may extend beyond available clinical guidelines. In this review, we address the question, "How do you do the best you can clinically with incomplete evidence and imperfect drugs?" Expert opinion is provided to supplement guideline-based recommendations and help address the typical challenges that are involved in the management of patients with acute heart failure. Specifically, we discuss 4 key areas that are important in the continuum of patient care: differential diagnosis and risk stratification; choice and implementation of initial therapy; assessment of the adequacy of therapy during hospitalization or observation; and considerations for discharge/transition of care. A case study is presented to highlight the decision-making process throughout each of these areas. Evidence is accumulating that should help guide patients and healthcare providers on a path to better quality of care.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
2014-01-01
Background Over the last years, evidence has accumulated in support of bracing as an effective treatment option in patients with idiopathic scoliosis. Yet, little information is available on the impact of compliance on the outcome of conservative treatment in scoliotic subjects. The aim of the present study was to prospectively evaluate the association between compliance to brace treatment and the progression of the scoliotic curve in patients with adolescent (AIS) or juvenile idiopathic scoliosis (JIS). Methods Among 1,424 patients treated for idiopathic scoliosis, 645 met the inclusion criteria. Three outcomes were distinguished in agreement with the SRS criteria: curve correction, curve stabilization and curve progression. Brace wearing was assessed by one orthopaedic surgeon (LA) and scored on a standardized form. Compliance to treatment was categorized as complete (brace worn as prescribed), incomplete A (brace removed for 1 month), incomplete B (brace removed for 2 months), incomplete C (brace removed during school hours), and incomplete D (brace worn overnight only). The Chi-square test, t-test or ANOVA, and ANOVA for repeated measures were used as statistical tests. Results At follow-up the compliance was: Complete 61.1%; Incomplete A 5.2%; Incomplete B 10.7%; Incomplete C 14.2%; Incomplete D 8.8%. Curve correction was accomplished in 301/319 of Complete, 19/27 Incomplete A, 25/56 Incomplete B, 52/74 Incomplete C, and 27/46 Incomplete D. The mean Cobb value was 29.8 ± 7.5 SD at the beginning and 17.1 ± 10.9 SD at follow-up. Both Cobb and Perdriolle degree amelioration was significantly higher in patients with complete compliance than in all other groups, in both juvenile and adolescent scoliosis. In the intention-to-treat analysis, the rate of surgical treatment was 2.1% among patients with a definite outcome and 12.1% among those who dropped out. Treatment compliance showed significant interactions with time.
Conclusion Curve progression and referral to surgery are lower in patients with high brace compliance. Bracing discontinuation for up to 1 month does not affect the treatment outcome. Conversely, wearing the brace only overnight is associated with a high rate of curve progression. PMID:24995038
Dimensional analysis using toric ideals: primitive invariants.
Atherton, Mark A; Bates, Ronald A; Wynn, Henry P
2014-01-01
Classical dimensional analysis in its original form starts by expressing the units for derived quantities, such as force, in terms of power products of basic units [Formula: see text] etc. This suggests the use of toric ideal theory from algebraic geometry. Within this, the Graver basis provides a unique primitive basis in a well-defined sense, which typically has more terms than the standard Buckingham approach. Some textbook examples are revisited and the full set of primitive invariants found. First, a worked example based on convection is introduced to recall the Buckingham method, but using computer algebra to obtain an integer [Formula: see text] matrix from the initial integer [Formula: see text] matrix holding the exponents for the derived quantities. The [Formula: see text] matrix defines the dimensionless variables. But, rather than this integer linear algebra approach, it is shown how, by staying with the power product representation, the full set of invariants (dimensionless groups) is obtained directly from the toric ideal defined by [Formula: see text]. One candidate for the set of invariants is a simple basis of the toric ideal. This, although larger than the rank of [Formula: see text], is typically not unique. However, the alternative Graver basis is unique and defines a maximal set of invariants, which are primitive in a simple sense. In addition to the running example, four examples are taken from: a windmill, convection, electrodynamics and the hydrogen atom. The method reveals some named invariants. A selection of computer algebra packages is used to show the considerable ease with which both a simple basis and a Graver basis can be found.
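The integer linear-algebra step that the toric approach replaces can be illustrated in a few lines: the nullspace of the dimension matrix gives exponent vectors of dimensionless groups (the basic Buckingham pi computation, not the paper's Graver-basis method). The worked example below, our own choice, is drag force on a sphere with variables (F, v, d, rho, mu) and unit rows (M, L, T):

```python
# Buckingham-style dimensional analysis via the nullspace of the
# dimension matrix A. Columns: F, v, d, rho, mu; rows: M, L, T.
from sympy import Matrix

A = Matrix([
    [ 1, 0, 0,  1,  1],   # mass exponents
    [ 1, 1, 1, -3, -1],   # length exponents
    [-2, -1, 0, 0, -1],   # time exponents
])

null = A.nullspace()       # exponent vectors of dimensionless groups
print(len(null))           # 5 variables - rank 3 -> 2 invariants
for v in null:
    assert A * v == Matrix([0, 0, 0])   # each group really is dimensionless
```

The two-dimensional nullspace corresponds to the familiar pair of invariants for this problem (a Reynolds number and a drag-coefficient-like group); as the abstract notes, such a nullspace basis is not unique, which is precisely the gap the Graver basis addresses.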
Breaking the indexing ambiguity in serial crystallography.
Brehm, Wolfgang; Diederichs, Kay
2014-01-01
In serial crystallography, a very incomplete partial data set is obtained from each diffraction experiment (a 'snapshot'). In some space groups, an indexing ambiguity exists which requires that the indexing mode of each snapshot needs to be established with respect to a reference data set. In the absence of such re-indexing information, crystallographers have thus far resorted to a straight merging of all snapshots, yielding a perfectly twinned data set of higher symmetry which is poorly suited for structure solution and refinement. Here, two algorithms have been designed for assembling complete data sets by clustering those snapshots that are indexed in the same way, and they have been tested using 15,445 snapshots from photosystem I [Chapman et al. (2011), Nature (London), 470, 73-77] and with noisy model data. The results of the clustering are unambiguous and enabled the construction of complete data sets in the correct space group P63 instead of (twinned) P6322 that researchers have been forced to use previously in such cases of indexing ambiguity. The algorithms thus extend the applicability and reach of serial crystallography.
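The core idea of resolving the ambiguity can be sketched on synthetic data: model the ambiguity as a known order-2 re-indexing of the reflection list, and assign each snapshot the mode whose intensities correlate better with a reference. This is a deliberately simplified stand-in for the Brehm & Diederichs clustering algorithms, not a reimplementation:

```python
# Toy re-indexing assignment: pick, per snapshot, the indexing mode
# that maximizes correlation with a seed reference. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_refl, n_snap = 200, 40
truth = rng.uniform(1, 10, n_refl)      # "true" reflection intensities
flip = np.arange(n_refl)[::-1]          # ambiguity as an involutive re-indexing

snaps, true_mode = [], []
for i in range(n_snap):
    s = truth + rng.normal(0, 0.5, n_refl)   # noisy partial observation
    m = i % 2                                # half are indexed the other way
    snaps.append(s[flip] if m else s)
    true_mode.append(m)

ref = snaps[0]                          # seed reference (defines mode 0)
assigned = [0]
for s in snaps[1:]:
    keep = np.corrcoef(ref, s)[0, 1]
    swap = np.corrcoef(ref, s[flip])[0, 1]
    assigned.append(0 if keep >= swap else 1)

print(assigned == true_mode)            # modes recovered relative to the seed
```

Real snapshots are far more incomplete than these dense synthetic vectors, which is why the published algorithms cluster on pairwise correlations over common reflections rather than against a single seed.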
42 CFR 457.700 - Basis, scope, and applicability.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Strategic Planning, Reporting, and Evaluation § 457.700 Basis, scope, and applicability. (a) Statutory basis... strategic planning, reports, and program budgets; and (2) Section 2108 of the Act, which sets forth... strategic planning, monitoring, reporting and evaluation under title XXI. (c) Applicability. The...
42 CFR 457.700 - Basis, scope, and applicability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Strategic Planning, Reporting, and Evaluation § 457.700 Basis, scope, and applicability. (a) Statutory basis... strategic planning, reports, and program budgets; and (2) Section 2108 of the Act, which sets forth... strategic planning, monitoring, reporting and evaluation under title XXI. (c) Applicability. The...
Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.
Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R
2008-02-14
The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrödinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrödinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
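The time-reversal property the CSL method exploits can be verified numerically in a few lines: for a real-symmetric Hamiltonian and a real initial state, the backward-propagated solution is the complex conjugate of the forward one, so backward expansion coefficients come for free. The matrix below is a toy stand-in, not a physical Hamiltonian:

```python
# Time-reversal symmetry check: psi(-t) = conj(psi(t)) when H and the
# initial state are real. Dense matrix exponential used for clarity.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 6))
H = (H + H.T) / 2                       # real-symmetric "Hamiltonian"
v = rng.normal(size=6)                  # real initial state
t = 0.7

forward = expm(-1j * H * t) @ v         # psi(t)
backward = expm(+1j * H * t) @ v        # psi(-t)
print(np.allclose(backward, forward.conj()))   # True
```

The CSL method never forms the dense exponential, of course; it builds the propagator from low-order polynomial expansions in a Krylov/Lanczos basis, where the same conjugation relation holds for the expansion coefficients.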
50 CFR 403.04 - Determinations and hearings under section 109(c) of the MMPA.
Code of Federal Regulations, 2010 CFR
2010-10-01
... management program the state must provide for a process, consistent with section 109(c) of the Act, to... must include the elements set forth below. (b) Basis, purpose, and scope. The process set forth in this... made solely on the basis of the record developed at the hearing. The state agency in making its final...
Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method
NASA Astrophysics Data System (ADS)
Lombardini, Richard; Nowara, Ewa; Johnson, Bruce
2015-03-01
The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets for computationally expensive problems due to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. Specifically, this talk addresses this issue using the derivative-matching idea of creating fictitious grid points (T. A. Driscoll and B. Fornberg), but replaces the fictitious grid points with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and the QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.
Factors Affecting Formation of Incomplete Vi Antibody in Mice
Gaines, Sidney; Currie, Julius A.; Tully, Joseph G.
1965-01-01
Gaines, Sidney (Walter Reed Army Institute of Research, Washington, D.C.), Julius A. Currie, and Joseph G. Tully. Factors affecting formation of incomplete Vi antibody in mice. J. Bacteriol. 90:635–642. 1965.—Single immunizing doses of purified Vi antigen elicited complete and incomplete Vi antibodies in BALB/c mice, but only incomplete antibody in Cinnamon mice. Three of six other mouse strains tested responded like BALB/c mice; the remaining three, like Cinnamon mice. Varying the quantity of antigen injected or the route of administration failed to stimulate the production of detectable complete Vi antibody in Cinnamon mice. Such antibody was evoked in these animals by multiple injections of Vi antigen or by inoculating them with Vi-containing bacilli or Vi-coated erythrocytes. The early protection afforded by serum from Vi-immunized BALB/c mice coincided with the appearance of incomplete Vi antibody, 1 day prior to the advent of complete antibody. Persistence of incomplete as well as complete antibody in the serum of immunized mice was demonstrated for at least 56 days after injection of 10 μg of Vi antigen. Incomplete Vi antibody was shown to have blocking ability, in vitro bactericidal activity, and the capability of protecting mice against intracerebral as well as intraperitoneal challenge with virulent typhoid bacilli. Production of incomplete and complete Vi antibodies was adversely affected by immunization with partially depolymerized Vi antigens. PMID:16562060
Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna
2018-06-05
Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
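The bilinear alternating-least-squares step at the heart of MCR-ALS can be sketched compactly; the incomplete-multiset machinery and the full constraint set of the paper are not reproduced here, and non-negativity is enforced by simple clipping rather than a proper constrained solver. Synthetic data only:

```python
# Bare-bones MCR-ALS sketch: alternately solve D ~ C S^T for
# concentration maps C and spectra S, with non-negativity by clipping.
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_chan, n_comp = 300, 50, 2
C_true = rng.uniform(0, 1, (n_pix, n_comp))     # synthetic concentration maps
S_true = rng.uniform(0, 1, (n_chan, n_comp))    # synthetic pure spectra
D = C_true @ S_true.T + rng.normal(0, 0.01, (n_pix, n_chan))

S = rng.uniform(0, 1, (n_chan, n_comp))         # random initial spectra
for _ in range(100):
    # given S, solve for C; given C, solve for S (least squares + clip)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)

resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative residual: {resid:.3f}")        # near the noise level
```

In the incomplete-multiset variant, D would be a concatenation of images from several platforms sharing the single high-resolution C, with the spectral blocks of S covering each technique's signal.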
Yunus, Muhammad B; Aldag, Jean C
2012-03-01
The 1990 American College of Rheumatology (ACR) classification criteria for fibromyalgia/fibromyalgia syndrome (FMS) have 2 components: (a) widespread pain (WSP) and (b) presence of 11 or more tender points (TP) among 18 possible sites. Some clinic patients fulfill 1 component but not the other. We have considered these patients to have incomplete FMS (IFMS). The purpose of this study was to examine the clinical and psychological differences between IFMS and FMS (by 1990 ACR criteria) because such comparison may be helpful in diagnosing patients in the clinic. Six hundred consecutive patients referred to our rheumatology clinic with a diagnosis of FMS were examined by a standard protocol to determine whether they fulfilled the 1990 criteria for FMS. The IFMS and FMS groups were compared on demographic, clinical, and psychological variables using appropriate statistical methods. One hundred twelve (18.7%) patients did not satisfy the 1990 ACR criteria and were classified as IFMS. Symptoms in IFMS and FMS were similar, generally with less frequent and less severe symptoms in the IFMS group. In IFMS, no significant difference was found among the WSP and TP component subgroups. Both TP and WSP were correlated with important features of FMS. Fulfillment of the ACR 1990 criteria is not necessary for a diagnosis of FMS in the clinic. For diagnosis and management of FMS in the clinical setting, IFMS patients, along with consideration of the total clinical picture, may be considered to have FMS, albeit generally mild.
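The classification logic discussed above is a simple two-component rule and can be made explicit. Function and label names are our own; the thresholds come straight from the 1990 ACR criteria as described in the abstract:

```python
# 1990 ACR rule: FMS requires BOTH widespread pain and >= 11 of 18
# tender points; exactly one component met is the authors' "IFMS".
def classify_fms(widespread_pain: bool, tender_points: int) -> str:
    tp_met = tender_points >= 11        # out of 18 possible sites
    if widespread_pain and tp_met:
        return "FMS"
    if widespread_pain or tp_met:
        return "IFMS"
    return "neither"

print(classify_fms(True, 14))   # FMS
print(classify_fms(True, 7))    # IFMS: widespread pain only
print(classify_fms(False, 12))  # IFMS: tender points only
```

The study's conclusion is precisely that this hard rule should not gate clinical diagnosis: IFMS patients may be treated as having (generally milder) FMS in light of the total clinical picture.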