NASA Astrophysics Data System (ADS)
Luo, Yi
2002-03-01
We have developed a new theoretical approach to characterizing the electron-transport process in molecular devices, based on elastic-scattering Green's function theory combined with hybrid density functional theory, without using any fitting parameters. Two molecular devices, with benzene-1,4-dithiol and octanedithiol molecules embedded between two gold electrodes, have been studied. The calculated current-voltage characteristics are in very good agreement with existing experimental results reported by Reed et al. for benzene-1,4-dithiol [Science 278, 252 (1997)] and by Cui et al. for octanedithiol [Science 294, 571 (2001)]. Our approach is very straightforward and can be applied to quite large systems. Most importantly, it provides a reliable way to design and optimize molecular devices theoretically, thereby avoiding extremely difficult, time-consuming laboratory tests.
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.
Accurate Variational Description of Adiabatic Quantum Optimization
NASA Astrophysics Data System (ADS)
Carleo, Giuseppe; Bauer, Bela; Troyer, Matthias
Adiabatic quantum optimization (AQO) is a quantum computing protocol in which a system is driven by a time-dependent Hamiltonian. The initial Hamiltonian has an easily prepared ground state, and the final Hamiltonian encodes the desired optimization problem. An adiabatic time evolution then yields a solution to the optimization problem. Several challenges emerge in the theoretical description of this protocol: on the one hand, exact simulation of quantum dynamics is exponentially complex in the size of the optimization problem; on the other hand, approximate approaches such as tensor network states (TNS) are limited to small instances by the amount of entanglement that can be encoded. I will present here an extension of the time-dependent variational Monte Carlo approach to problems in AQO. This approach is based on a general class of (Jastrow-Feenberg) entangled states, whose parameters are evolved in time according to a stochastic variational principle. We demonstrate this approach for optimization problems of the Ising spin-glass type. Very good accuracy is achieved when compared to exact time-dependent TNS on small instances. We then apply this approach to larger problems and discuss the efficiency of the quantum annealing scheme in comparison with its classical counterpart.
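The annealing protocol itself (though not the variational Monte Carlo machinery, which is beyond a short sketch) can be illustrated by exact numerical evolution of a toy instance. A minimal sketch in Python; the two-spin Ising problem, annealing time, and step count are illustrative choices, not taken from the talk:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Driver Hamiltonian -X1 - X2 (easy ground state) and
# problem Hamiltonian -Z1 Z2 (encodes a trivial "optimization" instance)
Hx = -(np.kron(sx, I2) + np.kron(I2, sx))
Hz = -np.kron(sz, sz)

def evolve(psi, H, dt):
    """One exact unitary step exp(-i H dt)|psi> via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

T, steps = 50.0, 500                      # slow schedule => adiabatic
psi = np.full(4, 0.5, dtype=complex)      # ground state of Hx: |++>
for k in range(steps):
    s = (k + 0.5) / steps                 # annealing parameter in (0, 1)
    psi = evolve(psi, (1 - s) * Hx + s * Hz, T / steps)

# Ground space of Hz is spanned by |00> and |11>
p_ground = abs(psi[0])**2 + abs(psi[3])**2
print(f"ground-space probability: {p_ground:.3f}")
```

For a total time much longer than the inverse squared minimum gap, the final state lands in the problem ground space with probability close to one; shrinking T degrades this, which is the adiabatic condition in miniature.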
Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.
Ganyecz, Ádám; Kállay, Mihály; Csontos, József
2017-02-09
An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives, including the radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used includes iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heats of formation with better than chemical accuracy, perturbative quadruple excitations and scalar relativistic corrections proved indispensable; their contributions to the heats of formation increase steadily with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental values and ours, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied, this work delivers the best available heat of formation and entropy data.
A new and accurate continuum description of moving fronts
NASA Astrophysics Data System (ADS)
Johnston, S. T.; Baker, R. E.; Simpson, M. J.
2017-03-01
Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
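The kind of lattice-based random walk model described above, with crowding (excluded volume) and proliferation, is easy to sketch in one dimension. All rates, the lattice size, and the initial condition below are illustrative assumptions, not parameters from the paper:

```python
import random

def simulate_front(L=200, steps=200, p_move=1.0, p_prolif=0.1, seed=0):
    """1D lattice random walk with exclusion (crowding) and proliferation.

    Sites 0..9 start occupied; the front is the rightmost occupied site.
    Parameter values are illustrative only.
    """
    rng = random.Random(seed)
    occ = [i < 10 for i in range(L)]
    for _ in range(steps):
        agents = [i for i, o in enumerate(occ) if o]
        rng.shuffle(agents)                      # random sequential update
        for i in agents:
            if not occ[i]:                       # agent already moved away
                continue
            if rng.random() < p_move:            # attempted move
                j = i + rng.choice((-1, 1))
                if 0 <= j < L and not occ[j]:    # crowding: only empty targets
                    occ[i], occ[j] = False, True
                    i = j
            if rng.random() < p_prolif:          # attempted proliferation
                j = i + rng.choice((-1, 1))
                if 0 <= j < L and not occ[j]:
                    occ[j] = True
    return max(i for i, o in enumerate(occ) if o)

front = simulate_front()
print("front position after 200 steps:", front)
```

Averaging the front position over many realisations of such a model is exactly the expensive step that motivates the continuum descriptions discussed in the abstract.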
Theoretical description of metabolism using queueing theory.
Evstigneev, Vladyslav P; Holyavka, Marina G; Khrapatiy, Sergii V; Evstigneev, Maxim P
2014-09-01
A theoretical description of the process of metabolism has been developed on the basis of the Pachinko model (see Nicholson and Wilson in Nat Rev Drug Discov 2:668-676, 2003) and queueing theory. The suggested approach relies on the probabilistic nature of metabolic events and on a Poisson distribution for the incoming flow of substrate molecules. The main focus of the work is the output flow of metabolites, i.e., the effectiveness of the metabolic process. The two simplest models were analyzed: short- and long-lived complexes of the source molecules with a metabolizing point (Hole), without queuing. It has been concluded that the approach based on queueing theory enables a very broad range of metabolic events to be described theoretically from a single probabilistic point of view.
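The no-queue variant described above corresponds to a single-server loss system with Poisson arrivals. A minimal simulation sketch, with illustrative arrival and metabolizing rates (not values from the paper):

```python
import random

def metabolite_output(lam=1.0, mu=2.0, t_end=10_000.0, seed=1):
    """Single metabolizing site ('Hole') with Poisson substrate arrivals.

    Substrate molecules arrive at rate lam; a bound molecule is converted
    after an exponential service time with rate mu. Arrivals at a busy
    site are lost (no queue). Rates here are illustrative only.
    """
    rng = random.Random(seed)
    t, t_free, out = 0.0, 0.0, 0
    while t < t_end:
        t += rng.expovariate(lam)    # next substrate arrival
        if t >= t_free:              # site free: bind and metabolize
            t_free = t + rng.expovariate(mu)
            out += 1
    return out / t_end               # output flow of metabolites

rate = metabolite_output()
print(f"metabolite output rate: {rate:.3f}")
```

For these rates the exact M/M/1/1 loss-system throughput is lam*mu/(lam+mu) = 2/3, which the simulation reproduces to within sampling noise.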
Theoretical Description of the Fission Process
Witold Nazarewicz
2003-07-01
The main goals of the project can be summarized as follows: development of effective energy functionals appropriate for the description of heavy nuclei (our goal is to improve the existing Skyrme energy-density functionals to develop a force that will be used in calculations of fission dynamics); systematic self-consistent calculations of binding energies and fission barriers of actinide and trans-actinide nuclei using modern density functionals, followed by calculations of spontaneous-fission lifetimes and of mass and charge divisions using dynamic adiabatic approaches based on the WKB approximation; and investigation of novel microscopic (non-adiabatic) methods for studying the fission process.
Theoretical description of RESPIRATION-CP
NASA Astrophysics Data System (ADS)
Nielsen, Anders B.; Tan, Kong Ooi; Shankar, Ravi; Penzel, Susanne; Cadalbert, Riccardo; Samoson, Ago; Meier, Beat H.; Ernst, Matthias
2016-02-01
We present a quintuple-mode operator-based Floquet approach to describe arbitrary amplitude-modulated cross-polarization experiments under magic-angle spinning (MAS). The description is used to analyze variants of the RESPIRATION approach (RESPIRATION-CP), for which recoupling conditions and the corresponding first-order effective Hamiltonians are calculated, validated numerically, and compared to experimental results for 15N-13C coherence transfer in uniformly 13C,15N-labeled alanine and in uniformly 2H,13C,15N-labeled (deuterated and 100% back-exchanged) ubiquitin at spinning frequencies of 16.7 and 90.9 kHz. Similarities and differences between implementations of the RESPIRATION-CP sequence using either CW irradiation or small flip-angle pulses are discussed.
Towards a theoretical description of dense QCD
NASA Astrophysics Data System (ADS)
Philipsen, Owe
2017-03-01
The properties of matter at finite baryon densities play an important role in the astrophysics of compact stars, in heavy-ion collisions, and in the description of nuclear matter. Because of the sign problem of the quark determinant, lattice QCD at finite baryon density cannot be simulated by standard Monte Carlo methods. I review alternative attempts to treat dense QCD with an effective lattice theory derived by analytic strong-coupling and hopping expansions, which, close to the continuum, is valid only for heavy quarks but shows all the qualitative features of nuclear physics emerging from QCD. In particular, the nuclear liquid-gas transition and an equation of state for baryons can be calculated directly from QCD. A second effective theory based on strong-coupling methods permits studies of the phase diagram in the chiral limit on coarse lattices.
Theoretical Description of the Fission Process
Witold Nazarewicz
2009-10-25
Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in a multidimensional collective space corresponding to different geometries of the fission products. From the calculated collective potential and collective mass, we estimated spontaneous-fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that the fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments in which superheavy elements were created by bombarding an actinide target with calcium-48; even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs the existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements in the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation's nuclear stockpile and nuclear reactors. The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic
Theoretical Description of Teaching-Learning Processes: A Multidisciplinary Approach
NASA Astrophysics Data System (ADS)
Bordogna, Clelia M.; Albano, Ezequiel V.
2001-09-01
A multidisciplinary approach based on concepts from sociology, educational psychology, statistical physics, and computational science is developed for the theoretical description of teaching-learning processes that take place in the classroom. The emerging model is consistent with well-established empirical results, such as the higher achievements reached when working in collaborative groups and the influence of the structure of the group on the achievements of its individuals. Furthermore, another social learning process, which takes place in massive interactions among individuals via the Internet, is also investigated.
Accurate description of argon and water adsorption on surfaces of graphene-based carbon allotropes.
Kysilka, Jiří; Rubeš, Miroslav; Grajciar, Lukáš; Nachtigall, Petr; Bludský, Ota
2011-10-20
Accurate interaction energies of nonpolar (argon) and polar (water) adsorbates with graphene-based carbon allotropes were calculated by means of a combined density functional theory (DFT)-ab initio computational scheme. The calculated interaction energy of argon with graphite (-9.7 kJ mol⁻¹) is in excellent agreement with the available experimental data. The calculated interaction energies of water with graphene and graphite are -12.8 and -14.6 kJ mol⁻¹, respectively. The accuracy of combined DFT-ab initio methods is discussed in detail based on a comparison with the highly precise interaction energies of argon and water with coronene obtained at the coupled-cluster CCSD(T) level extrapolated to the complete basis set (CBS) limit. A new strategy for a reliable estimate of the CBS limit is proposed for systems where numerical instabilities occur owing to basis-set near-linear dependence. The most accurate estimates of the argon and water interactions with coronene (-8.1 and -14.0 kJ mol⁻¹, respectively) are compared with the results of other methods used for the accurate description of weak intermolecular interactions.
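The CBS extrapolation mentioned above is commonly done with the standard two-point inverse-cubic formula for correlation energies (the paper's own strategy for near-linear-dependent basis sets is more elaborate). A sketch of the arithmetic, with made-up energies chosen only to illustrate the formula:

```python
def cbs_extrapolate(e_x, e_y, x=3, y=4):
    """Two-point inverse-cubic CBS extrapolation (Helgaker-style):
    assumes E_corr(X) = E_CBS + A * X**-3 for cardinal numbers X, Y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Illustrative correlation energies (hartree) at triple-/quadruple-zeta
# quality; the numbers are invented, not taken from the paper.
e_tz, e_qz = -0.32500, -0.33100
e_cbs = cbs_extrapolate(e_tz, e_qz)
print(f"E_CBS = {e_cbs:.5f} hartree")
```

The extrapolated value lies below both finite-basis energies, as expected when the correlation energy converges monotonically from above.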
Experimental and theoretical oscillator strengths of Mg i for accurate abundance analysis
NASA Astrophysics Data System (ADS)
Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.
2017-02-01
Context. Stellar abundance analysis makes it possible to study Galactic formation and evolution. Magnesium is an important element for tracing the α-element evolution in our Galaxy. Chemical abundance analysis, such as that of magnesium, requires accurate and complete atomic data; inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg i lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma, and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths; except for the Mg i optical triplet lines (3p 3P°0,1,2-4s 3S1), these are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even- and odd-parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strengths of the Mg i optical triplet lines are 0.08 dex larger than previous measurements.
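The derivation described in the Methods section, f-values from branching fractions and lifetimes, rests on two standard relations: A_ul = BF/τ for the emission rate of one channel, and the usual conversion from A to the absorption f-value. A sketch with purely illustrative numbers (not measured Mg i data):

```python
def einstein_a(branching_fraction, lifetime_s):
    """A-coefficient of one decay channel from the upper level's
    branching fraction and radiative lifetime: A_ul = BF / tau."""
    return branching_fraction / lifetime_s

def oscillator_strength(a_ul, wavelength_angstrom, g_upper, g_lower):
    """Absorption f-value from the emission A-coefficient
    (standard relation; wavelength in Angstrom, A in s^-1)."""
    return 1.4992e-16 * wavelength_angstrom**2 * (g_upper / g_lower) * a_ul

# Hypothetical line: 5183 Angstrom, upper-level lifetime 10 ns,
# branching fraction 0.5, statistical weights g_u = 3, g_l = 5.
a = einstein_a(0.5, 10e-9)                      # 5.0e7 s^-1
f = oscillator_strength(a, 5183.0, g_upper=3, g_lower=5)
print(f"f = {f:.3f}")
```

Multiplying f by the lower-level statistical weight gives the gf value quoted in the abstract's evaluated data set.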
Maudlin, P.J.; Stout, M.G.
1996-09-01
Strength and fracture constitutive relationships containing strain-rate dependence and thermal softening are important for accurate simulation of metal cutting. The mechanical behavior of a hardened 4340 steel was characterized using the von Mises yield function, the Mechanical Threshold Stress model, and the Johnson-Cook fracture model. This constitutive description was implemented into the explicit Lagrangian FEM continuum-mechanics code EPIC, and orthogonal plane-strain metal cutting calculations were performed. Heat conduction and friction at the tool-workpiece interface were included in the simulations. These transient calculations were advanced in time until steady-state machining behavior (force) was realized. Experimental cutting-force data (cutting and thrust forces) were measured for a planing operation and compared to the calculations. 13 refs., 6 figs.
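As an example of the rate- and temperature-dependent constitutive forms involved, the Johnson-Cook fracture-strain expression can be sketched as follows. The D1-D5 values are the generic 4340-steel parameters from the original Johnson-Cook fracture paper, used only to illustrate the functional form, not necessarily the parameters fit in this report:

```python
import math

def jc_fracture_strain(triaxiality, strain_rate, temp_hom,
                       d=(0.05, 3.44, -2.12, 0.002, 0.61), eps0=1.0):
    """Johnson-Cook fracture strain:
    eps_f = (D1 + D2*exp(D3*sigma*)) * (1 + D4*ln(edot/eps0)) * (1 + D5*T*)
    where sigma* is the stress triaxiality, edot the equivalent plastic
    strain rate, and T* the homologous temperature."""
    d1, d2, d3, d4, d5 = d
    return ((d1 + d2 * math.exp(d3 * triaxiality))
            * (1 + d4 * math.log(strain_rate / eps0))
            * (1 + d5 * temp_hom))

# Conditions roughly representative of a cutting zone (illustrative only)
eps_f = jc_fracture_strain(triaxiality=0.33, strain_rate=1e3, temp_hom=0.2)
print(f"fracture strain: {eps_f:.3f}")
```

The multiplicative structure makes the rate-hardening and thermal-softening contributions easy to separate, which is why such forms are popular for machining simulations.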
Accurate electronic-structure description of Mn complexes: a GGA+U approach
NASA Astrophysics Data System (ADS)
Li, Elise Y.; Kulik, Heather; Marzari, Nicola
2008-03-01
Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics of transition-metal complexes. We focus here on Mn complexes, where many aspects of the molecular structure and reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and in manganese catalase (MC). We apply a self-consistent GGA+U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometry and the electronic and magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n = 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1] H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett. 97, 103001 (2006).
Santolini, Marc; Mora, Thierry; Hakim, Vincent
2014-01-01
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by position weight matrices (PWMs), in which each DNA base pair contributes independently to transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBSs. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
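The independence assumption that the PWM model makes, and that the PIM relaxes, is easy to state in code: a PWM site score is a sum of per-position log-odds terms. A toy sketch with a hypothetical 4-position motif (a PIM would add pairwise coupling terms J_ij(s_i, s_j) to the same sum):

```python
import math

# Toy 4-position motif; each column gives the probabilities of A, C, G, T.
# The numbers are hypothetical, chosen for illustration, not a real TF motif.
pwm = [
    {'A': 0.7, 'C': 0.1, 'G': 0.1, 'T': 0.1},
    {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1},
    {'A': 0.1, 'C': 0.7, 'G': 0.1, 'T': 0.1},
    {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25},
]
BACKGROUND = 0.25   # uniform background base frequency

def pwm_score(seq):
    """Log-odds PWM score: each position contributes independently,
    which is exactly the independence assumption the PIM relaxes."""
    return sum(math.log2(pwm[i][b] / BACKGROUND) for i, b in enumerate(seq))

print(pwm_score("AGCT"))   # consensus-like site scores high
print(pwm_score("TTTT"))   # poor site scores low
```

Because the score is a plain sum over positions, no choice of PWM entries can make the score of one position depend on the base at another, whereas the pairwise terms of the PIM do exactly that.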
Theoretical description of biomolecular hydration - Application to A-DNA
Garcia, A.E.; Hummer, G.; Soumpasis, D.M.
1994-12-31
The local density of water molecules around a biomolecule is constructed from calculated two- and three-point correlation functions of polar solvents in water using a potential-of-mean-force (PMF) expansion. As a simple approximation, the hydration of all polar (including charged) groups in a biomolecule is represented by the hydration of the water oxygen in bulk water, and the effect of non-polar groups on hydration is neglected, except for excluded-volume effects. Pair and triplet correlation functions are calculated by molecular dynamics simulations. We present calculations of the structural hydration for ideal A-DNA molecules with sequences [d(CG)₅]₂ and [d(C₅G₅)]₂. We find that this method can accurately reproduce the hydration patterns of A-DNA observed in neutron diffraction experiments on oriented DNA fibers.
Theoretical Description of Microtubule Dynamics in Fission Yeast During Interphase
NASA Astrophysics Data System (ADS)
Oei, Yung-Chin; Jiménez-Dalmaroni, Andrea; Vilfan, Andrej; Duke, Thomas
2009-03-01
Fission yeast (S. pombe) is a unicellular organism with a characteristic cylindrical shape. Cell growth during interphase is strongly influenced by microtubule self-organization - a process that has been experimentally well characterised. The microtubules are organized in 3 to 4 bundles, called ``interphase microtubule assemblies'' (IMAs). Each IMA is composed of several microtubules, arranged with their dynamic ``plus'' ends facing the cell tips and their ``minus'' ends overlapping at the cell middle. Although the main protein factors involved in interphase microtubule organization have been identified, an understanding of how their collective interaction with microtubules leads to the organization and structures observed in vivo is lacking. We present a physical model of microtubule dynamics that aims to provide a quantitative description of the self-organization process. First, we solve equations for the microtubule length distribution in steady-state, taking into account the way that a limited tubulin pool affects the nucleation, growth and shrinkage of microtubules. Then we incorporate passive and active crosslinkers (the bundling factor Ase1 and molecular motor Klp2) and investigate the formation of IMA structures. Analytical results are complemented by a 3D stochastic simulation.
Spear, Jack; Fields, Lanny
2015-12-01
Interpreting and describing complex information shown in graphs are essential skills to be mastered by students in many disciplines; both are skills that are difficult to learn. Thus, interventions that produce these outcomes are of great value. Previous research showed that conditional discrimination training that established stimulus control by some elements of graphs and their printed descriptions produced some improvement in the accuracy of students' written descriptions of graphs. In the present experiment, students wrote nearly perfect descriptions of the information conveyed in interaction-based graphs after the establishment of conditional relations between graphs and their printed descriptions. This outcome was achieved with the use of special conditional discrimination training procedures that required participants to attend to many of the key elements of the graphs and the phrases in the printed descriptions that corresponded to the elements in the graphs. Thus, students learned to write full descriptions of the information represented by complex graphs by an automated training procedure that did not involve the direct training of writing.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, accurate, precise, and efficient calculations of this interaction are challenging, and such calculations are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), the local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functionals). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For the 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta
Baird, J.A.; Apostal, M.C.; Rotelli, R.L. Jr.; Tinianow, M.A.; Wormley, D.N.
1984-06-01
The Theoretical Description for the GEODYN interactive finite-element computer program is presented. The program is capable of performing the analysis of the three-dimensional transient dynamic response of a Polycrystalline Diamond Compact Bit-Bit Sub arising from the intermittent contact of the bit with the downhole rock formations. The program accommodates nonlinear, time-dependent, loading and boundary conditions.
ERIC Educational Resources Information Center
He, Yan
2008-01-01
The issue of meaning is undoubtedly significant in translation theory. Based on Catford's and Nida's views on meaning in translation, this paper aims to explore the linguistic school's contribution to the theoretical description of meaning. With Nida's semantic studies as a focus, it argues that Nida's semantic studies represent an important stage and…
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects
Yuk, Simuck F.; Pitike, Krishna Chaitanya; Nakhmanson, Serge M.; Eisenbach, Markus; Li, Ying Wai; Cooper, Valentino R.
2017-01-01
Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes. PMID:28256544
Accurate description of the electronic structure of organic semiconductors by GW methods
NASA Astrophysics Data System (ADS)
Marom, Noa
2017-03-01
Electronic properties associated with charged excitations, such as the ionization potential (IP), the electron affinity (EA), and the energy level alignment at interfaces, are critical parameters for the performance of organic electronic devices. To computationally design organic semiconductors and functional interfaces with tailored properties for target applications, it is necessary to accurately predict these properties from first principles. Many-body perturbation theory is often used for this purpose within the GW approximation, where G is the one-particle Green's function and W is the dynamically screened Coulomb interaction. Here, the formalism of GW methods at different levels of self-consistency is briefly introduced and some recent applications to organic semiconductors and interfaces are reviewed.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2016-10-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, this function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ². Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
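The construction described in this abstract can be illustrated with a minimal numerical sketch (an illustration with invented parameters, not the authors' code): superpose local Gaussians whose mean μ and dispersion σ are themselves drawn from an assumed bivariate Gaussian.

```python
import numpy as np

def pairwise_velocity_pdf(v, mean, cov, n_draw=5000, seed=0):
    """Monte Carlo superposition of local Gaussians N(mu, sigma^2),
    with (mu, sigma) drawn from a bivariate Gaussian of the given
    mean and covariance. Non-positive sigma draws are discarded."""
    rng = np.random.default_rng(seed)
    mu, sigma = rng.multivariate_normal(mean, cov, n_draw).T
    keep = sigma > 0
    mu, sigma = mu[keep], sigma[keep]
    # Average the normalized local Gaussian densities at each v.
    v = np.atleast_1d(v)[:, None]
    local = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return local.mean(axis=1)
```

Because each local Gaussian is normalized, the superposed distribution remains normalized; introducing correlation between μ and σ in the covariance is what produces the skewed, exponential-tailed shapes the abstract mentions.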
Lao, Ka Un; Schäffer, Rainer; Jansen, Georg; Herbert, John M
2015-06-09
Three new data sets for intermolecular interactions, AHB21 for anion-neutral dimers, CHB6 for cation-neutral dimers, and IL16 for ion pairs, are assembled here, with complete-basis CCSD(T) results for each. These benchmarks are then used to evaluate the accuracy of the single-exchange approximation that is used for exchange energies in symmetry-adapted perturbation theory (SAPT), as well as the accuracy of SAPT based on wave-function and density-functional descriptions of the monomers. High-level SAPT calculations afford poor results for these data sets, and this includes the recently proposed "gold", "silver", and "bronze standards" of SAPT, namely, SAPT2+(3)-δMP2/aug-cc-pVTZ, SAPT2+/aug-cc-pVDZ, and sSAPT0/jun-cc-pVDZ, respectively [Parker, T. M., et al., J. Chem. Phys. 2014, 140, 094106]. Especially poor results are obtained for symmetric shared-proton systems of the form X(-)···H(+)···X(-), for X = F, Cl, or OH. For the anionic data set, the SAPT2+(CCD)-δMP2/aug-cc-pVTZ method exhibits the best performance, with a mean absolute error (MAE) of 0.3 kcal/mol and a maximum error of 0.7 kcal/mol. For the cationic data set, the highest-level SAPT method, SAPT2+3-δMP2/aug-cc-pVQZ, outperforms the rest of the SAPT methods, with a MAE of 0.2 kcal/mol and a maximum error of 0.4 kcal/mol. For the ion-pair data set, SAPT2+3-δMP2/aug-cc-pVTZ performs the best among all SAPT methods, with a MAE of 0.3 kcal/mol and a maximum error of 0.9 kcal/mol. Overall, SAPT2+3-δMP2/aug-cc-pVTZ affords a small and balanced MAE (<0.5 kcal/mol) for all three data sets, with an overall MAE of 0.4 kcal/mol. Despite the breakdown of perturbation theory for ionic systems at short-range, SAPT can still be saved given two corrections: a "δHF" correction, which requires a supermolecular Hartree-Fock calculation to incorporate polarization effects beyond second order, and a "δMP2" correction, which requires a supermolecular MP2 calculation to account for higher
Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L
2014-03-24
Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
NASA Astrophysics Data System (ADS)
Nold, Andreas; Goddard, Ben; Sibley, David; Kalliadasis, Serafim
2014-03-01
Multiscale effects play a predominant role in wetting phenomena such as the moving contact line. An accurate description is of paramount interest for a wide range of industrial applications, yet it is a matter of ongoing research, due to the difficulty of incorporating different physical effects in one model. Important small-scale phenomena are corrections to the attractive fluid-fluid and wall-fluid forces in inhomogeneous density distributions, which often previously have been accounted for by the disjoining pressure in an ad-hoc manner. We systematically derive a novel model for the description of a single-component liquid-vapor multiphase system which inherently incorporates these nonlocal effects. This derivation, which is inspired by statistical mechanics in the framework of colloidal density functional theory, is critically discussed with respect to its assumptions and restrictions. The model is then employed numerically to study a moving contact line of a liquid fluid displacing its vapor phase. We show how nonlocal physical effects are inherently incorporated by the model and describe how classical macroscopic results for the contact line motion are retrieved. We acknowledge financial support from ERC Advanced Grant No. 247031 and Imperial College through a DTG International Studentship.
NASA Astrophysics Data System (ADS)
Carroll, Natalie R.
There are vast numbers of organic compounds that could be considered for use in molecular electronics. Hence there is a need for efficient and economical screening tools. Here we develop theoretical methods to describe electron transport through individual molecules, the ultimate goal of which is to establish design tools for molecular electronic devices. To successfully screen a compound for its use as a device component requires a proper representation of the quantum mechanics of electron transmission. In this work we report the development of tools for the description of electron transmission that are: charge self-consistent, valid in the presence of a finite applied potential field, and (in some cases) explicitly time-dependent. In addition, the tools can be extended to any molecular system, including biosystems, because they are free of restrictive parameterizations. Two approaches are explored: (1) correlation of substituent parameter values (sigma), commonly found in organic chemistry textbooks, to properties associated with electron transport; (2) explicit tracking of the time evolution of the wave function of a nonstationary electron. In (1) we demonstrate that the sigma values correlate strongly with features of the charge migration process, establishing them as useful indicators of electronic properties. In (2) we employ a time-dependent description of electron transport through molecular junctions. To date, the great majority of theoretical treatments of electron transport in molecular junctions have been of the time-independent variety. Time dependence, however, is critical to such properties as switching speeds in binary computer components and alternating current conductance, so we explored methods based on time-dependent quantum mechanics. A molecular junction is modeled as a single molecule sandwiched between two clusters of close-packed metal atoms or other donor and acceptor groups. The time dependence of electron transport is investigated by initially
NASA Astrophysics Data System (ADS)
Varsano, Daniele; Caprasecca, Stefano; Coccia, Emanuele
2017-01-01
Photoinitiated phenomena play a crucial role in many living organisms. Plants, algae, and bacteria absorb sunlight to perform photosynthesis, and convert water and carbon dioxide into molecular oxygen and carbohydrates, thus forming the basis for life on Earth. The vision of vertebrates is accomplished in the eye by a protein called rhodopsin, which upon photon absorption performs an ultrafast isomerisation of the retinal chromophore, triggering the signal cascade. Many other biological functions start with the photoexcitation of a protein-embedded pigment, followed by complex processes comprising, for example, electron or excitation energy transfer in photosynthetic complexes. The optical properties of chromophores in living systems are strongly dependent on the interaction with the surrounding environment (nearby protein residues, membrane, water), and the complexity of such interplay is, in most cases, at the origin of the functional diversity of the photoactive proteins. The specific interactions with the environment often lead to a significant shift of the chromophore excitation energies, compared with their absorption in solution or gas phase. The investigation of the optical response of chromophores is generally not straightforward, from both experimental and theoretical standpoints; this is due to the difficulty in understanding diverse behaviours and effects, occurring at different scales, with a single technique. In particular, the role played by ab initio calculations in assisting and guiding experiments, as well as in understanding the physics of photoactive proteins, is fundamental. At the same time, owing to the large size of the systems, more approximate strategies which take into account the environmental effects on the absorption spectra are also of paramount importance. Here we review the recent advances in the first-principle description of electronic and optical properties of biological chromophores embedded in a protein environment. We show
ERIC Educational Resources Information Center
Arens, A. Katrin; Yeung, Alexander Seeshing; Craven, Rhonda G.; Hasselhorn, Marcus
2013-01-01
This study aims to develop a short German version of the Self Description Questionnaire (SDQ I-GS) in order to present a robust economical instrument for measuring German preadolescents' multidimensional self-concept. A full German version of the SDQ I (SDQ I-G) that maintained the original structure and thus length of the English original SDQ I…
Minar, J.; Ebert, H.; De Nadaie, C.; Brookes, N.B.; Venturini, F.; Ghiringhelli, G.; Chioncel, L.; Katsnelson, M. I.; Lichtenstein, A. I.
2005-10-14
The pure Fano effect in angle-integrated valence-band photoemission of ferromagnets has been observed for the first time. A contribution of the intrinsic spin polarization to the spin polarization of the photoelectrons has been avoided by an appropriate choice of the experimental parameters. The theoretical description of the resulting spectra reveals a complete analogy to the Fano effect observed before for paramagnetic transition metals. While the theoretical photocurrent and spin-difference spectra are found to be in good quantitative agreement with experiment in the case of Fe and Co, only qualitative agreement could be achieved in the case of Ni by calculations based on the plain local spin-density approximation. Agreement with experimental data could be improved in this case in a very substantial way by a treatment of correlation effects on the basis of dynamical mean-field theory.
Theoretical description of structural and electronic properties of organic photovoltaic materials.
Zhugayevych, Andriy; Tretiak, Sergei
2015-04-01
We review recent progress in the modeling of organic solar cells and photovoltaic materials, as well as discuss the underlying theoretical methods with an emphasis on dynamical electronic processes occurring in organic semiconductors. The key feature of the latter is a strong electron-phonon interaction, making the evolution of electronic and structural degrees of freedom inseparable. We discuss commonly used approaches for first-principles modeling of this evolution, focusing on a multiscale framework based on the Holstein-Peierls Hamiltonian solved via polaron transformation. A challenge for both theoretical and experimental investigations of organic solar cells is the complex multiscale morphology of these devices. Nevertheless, predictive modeling of photovoltaic materials and devices is attainable and is rapidly developing, as reviewed here.
Sanz-Vicario, Jose Luis; Bachau, Henri; Martin, Fernando
2006-03-15
We present a nonperturbative time-dependent theoretical method to study H2 ionization with femtosecond laser pulses when the photon energy is large enough to populate the Q1 (25-28 eV) and Q2 (30-37 eV) doubly excited autoionizing states. We have investigated the role of these states in dissociative ionization of H2 and analyzed, in the time domain, the onset of the resonant peaks appearing in the proton kinetic energy distribution. Their dependence on photon frequency and pulse duration is also analyzed. The results are compared with available experimental data and with previous theoretical results obtained within a stationary perturbative approach. The method also allows us to obtain dissociation yields corresponding to the decay of doubly excited states into two H atoms. The calculated H(n=2) yields are in good agreement with the experimental ones.
L'vov, Victor A.; Kosogor, Anna; Barandiaran, Jose M.
2016-01-07
A simple thermodynamic theory is proposed for the quantitative description of the giant magnetocaloric effect observed in metamagnetic shape memory alloys. Both the conventional magnetocaloric effect at the Curie temperature and the inverse magnetocaloric effect at the transition from the ferromagnetic austenite to a weakly magnetic martensite are considered. These effects are evaluated from a Landau-type free energy expression involving exchange interactions in a system of two magnetic sublattices. The findings of the thermodynamic theory agree with first-principles calculations and experimental results from Ni-Mn-In-Co and Ni-Mn-Sn alloys, respectively.
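The Landau-type construction mentioned here can be sketched schematically (a generic two-sublattice form with illustrative coefficients, not the authors' exact expression):

```latex
% Generic two-sublattice Landau free energy with an intersublattice
% exchange coupling J (schematic; not the paper's functional)
F(M_1, M_2, T, H) = \sum_{i=1,2} \left[ a_i (T - T_i)\, M_i^2 + b_i M_i^4 \right]
                    + J\, M_1 M_2 - \mu_0 H \,(M_1 + M_2)

% The isothermal magnetic entropy change follows from S = -\partial F/\partial T:
\Delta S(T, H) = -\sum_{i=1,2} a_i \left[ M_i^2(T, H) - M_i^2(T, 0) \right]
```

In such a sketch, a coupling that suppresses one sublattice's magnetization under field can make ΔS positive, which is the signature of the inverse magnetocaloric effect discussed in the abstract.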
Schnell, S; Mendoza, C
1997-02-21
The enzymological principles of the polymerase chain reaction (PCR) and of quantitative competitive PCR (QC-PCR) are developed, proposing a theoretical framework that will facilitate quantification in experimental methodologies. It is demonstrated that the specificity of the QC-PCR, i.e. the ratio of the target initial velocity to that of the competitor template, remains constant not only during a particular amplification but also for increasing initial competitor concentrations. Linear fitting procedures are thus recommended that will enable a quantitative estimate of the initial target concentration. Finally, expressions for the efficiency of the PCR and QC-PCR are derived that are in agreement with previous experimental inferences.
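The linear fitting procedure recommended in this abstract can be illustrated with a toy exponential-amplification model (invented numbers, not the authors' kinetic equations): when target and competitor amplify with the same efficiency E, their product ratio stays fixed at the ratio of initial amounts, so a linear fit across competitor dilutions recovers the unknown initial target amount.

```python
import numpy as np

def amplify(n0, efficiency, cycles):
    """Ideal exponential PCR: N_n = N_0 * (1 + E)^n."""
    return n0 * (1.0 + efficiency) ** cycles

# Hypothetical assay: target0 would be unknown in a real experiment.
target0 = 1.0e3
competitor0 = np.array([2.0e2, 5.0e2, 1.0e3, 2.0e3, 5.0e3])
cycles, efficiency = 30, 0.9

# Equal efficiencies cancel, so the product ratio equals the ratio
# of initial template amounts at every cycle.
ratio = amplify(target0, efficiency, cycles) / amplify(competitor0, efficiency, cycles)

# 1/ratio is linear in competitor0 with slope 1/target0, which is the
# kind of linear fit the framework recommends for quantification.
slope = np.polyfit(competitor0, 1.0 / ratio, 1)[0]
target_estimate = 1.0 / slope
```

In this ideal model the fit is exact; real assays deviate once efficiencies differ or amplification saturates, which is where the derived efficiency expressions matter.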
Brückner, Charlotte; Engels, Bernd
2016-06-05
Charge-transport properties of materials composed of small organic molecules are important for numerous optoelectronic applications. A material's ability to transport charges is considerably influenced by the charge reorganization energies of the composing molecules. Hence, reliable predictions of the charge-transport properties of organic materials require reliable values for these charge reorganization energies. However, using density functional theory, which is mostly used for such predictions, the computed reorganization energies depend strongly on the chosen functional. To gain insight, a benchmark of various density functionals for the accurate calculation of charge reorganization energies is presented. A correlation between the charge reorganization energies and the ionization potentials is found, which suggests applying IP-tuning to obtain reliable values for charge reorganization energies. According to benchmark investigations with IP-EOM-CCSD single-point calculations, the tuned functionals indeed provide more reliable charge reorganization energies. Among the standard functionals, ωB97X-D and SOGGA11X yield accurate charge reorganization energies in comparison with IP-EOM-CCSD values. © 2016 Wiley Periodicals, Inc.
Theoretical description of fine structure in the α decay of heavy odd-odd nuclei
NASA Astrophysics Data System (ADS)
Ni, Dongdong; Ren, Zhongzhou
2013-02-01
The newly developed multichannel cluster model (MCCM), based on the coupled-channel Schrödinger equation with outgoing wave boundary conditions, is extended to study the α-decay fine structure in heavy odd-odd nuclei. Calculations are performed for the α transitions to favored rotational bands where the unpaired nucleons remain unchanged. The simple WKB barrier penetration formula is also used to evaluate the branching ratios for various daughter states. It is found that the WKB formula seems to overestimate the branching ratios for the second and third members of the favored rotational band, while the MCCM gives a precise description of them without any adjustable parameters. Moreover, the experimental total α-decay half-lives are well reproduced within the MCCM.
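For context on the WKB formula referenced above, the penetration factor reduces to a one-dimensional barrier integral. The sketch below uses a pure Coulomb barrier with illustrative radius, charge, and mass values (an assumed textbook setup, not the MCCM of the paper):

```python
import numpy as np

HBARC = 197.327    # MeV*fm
M_ALPHA = 3727.38  # alpha-particle mass in MeV/c^2 (reduced mass neglected)
E2 = 1.43996       # e^2/(4*pi*eps0) in MeV*fm

def wkb_penetrability(q_mev, z_daughter, r_inner=9.0, n=20000):
    """WKB penetration factor P = exp(-2 * int sqrt(2m(V-Q))/hbar dr)
    through a pure Coulomb barrier V(r) = 2*Z_d*e^2/r, integrated from
    an assumed inner radius to the outer classical turning point."""
    r_outer = 2.0 * z_daughter * E2 / q_mev
    r = np.linspace(r_inner, r_outer, n)
    v = 2.0 * z_daughter * E2 / r
    k = np.sqrt(np.clip(2.0 * M_ALPHA * (v - q_mev), 0.0, None)) / HBARC
    dr = r[1] - r[0]
    return float(np.exp(-np.sum(k[:-1] + k[1:]) * dr))  # 2 * trapezoid rule
```

Such a one-dimensional estimate captures the steep Q-value dependence of the penetrability, but it contains no channel coupling, which is consistent with the abstract's finding that the WKB formula misestimates branching ratios that the MCCM describes well.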
Singh, Mahi R.; Najiminaini, Mohamadreza; Carson, Jeffrey J. L.; Balakrishnan, Shankar
2015-05-14
We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in the second quantized form. Working with the Hamiltonian, a theory of scattering cross section was developed based on the quantum scattering theory and Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.
Phenomenological description of a three-center insertion reaction: an information-theoretic study.
Esquivel, Rodolfo O; Flores-Gallegos, Nelson; Dehesa, Jesús S; Angulo, Juan Carlos; Antolín, Juan; López-Rosa, Sheila; Sen, K D
2010-02-04
Information-theoretic measures are employed to describe the course of a three-center chemical reaction in terms of detecting the transition state and the stationary points unfolding the bond-forming and bond-breaking regions which are not revealed in the energy profile. The information entropy profiles for the selected reactions are generated by following the intrinsic-reaction-coordinate (IRC) path calculated at the MP2 level of theory from which Shannon entropies in position and momentum spaces at the QCISD(T)/6-311++G(3df,2p) level are determined. Several complementary reactivity descriptors are also determined, such as the dipole moment, the molecular electrostatic potential (MEP) obtained through a multipole expansion (DMA), the atomic charges and electric potentials fitted to the MEP, the hardness and softness DFT descriptors, and several geometrical parameters which support the information-theoretic analysis. New density-based structures related to the bond-forming and bond-breaking regions are proposed. Our results support the concept of a continuum of transient of Zewail and Polanyi for the transition state rather than a single state, which is also in agreement with reaction-force analyses.
NASA Astrophysics Data System (ADS)
Ding, Feizhi
motion. All these developments and applications will open up new computational and theoretical tools to be applied to the development and understanding of chemical reactions, nonlinear optics, electromagnetism, and spintronics. Lastly, we present a new algorithm for large-scale MCSCF calculations that can utilize massively parallel machines while still maintaining optimal performance on each single processor. This will greatly improve the efficiency of MCSCF calculations for studying chemical dissociation and for high-accuracy quantum-mechanical simulations.
Theoretical description of effective heat transfer between two viscously coupled beads
NASA Astrophysics Data System (ADS)
Bérut, A.; Imparato, A.; Petrosyan, A.; Ciliberto, S.
2016-11-01
We analytically study the role of nonconservative forces, namely viscous couplings, on the statistical properties of the energy flux between two Brownian particles kept at different temperatures. From the dynamical model describing the system, we identify an energy flow that satisfies a fluctuation theorem both in the stationary and in transient states. In particular, for the specific case of a linear nonconservative interaction, we derive an exact fluctuation theorem that holds for any measurement time in the transient regime, and which involves the energy flux alone. Moreover, in this regime the system presents an interesting asymmetry between the hot and cold particles. The theoretical predictions are in good agreement with the experimental results already presented in our previous article [Imparato et al., Phys. Rev. Lett. 116, 068301 (2016), 10.1103/PhysRevLett.116.068301], where we investigated the thermodynamic properties of two Brownian particles, trapped with optical tweezers, interacting through a dissipative hydrodynamic coupling.
NASA Astrophysics Data System (ADS)
Herndon, Conner; Fenton, Flavio; Uzelac, Ilija
Much theoretical, experimental, and clinical research has been devoted to investigating the initiation of cardiac arrhythmias by alternans, the first period doubling bifurcation in the duration of cardiac action potentials. Although period doubling above alternans has been shown to exist in many mammalian hearts, little is understood about their emergence or behavior. There currently exists no physiologically correct theory or model that adequately describes and predicts their emergence in stimulated tissue. In this talk we present experimental data of period 2, 4, and 8 dynamics and a mathematical model that describes these bifurcations. This model extends current cell models through the addition of memory and includes spatiotemporal nonlinearities arising from cellular coupling by tissue heterogeneity.
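The first period-doubling bifurcation discussed in this abstract can be illustrated with the classic memoryless APD restitution map (a textbook sketch with invented constants, not the memory model presented in the talk): the next action potential duration depends only on the preceding diastolic interval, and alternans emerges when pacing is fast enough that the restitution slope exceeds one.

```python
import math

def restitution_map(bcl, a_max=300.0, b=250.0, tau=60.0, n=400):
    """Iterate the exponential restitution map a_{n+1} = a_max - b*exp(-DI_n/tau)
    with DI_n = BCL - a_n at a fixed basic cycle length (BCL). All constants
    are illustrative; times are in milliseconds."""
    a = 200.0
    history = []
    for _ in range(n):
        di = max(bcl - a, 1.0)  # crude floor to avoid negative intervals
        a = a_max - b * math.exp(-di / tau)
        history.append(a)
    return history
```

With these constants, pacing at BCL = 320 ms lands on a stable period-2 alternation of action potential durations, while BCL = 400 ms converges to a period-1 fixed point; reproducing the observed period-4 and period-8 dynamics requires the memory and spatial-coupling terms the talk describes.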
Group-theoretical description of domain and phase boundaries in crystalline solids
NASA Astrophysics Data System (ADS)
Zieliński, Piotr
1990-06-01
The theory of domains and domain boundaries arising in phase transitions accompanied by symmetry breaking is reviewed. Conclusions concerning the number, the crystallographic type and the spatial orientation of coherent interfaces between crystals of the same structure (domain boundaries) and between different structures of the same material (interphase boundaries) are presented in terms of space group theory and of the Landau theory of phase transitions. The application of the two-dimensional space groups and the diperiodic groups in three dimensions to the discussed objects is described. The conditions for the coexistence of domains and phases without macroscopic stress are given. An example of the group-theoretical analysis of domain structure is given for a real material: NaO2.
NASA Technical Reports Server (NTRS)
Furlong, K. L.; Fearn, R. L.
1983-01-01
A method is proposed to combine a numerical description of a jet in a crossflow with a lifting surface panel code to calculate the jet/aerodynamic-surface interference effects on a V/STOL aircraft. An iterative technique is suggested that starts with a model for the properties of a jet/flat plate configuration and modifies these properties based on the flow field calculated for the configuration of interest. The method would estimate the pressures, forces, and moments on an aircraft out of ground effect. A first-order approximation to the method suggested is developed and applied to two simple configurations. The first-order approximation is a noniterative procedure which does not allow for interactions between multiple jets in a crossflow and also does not account for the influence of lifting surfaces on the jet properties. The jet/flat plate model utilized in the examples presented is restricted to a uniform round jet injected perpendicularly into a uniform crossflow for a range of jet-to-crossflow velocity ratios from three to ten.
Theoretical approach to description of time-dependent nitric oxide effects in the vasculature.
Seraya, I P; Nartsissov, Ya R
2002-01-01
Nitric oxide (NO) is one of the most important signal compounds in a living cell. As a typical free radical it has both toxic and physiological effects, and their balance is determined by the spatial distribution of NO concentration. Moreover, some biological functions, especially NO-mediated relaxation of blood vessels, have to be time-limited. In order to describe this phenomenon, a non-steady-state mathematical model has been used for the description of nitric oxide diffusion in vascular smooth muscle. It was shown that microvascular relaxation could be observed even after a short time of NO production in the endothelium. This time is up to 3 times below that needed to reach the steady-state spatial NO gradient. However, the effect of nitric oxide essentially depends on the rate of NO production and the blood vessel diameter. Furthermore, the non-steady-state nitric oxide concentration gradient was represented as an analytical function of time and coordinate. It is essential that this function describes a common case of one-dimensional diffusion of uncharged low-mass molecules. Thus, the results can be used for the calculation of an upper estimate of experimental data.
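The kind of analytical non-steady-state profile this abstract refers to can be illustrated with the standard textbook solution for one-dimensional diffusion from a boundary held at fixed concentration (the diffusion coefficient below is an assumed order-of-magnitude value, not a number taken from the paper):

```python
import math

def no_concentration(x_um, t_s, c0=1.0, d_um2_s=3300.0):
    """C(x, t) = c0 * erfc(x / (2*sqrt(D*t))) for 1-D diffusion from a plane
    boundary held at concentration c0. D ~ 3300 um^2/s is an assumed
    literature-scale diffusivity for NO in tissue, used for illustration."""
    return c0 * math.erfc(x_um / (2.0 * math.sqrt(d_um2_s * t_s)))
```

The profile decays with distance at any fixed time and, at a fixed distance, rises toward its steady-state value as production continues, consistent with the abstract's point that relaxation can occur well before the steady-state gradient is established.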
NASA Astrophysics Data System (ADS)
Garde, Shekhar
The unique balance of forces underlying biological processes (such as protein folding, aggregation, molecular recognition, and the formation of biological membranes) owes its origin in large part to the surrounding aqueous medium. A quantitative description of fundamental noncovalent interactions, in particular hydrophobic and electrostatic interactions at molecular-scale separations, requires an accurate description of water structure. Thus, the primary goals of our research are to understand the role of water in mediating interactions between molecules and to incorporate this understanding into molecular theories for calculating water-mediated interactions. We have developed a molecular model of hydrophobic interactions that uses methods of information theory to relate hydrophobic effects to the density fluctuations in liquid water. This model provides a quantitative description of small-molecule hydration thermodynamics, as well as insights into the entropies of unfolding globular proteins. For larger molecular solutes, we relate the inhomogeneous water structure in their vicinity to their hydration thermodynamics. We find that the water structure in the vicinity of nonpolar solutes is only locally sensitive to the molecular details of the solute. Water structures predicted using this observation are used to study the association of two neopentane molecules and the conformational equilibria of the n-pentane molecule. We have also studied the hydration of a model molecular ionic solute, a tetramethylammonium ion, over a wide range of charge states of the solute. We find that, although the charge dependence of the ion hydration free energy is quadratic, negative ions are more favorably hydrated compared to positive ions. Moreover, this asymmetry of hydration can be reconciled by considering the differences in water organization surrounding positive and negative ions. We have also developed methods for predicting water structure surrounding molecular ions and relating
Theoretical description of photo-doping in Mott and charge-transfer insulators
NASA Astrophysics Data System (ADS)
Eckstein, Martin
2012-02-01
Many aspects of photo-excited insulator-to-metal transitions in Mott and charge-transfer systems are theoretically not well understood: How is the photo-doped state related to a chemically doped state? On what timescale do we expect the formation of quasiparticles? To describe the electronic dynamics of Mott insulators, we have used nonequilibrium dynamical mean-field theory (DMFT) in combination with quantum Monte Carlo and various weak- and strong-coupling techniques [1]. In the talk, I will briefly present the current status of this approach and of related cluster approaches for nonequilibrium. I will then discuss results for photo-doping in the Hubbard model, and in a p-d model for charge-transfer insulators. When the onsite Coulomb repulsion U is much larger than the hopping, rapid thermalization of the pump-excited Mott insulator is inhibited by the energetic stabilization of doublon-hole pairs [2], and various types of non-thermal states can arise. Immediately after the excitation process, the system of doublons and holes is too hot to form quasiparticle states, but coupling to a heat bath of phonons can drive the system into a metallic state with well-developed doublon and hole bands. Close to the metal-insulator transition, on the other hand, when U is of the same order as the hopping, doublons and holes rapidly thermalize due to the electron-electron interaction, which makes the system a bad metal rather than a Fermi liquid. [4pt] [1] M. Eckstein and Ph. Werner, Phys. Rev. B 82, 115115 (2010).[0pt] [2] M. Eckstein and Ph. Werner, Phys. Rev. B 84, 035122 (2011).
NASA Astrophysics Data System (ADS)
Omiste, Juan J.; González-Férez, Rosario
2016-12-01
We present a theoretical study of the mixed-field orientation of asymmetric-top molecules in a tilted static electric field and a nonresonant linearly polarized laser pulse by solving the time-dependent Schrödinger equation. Within this framework, we compute the mixed-field orientation of a state-selected molecular beam of benzonitrile (C7H5N) and compare with the experimental observations [J. L. Hansen et al., Phys. Rev. A 83, 023406 (2011), 10.1103/PhysRevA.83.023406] and with our previous time-independent descriptions [J. J. Omiste et al., Phys. Chem. Chem. Phys. 13, 18815 (2011), 10.1039/c1cp21195a]. For an excited rotational state, we investigate the field-dressed dynamics for several field configurations such as those used in the mixed-field experiments. The nonadiabatic phenomena and their consequences on the rotational dynamics are analyzed in detail.
Bespamyatnov, Igor O; Rowan, William L; Granetz, Robert S
2008-10-01
Charge exchange recombination spectroscopy on Alcator C-Mod relies on the use of the diagnostic neutral beam injector as a source of neutral particles which penetrate deep into the plasma. It employs the emission resulting from the interaction of the beam atoms with fully ionized impurity ions. To interpret the emission from a given point in the plasma as the density of emitting impurity ions, the density of beam atoms must be known. Here, an analysis of beam propagation is described which yields the beam density profile throughout the beam trajectory from the neutral beam injector to the core of the plasma. The analysis includes the effects of beam formation, attenuation in the neutral gas surrounding the plasma, and attenuation in the plasma. In the course of this work, a numerical simulation and an analytical approximation for beam divergence are developed. The description is made sufficiently compact to yield accurate results in a time consistent with between-shot analysis.
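The attenuation part of the beam-propagation analysis described above can be sketched under the standard assumption of exponential (Beer-law) attenuation along the trajectory. The density profile and cross section below are illustrative placeholders, not Alcator C-Mod values.

```python
import math

# Minimal sketch: march the neutral beam through slabs of thickness dz;
# the beam density falls as exp(-accumulated optical depth), where each
# slab contributes n_target * sigma * dz. Numbers are illustrative only.

def attenuated_beam_density(n_b0, n_target, sigma, dz):
    """Relative beam density after each slab, for target densities
    n_target (m^-3), attenuation cross section sigma (m^2), slab dz (m)."""
    tau = 0.0
    profile = []
    for n in n_target:
        tau += n * sigma * dz
        profile.append(n_b0 * math.exp(-tau))
    return profile

# A crude density ramp from the neutral gas edge toward the plasma core:
ne = [1e19 * (i + 1) for i in range(10)]             # m^-3, assumed
profile = attenuated_beam_density(1.0, ne, sigma=5e-20, dz=0.02)
```

The profile decreases monotonically, as expected for pure attenuation; a real between-shot analysis would fold in beam formation and divergence as the abstract notes.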
ERIC Educational Resources Information Center
Mapesela, Mabokang; Hay, H. R.
2005-01-01
This article provides a descriptive theoretical analysis of the most important higher education policies and initiatives which were developed by the democratically elected government of South Africa after 1994 to transform the South African higher education system. The article sheds light on the rationale for the policies under scrutiny; how they…
NASA Astrophysics Data System (ADS)
Perry, Angela; Neipert, Christine; Kasprzyk, Christina Ridley; Green, Tony; Space, Brian; Moore, Preston B.
2005-10-01
An improved time correlation function (TCF) description of sum frequency generation (SFG) spectroscopy was developed and applied to theoretically describing the spectroscopy of the ambient water/vapor interface. A more general TCF expression than was published previously is presented; it is valid over the entire vibrational spectrum for both the real and imaginary parts of the signal. Computationally, earlier time correlation function approaches were limited to short correlation times that made signal processing challenging. Here, this limitation is overcome, and well-averaged spectra are presented for the three independent polarization conditions that are possible for electronically nonresonant SFG. The theoretical spectra compare quite favorably in shape and relative magnitude to extant experimental results in the O-H stretching region of water for all polarization geometries. The methodological improvements also allow the calculation of intermolecular SFG spectra. While the intermolecular spectrum of bulk water shows relatively little structure, the interfacial spectra (for polarizations that are sensitive to dipole derivatives normal to the interface-SSP and PPP) show a well-defined intermolecular mode at 875 cm-1 that is comparable in intensity to the rest of the intermolecular structure, and has an intensity that is approximately one-sixth of the magnitude of the intense free OH stretching peak. Using instantaneous normal mode methods, the resonance is shown to be due to a wagging mode localized on a single water molecule, almost parallel to the interface, with two hydrogens displaced normal to the interface, and the oxygen anchored in the interface. We have also uncovered the origin of another intermolecular mode at 95 cm-1 for the SSP and PPP spectra, and at 220 cm-1 for the SPS spectra. These resonances are due to hindered translations perpendicular to the interface for the SSP and PPP spectra, and translations parallel to the interface for the SPS spectra.
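The core of any TCF description of spectroscopy is that a spectral line shape is obtained by Fourier transforming a time correlation function. A self-contained sketch with a synthetic damped-cosine TCF (mode frequency and damping are assumed, not taken from the water simulations):

```python
import numpy as np

# A damped cosine TCF Fourier-transforms to a Lorentzian-like peak at its
# oscillation frequency -- the same route by which simulated correlation
# functions yield SFG/IR line shapes. Parameters below are illustrative.

dt = 0.01                          # time step (arbitrary units)
t = np.arange(0.0, 100.0, dt)
omega0, gamma = 5.0, 0.2           # assumed mode frequency and damping
tcf = np.cos(omega0 * t) * np.exp(-gamma * t)

spectrum = np.abs(np.fft.rfft(tcf)) * dt
freqs = np.fft.rfftfreq(t.size, d=dt) * 2.0 * np.pi   # angular frequencies

peak_omega = freqs[np.argmax(spectrum)]   # should sit near omega0
```

Longer correlation times sharpen the frequency resolution (delta-omega = 2*pi/T), which is why the short-correlation-time limitation mentioned above mattered for signal processing.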
Boyé-Péronne, Séverine; Gauyacq, Dolores; Liévin, Jacques
2014-11-07
The first quantitative description of the Rydberg and valence singlet electronic states of vinylidene lying in the 0-10 eV region is performed by using large-scale ab initio calculations. A deep analysis of Rydberg-valence interactions has been achieved thanks to the comprehensive information contained in the accurate Multi-Reference Configuration Interaction wavefunctions and an original population analysis highlighting the respective roles played by orbital and state mixing in such interactions. The present theoretical approach is thus adequate for dealing with larger-than-diatomic Rydberg systems. The nine lowest singlet valence states have been optimized. Among them, some are involved in strong Rydberg-valence interactions in the region of the Rydberg-state equilibrium geometry. The Rydberg states of vinylidene present a great similarity with those of the acetylene isomer, concerning their quantum defects and Rydberg molecular orbital character. As in acetylene, strong s-d mixing is revealed in the n = 3 s-d supercomplex. Nevertheless, unlike in acetylene, the near-degeneracy of the two vinylidene ionic cores ²A₁ and ²B₁ results in two overlapping Rydberg series. These Rydberg series exhibit local perturbations when an accidental degeneracy occurs between them, resulting in avoided crossings. In addition, some Δl = 1 (s-p and p-d) mixings arise for some Rydberg states and are rationalized in terms of electrostatic interaction from the electric dipole moment of the ionic core. The stronger dipole moment of the ²B₁ cationic state also stabilizes the lowest members of the n = 3 Rydberg series converging to this excited state, as compared to the adjacent series converging toward the ²A₁ ionic ground state. The overall energies of vinylidene Rydberg states lie above their acetylene counterparts. Finally, predictions for optical transitions in singlet vinylidene are suggested for further experimental spectroscopic characterization of vinylidene.
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian-filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and subgrid-scale models employed but also the applied filtering process alone can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.
NASA Astrophysics Data System (ADS)
Ma, Weiguang; Silander, Isak; Hausmaninger, Thomas; Axner, Ove
2016-01-01
Doppler-broadened (Db) noise-immune cavity-enhanced optical heterodyne molecular spectrometry (NICE-OHMS) is conventionally described by an expression (here referred to as the CONV expression) that is restricted to the case when the single-pass absorbance, α0L, is much smaller than the empty-cavity losses, π/F [here termed the conventional cavity-limited weak absorption (CCLWA) condition]. This limits the applicability of the technique, primarily its dynamic range and calibration capability. To remedy this, this work derives extended descriptions of Db NICE-OHMS that are not restricted to the CCLWA condition. First, the general principles of Db NICE-OHMS are scrutinized in some detail. Based solely upon a set of general assumptions, predominantly that it is appropriate to linearize the Beer-Lambert law, that the light is modulated to a triplet, and that the Pound-Drever-Hall sidebands are fully reflected, a general description of Db NICE-OHMS that is not limited to any specific restriction on α0L vs. π/F, here referred to as the FULL description, is derived. However, this description constitutes a set of equations to which no closed-form solution has been found. Hence, it needs to be solved numerically (by iterations), which is inconvenient. To circumvent this, for the cases when α0L < π/F but without the requirement that the stronger CCLWA condition be fulfilled, a couple of simplified extended expressions that are expressible in closed analytical form, referred to as the extended locking and extended transmission description, ELET, and the extended locking and full transmission description, ELFT, have been derived. An analysis based on simulations validates the various descriptions and assesses to which extent they agree. It is shown that in the CCLWA limit, all extended descriptions revert to the CONV expression. The latter, however, deviates from the extended ones for α0L around and above 0.1π/F. The two simplified extended descriptions agree
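The role of the CCLWA condition can be illustrated numerically: the conventional description relies on linearizing the Beer-Lambert law, exp(-a) ≈ 1 - a, which is accurate only while the single-pass absorbance a = α0L stays far below the empty-cavity losses π/F. The finesse and absorbance values below are illustrative.

```python
import math

# Relative error of the first-order Beer-Lambert expansion exp(-a) ~ 1 - a,
# evaluated deep inside and well outside the CCLWA regime. Illustrative
# numbers; not taken from the NICE-OHMS derivations themselves.

def linearization_error(a):
    """Relative error of the linearized Beer-Lambert law at absorbance a."""
    exact = math.exp(-a)
    linear = 1.0 - a
    return abs(linear - exact) / exact

finesse = 1.0e4
cavity_loss = math.pi / finesse                      # pi/F ~ 3e-4

err_weak = linearization_error(0.01 * cavity_loss)   # a << pi/F
err_strong = linearization_error(10.0 * cavity_loss) # a >> pi/F
```

The error grows roughly as a²/2, so for absorbances around and above π/F the linearized (CONV-style) treatment begins to deviate measurably, consistent with the 0.1π/F figure quoted above.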
Allan, M.E.; Wilson, M.L.; Wightman, J.
1996-01-01
The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31S structure, large porosity and permeability variations in the Main Body B and Western 31S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data, as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.
NASA Astrophysics Data System (ADS)
Kim, Jibeom; Jeon, Joonhyeon
2015-01-01
Recently, related studies on equations of state (EOS) have reported that the generalized van der Waals (GvdW) EOS gives poor representations in the near-critical region for non-polar and non-spherical molecules. Hence, there still remains the problem of choosing GvdW parameters that minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive-model GvdW (rGvdW) for an accurate representation of pure fluids in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied to two pure-molecule groups: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. This approach to constructing the EOS gives additional insight into the physical significance of accurate pressure prediction in the near-critical region.
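For orientation, the textbook (non-recursive) van der Waals EOS, not the paper's rGvdW variant, can be sketched with its parameters a and b fixed by the critical constants: a = 27 R² Tc² / (64 Pc), b = R Tc / (8 Pc). The methane-like critical constants below are approximate and purely illustrative.

```python
# Classic van der Waals EOS: P = RT/(Vm - b) - a/Vm^2, with a and b from
# the critical point. This is the baseline that GvdW-type models generalize.

R = 8.314  # J/(mol K)

def vdw_pressure(T, Vm, Tc, Pc):
    """Pressure (Pa) at temperature T (K) and molar volume Vm (m^3/mol)."""
    a = 27.0 * R**2 * Tc**2 / (64.0 * Pc)
    b = R * Tc / (8.0 * Pc)
    return R * T / (Vm - b) - a / Vm**2

# Methane-like critical constants (approximate):
Tc, Pc = 190.6, 4.60e6          # K, Pa
P = vdw_pressure(T=300.0, Vm=1.0e-3, Tc=Tc, Pc=Pc)   # ~1 L/mol at 300 K
```

At this density the attractive a/Vm² term pulls the pressure noticeably below the ideal-gas value, the kind of behavior whose near-critical accuracy the rGvdW model is designed to improve.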
Grishkevich, Sergey; Sala, Simon; Saenz, Alejandro
2011-12-15
A theoretical approach is described for an exact numerical treatment of a pair of ultracold atoms that interact via a central potential and are trapped in a finite three-dimensional optical lattice. The coupling of center-of-mass and relative-motion coordinates is treated using an exact diagonalization (configuration-interaction) approach. The orthorhombic symmetry of an optical lattice with three different but orthogonal lattice vectors is explicitly considered, as is the fermionic or bosonic symmetry in the case of indistinguishable particles.
Reprint of "Theoretical description of metal/oxide interfacial properties: The case of MgO/Ag(001)"
NASA Astrophysics Data System (ADS)
Prada, Stefano; Giordano, Livia; Pacchioni, Gianfranco; Goniakowski, Jacek
2017-02-01
We compare the performance of different DFT functionals applied to ultra-thin MgO(100) films supported on the Ag(100) surface, a prototypical system of a weakly interacting oxide/metal interface, extensively studied in the past. Beyond the semi-local DFT-GGA approximation, we also use the hybrid DFT-HSE approach to improve the description of the oxide electronic structure. Moreover, to better account for the interfacial adhesion, we include the van der Waals interactions by means of either the semi-empirical force fields by Grimme (DFT-D2 and DFT-D2*) or the self-consistent density functional optB88-vdW. We compare and discuss the results on the structural, electronic, and adhesion characteristics of the interface as obtained for pristine and oxygen-deficient Ag-supported MgO films in the 1-4 ML thickness range.
NASA Astrophysics Data System (ADS)
Schwerdtfeger, Peter
2016-12-01
In the last two decades, cold- and hot-fusion experiments have led to the production of new elements of the Periodic Table up to nuclear charge 118. Recent developments in relativistic quantum theory have made it possible to obtain accurate electronic properties for the trans-actinide elements, with the aim of predicting their potential chemical and physical behaviour. Here we report first results of solid-state calculations for Og (element 118) to support future atom-at-a-time gas-phase adsorption experiments on surfaces such as gold or quartz.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
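The Nelder-Mead step can be illustrated with SciPy's implementation on a stand-in objective; the actual fiber variational functional is not reproduced here, so the quadratic below and its minimum location are purely illustrative.

```python
from scipy.optimize import minimize

# Nelder-Mead is a derivative-free direct-search method, which is why it
# suits noisy or non-smooth objectives like the core-parameter optimization
# described above. Stand-in smooth objective with a known minimum at
# (2.3, 1.1), value 1.0 -- assumed for demonstration only.

def objective(p):
    u, s = p
    return (u - 2.3) ** 2 + 0.5 * (s - 1.1) ** 2 + 1.0

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

No gradients are supplied anywhere: the simplex shrinks onto the minimum using function values alone, which is the property the abstract exploits.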
Theoretical description of the low-lying electronic states of LuBr located below 41,700 cm-1
NASA Astrophysics Data System (ADS)
Assaf, Joumana; Taher, Fadia; Magnier, Sylvie
2017-03-01
A theoretical investigation of the lowest molecular states of LuBr located below 41,700 cm-1 in the 2S+1Λ(+/-) and Ω(±) representations, including spin-orbit effects, has been performed through SA-CASSCF and MRCI calculations. Potential energy curves have been determined for 21 2S+1Λ(+/-) and 42 Ω(±) molecular states in the range of 1.70 to 3.50 Å, and the spectroscopic constants (Re, Te, ωe and ωeχe) have been deduced. Transition dipole moments have been computed for the various allowed ΔΛ = 0, ±1 transitions over the same range of internuclear distances. In the case of the ground state and the two expected lowest singlet excited states (1)1Π and (2)1Σ+, good agreement with the experimental results is obtained, while new results are reported for the 18 2S+1Λ(+/-) and 42 Ω(±) states that have not yet been observed. A comparison with previous studies on the lutetium monohalides LuF, LuCl and LuI is presented, leading to trends in transition energies, equilibrium distances and dipole moments.
NASA Astrophysics Data System (ADS)
Goetz, R. E.; Isaev, T. A.; Nikoobakht, B.; Berger, R.; Koch, C. P.
2017-01-01
Photoelectron circular dichroism refers to the forward/backward asymmetry in the photoelectron angular distribution with respect to the propagation axis of circularly polarized light. It has recently been demonstrated in femtosecond multi-photon photoionization experiments with randomly oriented camphor and fenchone molecules [C. Lux et al., Angew. Chem., Int. Ed. 51, 4755 (2012) and C. S. Lehmann et al., J. Chem. Phys. 139, 234307 (2013)]. A theoretical framework describing this process as (2+1) resonantly enhanced multi-photon ionization is constructed, which consists of two-photon photoselection from randomly oriented molecules and successive one-photon ionization of the photoselected molecules. It combines perturbation theory for the light-matter interaction with ab initio calculations for the two-photon absorption and a single-center expansion of the photoelectron wavefunction in terms of hydrogenic continuum functions. It is verified that the model correctly reproduces the basic symmetry behavior expected under exchange of handedness and light helicity. When applied to fenchone and camphor, semi-quantitative agreement with the experimental data is found, for which a sufficient d wave character of the electronically excited intermediate state is crucial.
NASA Astrophysics Data System (ADS)
Wang, Li-yong; Li, Le; Zhang, Zhi-hua
2016-09-01
Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain-rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine-learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely, the GA-SVR. A prominent feature of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different attempts on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of a mathematical regression model, an ANN, and the GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies rank, in ascending order: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as estimating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
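A toy stand-in for the GA-SVR idea can be sketched dependency-free: an RBF kernel regressor (kernel ridge regression rather than true epsilon-SVR, to avoid external libraries) whose hyperparameters are chosen by a crude evolutionary search. The synthetic "flow curve" data and search ranges are made up for illustration.

```python
import numpy as np

# Evolutionary hyperparameter search for an RBF kernel regressor.
# Not the paper's GA-SVR; a minimal sketch of the same select-and-mutate idea.

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gamma, lam):
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return rbf_kernel(Xte, Xtr, gamma) @ alpha

# Synthetic stress-strain-like curve with saturation, plus noise (assumed):
X = np.linspace(0.0, 1.0, 60)[:, None]
y = 1.0 - np.exp(-4.0 * X[:, 0]) + 0.02 * rng.standard_normal(60)
Xtr, ytr, Xte, yte = X[::2], y[::2], X[1::2], y[1::2]

def fitness(params):
    gamma, lam = params
    pred = fit_predict(Xtr, ytr, Xte, gamma, lam)
    return -np.mean((pred - yte) ** 2)   # higher is better (negative test MSE)

# Evolutionary loop: keep the fittest half, refill by mutating it.
pop = [(10.0 ** rng.uniform(-1, 2), 10.0 ** rng.uniform(-6, -1)) for _ in range(8)]
for _ in range(15):
    best = sorted(pop, key=fitness, reverse=True)[:4]
    pop = best + [(g * 10 ** rng.uniform(-0.3, 0.3),
                   lm * 10 ** rng.uniform(-0.3, 0.3)) for g, lm in best]

best_gamma, best_lam = max(pop, key=fitness)
mse = -fitness((best_gamma, best_lam))
```

Selecting on held-out error, as here, is what gives GA-tuned regressors the stable generalization the abstract emphasizes, at the cost of many repeated fits.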
Liu, Ke; Nissinen, Jaakko; de Boer, Josko; Slager, Robert-Jan; Zaanen, Jan
2017-02-01
The paradigm of spontaneous symmetry breaking encompasses the breaking of the rotational symmetries O(3) of isotropic space to a discrete subgroup, i.e., a three-dimensional point group. The subgroups form a rich hierarchy and allow for many different phases of matter with orientational order. Such spontaneous symmetry breaking occurs in nematic liquid crystals, and a highlight of such anisotropic liquids is the uniaxial and biaxial nematics. Generalizing the familiar uniaxial and biaxial nematics to phases characterized by an arbitrary point-group symmetry, referred to as generalized nematics, leads to a large hierarchy of phases and possible orientational phase transitions. We discuss how a particular class of nematic phase transitions related to axial point groups can be efficiently captured within a recently proposed gauge theoretical formulation of generalized nematics [K. Liu, J. Nissinen, R.-J. Slager, K. Wu, and J. Zaanen, Phys. Rev. X 6, 041025 (2016), 10.1103/PhysRevX.6.041025]. These transitions can be introduced in the model by considering anisotropic couplings that do not break any additional symmetries. By and large this generalizes the well-known uniaxial-biaxial nematic phase transition to any arbitrary axial point group in three dimensions. We find in particular that the generalized axial transitions are distinguished by two types of phase diagrams with intermediate vestigial orientational phases and that the window of the vestigial phase is intimately related to the amount of symmetry of the defining point group due to inherently growing fluctuations of the order parameter. This might explain the stability of the observed uniaxial-biaxial phases as compared to the yet to be observed other possible forms of generalized nematic order with higher point-group symmetries.
Grishkevich, Sergey; Saenz, Alejandro
2009-07-15
A theoretical approach was developed for an exact numerical description of a pair of ultracold atoms that interact via a central potential and are trapped in a three-dimensional optical lattice. The coupling of center-of-mass and relative-motion coordinates is explicitly considered using a configuration-interaction (exact-diagonalization) technique. Deviations from the harmonic approximation are discussed for several heteronuclear alkali-metal atom pairs trapped in a single site of an optical lattice. The consequences are discussed for the analysis of a recent experiment [C. Ospelkaus et al., Phys. Rev. Lett. 97, 120402 (2006)] in which radio-frequency association was used to create diatomic molecules from a fermionic and a bosonic atom and to measure their binding energies close to a magnetic Feshbach resonance.
NASA Astrophysics Data System (ADS)
Schoenfeld, Andreas A.; Wieker, Soeren; Harder, Dietrich; Poppe, Bjoern
2016-11-01
The optical origin of the lateral response and orientation artifacts, which occur when using EBT3 and EBT-XD radiochromic films together with flatbed scanners, has been reinvestigated by experimental and theoretical means. The common feature of these artifacts is the well-known parabolic increase in the optical density OD(x) = -log10[I(x)/I0(x)] versus offset x from the scanner midline (Poppinga et al 2014 Med. Phys. 41 021707). This holds for landscape and portrait orientations as well as for the three color channels. Dose-independent optical subjects, such as neutral density filters, linear polarizers, the EBT polyester foil and diffusive glass, also present the parabolic lateral artifact when scanned with a flatbed scanner. The curvature parameter c of the parabola function OD(x) = c0 + cx² is found to be a linear function of the dose, the parameters of which are influenced by the film orientation and film type, EBT3 or EBT-XD. The ubiquitous parabolic shape of the function OD(x) is attributed (a) to the optical path-length effect (van Battum et al 2016 Phys. Med. Biol. 61 625-49), due to the increasing obliquity of the optical scanner light associated with increasing offset x from the scanner midline, and (b) and (c) to the partial polarization and scattering of the light leaving the film, which affect the ratio I(x)/I0(x), thus making OD(x) increase with x². The orientation effect results from the changes of effects (b) and (c) associated with turning the film position, and thereby the orientation of the polymer structure of the sensitive film layer. In a comparison of experimental results obtained with selected optical subjects, the relative weights of the contributions of the optical path-length effect and the polarization and scattering of light leaving the films to the lateral response artifact have been estimated to be of the same order of magnitude. Mathematical models of these causes for the parabolic shape of function
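Recovering the curvature parameter c from the parabolic model OD(x) = c0 + c x² amounts to a least-squares polynomial fit. A sketch on synthetic data (baseline OD, curvature, and noise level are assumed, not taken from the films):

```python
import numpy as np

# Fit OD(x) = c0 + c * x**2 to noisy synthetic scanner data and recover the
# curvature c, which the abstract reports to be a linear function of dose.

x = np.linspace(-10.0, 10.0, 201)        # offset from scanner midline (cm)
c0_true, c_true = 0.40, 1.5e-3           # assumed baseline OD and curvature
noise = 1e-4 * np.random.default_rng(1).standard_normal(x.size)
od = c0_true + c_true * x**2 + noise

# np.polyfit returns coefficients highest power first: [c, c1, c0].
c_fit, c1_fit, c0_fit = np.polyfit(x, od, 2)
```

Repeating this fit on films exposed to different doses, and regressing the recovered c values against dose, would reproduce the linear c(dose) relation described above.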
NASA Astrophysics Data System (ADS)
Wopperer, P.; Gao, C. Z.; Barillot, T.; Cauchy, C.; Marciniak, A.; Despré, V.; Loriot, V.; Celep, G.; Bordas, C.; Lépine, F.; Dinh, P. M.; Suraud, E.; Reinhard, P.-G.
2015-04-01
We have studied the theoretical photoelectron momentum distributions of C60 using time-dependent density functional theory (TDDFT) in real time, including a self-interaction correction. Our calculations furthermore account for proper orientation averaging, allowing a direct comparison with experimental results. To illustrate the capabilities of this direct (microscopic and time-dependent) approach, two very different photo-excitation conditions are considered: excitation with high-frequency XUV light at 20 eV and with a low-frequency IR femtosecond pulse at 1.55 eV. The interaction with the XUV light leads to one-photon transitions and a linear ionization regime. In that situation, the spectrum of occupied single-electron states in C60 is directly mapped to the photoelectron spectrum. In contrast, the IR pulse leads to multiphoton ionization in which only the two least-bound states contribute to the process. In both dynamical regimes (mono- and multiphoton), calculated and experimental angle-resolved photoelectron spectra compare reasonably well. The observed discrepancies can be understood by the theoretical underestimation of higher-order many-body processes such as electron-electron scattering and by the fact that the experiments are performed at finite temperature. These results pave the way to a multiscale description of C60 ionization mechanisms that is required to do justice to the variety of processes observed experimentally for fullerene molecules.
NASA Astrophysics Data System (ADS)
Close, Laird M.; Thatte, Niranjan; Nielsen, Eric L.; Abuter, Roberto; Clarke, Fraser; Tecza, Matthias
2007-08-01
We present new photometric and spectroscopic measurements for the unique, young, low-mass evolutionary track calibrator AB Dor C. While the new Ks photometry is similar to that we have previously published, the spectral type is found to be much earlier. Based on new H and K IFS spectra of AB Dor C from Thatte et al. (Paper I), we adopt a spectral type of M5.5+/-1.0 for AB Dor C. This is considerably earlier than the M8+/-1 previously estimated by Close et al. and Nielsen et al. yet is consistent with the M6+/-1 independently derived by Luhman & Potter. However, the spectrum presented in Paper I and analyzed here is a significant improvement over any previous spectrum of AB Dor C. We also present new astrometry for the system, which further supports a 0.090+/-0.005 Msolar mass for the system. Once armed with an accurate spectrum and Ks flux, we find L=0.0021+/-0.0005 Lsolar and Teff=2925+170-145 K for AB Dor C. These values are consistent with a ~75 Myr, 0.090+/-0.005 Msolar object like AB Dor C according to the DUSTY evolutionary tracks. Hence, masses can be estimated from the H-R diagram with the DUSTY tracks for young low-mass objects such as AB Dor C. However, we cautiously note that underestimates of the mass from the tracks can occur if one lacks a proper (continuum-preserved) spectrum or is relying on near-infrared fluxes alone. Based on observations made with ESO telescopes at the Paranal Observatories under program 276.C-5013.
NASA Technical Reports Server (NTRS)
Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; Zeuch, M. A.; Pollock, C. J.; Jacques, A. D.
2015-01-01
system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.
ERIC Educational Resources Information Center
Rasmussen, Ole Elstrup
"Scanator" (a modern, ecological psychophysics encompassing a cohesive set of theories and methods for the study of mental functions) provides the basis for a study of "competence," the capacity for making sense in complex situations. The paper develops a functional model that forms a theoretical expression of the phenomenon of…
Geskin, Victor; Cornil, Jérôme; Stadler, Robert
2015-01-22
Nonequilibrium Green's function techniques (NEGF) combined with density functional theory (DFT) calculations have become a standard tool for the description of electron transport through single molecule nanojunctions in the coherent tunneling (CT) regime. However, the applicability of these methods for transport in the Coulomb blockade (CB) regime is questionable. For a molecular assembly model, with multideterminant calculations as a benchmark, we show how a closed-shell ansatz, the usual ingredient of mean-field methods, fails to properly describe the step like electron-transfer characteristic in weakly coupled systems. Detailed analysis of this misbehavior allows us to propose a practical scheme to extract the addition energies in the CB regime for single-molecule junctions from NEGF DFT within the local-density approximation (closed shell). We show also that electrostatic screening effects are taken into account within this simple approach.
NASA Astrophysics Data System (ADS)
Myong, R. S.
2016-01-01
The Knudsen layer, found in the region of gas flow very close (on the order of a few mean free paths) to solid surfaces, plays a critical role in accurately modeling rarefied and micro-scale gases. In various previous investigations, abnormal behaviors at high Knudsen numbers, such as a nonlinear velocity profile, a velocity gradient singularity, and a pronounced thermal effect, have been identified in the Knudsen layer. However, some behaviors, in particular the velocity gradient singularity near the surface and the higher temperature, remain elusive in the continuum framework. In this study, based on the second-order macroscopic constitutive equation recently derived from the kinetic Boltzmann equation via the balanced closure and cumulant expansion [R. S. Myong, "On the high Mach number shock structure singularity caused by overreach of Maxwellian molecules," Phys. Fluids 26(5), 056102 (2014)], macroscopic second-order constitutive and slip-jump models that are able to explain qualitatively all the known non-classical and non-isothermal behaviors are proposed. As a result, new analytical solutions for the Knudsen layer in Couette flow, in conjunction with the algebraic nonlinearly coupled second-order constitutive and Maxwell velocity-slip and Smoluchowski temperature-jump models, are derived. It is shown that the velocity gradient singularity in the Knudsen layer can be explained within the continuum framework when the nonlinearity of the constitutive model is morphed into the determination of the velocity slip in the nonlinear slip and jump model. Also, the smaller velocity slip and shear stress are shown to be caused by the shear-thinning property of the second-order constitutive model, that is, its vanishing effective viscosity at high Knudsen number.
Towboat Maneuvering Simulator. Volume III. Theoretical Description.
1979-05-01
[Garbled notation list; recoverable entries: overshoot or zigzag maneuver; i = 1, 2, 3, ... 6; flanking rudder deflection rate; steering rudder deflection rate; ship propulsion ratio.] ... used with the equations are for the ship propulsion point (n = 1.0). The equations are written in terms of the complete barge flotilla and towboat
NASA Astrophysics Data System (ADS)
Mikeš, Daniel
2010-05-01
erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using a smaller number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can point to is the so-called "rationality" of today's society. Simple "common sense" leads to the conclusion that in this case the empirical method is bound to fail, and that the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences, such as physics and mathematics, and for applied sciences such as engineering. Not so for geology, a science that was traditionally descriptive and jumped straight to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology has been left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach, and cannot be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.
Accurate thermoelastic tensor and acoustic velocities of NaCl
NASA Astrophysics Data System (ADS)
Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.
2015-12-01
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures remains challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
ERIC Educational Resources Information Center
LoPresto, Michael C.
2014-01-01
What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…
Lestelle, Lawrence C.; Lichatowich, James A.; Mobrand, Lars E.; Cullinan, Valerie I.
1994-03-01
This document describes the formulation and operation of a model designed to assist in planning supplementation projects. It also has application in examining a broader array of questions related to natural fish production and stock restoration. The model is referred to as the Ecosystem Diagnosis and Treatment (EDT) Model because of its utility in helping to diagnose and identify possible treatments to be applied to natural production problems for salmonids. It was developed through the Regional Assessment of Supplementation Project (RASP), which was an initiative to help coordinate supplementation planning in the Columbia Basin. The model is operated within the spreadsheet environment of Quattro Pro using a system of customized menus. No experience with spreadsheet macros is required to operate it. As currently configured, the model should only be applied to spring chinook; modifications are required to apply it to fall chinook and other species. The purpose of the model is to enable managers to consider possible outcomes of supplementation under different sets of assumptions about the natural production system and the integration of supplementation fish into that system. It was designed to help assess uncertainty and the relative risks and benefits of alternative supplementation strategies. The model is a tool to facilitate both planning and learning; it is not a predictive model. This document consists of three principal parts. Part I provides a description of the model. Part II is a guide to running the model. Part III provides theoretical documentation. In addition, a sensitivity analysis of many of the model's parameters is provided in the appendix. This analysis was used to test whether the model produces consistent and reasonable results and to assess the relative effects of specific parameter inputs on outcome.
Comment on ``The First Accurate Description of an Aurora''
NASA Astrophysics Data System (ADS)
Silverman, Sam
2007-11-01
Schröder [2006] discusses Das Buch der Natur (The Book of Nature), written by Konrad von Megenberg between 1348 and 1350. The Buch was the first encyclopedia of natural phenomena written in German. (For a contemporary German translation, see Schulz [1897]; for definitions of Megenberg's astronomical terminology, see Deschler [1977].) Megenberg translated the Liber de Natura Rerum, written by Thomas of Cantimpré between 1225 and 1240.
TAD- THEORETICAL AERODYNAMICS PROGRAM
NASA Technical Reports Server (NTRS)
Barrowman, J.
1994-01-01
This theoretical aerodynamics program, TAD, was developed to predict the aerodynamic characteristics of vehicles with sounding rocket configurations. These slender, axisymmetric finned vehicle configurations have a wide range of aeronautical applications from rockets to high speed armament. Over a given range of Mach numbers, TAD will compute the normal force coefficient derivative, the center-of-pressure, the roll forcing moment coefficient derivative, the roll damping moment coefficient derivative, and the pitch damping moment coefficient derivative of a sounding rocket configured vehicle. The vehicle may consist of a sharp pointed nose of cone or tangent ogive shape, up to nine other body divisions of conical shoulder, conical boattail, or circular cylinder shape, and fins of trapezoid planform shape with constant cross section and either three or four fins per fin set. The characteristics computed by TAD have been shown to be accurate to within ten percent of experimental data in the supersonic region. The TAD program calculates the characteristics of separate portions of the vehicle, calculates the interference between separate portions of the vehicle, and then combines the results to form a total vehicle solution. Also, TAD can be used to calculate the characteristics of the body or fins separately as an aid in the design process. Input to the TAD program consists of simple descriptions of the body and fin geometries and the Mach range of interest. Output includes the aerodynamic characteristics of the total vehicle, or user-selected portions, at specified points over the Mach range. The TAD program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 123K of 8-bit bytes. The TAD program was originally developed in 1967 and last updated in 1972.
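The "combines the results to form a total vehicle solution" step is, for the normal-force and center-of-pressure part, a CNα-weighted average over components. A minimal sketch of that combination; the cone values (CNα = 2, center of pressure at 2/3 of the nose length) are standard slender-body results, while the fin-set numbers here are hypothetical, not TAD output:

```python
def combine(components):
    """Whole-vehicle normal-force derivative and center of pressure
    from per-component (CNa, Xcp) pairs, Xcp measured from the nose tip."""
    cna_total = sum(cna for cna, _ in components)
    xcp_total = sum(cna * xcp for cna, xcp in components) / cna_total
    return cna_total, xcp_total

nose_len = 0.30                       # m, sharp conical nose
nose = (2.0, 2.0 / 3.0 * nose_len)    # slender-body theory for a cone
fins = (8.0, 0.95)                    # hypothetical fin-set values (with interference)
cna, xcp = combine([nose, fins])
```

In the real program each pair would come from the per-component and interference calculations the abstract describes.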
Accurate Theoretical Predictions of the Properties of Energetic Materials
2008-09-18
collisionally induce a decomposition reaction at a liquid surface. (Given the paucity of full reactive potential functions that describe dissociation to...the correct structurally relaxed products, we believe that the diatomic model system at least provides a test of whether dissociation might be...and that the probability that the surface species will undergo a collision that leads to direct excitation of the diatomic above its bond dissociation
Accurate Theoretical Prediction of the Properties of Energetic Materials
2007-11-02
calculations (e.g. Cheetah ). 8. Sensitivity. The structure prediction and lattice potential work will serve as a platform to examine impact/shock...nitromethane molecules. (In an extension of the present work, we will freeze the internal coordinates of the molecules and assess the extent to which the
Theoretical understanding of charm decays
Bigi, I.I.
1986-08-01
A detailed description of charm decays has emerged. The various concepts involved are sketched. Although this description is quite successful in reproducing the data, the chapter on heavy flavour decays is far from closed. Relevant questions, such as the real strength of weak annihilation, Penguin operators, etc., are still unanswered. Important directions for future work, both experimental and theoretical, are identified.
Shi, Runhua; McLarty, Jerry W
2009-10-01
In this article, we introduced basic concepts of statistics, types of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events, exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
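The quoted orders of accuracy can be verified numerically: halving the grid spacing should shrink the error of an N-th-order scheme by about 2^N. A minimal sketch using standard central differences (not the paper's specific algorithms):

```python
import math

def d1_2nd(f, x, h):
    # 2nd-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):
    # 4th-order central difference for f'(x)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, exact = 1.0, math.cos(1.0)          # d/dx sin(x) = cos(x)
errs2 = [abs(d1_2nd(math.sin, x, h) - exact) for h in (0.1, 0.05)]
errs4 = [abs(d1_4th(math.sin, x, h) - exact) for h in (0.1, 0.05)]
# errs2 shrinks ~4x per halving of h; errs4 shrinks ~16x
```

The same convergence test generalizes to the time dimension for the single-step explicit schemes the abstract describes.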
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
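The paper's uniformly third- and fourth-order algorithms are more elaborate, but the basic idea of monotone cubic interpolation, limiting the Hermite slopes so the interpolant cannot overshoot the data, can be sketched in a few lines. This uses harmonic-mean slope limiting, a standard choice, not the paper's relaxed geometric constraint:

```python
def monotone_cubic(xs, ys):
    """Monotone piecewise cubic Hermite interpolant.
    Interior slopes are harmonic means of adjacent secants (zero at
    sign changes or flat segments), which prevents overshoot."""
    n = len(xs)
    d = [(ys[i+1] - ys[i]) / (xs[i+1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i-1] * d[i] > 0:
            m[i] = 2 * d[i-1] * d[i] / (d[i-1] + d[i])   # harmonic mean
    def f(x):
        # Locate the interval containing x, then evaluate the Hermite cubic
        i = next(j for j in range(n - 1) if x <= xs[j+1] or j == n - 2)
        h = xs[i+1] - xs[i]
        t = (x - xs[i]) / h
        return ((1 + 2*t) * (1 - t)**2 * ys[i] + t * (1 - t)**2 * h * m[i]
                + t*t * (3 - 2*t) * ys[i+1] + t*t * (t - 1) * h * m[i+1])
    return f

# Monotone data with a flat segment: the interpolant must stay flat there
f = monotone_cubic([0.0, 1.0, 2.0, 4.0], [0.0, 1.0, 1.0, 2.0])
```

The flat segment forces the adjacent slopes to zero, which is exactly the accuracy loss near extrema that the paper's relaxed constraint is designed to avoid.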
A statistical mechanical description of biomolecular hydration
1996-02-01
We present an efficient and accurate theoretical description of the structural hydration of biological macromolecules. The hydration of molecules of almost arbitrary size (tRNA, antibody-antigen complexes, photosynthetic reaction centre) can be studied in solution and in the crystal environment. The biomolecular structure obtained from x-ray crystallography, NMR, or modeling is required as input information. The structural arrangement of water molecules near a biomolecular surface is represented by the local water density analogous to the corresponding electron density in an x-ray diffraction experiment. The water-density distribution is approximated in terms of two- and three-particle correlation functions of solute atoms with water using a potentials-of-mean-force expansion.
NASA Astrophysics Data System (ADS)
LoPresto, Michael C.
2014-09-01
What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.
ERIC Educational Resources Information Center
Beller, Charley
2013-01-01
The study of definite descriptions has been a central part of research in linguistics and philosophy of language since Russell's seminal work "On Denoting" (Russell 1905). In that work Russell quickly dispatches analyses of denoting expressions with forms like "no man," "some man," "a man," and "every…
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740].
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Rendón-Macías, Mario Enrique; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe
2016-01-01
Descriptive statistics is the branch of statistics that gives recommendations on how to summarize research data clearly and simply in tables, figures, charts, or graphs. Before performing a descriptive analysis it is paramount to state its goal or goals, and to identify the measurement scales of the different variables recorded in the study. Tables or charts aim to provide timely information on the results of an investigation. Graphs show trends and can be histograms, pie charts, "box and whiskers" plots, line graphs, or scatter plots. Images serve as examples to reinforce concepts or facts. The choice of a chart, graph, or image must be based on the study objectives. Usually no more than seven such displays are recommended in an article, depending on its length.
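The summary measures that feed such tables are easy to compute with Python's standard library; a minimal sketch with made-up data:

```python
import statistics

data = [2.1, 3.4, 3.4, 4.0, 5.6, 7.2, 9.8]   # hypothetical sample

summary = {
    "n": len(data),
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "mode": statistics.mode(data),
    "stdev": statistics.stdev(data),          # sample standard deviation
    "range": max(data) - min(data),
}
```

Which of these measures belongs in a table depends, as the abstract notes, on the measurement scale of each variable.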
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want to know exact color coordinate values, accuracy problems arise. The values given by two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors, such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show which accuracy demands a good colorimeter should meet.
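The 0.5-unit threshold above refers to the CIE76 color difference, the Euclidean distance in CIELAB space. A minimal sketch (the sample coordinates are made up, not measurements from the study):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two hypothetical measurements of the same tile by different instruments
sample_a = (52.0, 10.1, -3.2)   # (L*, a*, b*)
sample_b = (52.3, 10.5, -3.0)
de = delta_e76(sample_a, sample_b)
perceptible = de > 0.5          # above the ~0.5 CIELAB unit threshold
```

Instrument-to-instrument disagreements larger than this distance are exactly the accuracy problems the abstract describes.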
Institute for Theoretical Physics
Giddings, S.B.; Ooguri, H.; Peet, A.W.; Schwarz, J.H.
1998-06-01
String theory is the only serious candidate for a unified description of all known fundamental particles and interactions, including gravity, in a single theoretical framework. Over the past two years, activity in this subject has grown rapidly, thanks to dramatic advances in understanding the dynamics of supersymmetric field theories and string theories. The cornerstone of these new developments is the discovery of duality which relates apparently different string theories and transforms difficult strongly coupled problems of one theory into weakly coupled problems of another theory.
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
NASA Astrophysics Data System (ADS)
Ford, David; Huntsman, Steven
2006-06-01
Thermodynamics (in concert with its sister discipline, statistical physics) can be regarded as a data reduction scheme based on partitioning a total system into a subsystem and a bath that weakly interact with each other. Whereas conventionally, the systems investigated require this form of data reduction in order to facilitate prediction, a different problem also occurs, in the context of communication networks, markets, etc. Such “empirically accessible” systems typically overwhelm observers with the sort of information that in the case of (say) a gas is effectively unobtainable. What is required for such complex interacting systems is not prediction (this may be impossible when humans besides the observer are responsible for the interactions) but rather, description as a route to understanding. Still, the need for a thermodynamical data reduction scheme remains. In this paper, we show how an empirical temperature can be computed for finite, empirically accessible systems, and further outline how this construction allows the age-old science of thermodynamics to be fruitfully applied to them.
Theoretical ecology without species
NASA Astrophysics Data System (ADS)
Tikhonov, Mikhail
The sequencing-driven revolution in microbial ecology demonstrated that discrete "species" are an inadequate description of the vast majority of life on our planet. Developing a novel theoretical language that, unlike classical ecology, would not require postulating the existence of species is a challenge of tremendous medical and environmental significance, and an exciting direction for theoretical physics. Here, it is proposed that community dynamics can be described in a naturally hierarchical way in terms of population fluctuation eigenmodes. The approach is applied to a simple model of division of labor in a multi-species community. In one regime, effective species with a core and accessory genome are shown to naturally appear as emergent concepts. However, the same model allows a transition into a regime where the species formalism becomes inadequate, but the eigenmode description remains well-defined. Treating a community as a black box that expresses enzymes in response to resources reveals mathematically exact parallels between a community and a single coherent organism with its own fitness function. This coherence is a generic consequence of division of labor, requires no cooperative interactions, and can be expected to be widespread in microbial ecosystems. Harvard Center of Mathematical Sciences and Applications; John A. Paulson School of Engineering and Applied Sciences.
NASA Astrophysics Data System (ADS)
Stöltzner, Michael
In response to the double-edged influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a 'theoretical' mathematics (alongside 'theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics, in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Serth, J; Panitz, F; Herrmann, H; Alves, J
1998-10-01
Competitive PCR is a frequently used technique for quantitation of DNA and mRNA. However, the application of the most favourable homologous mutated competitors is impeded by the formation of heteroduplex molecules, which complicates the data evaluation and may lead to quantitation errors. Moreover, in most cases a single quantitation of an unknown sample requires multiple competitive reactions for identification of the equivalence point. In the present study, a highly efficient and reliable method, as well as the underlying theoretical model, is described. The mathematical solutions of this model provide the basis for single-tube quantitation using a homologous mutated competitor. For quantitation of Human Papilloma Virus 16 DNA, it is shown that single-tube quantitations using simple PAGE separation and video evaluation for signal analysis permit linear detection within more than two orders of magnitude. In addition, repeated single-tube competitive PCRs exhibited good precision (average standard deviation 5%), even if carried out as nested high-cycle PCR for quantitation of low-abundance sequences (intra-assay sensitivity <2 × 10² copies). This evaluation method can be applied to any DNA separation and detection method which is capable of resolving the heteroduplex fraction from both homoduplex molecules.
NASA Astrophysics Data System (ADS)
Borkowski, Andrzej; Kosek, Wiesław
2015-12-01
The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study of the robustness properties of the M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of the M-estimation with probabilistic models of geodetic observations; a new Shift-Msplit estimation, which allows estimation of a vector of parameter differences; and the Shift-Msplit(+), a generalisation of Shift-Msplit estimation for the case where the design matrix A of the functional model does not have full column rank. New algorithms for the conversion between Cartesian and geodetic coordinates, on both the rotational and the triaxial ellipsoid, can also be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series, as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests, as well as the 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented that enable prediction of the considered time series in different frequency bands.
Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description
ERIC Educational Resources Information Center
Hogarth, Robin M.; Soyer, Emre
2011-01-01
Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
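The abstract does not spell out the scheme, but its core idea, Richardson extrapolation of finite-difference estimates, can be sketched in a few lines. The following is a minimal illustration on the infinite square well, not the authors' code; the function names, grid spacing, and test point are illustrative choices. Combining estimates at mesh sizes h and h/2 cancels the leading O(h²) error of the central second difference:

```python
import math

def fd_energy(psi, x, h):
    """Local energy -psi''(x)/psi(x) from a central second difference, O(h^2) accurate."""
    d2 = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h**2
    return -d2 / psi(x)

def richardson(psi, x, h):
    """Combine estimates at h and h/2 to cancel the O(h^2) error term.
    Returns the extrapolated value and a difference-based error indicator."""
    e_h, e_h2 = fd_energy(psi, x, h), fd_energy(psi, x, h / 2)
    return (4.0 * e_h2 - e_h) / 3.0, abs(e_h - e_h2) / 3.0

# Ground state of the infinite square well on [0, 1]: psi = sin(pi x), E = pi^2
psi = lambda x: math.sin(math.pi * x)
e_extrap, err_est = richardson(psi, 0.3, 0.01)
print(e_extrap, math.pi**2)
```

The extrapolated value is several orders of magnitude closer to π² than either raw finite-difference estimate, mirroring the paper's point that crude-mesh results can be extrapolated to high accuracy.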
Theoretical Issues in Software Engineering.
1982-09-01
The discipline of software engineering has transferred the common-sense methods of good programming and management to large software projects. It has been less successful in acquiring a solid theoretical foundation for these methods. The software development process for large, concurrently processed programs has evolved with little justification save practice. Furthermore, each phase of that process needs formal description and analysis.
Connecting single cell to collective cell behavior in a unified theoretical framework
NASA Astrophysics Data System (ADS)
George, Mishel; Bullo, Francesco; Campàs, Otger
Collective cell behavior is an essential part of tissue and organ morphogenesis during embryonic development, as well as of various disease processes, such as cancer. In contrast to many in vitro studies of collective cell migration, most cases of in vivo collective cell migration involve rather small groups of cells, with large sheets of migrating cells being less common. The vast majority of theoretical descriptions of collective cell behavior focus on large numbers of cells, but fail to accurately capture the dynamics of small groups of cells. Here we introduce a low-dimensional theoretical description that successfully captures single cell migration, cell collisions, collective dynamics in small groups of cells, and force propagation during sheet expansion, all within a common theoretical framework. Our description is derived from first principles and also includes key phenomenological aspects of cell migration that control the dynamics of traction forces. Among other results, we explain the counter-intuitive observations that pairs of cells repel each other upon collision while they behave in a coordinated manner within larger clusters.
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Recent theoretical results on electron-polyatomic molecule collisions
McCurdy, C.W.
1994-03-01
Until recently, the principal barrier to the accurate theoretical description of electronic collisions with polyatomic molecules was the computational problem of scattering by a nonlocal, arbitrarily asymmetric potential. Effective numerical techniques capable of solving this variety of potential scattering problem for electronic collisions have now matured, and the first applications of methods for treating many-body aspects of collisions of electrons with polyatomic molecules have begun to appear in the literature. The past two years have seen the appearance of a large collection of calculations on electron-polyatomic collisions which compare favorably with experimental determinations. In addition to the dramatic developments in methods which explicitly exploit the methods of quantum chemistry to treat the effects of electron correlation, polarization, etc., parameter-free model potential methods for electronically elastic collisions have also evolved markedly in recent years. Progress in both electronically elastic and inelastic processes is reviewed briefly.
Theoretical description of excited state dynamics in nanostructures
NASA Astrophysics Data System (ADS)
Rubio, Angel
2009-03-01
There has been much progress in the synthesis and characterization of nanostructures; however, there remain immense challenges in understanding their properties and interactions with external probes in order to realize their tremendous potential for applications (molecular electronics, nanoscale opto-electronic devices, light harvesting and emitting nanostructures). We will review the recent implementations of TDDFT to study the optical absorption of biological chromophores, one-dimensional polymers and layered materials. In particular we will show the effect of electron-hole attraction in those systems. Applications to the optical properties of solvated nanostructures as well as excited state dynamics in some organic molecules will be used as test cases to illustrate the performance of the approach. Work done in collaboration with A. Castro, M. Marques, X. Andrade, J.L. Alonso, Pablo Echenique, L. Wirtz, A. Marini, M. Gruning, C. Rozzi, D. Varsano and E.K.U. Gross.
Coherent Change Detection: Theoretical Description and Experimental Results
2006-08-01
This report describes the detection of scene changes using repeat pass Synthetic Aperture Radar (SAR) imagery. As SAR is a coherent imaging system, two forms of change detection may be applied, including coherent detection of changes to the sub-resolution cell scattering structure that may be undetectable using incoherent techniques.
Coherent Change Detection: Theoretical Description and Experimental Results
2006-08-01
The speckle patterns of the pair of primary and repeat pass intensity images obtained over one of the tracks can be compared directly. Sub-resolution scattering manifests in the transduced imagery as speckle noise. In a single SAR image this noise term does not contribute any useful information to the pixel intensity I = |f|². The intensity estimate, however, is corrupted by the speckle noise component, see (71), and in general some form of averaging is required.
Soft Biometrics; Human Identification Using Comparative Descriptions.
Reid, Daniel A; Nixon, Mark S; Stevenage, Sarah V
2014-06-01
Soft biometrics are a new form of biometric identification which uses physical or behavioral traits that can be naturally described by humans. Unlike other biometric approaches, this allows identification based solely on verbal descriptions, bridging the semantic gap between biometrics and human description. To permit soft biometric identification the description must be accurate, yet conventional human descriptions comprising absolute labels and estimations are often unreliable. A novel method of obtaining human descriptions will be introduced which utilizes comparative categorical labels to describe differences between subjects. This innovative approach has been shown to address many problems associated with absolute categorical labels; most critically, the descriptions contain more objective information and have increased discriminatory capabilities. Relative measurements of the subjects' traits can be inferred from comparative human descriptions using the Elo rating system. The resulting soft biometric signatures have been demonstrated to be robust and to allow accurate recognition of subjects. Relative measurements can also be obtained from other forms of human representation. This is demonstrated using a support vector machine to determine relative measurements from gait biometric signatures, allowing retrieval of subjects from video footage by using human comparisons and bridging the semantic gap.
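The Elo inference step mentioned in the abstract can be sketched generically. This is the standard Elo update rule, not the authors' exact implementation; the K-factor, the initial rating of 1500, and the toy comparison list are illustrative assumptions. Each pairwise judgment ("subject A appears taller than subject B") nudges the ratings, and the accumulated ratings serve as relative trait measurements:

```python
def elo_update(r_a, r_b, outcome, k=32.0):
    """Standard Elo update. outcome = 1.0 if A is judged greater than B
    on the trait, 0.0 otherwise. The update is zero-sum: B loses what A gains."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (outcome - expected_a)
    return r_a + delta, r_b - delta

# Hypothetical comparative judgments on one trait: (judged greater, judged lesser)
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("B", "C")]
ratings = {s: 1500.0 for s in "ABC"}
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)
print(ratings)  # A ends highest, C lowest, consistent with the comparisons
```

The resulting ordering A > B > C recovers the relative measurement implied by the comparisons, which is the role the rating system plays in building the soft biometric signature.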
Semenov, Alexander; Babikov, Dmitri
2015-05-21
An efficient and accurate mixed quantum/classical theory approach for computational treatment of inelastic scattering is extended to describe collision of an atom with a general asymmetric-top rotor polyatomic molecule. Quantum mechanics, employed to describe transitions between the internal states of the molecule, and classical mechanics, employed for description of scattering of the atom, are used in a self-consistent manner. Such calculations for rotational excitation of HCOOCH3 in collisions with He produce accurate results at scattering energies above 15 cm⁻¹, although resonances near threshold, below 5 cm⁻¹, cannot be reproduced. Importantly, the method remains computationally affordable at high scattering energies (here up to 1000 cm⁻¹), which enables calculations for larger molecules and at higher collision energies than was possible previously with the standard full-quantum approach. Theoretical prediction of inelastic cross sections for a number of complex organic molecules observed in space becomes feasible using this new computational tool.
On numerically accurate finite element
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.
Accurate Cross Sections for Microanalysis
Rez, Peter
2002-01-01
To calculate the intensity of x-ray emission in electron beam microanalysis requires knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing x rays of interest, and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to K shells of a few elements. Results of systematic plane wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurements for selected K shells, and it is shown that the plane wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747
Information Theoretic Shape Matching.
Hasanbelliu, Erion; Giraldo, Luis Sanchez; Príncipe, José C
2014-12-01
In this paper, we describe two related algorithms that provide both rigid and non-rigid point set registration with different computational complexity and accuracy. The first algorithm utilizes a nonlinear similarity measure known as correntropy. The measure combines second and higher-order moments in its decision statistic, showing improvements especially in the presence of impulsive noise. The algorithm assumes that the correspondence between the point sets is known, which is determined with the surprise metric. The second algorithm mitigates the need to establish a correspondence by representing the point sets as probability density functions (PDF). The registration problem is then treated as a distribution alignment. The method utilizes the Cauchy-Schwarz divergence to measure the similarity/distance between the point sets and recover the spatial transformation function needed to register them. Both algorithms utilize information theoretic descriptors; however, correntropy works at the realizations level, whereas the Cauchy-Schwarz divergence works at the PDF level. This allows correntropy to be less computationally expensive and, for correct correspondence, more accurate. The two algorithms are robust against noise and outliers and perform well under varying levels of distortion. They outperform several well-known and state-of-the-art methods for point set registration.
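For readers unfamiliar with correntropy, the sample estimator is simply the mean of a Gaussian kernel evaluated on pointwise differences. The sketch below (the kernel width and toy data are illustrative, not from the paper) shows why the measure is robust to impulsive noise: each point's contribution is bounded by 1, so a single large outlier cannot dominate the statistic the way it does in a squared-error measure:

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: mean Gaussian kernel of the pointwise differences.
    Each term lies in (0, 1], so a large error is clipped rather than
    amplified as it would be in the mean squared error."""
    return sum(math.exp(-((xi - yi) ** 2) / (2.0 * sigma ** 2))
               for xi, yi in zip(x, y)) / len(x)

reference   = [0.0, 1.0, 2.0, 3.0]
small_noise = [0.1, 1.1, 1.9, 3.05]   # small perturbation on every point
one_outlier = [0.0, 1.0, 2.0, 50.0]   # one impulsive outlier

print(correntropy(reference, small_noise))  # near 1: a good match overall
print(correntropy(reference, one_outlier))  # ~0.75: 3 of 4 points still agree
```

Under mean squared error, the outlier case would score roughly 550 against 0.008 for the small-noise case, obliterating the comparison; correntropy keeps both on a common, bounded scale.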
Theoretical Ecology: Beginnings of a Predictive Science
ERIC Educational Resources Information Center
Kolata, Gina Bari
1974-01-01
Examines new directions in ecological research in which ecologists are analyzing systems with theoretical models and are using descriptive studies to confirm and extend their studies. The development of a model relating to species equilibria on islands is now being applied to problems of conservation of wildlife in national parks. (JR)
Accurate thermoplasmonic simulation of metallic nanoparticles
NASA Astrophysics Data System (ADS)
Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing
2017-01-01
Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially when many incident fields are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometry configuration of the array, beam direction, and light wavelength.
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection, to understanding the atmospheres of stars and planets, to constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m. and an uncertainty of 71 p.p.m. PMID:26465085
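The route from lineshape to Boltzmann's constant rests on the Doppler width of the absorption line fixing the thermal velocity dispersion of the vapour. As a hedged sketch (this is the standard Doppler-broadening relation, not the paper's full lineshape model), for atoms of mass m probed on a transition of frequency ν₀ at temperature T:

```latex
\frac{\Delta\nu_D}{\nu_0} \;=\; \frac{u}{c} \;=\; \sqrt{\frac{2 k_B T}{m c^2}}
\quad\Longrightarrow\quad
k_B \;=\; \frac{m c^2}{2T}\left(\frac{\Delta\nu_D}{\nu_0}\right)^2 ,
```

where Δν_D is the 1/e half-width of the Doppler profile and u the most probable thermal speed. Measuring Δν_D at a known, well-controlled T therefore yields k_B, which is why the fidelity of the lineshape model (and the observed breakdown of the Voigt profile) directly limits the attainable uncertainty.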
Theoretical Assessment of 178m2Hf De-Excitation
Hartouni, E P; Chen, M; Descalle, M A; Escher, J E; Loshak, A; Navratil, P; Ormand, W E; Pruet, J; Thompson, I J; Wang, T F
2008-10-06
This document contains a comprehensive literature review in support of the theoretical assessment of the 178m2Hf de-excitation, as well as a rigorous description of controlled energy release from an isomeric nuclear state.
Multimedia content description framework
NASA Technical Reports Server (NTRS)
Bergman, Lawrence David (Inventor); Kim, Michelle Yoonk Yung (Inventor); Li, Chung-Sheng (Inventor); Mohan, Rakesh (Inventor); Smith, John Richard (Inventor)
2003-01-01
A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed or aggregated multimedia objects.
An information theoretic approach to pedigree reconstruction.
Almudevar, Anthony
2016-02-01
Network structure is a dominant feature of many biological systems, both at the cellular level and within natural populations. Advances in genotype and gene expression screening made over the last few decades have permitted the reconstruction of these networks. However, resolution to a single model estimate will generally not be possible, leaving open the question of the appropriate method of formal statistical inference. The nonstandard structure of the problem precludes most traditional statistical methodologies. Alternatively, a Bayesian approach provides a natural methodology for formal inference. Construction of a posterior density on the space of network structures allows formal inference regarding features of network structure using specific marginal posterior distributions. An information theoretic approach to this problem will be described, based on the Minimum Description Length principle. This leads to a Bayesian inference model based on the information content of data rather than on more commonly used probabilistic models. The approach is applied to the problem of pedigree reconstruction based on genotypic data. Using this application, it is shown how the MDL approach is able to provide a truly objective control for model complexity. A two-cohort model is used for a simulation study. The MDL approach is compared to COLONY-2, a well known pedigree reconstruction application. The study highlights the problem of genotyping error modeling. COLONY-2 requires prior error rate estimates, and its accuracy proves to be highly sensitive to these estimates. In contrast, the MDL approach does not require prior error rate estimates, and is able to accurately adjust for genotyping error across the range of models considered.
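The MDL control of model complexity described above can be illustrated with a toy two-part code, where the total description length is the parameter cost plus the data code length. This sketch uses Bernoulli models and the common (k/2)·log₂ n parameter cost, which are my illustrative choices and far simpler than the pedigree models in the paper; it shows how a fitted model is preferred only when its data savings exceed its parameter cost:

```python
import math

def description_length(data, p, n_params):
    """Two-part MDL code: (k/2)*log2(n) bits for the parameters,
    plus the negative log2-likelihood as the data code length."""
    nll = -sum(math.log2(p if x == 1 else 1.0 - p) for x in data)
    return 0.5 * n_params * math.log2(len(data)) + nll

data = [1] * 90 + [0] * 10          # strongly biased binary observations
p_hat = sum(data) / len(data)       # fitted Bernoulli parameter (0.9)

dl_fitted = description_length(data, p_hat, n_params=1)
dl_uniform = description_length(data, 0.5, n_params=0)  # no free parameters
print(dl_fitted, dl_uniform)  # the fitted model wins: ~50 bits vs 100 bits
```

The one-parameter model pays about 3.3 bits for its parameter but saves roughly 53 bits on the data, so MDL selects it; on unbiased data the saving would vanish and the parameter-free model would win. This is the sense in which MDL provides an objective complexity penalty without a prior error-rate estimate.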
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Physics 3204. Course Description.
ERIC Educational Resources Information Center
Newfoundland and Labrador Dept. of Education.
A description of the physics 3204 course in Newfoundland and Labrador is provided. The description includes: (1) statement of purpose, including general objectives of science education; (2) a list of six course objectives; (3) course content for units on sound, light, optical instruments, electrostatics, current electricity, Michael Faraday and…
Descriptive Metadata: Emerging Standards.
ERIC Educational Resources Information Center
Ahronheim, Judith R.
1998-01-01
Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized, and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and additional tests and criteria to assess Multispecimen results were emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
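The multi-method acceptance logic described above (keep a flow only when two or more independent methods agree) can be sketched as follows. The tolerance and the example values are illustrative, not the selection criteria used in the study:

```python
def multi_method_paleointensity(results, tol=0.15):
    """Average only those estimates (in microtesla) that agree with at least
    one other method within a relative tolerance; return None if fewer than
    two methods are coherent (the flow is rejected)."""
    values = [v for v in results.values() if v is not None]
    coherent = [
        v for i, v in enumerate(values)
        if any(abs(v - w) / max(v, w) <= tol
               for j, w in enumerate(values) if i != j)
    ]
    if len(coherent) < 2:
        return None
    return sum(coherent) / len(coherent)

# Thellier and multispecimen agree within tolerance; the outlier is excluded.
estimate = multi_method_paleointensity(
    {"thellier": 35.0, "multispecimen": 37.0, "pseudo-thellier": 60.0})
```

A flow whose methods all disagree yields no estimate at all, which mirrors the paper's observation that coherence across techniques is the best indicator of accuracy.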
QTIPs: Questionable theoretical and interpretive practices in social psychology.
Brandt, Mark J; Proulx, Travis
2015-01-01
One possible consequence of ideological homogeneity is the misinterpretation of data collected with otherwise solid methods. To help identify these issues outside of politically relevant research, we name and give broad descriptions to three questionable interpretive practices described by Duarte et al. and introduce three additional questionable theoretical practices that also reduce the theoretical power and paradigmatic scope of psychology.
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure, its range of applicability, and guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
Langevin description of nonequilibrium quantum fields
NASA Astrophysics Data System (ADS)
Gautier, F.; Serreau, J.
2012-12-01
We consider the nonequilibrium dynamics of a real quantum scalar field. We show the formal equivalence of the exact evolution equations for the statistical and spectral two-point functions with a fictitious Langevin process and examine the conditions under which a local Markovian dynamics is a valid approximation. In quantum field theory, the memory kernel and the noise correlator typically exhibit long time power laws and are thus highly nonlocal, thereby questioning the possibility of a local description. We show that despite this fact, there is a finite time range during which a local description is accurate. This requires the theory to be (effectively) weakly coupled. We illustrate the use of such a local description for studies of decoherence and entropy production in quantum field theory.
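A local Markovian description of the kind discussed above can be illustrated with the simplest memoryless Langevin process, an Ornstein-Uhlenbeck equation dx = -γx dt + √(2D) dW. This is a generic one-variable sketch of a local Langevin description, not the field-theoretic calculation of the paper:

```python
import numpy as np

def simulate_ou(gamma=1.0, D=0.5, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of dx = -gamma*x*dt + sqrt(2*D*dt)*xi,
    a local (Markovian) Langevin process driven by white noise."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    xs = np.empty(n_steps)
    x = 0.0
    for i in range(n_steps):
        x += -gamma * x * dt + noise[i]
        xs[i] = x
    return xs

xs = simulate_ou()
# After a burn-in, the variance should approach the stationary value D/gamma.
stationary_var = xs[50_000:].var()
```

In the paper's setting, the validity of such a memoryless approximation is restricted to a finite time window because the true memory kernel and noise correlator decay as power laws.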
[Examination procedure and description of skin lesions].
Ochsendorf, Falk; Meister, Laura
2017-02-09
The dermatologic examination follows a clear structure. After a short history is taken, the whole skin is inspected. The description, which is ideally provided in writing, forces one to look at the skin more closely. It should accurately record the location, the distribution, the form, and the type of lesion. The article contains tables with internationally approved definitions to describe skin changes. The analysis of these findings allows one to deduce pathophysiologic mechanisms occurring in the skin and to formulate hypotheses, i.e., suspected and differential diagnoses. These are confirmed or excluded by further diagnostic measures. The expert comes to a diagnosis very quickly by a pattern-recognition process, whereas novices must still develop this kind of thinking. Experts can minimize cognitive bias by reflective analytical reasoning and reorganization of knowledge.
Description and Recognition of the Concept of Social Capital in Higher Education System
ERIC Educational Resources Information Center
Tonkaboni, Forouzan; Yousefy, Alireza; Keshtiaray, Narges
2013-01-01
The current research is intended to describe and recognize the concept of social capital in higher education based on theoretical method in a descriptive-analytical approach. Description and Recognition of the data, gathered from theoretical and experimental studies, indicated that social capital is one of the most important indices for…
Hardware description languages
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.
1994-01-01
Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASIC's). However, VHDL is rapidly gaining in popularity.
NASA Astrophysics Data System (ADS)
Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco
2012-11-01
Molecular simulations as well as single-molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions are inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single-molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of order parameter time series, without using any supplementary information.
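A much-simplified version of this construction, labeling states by the joint binning of the order parameter and its local fluctuation and then counting transitions, might look like the sketch below; the window length and binning are illustrative choices, not those of the original method:

```python
import numpy as np

def fluctuation_states(series, window=5):
    """Label each time point by the joint bin of its order parameter value
    and the local standard deviation ('fluctuation') in a sliding window."""
    half = window // 2
    centers = series[half:len(series) - half]
    flucts = np.array([series[t - half:t + half + 1].std()
                       for t in range(half, len(series) - half)])
    v_bin = np.digitize(centers, np.quantile(centers, [1 / 3, 2 / 3]))
    f_bin = np.digitize(flucts, np.quantile(flucts, [1 / 3, 2 / 3]))
    return 3 * v_bin + f_bin  # 9 joint (value, fluctuation) states

def transition_matrix(states, n_states=9):
    """Row-normalized count matrix of one-step transitions between states."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(rows, 1)

rng = np.random.default_rng(1)
series = rng.standard_normal(2000)  # stand-in for an order parameter trace
T = transition_matrix(fluctuation_states(series))
```

The point of the fluctuation label is visible here: two snapshots with the same order parameter value but different local variance land in different network nodes, so states with similar values but different dynamics are not merged.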
The use of experimental bending tests to more accurate numerical description of TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30 000 rot/min), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending under various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which removes elements once the failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.
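For the beam samples described above, the maximum tensile stress in three-point bending follows the standard formula σ = 3FL/(2bh²). A quick check for the 50×10×2 mm geometry; the 40 mm support span and 100 N load below are assumed for illustration, not taken from the study:

```python
def three_point_bending_stress(force_n, span_mm, width_mm, thickness_mm):
    """Maximum tensile stress at the outer fibre of a beam in three-point
    bending, sigma = 3*F*L / (2*b*h^2), returned in MPa (N/mm^2)."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

# Assumed 40 mm support span and 100 N load for a 50 x 10 x 2 mm sample.
sigma_mpa = three_point_bending_stress(100.0, 40.0, 10.0, 2.0)  # 150.0 MPa
```

The strong h² dependence explains why thin coated beams reach coating-cracking stresses at modest loads.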
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
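The dominant refractive effect of a flat-port housing can be sketched with Snell's law: to first order, a flat port magnifies viewing angles by the refractive index of water (about 1.34), which is why it effectively lengthens the focal length. This is a simplified illustration (the thin glass port is ignored), not a full underwater calibration model:

```python
import math

def in_air_angle(theta_water, n_water=1.34, n_air=1.0):
    """Snell's law at a flat port: a ray travelling at theta_water (radians)
    to the port normal in water refracts to a larger angle in air."""
    s = n_water / n_air * math.sin(theta_water)
    if s >= 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.asin(s)

# For small angles the angular magnification approaches n_water (~1.34).
ratio = in_air_angle(0.05) / 0.05
```

Explicit calibration models refine this picture with the port geometry and water properties, while implicit approaches absorb the effect into the camera parameters.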
Theoretical investigation of gas-surface interactions
NASA Technical Reports Server (NTRS)
Lee, Timothy J.
1989-01-01
Four reprints are presented from four projects which are to be published in a refereed journal. Two are of interest to us and are presented herein. One is a description of a very detailed theoretical study of four anionic hydrogen bonded complexes. The other is a detailed study of the first generally reliable diagnostic for determining the quality of results that may be expected from single reference based electron correlation methods.
NASA Astrophysics Data System (ADS)
Benisti, Didier; Morice, Olivier; Gremillet, Laurent; Siminos, Evangelos; Strozzi, David
2010-11-01
Using a nonlinear kinetic analysis, we provide a theoretical description for the nonlinear Landau damping rate, frequency, and group velocity of a slowly varying electron plasma wave (EPW). In particular, we show that the nonlinear group velocity of the EPW is not the derivative of its frequency with respect to its wave number, and we discuss previous results on the nonlinear Landau damping rate and on the nonlinear frequency shift of the EPW. Our theoretical predictions are moreover very carefully compared against results from Vlasov simulations of stimulated Raman scattering (SRS), and an excellent agreement is found between numerical and theoretical results. We use the previous analysis to derive envelope equations modeling SRS in the nonlinear kinetic regime. These equations provide very accurate predictions regarding threshold intensities for SRS and the growth time of SRS beyond threshold, provided that one uses the ansatz of self-optimization that we detail. Finally, we discuss saturation of SRS and, in particular, we derive growth rates for sidebands using a spectral method.
ERIC Educational Resources Information Center
Brashers, H. C.
1968-01-01
As the inexperienced writer becomes aware of the issues involved in the composition of effective descriptive prose, he also develops a consistent control over his materials. The persona he chooses, if coherently thought out, can function as an index of many choices, helping him to manipulate the tone, intent, and mood of this style; to regulate…
Andrew integrated reservoir description
Todd, S.P.
1996-12-31
The Andrew field is an oil and gas accumulation in Palaeocene deep marine sands in the Central North Sea. It is currently being developed with mainly horizontal oil producers. Because of the field's relatively small reserves (mean 118 mmbbls), the performance of each of the 10 or so horizontal wells is highly important. Reservoir description work at sanction time concentrated on supporting the case that the field could be developed commercially with the minimum number of wells. The present Integrated Reservoir Description (IRD) is focussed on delivering the next level of detail that will impact the understanding of the local reservoir architecture and dynamic performance of each well. Highlights of Andrew IRD Include: (1) Use of a Reservoir Uncertainty Statement (RUS) developed at sanction time to focus the descriptive effort of both asset, support and contract petrotechnical staff, (2) High resolution biostratigraphic correlation to support confident zonation of the reservoir, (3) Detailed sedimentological analysis of the core including the use of dipmeter to interpret channel/sheet architecture to provide new insights into reservoir heterogeneity; (4) Integrated petrographical and petrophysical investigation of the controls on Sw-Height and relative permeability of water; (5) Fluids description using oil geochemistry and Residual Salt Analysis Sr isotope studies. Andrew IRD has highlighted several important risks to well performance, including the influence of more heterolithic intervals on gas breakthrough and the controls on water coning exerted by suppressed water relative permeability in the transition zone.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
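The two-temperature calibration described above amounts to interpolating the calibration slope linearly in temperature and dividing the measured thermocouple output by the interpolated slope. The calibration numbers below are hypothetical illustrations, not the paper's data:

```python
def slope_at(temp_c, t1, s1, t2, s2):
    """Linearly interpolate the calibration slope (uV per MPa) between
    calibrations performed at two temperatures t1 and t2 (deg C)."""
    return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)

def water_potential_mpa(output_uv, temp_c, t1, s1, t2, s2):
    """Convert a thermocouple output (uV) to water potential (MPa)
    using the temperature-corrected calibration slope."""
    return output_uv / slope_at(temp_c, t1, s1, t2, s2)

# Hypothetical calibration: 0.40 uV/MPa at 15 C and 0.60 uV/MPa at 35 C.
psi = water_potential_mpa(-1.0, 25.0, 15.0, 0.40, 35.0, 0.60)  # -2.0 MPa
```

This also illustrates the paper's practical point: once the slope's temperature dependence is modeled, a dewpoint hygrometer need only be calibrated at a single reference temperature.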
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
On the Accurate Prediction of CME Arrival At the Earth
NASA Astrophysics Data System (ADS)
Zhang, Jie; Hess, Phillip
2016-07-01
We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of the CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and the sheath separately. These measurements are then used to constrain a drag-based model, which is improved by including a height dependence of the drag coefficient. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare with actual in situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours, with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our accurate prediction method will be discussed.
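The constant-coefficient drag-based model that the abstract builds on (before the height-dependent modification introduced in this work) can be integrated numerically in a few lines. The initial conditions and γ below are illustrative values in the typically quoted range, not the paper's fits:

```python
AU_KM = 1.496e8    # astronomical unit in km
RSUN_KM = 6.957e5  # solar radius in km

def dbm_arrival_days(r0_rsun=20.0, v0_kms=1000.0, w_kms=400.0,
                     gamma_per_km=0.2e-7, dt_s=60.0):
    """Integrate the constant-gamma drag-based model
    dv/dt = -gamma * (v - w) * |v - w| from r0 out to 1 AU and return
    the transit time in days (w is the ambient solar wind speed)."""
    r, v, t = r0_rsun * RSUN_KM, v0_kms, 0.0
    while r < AU_KM:
        v += -gamma_per_km * (v - w_kms) * abs(v - w_kms) * dt_s
        r += v * dt_s
        t += dt_s
    return t / 86400.0

# A fast CME decelerating toward a slower ambient wind arrives in a few days.
days = dbm_arrival_days()
```

Making γ a function of height, as the abstract describes, simply replaces the constant `gamma_per_km` inside the loop with a value evaluated at the current `r`.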
Information Theoretic Causal Coordination
2013-09-12
In his 1969 paper, Clive Granger, British economist and Nobel laureate, proposed a statistical definition of causality between stochastic processes. ... We showed that the directed information, an information theoretic quantity, quantifies Granger causality. We also explored a more pessimistic setup... Final Technical Report. Project Title: Information Theoretic Causal Coordination. AFOSR Award Number: AF FA9550-10-1-0345. Reporting Period: July 15
Theoretical and computational chemistry.
Meuwly, Markus
2010-01-01
Computer-based and theoretical approaches to chemical problems can provide atomistic understanding of complex processes at the molecular level. Examples ranging from rates of ligand-binding reactions in proteins to structural and energetic investigations of diastereomers relevant to organo-catalysis are discussed in the following. They highlight the range of application of theoretical and computational methods to current questions in chemical research.
Reasoning about scene descriptions
DiManzo, M.; Adorni, G.; Giunchiglia, F.
1986-07-01
When a scene is described by means of natural language sentences, many details are usually omitted, because they are not in the focus of the conversation. Moreover, natural language is not the best tool to define precisely positions and spatial relationships. The process of interpreting ambiguous statements and inferring missing details involves many types of knowledge, from linguistics to physics. This paper is mainly concerned with the problem of modeling the process of understanding descriptions of static scenes. The specific topics covered by this work are the analysis of the meaning of spatial prepositions, the problem of the reference system and dimensionality, the activation of expectations about unmentioned objects, the role of default knowledge about object positions and its integration with contextual information sources, and the problem of space representation. The issue of understanding dynamic scenes descriptions is briefly approached in the last section.
Spacelab J experiment descriptions
Miller, T.Y.
1993-08-01
Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm. Separate abstracts have been prepared for articles from this report.
Spacelab J experiment descriptions
NASA Technical Reports Server (NTRS)
Miller, Teresa Y. (Editor)
1993-01-01
Brief descriptions of the experiment investigations for the Spacelab J Mission which was launched from the Kennedy Space Center aboard the Endeavour in Sept. 1992 are presented. Experiments cover the following: semiconductor crystals; single crystals; superconducting composite materials; crystal growth; bubble behavior in weightlessness; microgravity environment; health monitoring of Payload Specialists; cultured plant cells; effect of low gravity on calcium metabolism and bone formation; and circadian rhythm.
Management control system description
Bence, P. J.
1990-10-01
This Management Control System (MCS) description describes the processes used to manage the cost and schedule of work performed by Westinghouse Hanford Company (Westinghouse Hanford) for the US Department of Energy, Richland Operations Office (DOE-RL), Richland, Washington. Westinghouse Hanford will maintain and use formal cost and schedule management control systems, as presented in this document, in performing work for the DOE-RL. This MCS description is a controlled document and will be modified or updated as required. This document must be approved by the DOE-RL; thereafter, any significant change will require DOE-RL concurrence. Westinghouse Hanford is the DOE-RL operations and engineering contractor at the Hanford Site. Activities associated with this contract (DE-AC06-87RL10930) include operating existing plant facilities, managing defined projects and programs, and planning future enhancements. This document is designed to comply with Section I-13 of the contract by providing a description of Westinghouse Hanford's cost and schedule control systems used in managing the above activities. 5 refs., 22 figs., 1 tab.
Theoretical dissociation energies for ionic molecules
NASA Technical Reports Server (NTRS)
Langhoff, S. R.; Bauschlicher, C. W., Jr.; Partridge, H.
1986-01-01
Ab initio calculations at the self-consistent-field and singles plus doubles configuration-interaction level are used to determine accurate spectroscopic parameters for most of the alkali and alkaline-earth fluorides, chlorides, oxides, sulfides, hydroxides, and isocyanides. Numerical Hartree-Fock (NHF) calculations are performed on selected systems to ensure that the extended Slater basis sets employed for the diatomic systems are near the Hartree-Fock limit. Extended Gaussian basis sets of at least triple-zeta plus double polarization quality are employed for the triatomic systems. With this model, correlation effects are relatively small, but invariably increase the theoretical dissociation energies. The importance of correlating the electrons on both the anion and the metal is discussed. The theoretical dissociation energies are critically compared with the literature to rule out disparate experimental values. Theoretical (sup 2)Pi - (sup 2)Sigma (sup +) energy separations are presented for the alkali oxides and sulfides.
Computational and theoretical methods for protein folding.
Compiani, Mario; Capriotti, Emidio
2013-12-03
A computational approach is essential whenever the complexity of the process under study is such that direct theoretical or experimental approaches are not viable. This is the case for protein folding, for which a significant amount of data are being collected. This paper reports on the essential role of in silico methods and the unprecedented interplay of computational and theoretical approaches, which is a defining point of the interdisciplinary investigations of the protein folding process. Besides giving an overview of the available computational methods and tools, we argue that computation plays not merely an ancillary role but has a more constructive function in that computational work may precede theory and experiments. More precisely, computation can provide the primary conceptual clues to inspire subsequent theoretical and experimental work even in a case where no preexisting evidence or theoretical frameworks are available. This is cogently manifested in the application of machine learning methods to come to grips with the folding dynamics. These close relationships suggested complementing the review of computational methods within the appropriate theoretical context to provide a self-contained outlook of the basic concepts that have converged into a unified description of folding and have grown in a synergic relationship with their computational counterpart. Finally, the advantages and limitations of current computational methodologies are discussed to show how the smart analysis of large amounts of data and the development of more effective algorithms can improve our understanding of protein folding.
Measuring Joint Stimulus Control by Complex Graph/Description Correspondences
ERIC Educational Resources Information Center
Fields, Lanny; Spear, Jack
2012-01-01
Joint stimulus control occurs when responding is determined by the correspondence of elements of a complex sample and a complex comparison stimulus. In academic settings, joint stimulus control of behavior would be evidenced by the selection of an accurate description of a complex graph in which each element of a graph corresponded to particular…
The Genre of Technical Description.
ERIC Educational Resources Information Center
Jordan, Michael P.
1986-01-01
Summarizes recent research into systems of lexical and grammatical cohesion in technical description. Discusses various methods by which technical writers "re-enter" the topic of description back into the text in successive sentences. (HTH)
Accurate three-dimensional documentation of distinct sites
NASA Astrophysics Data System (ADS)
Singh, Mahesh K.; Dutta, Ashish; Subramanian, Venkatesh K.
2017-01-01
One of the most critical aspects of documenting distinct sites is acquiring detailed and accurate range information. Several three-dimensional (3-D) acquisition techniques are available, but each has its own limitations. This paper presents a range data fusion method with the aim of enhancing the descriptive contents of the entire 3-D reconstructed model. A kernel function is introduced for supervised classification of the range data using a kernelized support vector machine. The classification method is based on the local saliency features of the acquired range data. The range data acquired from heterogeneous range sensors are transformed into a defined common reference frame. Based on the segmentation criterion, the fusion of range data is performed by integrating finer regions of range data acquired from a laser range scanner with the coarser regions of Kinect's range data. After fusion, the Delaunay triangulation algorithm is applied to generate the highly accurate, realistic 3-D model of the scene. Finally, experimental results show the robustness of the proposed approach.
[Once again: theoretical pathology].
Bleyl, U
2010-07-01
Theoretical pathology refers to the attempt to reintroduce methodical approaches from the humanities, philosophical logic, and "gestalt philosophy" into medical research and pathology. Diseases, in particular disease entities and more complex polypathogenetic mechanisms of disease, have a "gestalt quality" due to the significance of their pathophysiologic coherence: they have a "gestalt". The research group Theoretical Pathology at the Academy of Science in Heidelberg is credited with having revitalized the philosophical notion of "gestalt" for morphological and pathological diagnostics. Gestalt means interrelated schemes of pathophysiological significance in the mind of the diagnostician. In pathology, additive and associative diagnostics are simply not possible without considering the notion of synthetic entities in Kant's logic.
Using Scaling for accurate stochastic macroweather forecasts (including the "pause")
NASA Astrophysics Data System (ADS)
Lovejoy, Shaun; del Rio Amador, Lenin
2015-04-01
At scales corresponding to the lifetimes of structures of planetary extent (about 5 - 10 days), atmospheric processes undergo a drastic "dimensional transition" from high-frequency weather to lower-frequency macroweather processes. While conventional GCMs generally reproduce well both the transition and the corresponding (scaling) statistics, due to sensitive dependence on initial conditions the role of the weather-scale processes is to provide random perturbations to the macroweather processes. The main problem with GCMs is thus that their long-term (control run, unforced) statistics converge to the GCM climate, which is somewhat different from the real climate. This motivates exploiting the empirical scaling properties and past data to build a stochastic model instead. It turns out that macroweather intermittency is typically low (the multifractal corrections are small), so that the processes can be approximated by fractional Gaussian noise (fGn) processes whose memory can be enormous. For example, for annual forecasts, and using the observed global temperature exponent, even 50 years of global temperature data would only allow us to exploit 90% of the available memory (for ocean regions, the figure increases to 600 years). The only complication is that anthropogenic effects dominate the global statistics at time scales beyond about 20 years. However, these are easy to remove using the CO2 forcing as a linear surrogate for all the anthropogenic effects. Using this theoretical framework, we show how to make accurate stochastic macroweather forecasts. We illustrate this on monthly and annual scale series of global and northern hemisphere surface temperatures (including nearly perfect hindcasts of the "pause" in the warming since 1998). We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow. These scaling hindcasts - using a single effective climate sensitivity and single scaling exponent - are…
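The long memory that such forecasts exploit can be made concrete. The snippet below is a minimal illustration (not the authors' code): it evaluates the standard autocovariance of unit-variance fractional Gaussian noise at lag k; for an exponent H > 0.5 the correlations decay as a slow power law, so even very old data retain predictive value. The value H = 0.9 used in the check is illustrative, not a value from the abstract.

```python
def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at integer lag k.

    gamma(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})
    For H > 0.5 this decays like k^{2H-2}, i.e. a slow power law (long memory);
    for H = 0.5 it vanishes for k >= 1 (white noise).
    """
    k = abs(k)
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))
```

For H = 0.9 the ratio gamma(100)/gamma(50) is close to 2^(2H-2) ≈ 0.87, confirming the power-law decay rather than the exponential decay of short-memory processes.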
MCO Monitoring activity description
SEXTON, R.A.
1998-11-09
Spent Nuclear Fuel remaining from Hanford's N-Reactor operations in the 1970s has been stored under water in the K-Reactor Basins. This fuel will be repackaged, dried, and stored in a new facility in the 200E Area. The safety basis for this process of retrieval, drying, and interim storage of the spent fuel has been established. The monitoring of MCOs in dry storage is a currently identified issue in the SNF Project. This plan outlines the key elements of the proposed monitoring activity. Other fuel stored in the K-Reactor Basins, including SPR fuel, will have other monitoring considerations and is not addressed by this activity description.
Three Approaches to Descriptive Research.
ERIC Educational Resources Information Center
Svensson, Lennart
This report compares three approaches to descriptive research, focusing on the kinds of descriptions developed and on the methods used to develop the descriptions. The main emphasis in all three approaches is on verbal data. In these approaches the importance of interpretation and its intuitive nature are emphasized. The three approaches, however,…
An accurate metric for the spacetime around rotating neutron stars.
NASA Astrophysics Data System (ADS)
Pappas, George
2017-01-01
The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.
NASA Astrophysics Data System (ADS)
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2014-11-01
A theoretical description of the low-temperature phase of short-range spin glasses has remained elusive for decades. In particular, it is unclear if theories that assert a single pair of pure states, or theories that are based on infinitely many pure states—such as replica symmetry breaking—best describe realistic short-range systems. To resolve this controversy, the three-dimensional Edwards-Anderson Ising spin glass in thermal boundary conditions is studied numerically using population annealing Monte Carlo. In thermal boundary conditions all eight combinations of periodic vs antiperiodic boundary conditions in the three spatial directions appear in the ensemble with their respective Boltzmann weights, thus minimizing finite-size corrections due to domain walls. From the relative weighting of the eight boundary conditions for each disorder instance a sample stiffness is defined, and its typical value is shown to grow with system size according to a stiffness exponent. An extrapolation to the large-system-size limit is in agreement with a description that supports the droplet picture and other theories that assert a single pair of pure states. The results are, however, incompatible with the mean-field replica symmetry breaking picture, thus highlighting the need to go beyond mean-field descriptions to accurately describe short-range spin-glass systems.
Interactive Multimedia Animation with Macromedia Flash in Descriptive Geometry Teaching
ERIC Educational Resources Information Center
Garcia, Ramon Rubio; Quiros, Javier Suarez; Santos, Ramon Gallego; Gonzalez, Santiago Martin; Fernanz, Samuel Moran
2007-01-01
The growing concern of teachers to improve their theoretical classes together with the revolution in content and methods brought about by the New Information Technologies combine to offer students a new more attractive, efficient and agreeable form of learning. The case of Descriptive Geometry (DG) is particularly special, since the main purpose…
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 list published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops, and professional presentations.
Diborane, dialane, and digallane: Accurate geometries and vibrational frequencies
Magers, D.H.; Hood, R.B.; Leszczynski, J.
1994-12-31
Optimum equilibrium geometries, harmonic vibrational frequencies, and infrared intensities within the double harmonic approximation are computed for diborane, B{sub 2}H{sub 6}, dialane, Al{sub 2}H{sub 6}, and digallane, Ga{sub 2}H{sub 6}, at both the SCF level of theory and second-order perturbation theory [E(2)] using three large basis sets: 6-311G(d,p), 6-311G(2d,2p), and 6-311G(2df,2p). In particular, the results obtained with the latter basis set make the present work the first study to include f-type polarization functions in a systematic investigation of the molecular structure and properties of all three molecules in the series. Because of the good agreement of the present theoretical results with experimental data and with previous theoretical studies which employed a higher treatment of electron correlation, this study serves to show that large basis sets can in part compensate for the lack of a more advanced treatment of electron correlation in these electron-deficient systems. In addition, this study establishes the level of basis set needed for future work on these systems, including a thorough description of the total electronic density at a correlated level.
Accurate oscillator strengths for interstellar ultraviolet lines of Cl I
NASA Technical Reports Server (NTRS)
Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.
1993-01-01
Analyses of the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for the 1088 and 1097 A lines represent the first laboratory measurements for these lines; the values are f(1088) = 0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths of the 1088 and 1097 A lines in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.
Accurate superimposition of perimetry data onto fundus photographs.
Bek, T; Lund-Andersen, H
1990-02-01
A technique for accurate superimposition of computerized perimetry data onto the corresponding retinal locations seen on fundus photographs was developed. The technique was designed to take into account: 1) that the photographic field of view of the fundus camera varies with ametropia-dependent camera focusing, 2) possible distortion by the fundus camera, and 3) that corrective lenses employed during perimetry magnify or minify the visual field. The technique allowed an overlay of perimetry data of the central 60 degrees of the visual field onto fundus photographs with an accuracy of 0.5 degree. The correlation of localized retinal morphology to localized retinal function was therefore limited by the spatial resolution of the computerized perimetry, which was 2.5 degrees in the Dicon AP-2500 perimeter employed for this study. The theoretical assumptions of the technique were confirmed by comparing visual field records to fundus photographs from patients with morphologically well-defined non-functioning lesions in the retina.
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
A new approach to compute accurate velocity of meteors
NASA Astrophysics Data System (ADS)
Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William
2016-10-01
The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of the meteors' velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the computed meteoroid orbits and the theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is therefore imperative to find ways to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, although the MPF is by far the best method for solving for the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.
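To make the velocity-fitting idea concrete, here is a minimal least-squares sketch in the spirit of multi-parameter fitting, though far simpler than the CABERNET implementation: it fits an assumed constant-deceleration propagation model x(t) = x0 + v0 t + a t^2/2 to along-track positions by solving the normal equations. The model, function names, and all numbers are illustrative, not from the paper.

```python
def fit_const_deceleration(ts, xs):
    """Least-squares fit of x(t) = x0 + v0*t + 0.5*a*t^2; returns [x0, v0, a].

    Solves the 3x3 normal equations (A^T A) p = A^T x by Gaussian
    elimination with partial pivoting.  ts: times (s), xs: positions (m).
    """
    A = [[1.0, t, 0.5 * t * t] for t in ts]          # design matrix rows
    M = [[sum(A[i][r] * A[i][c] for i in range(len(ts))) for c in range(3)]
         for r in range(3)]                          # A^T A
    b = [sum(A[i][r] * xs[i] for i in range(len(ts))) for r in range(3)]  # A^T x
    for col in range(3):                             # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    p = [0.0, 0.0, 0.0]                              # back substitution
    for r in (2, 1, 0):
        p[r] = (b[r] - sum(M[r][c] * p[c] for c in range(r + 1, 3))) / M[r][r]
    return p
```

On noisy data the normal equations inherit the ill-conditioning the abstract warns about, which is why the full MPF problem requires careful choice of optimizer.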
Unification of theoretical approaches for epidemic spreading on complex networks.
Wang, Wei; Tang, Ming; Eugene Stanley, H; Braunstein, Lidia A
2017-03-01
Models of epidemic spreading on complex networks have attracted great attention among researchers in physics, mathematics, and epidemiology due to their success in predicting and controlling epidemic spreading in real-world settings. To understand the interplay between epidemic spreading and the topology of a contact network, several outstanding theoretical approaches have been developed. An accurate theoretical approach describing the spreading dynamics must take both the network topology and dynamical correlations into consideration, at the expense of increasing the complexity of the equations. In this short survey we unify the most widely used theoretical approaches for epidemic spreading on complex networks in terms of increasing complexity, including the mean-field, heterogeneous mean-field, quenched mean-field, dynamical message-passing, link percolation, and pairwise approximation approaches. We build connections among these approaches to provide new insights into developing an accurate theoretical approach to spreading dynamics on complex networks.
Unification of theoretical approaches for epidemic spreading on complex networks
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Stanley, H. Eugene; Braunstein, Lidia A.
2017-03-01
Models of epidemic spreading on complex networks have attracted great attention among researchers in physics, mathematics, and epidemiology due to their success in predicting and controlling epidemic spreading in real-world settings. To understand the interplay between epidemic spreading and the topology of a contact network, several outstanding theoretical approaches have been developed. An accurate theoretical approach describing the spreading dynamics must take both the network topology and dynamical correlations into consideration, at the expense of increasing the complexity of the equations. In this short survey we unify the most widely used theoretical approaches for epidemic spreading on complex networks in terms of increasing complexity, including the mean-field, heterogeneous mean-field, quenched mean-field, dynamical message-passing, link percolation, and pairwise approximation approaches. We build connections among these approaches to provide new insights into developing an accurate theoretical approach to spreading dynamics on complex networks.
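As a concrete instance of one of the simpler approaches in this hierarchy, the sketch below (an illustration, not the authors' code) solves the degree-based heterogeneous mean-field self-consistency equation for SIS dynamics on an uncorrelated network; in this approximation the epidemic threshold is lambda_c = mu*<k>/<k^2>. The degree distribution and rates used in the check are made up.

```python
def hmf_sis_steady_state(pk, lam, mu=1.0, max_iter=10000, tol=1e-12):
    """Degree-based (heterogeneous) mean-field steady state of SIS dynamics.

    pk: dict mapping degree k -> P(k); lam: infection rate; mu: recovery rate.
    Iterates the self-consistency equation for Theta, the probability that a
    randomly chosen edge points to an infected node, then returns the
    prevalence (fraction of infected nodes).
    """
    kmean = sum(k * p for k, p in pk.items())
    theta = 0.5  # initial guess
    for _ in range(max_iter):
        new = sum(k * p * lam * k * theta / (mu + lam * k * theta)
                  for k, p in pk.items()) / kmean
        if abs(new - theta) < tol:
            theta = new
            break
        theta = new
    return sum(p * lam * k * theta / (mu + lam * k * theta) for k, p in pk.items())
```

For a k-regular network the threshold reduces to lambda_c = mu/k, so with k = 6 an infection rate below 1/6 dies out while one well above it reaches a finite prevalence.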
NASA Technical Reports Server (NTRS)
Papageorgiou, Demetrios T.
1996-01-01
In this article we review recent results on the breakup of cylindrical jets of a Newtonian fluid. Capillary forces provide the main driving mechanism and our interest is in the description of the flow as the jet pinches to form drops. The approach is to describe such topological singularities by constructing local (in time and space) similarity solutions from the governing equations. This is described for breakup according to the Euler, Stokes or Navier-Stokes equations. It is found that slender jet theories can be applied when viscosity is present, but for inviscid jets the local shape of the jet at breakup is most likely of a non-slender geometry. Systems of one-dimensional models of the governing equations are solved numerically in order to illustrate these differences.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
NASA Astrophysics Data System (ADS)
Dunajewski, Adam; Dusza, Jacek J.; Rosado Muñoz, Alfredo
2014-11-01
The article presents a proposal for the description of human gait as a periodic and symmetric process. First, the data for the research were obtained in the Laboratory of Group SATI in the School of Engineering of the University of Valencia. Then, the periodic model, the Mean Double Step (MDS), was built. Finally, on the basis of the MDS, the symmetrical models, the Left Mean Double Step and the Right Mean Double Step (LMDS and RMDS), could be created. The method of various functional extensions was used. Symmetrical gait models can be used to calculate coefficients of asymmetry at any time or phase of the gait. In this way it is possible to create an asymmetry function which better describes human gait dysfunction. The paper also describes an algorithm for calculating the symmetric models and shows exemplary results based on the experimental data.
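A symmetric-model description of this kind naturally yields asymmetry coefficients. The sketch below is a generic illustration, not the paper's algorithm: it averages equal-length segmented gait cycles into a mean cycle (in the spirit of a Mean Double Step) and computes the widely used Robinson symmetry index between left and right gait parameters.

```python
def mean_cycle(cycles):
    """Average a list of equal-length gait cycles sample-by-sample."""
    return [sum(samples) / len(samples) for samples in zip(*cycles)]

def symmetry_index(left, right):
    """Robinson symmetry index (%) per parameter pair.

    SI = 200 * |L - R| / (L + R); 0 means perfect left/right symmetry.
    """
    return [200.0 * abs(l - r) / (l + r) for l, r in zip(left, right)]
```

Applied at each phase of the gait cycle, such an index gives an asymmetry function over the whole stride rather than a single summary number.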
YUCCA MOUNTAIN SITE DESCRIPTION
A.M. Simmons
2004-04-16
The ''Yucca Mountain Site Description'' summarizes, in a single document, the current state of knowledge and understanding of the natural system at Yucca Mountain. It describes the geology; geochemistry; past, present, and projected future climate; regional hydrologic system; and flow and transport within the unsaturated and saturated zones at the site. In addition, it discusses factors affecting radionuclide transport, the effect of thermal loading on the natural system, and tectonic hazards. The ''Yucca Mountain Site Description'' is broad in nature. It summarizes investigations carried out as part of the Yucca Mountain Project since 1988, but it also includes work done at the site in earlier years, as well as studies performed by others. The document has been prepared under the Office of Civilian Radioactive Waste Management quality assurance program for the Yucca Mountain Project. Yucca Mountain is located in Nye County in southern Nevada. The site lies in the north-central part of the Basin and Range physiographic province, within the northernmost subprovince commonly referred to as the Great Basin. The basin and range physiography reflects the extensional tectonic regime that has affected the region during the middle and late Cenozoic Era. Yucca Mountain was initially selected for characterization, in part, because of its thick unsaturated zone, its arid to semiarid climate, and the existence of a rock type that would support excavation of stable openings. In 1987, the United States Congress directed that Yucca Mountain be the only site characterized to evaluate its suitability for development of a geologic repository for high-level radioactive waste and spent nuclear fuel.
Older Adults’ Pain Descriptions
McDonald, Deborah Dillon
2008-01-01
The purpose of this study was to describe the types of pain information described by older adults with chronic osteoarthritis pain. Pain descriptions were obtained from older adults who participated in a posttest-only double-blind study testing how the phrasing of healthcare practitioners’ pain questions affected the amount of communicated pain information. The 207 community-dwelling older adults were randomized to respond to either the open-ended or the closed-ended pain question. They viewed and orally responded to a computer-displayed videotape of a practitioner asking them the respective pain question. All then viewed and responded to the general follow-up question, “What else can you tell me?” and, lastly, “What else can you tell me about your pain, aches, soreness or discomfort?” Audio-taped responses were transcribed and content analyzed by trained, independent raters using 16 a priori criteria from the American Pain Society (2002) Guidelines for the Management of Pain in Osteoarthritis, Rheumatoid Arthritis, and Juvenile Chronic Arthritis. Older adults described important but limited types of information, primarily about pain location, timing, and intensity. Pain treatment information was elicited after repeated questioning. Therefore, practitioners need to follow up older adults’ initial pain descriptions with pain questions that promote a more complete pain management discussion. Routine use of a multidimensional pain assessment instrument that measures information such as functional interference, current pain treatments, treatment effects, and side effects would be one way of ensuring a more complete pain management discussion with older adults. PMID:19706351
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Semiotic and Theoretic Control in Argumentation and Proof Activities
ERIC Educational Resources Information Center
Arzarello, Ferdinando; Sabena, Cristina
2011-01-01
We present a model to analyze the students' activities of argumentation and proof in the graphical context of Elementary Calculus. The theoretical background is provided by the integration of Toulmin's structural description of arguments, Peirce's notions of sign, diagrammatic reasoning and abduction, and Habermas' model for rational behavior.…
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
Active disturbance rejection control: methodology and theoretical analysis.
Huang, Yi; Xue, Wenchao
2014-07-01
The methodology of active disturbance rejection control (ADRC) and the progress of its theoretical analysis are reviewed in this paper. Several breakthroughs for the control of nonlinear uncertain systems, made possible by ADRC, are discussed. The key to employing ADRC, which is to accurately determine the "total disturbance" that affects the output of the system, is elucidated. The latest results in the theoretical analysis of ADRC-based control systems are introduced.
Theoretical Approaches to Nanoparticles
NASA Astrophysics Data System (ADS)
Kempa, Krzysztof
Nanoparticles can be viewed as wave resonators, where the waves involved are, for example, carrier waves, plasmon waves, and polariton waves. A few examples of successful theoretical treatments that follow this approach are given. In one, an effective medium theory of a nanoparticle composite is presented. In another, plasmon-polaritonic solutions make it possible to extend concepts of radio technology, such as the antenna and the coaxial transmission line, to the visible frequency range.
Theoretical Delay Time Distributions
NASA Astrophysics Data System (ADS)
Nelemans, Gijs; Toonen, Silvia; Bours, Madelon
2013-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of Type Ia supernova progenitors. We also compare the results of different research groups and conclude that, although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this alone cannot explain all the differences.
Accurate Guitar Tuning by Cochlear Implant Musicians
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within ±0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
Panorama of theoretical physics
NASA Astrophysics Data System (ADS)
Mimouni, J.
2012-06-01
We shall start this panorama of theoretical physics by giving an overview of physics in general, the branch of knowledge that has been taken since the scientific revolution as the archetype of the scientific discipline. We shall then proceed to show in what way theoretical physics from Newton to Maxwell, Einstein, Feynman and the like could, in all modesty, be considered the ticking heart of physics. By its special mode of inquiry and its tantalizing successes, it has captured the very spirit of the scientific method, and indeed it has been taken as a role model by other disciplines, all the way from the "hard" ones to the social sciences. We shall then review how much we know today of the world of matter, both in terms of its basic content and in the way it is structured. We will then present the dreams of today's theoretical physics as a way of penetrating into its psyche, discovering in this way its aspirations and longings in much the same way that a child's dreams tell us about their yearnings and cravings. Yet our understanding of matter has been going through a crisis of sorts in the past decades. As a necessary antidote, we shall thus discuss the pitfalls of dreams pushed too far….
Theoretical Developments in SUSY
NASA Astrophysics Data System (ADS)
Shifman, M.
2009-01-01
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.
An Accurate, Simplified Model of Intrabeam Scattering
Bane, Karl LF
2002-05-23
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high-energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η²_{x,y}/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the Bjorken-Mtingwa result.
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
On accurate determination of contact angle
NASA Technical Reports Server (NTRS)
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
Theoretical Modeling for Hepatic Microwave Ablation
Prakash, Punit
2010-01-01
Thermal tissue ablation is an interventional procedure increasingly being used for treatment of diverse medical conditions. Microwave ablation is emerging as an attractive modality for thermal therapy of large soft tissue targets in short periods of time, making it particularly suitable for ablation of hepatic and other tumors. Theoretical models of the ablation process are a powerful tool for predicting the temperature profile in tissue and resultant tissue damage created by ablation devices. These models play an important role in the design and optimization of devices for microwave tissue ablation. Furthermore, they are a useful tool for exploring and planning treatment delivery strategies. This review describes the status of theoretical models developed for microwave tissue ablation. It also reviews current challenges, research trends and progress towards development of accurate models for high temperature microwave tissue ablation. PMID:20309393
The Theoretical Instability Strip of V777 Her White Dwarfs
NASA Astrophysics Data System (ADS)
Van Grootel, V.; Fontaine, G.; Brassard, P.; Dupret, M.-A.
2017-03-01
We present a new theoretical investigation of the instability strip of V777 Her (DBV) white dwarfs. We apply a time-dependent convection (TDC) treatment to cooling models of DB and DBA white dwarfs. Using the spectroscopic calibration for the convective efficiency, ML2/α=1.25, we find a wide strip covering the range of effective temperature from 30,000 K down to about 22,000 K at log g = 8.0. This accounts very well for the empirical instability strip derived from a new accurate and homogeneous spectroscopic analysis of known pulsators. Our approach leads to an exact description of the blue edge and to a correct understanding of the onset and development of pulsational instabilities, similar to our results from TDC applied to ZZ Ceti white dwarfs in the recent past. We propose that, contrary to what is generally believed, there is practically no fuzziness on the boundaries of the V777 Her instability strip due to traces of hydrogen in the atmospheres of some of these helium-dominated-atmosphere stars. Contrary to the blue edge, the red edge provided by TDC computations is far too cool compared to the empirical one. A similar situation was observed for the ZZ Ceti stars as well. We hence test the energy leakage argument (i.e., the red edge occurs when the thermal timescale in the driving region becomes equal to the critical period beyond which gravity modes cease to exist), which successfully reproduced the red edge of ZZ Ceti white dwarfs. Based on this argument, the red edge is qualitatively well reproduced as indicated above. However, upon close inspection, it may be about 1000 K too cool compared to the empirical one, although the latter relies on a few objects only. We also test the hypothesis of including turbulent pressure in our TDC computations in order to provide an alternate physical mechanism to account for the red edge. First promising results are presented.
A gauge-theoretic approach to gravity
Krasnov, Kirill
2012-01-01
Einstein's general relativity (GR) is a dynamical theory of the space–time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang–Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach. PMID:22792040
Microgravity Environment Description Handbook
NASA Technical Reports Server (NTRS)
DeLombard, Richard; McPherson, Kevin; Hrovat, Kenneth; Moskowitz, Milton; Rogers, Melissa J. B.; Reckart, Timothy
1997-01-01
The Microgravity Measurement and Analysis Project (MMAP) at the NASA Lewis Research Center (LeRC) manages the Space Acceleration Measurement System (SAMS) and the Orbital Acceleration Research Experiment (OARE) instruments to measure the microgravity environment on orbiting space laboratories. These laboratories include the Spacelab payloads on the shuttle, the SPACEHAB module on the shuttle, the middeck area of the shuttle, and Russia's Mir space station. Experiments are performed in these laboratories to investigate scientific principles in the near-absence of gravity. The microgravity environment desired for most experiments would have zero acceleration across all frequency bands or a true weightless condition. This is not possible due to the nature of spaceflight where there are numerous factors which introduce accelerations to the environment. This handbook presents an overview of the major microgravity environment disturbances of these laboratories. These disturbances are characterized by their source (where known), their magnitude, frequency and duration, and their effect on the microgravity environment. Each disturbance is characterized on a single page for ease in understanding the effect of a particular disturbance. The handbook also contains a brief description of each laboratory.
Theoretical Astrophysics at Fermilab
NASA Technical Reports Server (NTRS)
2004-01-01
The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:
NASA Technical Reports Server (NTRS)
Mullan, Dermott J.
1987-01-01
Theoretical work on the atmospheres of M dwarfs has progressed along lines parallel to those followed in the study of other classes of stars. Such models have become increasingly sophisticated as improvements in opacities, in the equation of state, and in the treatment of convection were incorporated during the last 15 to 20 years. As a result, spectrophotometric data on M dwarfs can now be fitted rather well by current models. The various attempts at modeling M dwarf photospheres in purely thermal terms are summarized. Some extensions of these models to include the effects of microturbulence and magnetic inhomogeneities are presented.
Theoretical Optics: An Introduction
NASA Astrophysics Data System (ADS)
Römer, Hartmann
2005-02-01
Starting from basic electrodynamics, this volume provides a solid yet concise introduction to theoretical optics, covering topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes well beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students as well as a valuable reference for researchers.
Theoretical Aspects of Dromedaryfoil.
1977-11-01
Seginer were taken on a Yoshihara "A" supercritical airfoil. Steinle and Gross used a 64A010 airfoil. All the data points lie within the theoretical...experimental data that for the same airfoil, either 64A410 or 64A010 , the higher the angle of attack, the sooner the limiting pressure is reached. The...shock 13 Stivers, L.S., Jr., "Effects of Subsonic Mach Numbers on the Forces and Pressure Distributions on Four NACA 64A-Series Airfoil Sections at
Trusting Description: Authenticity, Accountability, and Archival Description Standards
ERIC Educational Resources Information Center
MacNeil, Heather
2009-01-01
It has been suggested that one of the purposes of archival description is to establish grounds for presuming the authenticity of the records being described. The article examines the implications of this statement by examining the relationship between and among authenticity, archival description, and archival accountability, assessing how this…
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
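The two key steps named in the abstract, a limited linear reconstruction followed by an upwind step, can be sketched for the scalar linear-advection case (not the full Euler solver described there). The minmod limiter below is the median of (a, 0, b), echoing the abstract's use of the median function to simplify the monotonicity constraint; the grid size and CFL number are illustrative.

```python
import numpy as np

def minmod(a, b):
    """Median of (a, 0, b): slope limiter enforcing monotonicity."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    """One MUSCL-type update for linear advection u_t + u_x = 0 (periodic grid).

    Sketch of reconstruction + upwind for the scalar linear case, under the
    assumption 0 <= cfl <= 1; the paper treats the full Euler equations.
    """
    # Reconstruction: limited slope in each cell from the two one-sided
    # differences.
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    # Value at the right face of each cell, time-centered over the half step:
    u_face = u + 0.5 * (1.0 - cfl) * du
    # Upwind step (positive wave speed): difference the face values.
    return u - cfl * (u_face - np.roll(u_face, 1))
```

With the minmod slope the update is total-variation diminishing, so a step profile is advected without creating new oscillations, which is the monotonicity-preservation property the abstract refers to.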
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken on the axis of the solid cylindrical element (housing) using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
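The first-order inertia correction that the new thermometer is compared against amounts to recovering the fluid temperature as T_fluid = T_ind + τ·dT_ind/dt. A minimal sketch with synthetic data follows; the time constant τ and the step-change scenario are assumed for illustration, not taken from the paper.

```python
import numpy as np

def correct_first_order(t, T_ind, tau):
    """Recover fluid temperature from a lagging thermometer reading,
    modelling the sensor as a first-order inertia device:
        tau * dT_ind/dt + T_ind = T_fluid.

    Sketch of the simple correction only (not the inverse space-marching
    method); tau is an assumed sensor time constant.
    """
    dTdt = np.gradient(T_ind, t)       # numerical time derivative
    return T_ind + tau * dTdt

# Synthetic check: a step change in fluid temperature produces the classic
# exponential thermometer response, which the correction should undo.
tau = 3.0                               # assumed time constant, s
t = np.linspace(0.0, 20.0, 2001)
T_fluid = 100.0                         # boiling water, deg C
T_ind = 20.0 + (T_fluid - 20.0) * (1.0 - np.exp(-t / tau))
T_rec = correct_first_order(t, T_ind, tau)
```

For this analytic response the correction is exact, so the recovered temperature sits at 100 °C long before the raw reading gets there; with noisy measurements the differentiation step is what limits the accuracy, which motivates the paper's alternative design.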
Does the Taylor Spatial Frame Accurately Correct Tibial Deformities?
Segal, Kira; Ilizarov, Svetlana; Fragomen, Austin T.; Ilizarov, Gabriel
2009-01-01
Background Optimal leg alignment is the goal of tibial osteotomy. The Taylor Spatial Frame (TSF) and the Ilizarov method enable gradual realignment of angulation and translation in the coronal, sagittal, and axial planes; hence the term six-axis correction. Questions/purposes We asked whether this approach would allow precise correction of tibial deformities. Methods We retrospectively reviewed 102 patients (122 tibiae) with tibial deformities treated with percutaneous osteotomy and gradual correction with the TSF. The proximal osteotomy group was subdivided into two subgroups to distinguish those with an intentional overcorrection of the mechanical axis deviation (MAD). The minimum followup after frame removal was 10 months (average, 48 months; range, 10–98 months). Results In the proximal osteotomy group, patients with varus and valgus deformities for whom the goal of alignment was neutral or overcorrection experienced accurate correction of MAD. In the proximal tibia, the medial proximal tibial angle improved from 80° to 89° in patients with a varus deformity and from 96° to 85° in patients with a valgus deformity. In the middle osteotomy group, all patients had less than 5° coronal plane deformity and 15 of 17 patients had less than 5° sagittal plane deformity. In the distal osteotomy group, the lateral distal tibial angle improved from 77° to 86° in patients with a valgus deformity and from 101° to 90° for patients with a varus deformity. Conclusions Gradual correction of all tibial deformities with the TSF was accurate and with few complications. Level of Evidence Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:19911244
Determining accurate distances to nearby galaxies
NASA Astrophysics Data System (ADS)
Bonanos, Alceste Zoe
2005-11-01
Determining accurate distances to nearby or distant galaxies is a conceptually very simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, using ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests but is opposed by the California Right to Life Education Fund, which believes it discredits abstinence-only material.
Benchmark data base for accurate van der Waals interaction in inorganic fragments
NASA Astrophysics Data System (ADS)
Brndiar, Jan; Stich, Ivan
2012-02-01
A range of inorganic materials, such as Sb, As, P, S, and Se, are built from van der Waals (vdW) interacting units forming the crystals, which neither the standard DFT GGA description nor cheap quantum chemistry methods, such as MP2, describe correctly. We use this database, for which we have performed highly accurate CCSD(T) calculations in the complete basis set limit, to test alternative approximate theories, such as those of Grimme [1], Langreth-Lundqvist [2], and Tkatchenko-Scheffler [3]. While none of these theories gives an entirely correct description, Grimme consistently provides more accurate results than Langreth-Lundqvist, which tends to overestimate the distances and underestimate the interaction energies for this set of systems. In contrast, Tkatchenko-Scheffler appears to yield a surprisingly accurate, computationally cheap, and convenient description, applicable also to systems with appreciable charge transfer. [1] S. Grimme, J. Comp. Chem. 27, 1787 (2006). [2] K. Lee et al., Phys. Rev. B 82, 081101(R) (2010). [3] A. Tkatchenko and M. Scheffler, Phys. Rev. Lett. 102, 073005 (2009).
Not Available
1991-01-01
This report discusses the following topics: consistent RHA-RPA for finite nuclei; vacuum polarization in a finite system; isovector correlations in the QHD description of nuclear matter; nuclear response functions in quasielastic electron scattering; charge density differences for nuclei near ²⁰⁸Pb in quantum hadrodynamics; excitation of the 10.957 MeV 0⁻, T=0 state in ¹⁶O by 400 MeV protons; deformed chiral nucleons; a new basis for exact vacuum calculations in 3 spatial dimensions; second-order processes in the (e,e′d) reaction; scalar and vector contributions to p̄p → Λ̄Λ and p̄p → Λ̄Σ⁰ + c.c.; and radiative capture of protons by light nuclei at low energies.
Physics in one dimension: theoretical concepts for quantum many-body systems.
Schönhammer, K
2013-01-09
Various sophisticated approximation methods exist for the description of quantum many-body systems. It was realized early on that the theoretical description can simplify considerably in one-dimensional systems and various exact solutions exist. The focus in this introductory paper is on fermionic systems and the emergence of the Luttinger liquid concept.
Dark matter: Theoretical perspectives
Turner, M.S. (Fermi National Accelerator Lab., Batavia, IL)
1993-06-01
The author both reviews and makes the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that (i) there are no dark-matter candidates within the "standard model" of particle physics, (ii) there are several compelling candidates within attractive extensions of the standard model of particle physics, and (iii) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are a very light axion (10⁻⁶-10⁻⁴ eV), a light neutrino (20-90 eV), and a heavy neutralino (10 GeV-2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. The author briefly mentions more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 119 refs.
Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of the 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths and are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of other models and correlations is discussed.
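In its simplest monochromatic form, the line-by-line calculation above reduces to a Beer-Lambert transmittance integrated over the band. A minimal sketch with hypothetical Lorentz line parameters (none taken from the paper's CO/CO2/NO data):

```python
import numpy as np

def lorentz_kappa(nu, centers, strengths, gamma):
    # Absorption coefficient as a sum of Lorentz-broadened lines.
    return sum(S * (gamma / np.pi) / ((nu - c)**2 + gamma**2)
               for c, S in zip(centers, strengths))

# Hypothetical line parameters for illustration only.
nu = np.linspace(2100.0, 2200.0, 20001)        # wavenumber grid, cm^-1
centers = np.array([2120.0, 2150.0, 2180.0])   # line centers, cm^-1
strengths = np.array([5.0, 8.0, 4.0])          # line strengths per path unit
gamma = 0.5                                    # Lorentz half-width, cm^-1
u = 1.0                                        # absorber amount (optical path)

kappa = lorentz_kappa(nu, centers, strengths, gamma)
tau = np.exp(-kappa * u)                       # spectral transmittance
A = np.sum(1.0 - tau) * (nu[1] - nu[0])        # total band absorptance, cm^-1
```

The band model techniques in the paper approximate exactly this integral without resolving every line.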
Accurate taxonomic assignment of short pyrosequencing reads.
Clemente, José C; Jansson, Jesper; Valiente, Gabriel
2010-01-01
Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
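The node-selection idea can be illustrated with a brute-force sketch: score every candidate node by the F-measure of its precision and recall against the set of leaf sequences matching the read. The toy taxonomy below is an illustrative assumption; the paper's contribution is doing this in time linear in the LCA subtree using a suffix array, which this sketch does not implement.

```python
# Toy taxonomy: child -> parent; s* are sequences, g* genera, f* a family.
parent = {"s1": "g1", "s2": "g1", "s3": "g2", "s4": "g2",
          "g1": "f1", "g2": "f1", "f1": "root"}

def leaves_under(node):
    # Collect the leaf sequences in the subtree rooted at node.
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    if node not in children:
        return {node}
    out = set()
    for c in children[node]:
        out |= leaves_under(c)
    return out

def best_assignment(matches):
    # Map a read to the node with the best F-measure of precision
    # (matching leaves / leaves under node) and recall
    # (matching leaves under node / all matching leaves).
    nodes = set(parent) | set(parent.values())
    best, best_f1 = None, -1.0
    for n in nodes:
        lv = leaves_under(n)
        tp = len(lv & matches)
        if tp == 0:
            continue
        prec, rec = tp / len(lv), tp / len(matches)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best_f1:
            best, best_f1 = n, f1
    return best
```

A read matching s1 and s2 maps to genus g1 (precision 1, recall 1) rather than to the larger clade f1, whose precision is only 0.5; the plain LCA rule would pick g1 here too, but diverges once matches are scattered.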
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shear can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions about the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images with short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias caused by source Poisson noise. Our noise treatment can be generalized to images made of multiple exposures through MultiDrizzle, as demonstrated with SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent accuracy even for images with a signal-to-noise ratio below 5, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large-scale galaxy surveys.
Sparse and accurate high resolution SAR imaging
NASA Astrophysics Data System (ADS)
Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian
2012-05-01
We investigate the use of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user-parameter-free, and it has been shown to reconstruct SAR images with excellent sidelobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. With the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high sidelobes of the FFT; with our approach, clear edges, boundaries, and textures of the vehicles are obtained.
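The core IAA iteration described above (weighted least squares with a signal-covariance weighting and no user parameters beyond the scan grid) can be sketched in one dimension. The grid size, signal amplitudes, and the small diagonal loading below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def iaa(y, A, n_iter=10):
    # Iterative Adaptive Approach: user-parameter-free weighted least
    # squares estimate of the amplitudes s in the model y ~ A @ s.
    N, _ = A.shape
    s = A.conj().T @ y / np.sum(np.abs(A)**2, axis=0)  # matched-filter init
    for _ in range(n_iter):
        p = np.abs(s)**2
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(N)    # model covariance
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        # s_k = a_k^H R^-1 y / (a_k^H R^-1 a_k) for every scan point k.
        s = (A.conj().T @ Ri_y) / np.einsum('nk,nk->k', A.conj(), Ri_A)
    return s

# Two complex sinusoids (amplitudes 5 and 3) in light noise,
# scanned on a frequency grid much finer than the data length.
rng = np.random.default_rng(0)
N, K = 32, 256
n = np.arange(N)
A = np.exp(2j * np.pi * np.outer(n, np.arange(K) / K)) / np.sqrt(N)
y = 5 * A[:, 40] + 3 * A[:, 90] + 0.01 * rng.standard_normal(N)
s = iaa(y, A)
```

The iteration sharpens the initial matched-filter spectrum into two narrow peaks at the true grid positions; in the paper this step is followed by the MAP sparsification.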
Logic synthesis from DDL description
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1980-01-01
The implementation of the DDLTRN and DDLSIM programs on the SEL-2 computer system is reported. These programs were tested with DDL descriptions of varying complexity. An algorithm to synthesize combinational logic using the cells available in the standard IC cell library was formulated. The algorithm is implemented as a FORTRAN program, and a description of the program is given.
Mission data system framework description
NASA Technical Reports Server (NTRS)
Meyer, K.; Rinker, G.; Dvorak, D.; Rosmussen, R.; Reinholttz, K.
2002-01-01
This document provides an overall description of the MDS Framework technology. Since the purpose is to provide a general reference for the frameworks, the descriptions are organized as a compendium. This document does not provide guidance on how the MDS technology should be used.
Descriptive Linear modeling of steady-state visual evoked response
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.; Kenner, K.
1986-01-01
A study is being conducted to explore use of the steady-state visually evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state visual evoked response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.
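The first modeling step mentioned above, unwrapping the phase of a measured frequency response, can be illustrated with the generic numerical approach; the paper argues for model-based unwrapping instead, which this sketch does not implement. A pure delay is used as the example system:

```python
import numpy as np

# A pure delay of tau seconds has frequency response H(f) = exp(-2j*pi*f*tau):
# the true phase falls linearly with f, but np.angle wraps it into (-pi, pi].
tau = 0.05                                # delay, s (illustrative value)
f = np.linspace(0.0, 100.0, 501)          # frequency grid, Hz
wrapped = np.angle(np.exp(-2j * np.pi * f * tau))
unwrapped = np.unwrap(wrapped)            # add 2*pi steps to remove jumps
```

Because the phase step between adjacent grid points is well under pi, `np.unwrap` recovers the linear phase -2*pi*f*tau exactly; with sparse or noisy frequency points, this is where a theoretical model of the response becomes valuable.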
An accurate equation of state for fluids and solids.
Parsafar, G A; Spohr, H V; Patey, G N
2009-09-03
A simple functional form for a general equation of state based on an effective near-neighbor pair interaction of an extended Lennard-Jones (12,6,3) type is given and tested against experimental data for a wide variety of fluids and solids. Computer simulation results for ionic liquids are used for further evaluation. For fluids, there appears to be no upper density limitation on the equation of state. The lower density limit for isotherms near the critical temperature is the critical density. The equation of state gives a good description of all types of fluids, nonpolar (including long-chain hydrocarbons), polar, hydrogen-bonded, and metallic, at temperatures ranging from the triple point to the highest temperature for which there is experimental data. For solids, the equation of state is very accurate for all types considered, including covalent, molecular, metallic, and ionic systems. The experimental pvT data available for solids does not reveal any pressure or temperature limitations. An analysis of the importance and possible underlying physical significance of the terms in the equation of state is given.
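The effective near-neighbor pair interaction underlying the equation of state combines inverse-12th, 6th, and 3rd power terms. A minimal sketch with illustrative coefficients (not fitted to any real substance, unlike the paper's parameterizations) locates the resulting near-neighbor minimum:

```python
import numpy as np

def u_ext_lj(r, a, b, c):
    # Extended Lennard-Jones (12,6,3)-type pair interaction: a repulsive
    # 1/r^12 core plus attractive 1/r^6 and longer-ranged 1/r^3 terms.
    return a / r**12 + b / r**6 + c / r**3

# Illustrative coefficients in reduced units (assumed, not from the paper).
a, b, c = 1.0, -2.0, -0.1
r = np.linspace(0.8, 3.0, 2201)
u = u_ext_lj(r, a, b, c)
r_min = r[np.argmin(u)]   # equilibrium near-neighbor distance for this fit
```

In the paper, coefficients of this form are fitted per isotherm and the equation of state follows from the resulting effective interaction.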
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models are studied: the one-equation eddy-viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation, and low-Reynolds-number correction are considered, and the influence of these corrections on flow properties is discussed by comparing with results obtained without them. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer, as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.
Theoretical Particle Astrophysics
Kamionkowski, Marc
2013-08-07
The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DOE's Cosmic Frontier (and also the Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing and forthcoming experiments, and by generating ideas for new next-generation experiments.
Reference module selection criteria for accurate testing of photovoltaic (PV) panels
Roy, J.N.; Gariki, Govardhan Rao; Nagalakhsmi, V.
2010-01-15
It is shown that the correct selection of reference modules is important for accurate testing of PV panels. A detailed description of the test methodology is given. Three different types of reference modules, having different I_SC (short-circuit current) and power (in Wp), have been used for this study. These reference modules have been calibrated by NREL. It has been found that, for accurate testing, both the I_SC and the power of the reference module must be similar to or exceed those of the modules under test. If the corresponding values of the test modules fall below a particular limit, the measurements may not be accurate. The experimental results obtained have been modeled using a simple equivalent-circuit model and the associated I-V equations. (author)
Accurate measurement of the specific absorption rate using a suitable adiabatic magnetothermal setup
NASA Astrophysics Data System (ADS)
Natividad, Eva; Castro, Miguel; Mediano, Arturo
2008-03-01
Accurate measurements of the specific absorption rate (SAR) of solids and fluids were obtained by a calorimetric method, using a special-purpose setup working under adiabatic conditions. Unlike in current nonadiabatic setups, the weak heat exchange with the surroundings allowed a straightforward determination of temperature increments, avoiding the usual initial-time approximations. The measurements performed on a commercial magnetite aqueous ferrofluid revealed a good reproducibility (4%). Also, the measurements on a copper sample allowed comparison between experimental and theoretical values: adiabatic conditions gave SAR values only 3% higher than the theoretical ones, while the typical nonadiabatic method underestimated SAR by 21%.
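Under adiabatic conditions the SAR follows directly from the heating slope, SAR = (C/m) dT/dt, with no correction for heat exchange with the surroundings. A sketch with illustrative numbers (assumed values, not the paper's measurements):

```python
import numpy as np

# Adiabatic calorimetric SAR: fit the temperature rise T(t) under applied
# field and scale the slope by heat capacity C (J/K) over the mass m (kg)
# of magnetic material. All values below are illustrative.
t = np.linspace(0.0, 60.0, 121)      # time, s
T = 25.0 + 0.02 * t                  # ideal adiabatic linear rise, K
C = 4.0                              # heat capacity of sample + holder, J/K
m = 1.0e-4                           # mass of magnetite, kg

slope = np.polyfit(t, T, 1)[0]       # dT/dt, K/s
sar = C * slope / m                  # specific absorption rate, W/kg
```

In a nonadiabatic setup the curve T(t) bends over as losses grow, so the initial-slope approximation underestimates SAR, which is the 21% effect reported above.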
Accurate Modeling of Scaffold Hopping Transformations in Drug Discovery.
Wang, Lingle; Deng, Yuqing; Wu, Yujie; Kim, Byungchan; LeBard, David N; Wandschneider, Dan; Beachy, Mike; Friesner, Richard A; Abel, Robert
2017-01-10
The accurate prediction of protein-ligand binding free energies remains a significant challenge of central importance in computational biophysics and structure-based drug design. Multiple recent advances including the development of greatly improved protein and ligand molecular mechanics force fields, more efficient enhanced sampling methods, and low-cost powerful GPU computing clusters have enabled accurate and reliable predictions of relative protein-ligand binding free energies through the free energy perturbation (FEP) methods. However, the existing FEP methods can only be used to calculate the relative binding free energies for R-group modifications or single-atom modifications and cannot be used to efficiently evaluate scaffold hopping modifications to a lead molecule. Scaffold hopping or core hopping, a very common design strategy in drug discovery projects, is critical not only in the early stages of a discovery campaign where novel active matter must be identified but also in lead optimization where the resolution of a variety of ADME/Tox problems may require identification of a novel core structure. In this paper, we introduce a method that enables theoretically rigorous, yet computationally tractable, relative protein-ligand binding free energy calculations to be pursued for scaffold hopping modifications. We apply the method to six pharmaceutically interesting cases where diverse types of scaffold hopping modifications were required to identify the drug molecules ultimately sent into the clinic. For these six diverse cases, the predicted binding affinities were in close agreement with experiment, demonstrating the wide applicability and the significant impact Core Hopping FEP may provide in drug discovery projects.
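FEP methods ultimately rest on free-energy estimators such as the Zwanzig relation, dF = -kT ln<exp(-dU/kT)>_0, averaged over samples from the reference state. A toy sketch on a pair of harmonic wells, where the exact answer is (1/2) kT ln(k1/k0), illustrates the estimator; this is generic FEP on an assumed model system, not the Core Hopping FEP method of the paper:

```python
import numpy as np

def fep_zwanzig(dU, kT=1.0):
    # Zwanzig free-energy perturbation estimator:
    #   dF = -kT * ln < exp(-dU / kT) >_0
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

# Toy alchemical change: harmonic well stiffened from k0 to k1 (kT = 1).
rng = np.random.default_rng(1)
kT, k0, k1 = 1.0, 1.0, 2.0
x = rng.normal(0.0, np.sqrt(kT / k0), 200_000)  # equilibrium samples, state 0
dU = 0.5 * (k1 - k0) * x**2                     # U1(x) - U0(x) per sample
dF = fep_zwanzig(dU, kT)
```

The estimate converges to 0.5 ln 2 here because the two states overlap well; the scaffold-hopping challenge discussed above is precisely that large topology changes destroy this overlap unless the transformation pathway is constructed carefully.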
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
LSM: perceptually accurate line segment merging
NASA Astrophysics Data System (ADS)
Hamid, Naila; Khan, Nazar
2016-11-01
Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
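The grouping-and-merging step can be sketched with fixed angular and spatial thresholds; the paper's actual mergeability criteria are unique and adaptive, so the constants and geometry below are illustrative assumptions:

```python
import numpy as np

def orientation(seg):
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1) % np.pi   # undirected angle in [0, pi)

def try_merge(s1, s2, max_angle=np.deg2rad(5.0), max_gap=5.0):
    # Merge two segments if they are nearly parallel and spatially close;
    # the merged segment spans the two farthest endpoints.
    da = abs(orientation(s1) - orientation(s2))
    if min(da, np.pi - da) > max_angle:           # angular proximity test
        return None
    p1, p2 = np.asarray(s1, float), np.asarray(s2, float)
    gap = np.linalg.norm(p1[:, None] - p2[None, :], axis=-1).min()
    if gap > max_gap:                             # spatial proximity test
        return None
    pts = np.vstack([p1, p2])
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return tuple(pts[i]), tuple(pts[j])

# Two collinear fragments of one edge merge into a single segment.
merged = try_merge(((0, 0), (10, 0)), ((12, 0.3), (25, 0.5)))
```

Repeating such pairwise merges within each angular/spatial group until no pair qualifies reproduces the overall loop described in the abstract.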
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
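For a single revolute joint of the kind claimed above, converting the encoder output into cylindrical coordinates of the probe tip amounts to scaling counted marks to an angle; the mark count, arm length, and tip offset below are illustrative assumptions, not values from the patent:

```python
import math

def probe_tip_cylindrical(counts, counts_per_rev, arm_length, tip_z):
    # Scale counted encoder marks to the joint rotation angle, then report
    # the probe tip in cylindrical coordinates (r, theta, z) about the
    # encoder wheel's axis of rotation.
    theta = 2.0 * math.pi * counts / counts_per_rev
    return arm_length, theta, tip_z

# A quarter turn of a 4096-mark encoder wheel with a 250 mm probe arm.
r, theta, z = probe_tip_cylindrical(counts=1024, counts_per_rev=4096,
                                    arm_length=250.0, tip_z=40.0)
```

The coordinate processing means in the claim performs essentially this conversion, with r fixed by the rigid probe arm.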
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Obtaining accurate translations from expressed sequence tags.
Wasmuth, James; Blaxter, Mark
2009-01-01
The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or better accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of the use of Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research on, and the equipment required for, extending the effective range of Fresnel diffraction systems is also described.
Accurate radio positions with the Tidbinbilla interferometer
NASA Technical Reports Server (NTRS)
Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.
1979-01-01
The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.
Magnetic ranging tool accurately guides replacement well
Lane, J.B.; Wesson, J.P.
1992-12-21
This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.
1979-01-01
The format for the sequence of species cards is (A10, 3E10.3, 10X, A10). These cards define the energy E (eV) and molecular weight MOLWT (g/mole) of the species NAME; P is the concentration. An arbitrarily long package of species cards allows specification of concentrations, energies, and molecular weights: NAME, P, E, MOLWT (A10, 3E10.3, 10X, A10).
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information can bring negative effects, especially when it is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve the efficiency in terms of capacity, oscillation and the gap from the system equilibrium.
Spherical shell model description of deformation and superdeformation
NASA Astrophysics Data System (ADS)
Poves, A.; Caurier, E.; Nowacki, F.; Zuker, A.
2003-04-01
Large-scale shell model calculations at present give a very accurate and comprehensive description of light and medium-light nuclei, especially when 0ℏω spaces are adequate. The full pf-shell calculations have made it possible to describe many collective features in a spherical shell model context. Calculations including two major oscillator shells have also proven able to describe superdeformed bands.
Exploiting spatial descriptions in visual scene analysis.
Ziegler, Leon; Johannsen, Katrin; Swadzba, Agnes; De Ruiter, Jan P; Wachsmuth, Sven
2012-08-01
The reliable automatic visual recognition of indoor scenes with complex object constellations using only sensor data is a nontrivial problem. In order to improve the construction of an accurate semantic 3D model of an indoor scene, we exploit human-produced verbal descriptions of the relative location of pairs of objects. This requires the ability to deal with different spatial reference frames (RF) that humans use interchangeably. In German, both the intrinsic and relative RF are used frequently, which often leads to ambiguities in referential communication. We assume that there are certain regularities that help in specific contexts. In a first experiment, we investigated how speakers of German describe spatial relationships between different pieces of furniture. This gave us important information about the distribution of the RFs used for furniture-predicate combinations, and by implication also about the preferred spatial predicate. The results of this experiment are compiled into a computational model that extracts partial orderings of spatial arrangements between furniture items from verbal descriptions. In the implemented system, the visual scene is initially scanned by a 3D camera system. From the 3D point cloud, we extract point clusters that suggest the presence of certain furniture objects. We then integrate the partial orderings extracted from the verbal utterances incrementally and cumulatively with the estimated probabilities about the identity and location of objects in the scene, and also estimate the probable orientation of the objects. This allows the system to significantly improve both the accuracy and richness of its visual scene representation.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
Topics in theoretical astrophysics
NASA Astrophysics Data System (ADS)
Li, Chao
This thesis presents a study of various interesting problems in theoretical astrophysics, including gravitational wave astronomy, gamma ray bursts and cosmology. Chapters 2, 3 and 4 explore prospects for detecting gravitational waves from stellar-mass compact objects spiraling into intermediate-mass black holes with ground-based observatories. It is shown in Chapter 2 that if the central body is not a black hole (BH) but its metric is stationary, axisymmetric, reflection symmetric and asymptotically flat, then the waves will likely be triperiodic, as for a BH. Chapters 3 and 4 show that the evolutions of the waves' three fundamental frequencies and of the complex amplitudes of their spectral components encode (in principle) details of the central body's metric, the energy and angular momentum exchange between the central body and the orbit, and the time-evolving orbital elements. Chapter 5 studies a local readout method to enhance the low-frequency sensitivity of detuned signal-recycling interferometers; we provide both the results of the improvement in quantum noise and the implementation details in Advanced LIGO. Chapter 6 applies and generalizes the causal Wiener filter to data analysis in macroscopic quantum mechanical experiments. With the causal Wiener filter method, we demonstrate that in theory we can bring the test masses in the interferometer to their quantum mechanical ground states. Chapter 7 presents some analytical solutions for expanding fireballs, the common theoretical model for gamma ray bursts and soft gamma ray repeaters. We apply our results to SGR 1806-20 and rediscover the mismatch between the model and the afterglow observations. Chapter 8 discusses the reconstruction of the scalar-field potential of the dark energy. We advocate direct reconstruction of the scalar field potential as a way to minimize prior assumptions on the shape, and thus minimize the introduction of bias in the derived potential. Chapter 9 discusses gravitational lensing modifications to cosmic
Accurate Automated Apnea Analysis in Preterm Infants
Vergales, Brooke D.; Paget-Brown, Alix O.; Lee, Hoshik; Guin, Lauren E.; Smoot, Terri J.; Rusin, Craig G.; Clark, Matthew T.; Delos, John B.; Fairchild, Karen D.; Lake, Douglas E.; Moorman, Randall; Kattwinkel, John
2017-01-01
Objective In 2006 the apnea of prematurity (AOP) consensus group identified inaccurate counting of apnea episodes as a major barrier to progress in AOP research. We compare nursing records of AOP to events detected by a clinically validated computer algorithm that detects apnea from standard bedside monitors. Study Design Waveform, vital sign, and alarm data were collected continuously from all very low-birth-weight infants admitted over a 25-month period, analyzed for central apnea, bradycardia, and desaturation (ABD) events, and compared with nursing documentation collected from charts. Our algorithm defined apnea as > 10 seconds if accompanied by bradycardia and desaturation. Results Of the 3,019 nurse-recorded events, only 68% had any algorithm-detected ABD event. Of the 5,275 algorithm-detected prolonged apnea events > 30 seconds, only 26% had nurse-recorded documentation within 1 hour. Monitor alarms sounded in only 74% of events of algorithm-detected prolonged apnea events > 10 seconds. There were 8,190,418 monitor alarms of any description throughout the neonatal intensive care unit during the 747 days analyzed, or one alarm every 2 to 3 minutes per nurse. Conclusion An automated computer algorithm for continuous ABD quantitation is a far more reliable tool than the medical record to address the important research questions identified by the 2006 AOP consensus group. PMID:23592319
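The algorithm's event definition above (a central pause longer than 10 seconds, accompanied by bradycardia and desaturation) can be sketched as a simple filter over timestamped events. This is a hypothetical illustration only: the function names, the event representation, and the 30-second association window are assumptions, not the authors' published implementation.

```python
def detect_apnea_events(pauses, bradycardia_times, desat_times,
                        min_pause_s=10.0, window_s=30.0):
    """Flag breathing pauses longer than min_pause_s that are accompanied,
    within window_s, by both a bradycardia and a desaturation event."""
    def near(times, start, end):
        return any(start - window_s <= t <= end + window_s for t in times)

    events = []
    for start, end in pauses:
        if end - start <= min_pause_s:
            continue  # too short to count as apnea
        if near(bradycardia_times, start, end) and near(desat_times, start, end):
            events.append((start, end))
    return events

# Toy data (seconds): three pauses, two bradycardias, one desaturation.
pauses = [(0.0, 8.0), (100.0, 115.0), (300.0, 340.0)]
print(detect_apnea_events(pauses, [110.0, 335.0], [118.0]))
# → [(100.0, 115.0)]  (the last pause lacks a nearby desaturation)
```

A production version would operate on continuous waveform and vital-sign streams rather than pre-extracted event lists.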
Krings, Thomas; Mauerhofer, Eric
2011-06-01
This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution.
Accurate ab initio vibrational energies of methyl chloride
Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2015-06-28
Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup HL}, and CBS-37{sup HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup HL} and CBS-37{sup HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.
Raman Spectroscopy as an Accurate Probe of Defects in Graphene
NASA Astrophysics Data System (ADS)
Rodriguez-Nieva, Joaquin; Barros, Eduardo; Saito, Riichiro; Dresselhaus, Mildred
2014-03-01
Raman spectroscopy has proved to be an invaluable non-destructive technique that allows us to obtain intrinsic information about graphene. Furthermore, defect-induced Raman features, namely the D and D' bands, have previously been used to assess the purity of graphitic samples. However, the quantitative study of the signatures of the different types of defects on the Raman spectra is still an open problem. Experimental results already suggest that the Raman intensity ratio ID /ID' may allow us to identify the nature of the defects. We study from a theoretical point of view the power and limitations of Raman spectroscopy in the study of defects in graphene. We derive an analytic model that describes the double resonance Raman process of disordered graphene samples, and which explicitly shows the role played by both the defect-dependent parameters and the experimentally controlled variables. We compare our model with previous Raman experiments, and use it to guide new ways in which defects in graphene can be accurately probed with Raman spectroscopy. We acknowledge support from NSF grant DMR1004147.
Accurate Completion of Medical Report on Diagnosing Death.
Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana
2015-01-01
Diagnosing death and issuing a Death Diagnosing Form (DDF) is an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the scrutiny of the general public. Diagnosing death is necessary to confirm true death, to exclude apparent death, and consequently to avoid burying a person alive, i.e. one who is only apparently dead. These expert-methodological guidelines, based on the most up-to-date medical evidence, have the goal of helping EMS physicians accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative, or when a person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have indeed been the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their own conscience and risk professional, and even criminal, sanctions.
Accurate measurement of liquid transport through nanoscale conduits
Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua
2016-01-01
Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding use of any pressure and flow sensors or any theoretical estimations the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404
Does a pneumotach accurately characterize voice function?
NASA Astrophysics Data System (ADS)
Walters, Gage; Krane, Michael
2016-11-01
A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.
Accurate method for computing correlated color temperature.
Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier
2016-06-27
For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions below 0.0012 K for light sources with CCTs ranging from 500 K to 10^{6} K.
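The numerical core of the method, a Newton iteration on a scalar objective f(T) using its first and second derivatives, can be sketched as follows. The objective below is a hypothetical stand-in: the real method minimizes the chromaticity distance to the Planckian locus built from 1 nm tristimulus summations (and seeds the iteration with Robertson's method), all of which is omitted here.

```python
def newton_minimize(f, t0, h=1.0, tol=1e-9, max_iter=100):
    """Minimize a smooth scalar objective f(t) by Newton's method,
    approximating first and second derivatives by central differences."""
    t = t0
    for _ in range(max_iter):
        f0, fp, fm = f(t), f(t + h), f(t - h)
        d1 = (fp - fm) / (2.0 * h)           # first derivative
        d2 = (fp - 2.0 * f0 + fm) / (h * h)  # second derivative
        if d2 == 0.0:
            break  # flat curvature: Newton step undefined
        step = d1 / d2
        t -= step
        if abs(step) < tol:
            break
    return t

# Hypothetical stand-in objective: a quadratic with its minimum at 6500 K.
print(newton_minimize(lambda t: (t - 6500.0) ** 2, 3000.0))  # → 6500.0
```

The step size h = 1 K suits the temperature scale of the problem; an analytic Hessian, as used in the paper, avoids the finite-difference approximation entirely.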
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes in the 500 km range. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Accurate methods for large molecular systems.
Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A
2009-07-23
Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.
Accurate equilibrium structures for piperidine and cyclohexane.
Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter
2015-03-05
Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
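The mixed estimation idea of fitting structural parameters concurrently to predicate values and to moments of inertia amounts, for a linearized model, to weighted least squares with the predicates appended as extra observations. A minimal sketch under that linearity assumption (the real fit is nonlinear in the structural parameters and is typically iterated with a Jacobian); all names here are illustrative:

```python
import numpy as np

def mixed_estimation(J, y, sigma_y, prior, sigma_prior):
    """Fit parameters p concurrently to observations (J @ p ≈ y, with
    uncertainties sigma_y) and to predicate values (p ≈ prior, with
    uncertainties sigma_prior) by weighted least squares."""
    n = len(prior)
    # Stack data rows and predicate rows, each scaled by 1/uncertainty.
    A = np.vstack([J / sigma_y[:, None], np.eye(n) / sigma_prior[:, None]])
    b = np.concatenate([y / sigma_y, prior / sigma_prior])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy problem: one precise "moment of inertia" datum p1 + p2 = 2,
# plus looser predicate values p1 ≈ 1, p2 ≈ 1.
p = mixed_estimation(np.array([[1.0, 1.0]]), np.array([2.0]),
                     np.array([0.1]), np.array([1.0, 1.0]),
                     np.array([1.0, 1.0]))
# both parameters are pulled to ≈ 1, consistent with datum and predicates
```

Tightening sigma_prior pulls the solution toward the ab initio predicates; tightening sigma_y pulls it toward the experimental moments, which is exactly the balance the method exploits.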
Accurate upper body rehabilitation system using kinect.
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
Noninvasive hemoglobin monitoring: how accurate is enough?
Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E
2013-10-01
Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.
Accurate, reproducible measurement of blood pressure.
Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W
1990-01-01
The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
Accurate Fission Data for Nuclear Safety
NASA Astrophysics Data System (ADS)
Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.
2014-05-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design: for studies of exotic nuclei far from stability, a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton-induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron-induced fission of various actinides.
Tactical Planning Workstation Software Description
1990-09-01
Packard, Bruce R.
[Table-of-contents fragments only; section topics include unit type codes, battle function codes, control measure types, and product description files.]
Accurate simulation of optical properties in dyes.
Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo
2009-02-17
Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them.
ERIC Educational Resources Information Center
Siguan, Miguel
1976-01-01
A presentation of a rigorous method allowing an accurate description of collective bilingualism in any given population, including both the speaker's degree of language command and the patterns of linguistic behavior in each of the languages. [In Spanish] (NQ)
Adventures in theoretical astrophysics
NASA Astrophysics Data System (ADS)
Farmer, Alison Jane
This thesis is a tour of topics in theoretical astrophysics, unified by their diversity and their pursuit of physical understanding of astrophysical phenomena. In the first chapter, we raise the possibility of the detection of white dwarfs in transit surveys for extrasolar Earths, and discuss the peculiarities of detecting these more massive objects. A population synthesis calculation of the gravitational wave background from extragalactic binary stars is then presented. In this study, we establish a firm understanding of the uncertainties in such a calculation and provide a valuable reference for planning the Laser Interferometer Space Antenna mission. The long-established problem of cosmic ray confinement to the Galaxy is addressed in another chapter. We introduce a new wave damping mechanism, due to the presence of background turbulence, that prevents the confinement of cosmic rays by the resonant streaming instability. We also investigate the spokes in Saturn's B ring, an electrodynamic mystery that is being illuminated by new data sent back from the Cassini spacecraft. In particular, we present assessments of the presence of charged dust near the rings, and the size of currents and electric fields in the ring system. We make inferences from the Cassini discovery of oxygen ions above the rings. In addition, the previous leading theory for spoke formation is demonstrated to be unphysical. In the final chapter, we explain the wayward motions of Prometheus and Pandora, two small moons of Saturn. Previously found to be chaotic as a result of mutual interactions, we account for their behavior by analogy with a parametric pendulum. We caution that this behavior may soon enter a new regime.
A quantitative description for efficient financial markets
NASA Astrophysics Data System (ADS)
Immonen, Eero
2015-09-01
In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
[Aromatherapy and nursing: historical and theoretical conception].
Gnatta, Juliana Rizzo; Kurebayashi, Leonice Fumiko Sato; Turrini, Ruth Natalia Teresa; Silva, Maria Júlia Paes da
2016-02-01
Aromatherapy is a Practical or Complementary Health Therapy that uses volatile concentrates extracted from plants, called essential oils, to improve physical, mental and emotional well-being. Aromatherapy has historically been practiced worldwide by nurses and, as it is supported in Brazil by the Federal Nursing Council, it is relevant to discuss this practice in the context of Nursing through Nursing Theories. This theoretical reflection, exploratory and descriptive in nature, aims to discuss the pharmacognosy of essential oils, the historical trajectory of Aromatherapy in Nursing, and the conceptions supporting Aromatherapy in light of eight Nursing theorists (Florence Nightingale, Myra Levine, Hildegard Peplau, Martha Rogers, Callista Roy, Wanda Horta, Jean Watson and Katharine Kolcaba), contributing to its inclusion as a nursing care practice.
Accurate orbit propagation with planetary close encounters
NASA Astrophysics Data System (ADS)
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both for the dynamical stability of the formulation and for the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size are also changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, the formulation, and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky effect, and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and a propagator consisting of a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
Accurate glucose detection in a small etalon
NASA Astrophysics Data System (ADS)
Martini, Joerg; Kuebler, Sebastian; Recht, Michael; Torres, Francisco; Roe, Jeffrey; Kiesel, Peter; Bruce, Richard
2010-02-01
We are developing a continuous glucose monitor for subcutaneous long-term implantation. This detector contains a double-chamber Fabry-Perot etalon that measures the differential refractive index (RI) between a reference and a measurement chamber at 850 nm. The etalon chambers have wavelength-dependent transmission maxima which depend linearly on the RI of their contents. An RI difference of Δn = 1.5×10^-6 changes the spectral position of a transmission maximum by 1 pm in our measurement. By sweeping the wavelength of a single-mode Vertical-Cavity Surface-Emitting Laser (VCSEL) linearly in time and detecting the maximum transmission peaks of the etalon, we are able to measure the RI of a liquid. We have demonstrated an accuracy of Δn = ±3.5×10^-6 over a Δn range of 0 to 1.75×10^-4 and an accuracy of 2% over a Δn range of 1.75×10^-4 to 9.8×10^-4. The accuracy is primarily limited by the reference measurement. The RI difference between the etalon chambers is made specific to glucose by the competitive, reversible release of Concanavalin A (ConA) from an immobilized dextran matrix. The matrix, and the ConA bound to it, is positioned outside the optical detection path. ConA is released from the matrix by reacting with glucose and diffuses into the optical path to change the RI in the etalon. Factors such as temperature affect the RI in the measurement and reference chambers equally and therefore do not affect the differential measurement. A typical standard deviation in RI is ±1.4×10^-6 over the range 32 °C to 42 °C. The detector enables an accurate glucose-specific concentration measurement.
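The quoted sensitivity implies a simple linear conversion from a measured transmission-peak shift to a differential refractive index. A minimal sketch, assuming only the 1.5×10^-6 RI per picometre figure from the abstract (the constant and function names are illustrative):

```python
# Sensitivity quoted in the abstract: an RI difference of 1.5e-6 shifts a
# transmission maximum by 1 pm at 850 nm. Names here are illustrative.
RI_PER_PM = 1.5e-6  # differential refractive index per picometre of peak shift

def peak_shift_to_delta_n(shift_pm: float) -> float:
    """Convert a measured transmission-peak shift (pm) to a differential RI."""
    return shift_pm * RI_PER_PM

# A 10 pm shift corresponds to a differential RI of about 1.5e-5
print(peak_shift_to_delta_n(10.0))
```

Mapping the resulting Δn onto a glucose concentration would additionally require the ConA/dextran calibration curve, which the abstract does not specify.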
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Astrophysics Data System (ADS)
Wheeler, K.; Knuth, K.; Castle, P.
2005-12-01
and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, i.e. determination of the LSE per color channel and per dose delivered to the film.
Prandi, Ingrid G; Viani, Lucas; Andreussi, Oliviero; Mennucci, Benedetta
2016-04-30
Carotenoids are important actors both in light-harvesting (LH) and in photoprotection functions of photosynthetic pigment-protein complexes. A deep theoretical investigation of this multiple role is still missing owing to the difficulty of describing the delicate interplay between electronic and nuclear degrees of freedom. A possible strategy is to combine accurate quantum mechanical (QM) methods with classical molecular dynamics. To do this, however, accurate force-fields (FF) are necessary. This article presents a new FF for the different carotenoids present in LH complexes of plants. The results show that all the important structural properties described by the new FF are in very good agreement with QM reference values. This increased accuracy in the simulation of the structural fluctuations is also reflected in the description of excited states. Both the energy order and the different nature of the lowest singlet states are preserved during the dynamics when the new FF is used, whereas an unphysical mixing is found when a standard FF is used.
Photon beam description in PEREGRINE for Monte Carlo dose calculations
Cox, L. J., LLNL
1997-03-04
The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.
Cohen, Andrew; Schmaltz, Martin; Katz, Emmanuel; Rebbi, Claudio; Glashow, Sheldon; Brower, Richard; Pi, So-Young
2016-09-30
This award supported a broadly based research effort in theoretical particle physics, including research aimed at uncovering the laws of nature at short (subatomic) and long (cosmological) distances. These theoretical developments apply to experiments in laboratories such as CERN, the facility that operates the Large Hadron Collider outside Geneva, as well as to cosmological investigations done using telescopes and satellites. The results reported here apply to physics beyond the so-called Standard Model of particle physics; physics of high energy collisions such as those observed at the Large Hadron Collider; theoretical and mathematical tools and frameworks for describing the laws of nature at short distances; cosmology and astrophysics; and analytic and computational methods to solve theories of short distance physics. Some specific research accomplishments include: + Theories of the electroweak interactions, the forces that give rise to many forms of radioactive decay; + Physics of the recently discovered Higgs boson; + Models and phenomenology of dark matter, the mysterious component of the universe that has so far been detected only through its gravitational effects; + High-energy particles in astrophysics and cosmology; + Algorithmic research and computational methods for physics of and beyond the Standard Model; + Theory and applications of relativity and its possible limitations; + Topological effects in field theory and cosmology; + Conformally invariant systems and AdS/CFT. This award also supported significant training of students and postdoctoral fellows to lead the research effort in particle theory for the coming decades. These students and fellows worked closely with other members of the group as well as theoretical and experimental colleagues throughout the physics community. Many of the research projects funded by this grant arose in response to recently obtained experimental results in the areas of particle physics and cosmology. We describe a few of
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
, validated on several technical and econometric cases, has been used for this purpose. A database of several conventional stages, operated with either solid or liquid propellants, has been compiled, in conjunction with an evolutionary set of geometrical, physical and functional parameters likely to contribute to the description of the mass fraction and presumably known at the early steps of the preliminary design. After several iterations aimed at selecting the most influential parameters, polynomial expressions of the mass fraction have been constructed, associated with a confidence level. The outcome highlights the real possibility of a parametric formulation of the mass fraction for conventional stages on the basis of a limited number of descriptive parameters and with a high degree of accuracy (error below 10%). The formulas were later tested, for validation purposes, on existing or preliminary stages not included in the initial database. Their mass fraction is assessed with comparable accuracy. The polynomial generation method in use also allows for a search of the influence of each parameter. The devised method, suitable for the preliminary design phase, represents, compared to the classical empirical approach, a significant improvement in mass fraction prediction. It enables a rapid dissemination of more accurate and consistent weight data estimates to support system studies. It also makes possible the upstream processing of the preliminary design tasks through a global system approach. This method, currently in the experimental phase, is already in use as a complementary means at the technical under-directorate of CNES-DLA. * IRIS: Instrument de Recherche des Indices Structuraux
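The parametric idea, regressing a stage's mass fraction on a handful of descriptive parameters, can be sketched with an ordinary least-squares polynomial fit. The parameter names and synthetic data below are illustrative stand-ins, not the actual CNES database:

```python
import numpy as np

# Illustrative sketch of a parametric mass-fraction fit: regress a stage's
# mass fraction on a few descriptive parameters by ordinary least squares.
# Parameters and data are hypothetical stand-ins, not the CNES database.
rng = np.random.default_rng(0)
n = 40
propellant_mass = rng.uniform(5e3, 2e5, n)       # kg, hypothetical stages
length_to_diameter = rng.uniform(2.0, 12.0, n)   # dimensionless

# Synthetic "true" mass fractions used only to exercise the fit
mass_fraction = (0.88 + 1.5e-8 * propellant_mass
                 - 2e-3 * length_to_diameter
                 + rng.normal(0.0, 2e-3, n))

# Design matrix: constant + linear terms (higher-order terms could be added)
X = np.column_stack([np.ones(n), propellant_mass, length_to_diameter])
coeffs, *_ = np.linalg.lstsq(X, mass_fraction, rcond=None)

rel_err = np.abs(X @ coeffs - mass_fraction) / mass_fraction
print(f"max relative error: {rel_err.max():.3%}")
```

As in the abstract's method, each fitted coefficient can then be inspected to gauge the influence of its parameter on the mass fraction.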
The Basic Theoretical Framework
NASA Astrophysics Data System (ADS)
Loeb, Abraham
Cosmology is by now a mature experimental science. We are privileged to live at a time when the story of genesis (how the Universe started and developed) can be critically explored by direct observations. Looking deep into the Universe through powerful telescopes, we can see images of the Universe when it was younger because of the finite time it takes light to travel to us from distant sources. Existing data sets include an image of the Universe when it was 0.4 million years old (in the form of the cosmic microwave background), as well as images of individual galaxies when the Universe was older than a billion years. But there is a serious challenge: in between these two epochs was a period when the Universe was dark, stars had not yet formed, and the cosmic microwave background no longer traced the distribution of matter. And this is precisely the most interesting period, when the primordial soup evolved into the rich zoo of objects we now see. The observers are moving ahead along several fronts. The first involves the construction of large infrared telescopes on the ground and in space, that will provide us with new photos of the first galaxies. Current plans include ground-based telescopes which are 24-42 m in diameter, and NASA's successor to the Hubble Space Telescope, called the James Webb Space Telescope. In addition, several observational groups around the globe are constructing radio arrays that will be capable of mapping the three-dimensional distribution of cosmic hydrogen in the infant Universe. These arrays are aiming to detect the long-wavelength (redshifted 21-cm) radio emission from hydrogen atoms. The images from these antenna arrays will reveal how the non-uniform distribution of neutral hydrogen evolved with cosmic time and eventually was extinguished by the ultra-violet radiation from the first galaxies. Theoretical research has focused in recent years on predicting the expected signals for the above instruments and motivating these ambitious
Automated management of life cycle for future network experiment based on description language
NASA Astrophysics Data System (ADS)
Niu, Hongxia; Liang, Junxue; Lin, Zhaowen; Ma, Yan
2016-12-01
A future network is a complex resource pool including multiple physical and virtual resources, and establishing experiments on such a network is complicated and tedious, so automated management of future network experiments is important. This paper puts forward a method for managing the life cycle of experiments based on a description language. The description language uses a framework with a shallow hierarchical structure and a complete description of the network experiment. In this way, an experiment description template can be generated by the description framework accurately and completely. A network experiment can also be customized and reused by modifying the description template. The results show that this method can manage the life cycle of a network experiment effectively and automatically, which greatly saves time, reduces difficulty, and makes services reusable.
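As a rough illustration of the idea, an experiment description can be captured as a small structured template from which the life-cycle steps (validate, allocate, run, tear down) are derived automatically. All field and step names here are hypothetical, not the paper's actual description language:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an experiment-description template and the life-cycle
# steps derived from it. Names are illustrative, not the paper's actual DSL.
@dataclass
class ExperimentDescription:
    name: str
    physical_resources: list = field(default_factory=list)
    virtual_resources: list = field(default_factory=list)
    duration_hours: int = 1

def lifecycle(desc: ExperimentDescription) -> list:
    """Derive validate/allocate/run/teardown steps from the description."""
    steps = [f"validate: {desc.name}"]
    for resource in desc.physical_resources + desc.virtual_resources:
        steps.append(f"allocate: {resource}")
    steps.append(f"run: {desc.name} for {desc.duration_hours}h")
    steps.append(f"teardown: {desc.name}")
    return steps

# Reuse comes from editing the template, not rewriting the automation
demo = ExperimentDescription("routing-test", ["switch-1"], ["vm-a", "vm-b"])
for step in lifecycle(demo):
    print(step)
```

The point of the sketch is the separation of concerns: the template describes *what* the experiment needs, while one generic routine drives *how* its life cycle unfolds.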
Investigations in Experimental and Theoretical High Energy Physics
Krennrich, Frank
2013-07-29
We report on the work done under DOE grant DE-FG02-01ER41155. The experimental tasks have ongoing efforts at CERN (ATLAS), the Whipple observatory (VERITAS) and R&D work on dual readout calorimetry and neutrino-less double beta decay. The theoretical task emphasizes the weak interaction and in particular CP violation and neutrino physics. The detailed descriptions of the final report on each project are given under the appropriate task section of this report.
From information theory to quantitative description of steric effects.
Alipour, Mojtaba; Safari, Zahra
2016-07-21
Immense efforts have been made in the literature to apply information theory descriptors to the electronic structure theory of various systems. In the present study, information theoretic quantities such as Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking the experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and theoretical values of steric energies calculated from information theory functionals. Comparing the results obtained from the information theoretic quantities under the two representations, electron density and shape function, the Shannon entropy performs best for this purpose. The usefulness of considering the contributions of functional-group steric energies and geometries, as well as of dissecting the effects of global and local information measures simultaneously, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and with experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory in helping theoreticians and experimentalists interpret different problems in real systems.
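The Shannon entropy named above is S = -∫ ρ(r) ln ρ(r) dr. A minimal numerical sketch evaluates it on a grid for a one-dimensional normalized Gaussian model density (illustrative only; the study applies such functionals to molecular electron densities):

```python
import numpy as np

# Sketch: Shannon information entropy S = -∫ rho(x) ln rho(x) dx for a
# normalized 1-D Gaussian model density, evaluated on a uniform grid.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
sigma = 1.0
rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

S = -np.sum(rho * np.log(rho)) * dx
# Analytic value for a unit Gaussian: 0.5 * ln(2*pi*e) ≈ 1.4189
print(S)
```

The same grid recipe extends to the other descriptors mentioned, e.g. Fisher information as ∫ |∇ρ|²/ρ dx, by swapping the integrand.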
Robust Retinal Blood Vessel Segmentation Based on Reinforcement Local Descriptions.
Li, Meng; Ma, Zhenshen; Liu, Chao; Zhang, Guang; Han, Zhe
2017-01-01
Retinal blood vessel segmentation plays an important role in retinal image analysis. In this paper, we propose a robust retinal blood vessel segmentation method based on reinforcement local descriptions. A novel line-set-based feature is first developed to capture the local shape information of vessels by employing the length prior of vessels, which is robust to intensity variation. After that, a local intensity feature is calculated for each pixel, and a morphological gradient feature is extracted to enhance the local edges of smaller vessels. Finally, the line-set-based feature, the local intensity feature, and the morphological gradient feature are combined to obtain the reinforcement local descriptions. Compared with existing local descriptions, the proposed reinforcement local description contains more local information on the shape, intensity, and edges of vessels, and is therefore more robust. After feature extraction, an SVM is trained for blood vessel segmentation. In addition, we develop a postprocessing method based on morphological reconstruction to connect discontinuous vessels and obtain a more accurate segmentation result. Experimental results on two public databases (DRIVE and STARE) demonstrate that the proposed reinforcement local descriptions outperform state-of-the-art methods.
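The structure of the pipeline, per-pixel local descriptors concatenated and fed to a classifier, can be sketched on toy data. Here a least-squares linear classifier stands in for the SVM, the gradient magnitude stands in for the morphological gradient, a crude directional response stands in for the line-set feature, and the "image" is synthetic, so this illustrates only the structure, not the paper's actual features:

```python
import numpy as np

# Structural sketch: per-pixel local descriptors (intensity, a gradient
# stand-in, a crude line-response stand-in) are concatenated and fed to a
# classifier. A least-squares linear classifier replaces the SVM.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
img[:, 15:17] += 2.0                      # bright vertical "vessel"
labels = np.zeros((32, 32), dtype=int)
labels[:, 15:17] = 1

def local_features(image):
    gy, gx = np.gradient(image)
    grad_mag = np.hypot(gx, gy)               # edge-enhancing feature
    line_resp = image - np.roll(image, 3, 1)  # crude directional line response
    return np.stack([image, grad_mag, line_resp], axis=-1).reshape(-1, 3)

X = local_features(img)
y = labels.ravel()
Xb = np.column_stack([X, np.ones(len(X))])   # concatenated descriptors + bias
w, *_ = np.linalg.lstsq(Xb, y.astype(float), rcond=None)
pred = (Xb @ w > 0.5).astype(int)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

The combination step is just column-wise concatenation of the descriptor maps; the robustness claimed in the abstract comes from each descriptor compensating for the failure modes of the others.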
Theoretical studies of hadrons and nuclei
COTANCH, STEPHEN R
2007-03-20
This report details final research results obtained during the 9-year period from June 1, 1997 through July 15, 2006. The research project, entitled "Theoretical Studies of Hadrons and Nuclei", was supported by grant DE-FG02-97ER41048 between North Carolina State University [NCSU] and the U.S. Department of Energy [DOE]. In compliance with grant requirements the Principal Investigator [PI], Professor Stephen R. Cotanch, conducted a theoretical research program investigating hadrons and nuclei and devoted to this program 50% of his time during the academic year and 100% of his time in the summer. Highlights of new, significant research results are briefly summarized in the following three sections corresponding to the respective sub-programs of this project (hadron structure, probing hadrons and hadron systems electromagnetically, and many-body studies). Recent progress is also discussed in a recent renewal/supplemental grant proposal submitted to DOE. Finally, full detailed descriptions of completed work can be found in the publications listed at the end of this report.
Theoretical study of the bond dissociation energies of methanol
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Walch, Stephen P.
1992-01-01
A theoretical study of the bond dissociation energies for H2O and CH3OH is presented. The C-H and O-H bond energies are computed accurately with the modified coupled-pair functional method using a large basis set. For these bonds, an accuracy of +/- 2 kcal/mol is achieved, which is consistent with the C-H and C-C single bond energies of other molecules. The C-O bond is much more difficult to compute accurately because it requires higher levels of correlation treatment and more extensive one-particle basis sets.
CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT
J.F. Beesley
2005-04-21
The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.
Micropolar continuum in spatial description
NASA Astrophysics Data System (ADS)
Ivanova, Elena A.; Vilchevskaya, Elena N.
2016-11-01
Within the spatial description, it is customary to refer thermodynamic state quantities to an elementary volume fixed in space containing an ensemble of particles. During its evolution, the elementary volume is occupied by different particles, each having its own mass, tensor of inertia, angular and linear velocities. The aim of the present paper is to answer the question of how to determine the inertial and kinematic characteristics of the elementary volume. In order to model structural transformations due to the consolidation or defragmentation of particles or anisotropic changes, one should consider the fact that the tensor of inertia of the elementary volume may change. This means that an additional constitutive equation must be formulated. The paper suggests kinetic equations for the tensor of inertia of the elementary volume. It also discusses the specificity of the inelastic polar continuum description within the framework of the spatial description.
Analysis of a theoretically optimized transonic airfoil
NASA Technical Reports Server (NTRS)
Lores, M. E.; Burdges, K. P.; Shrewsbury, G. D.
1978-01-01
Numerical optimization was used in conjunction with an inviscid, full potential equation, transonic flow analysis computer code to design an upper surface contour for a conventional airfoil to improve its supercritical performance. The modified airfoil was tested in a compressible flow wind tunnel. The modified airfoil's performance was evaluated by comparison with test data for the baseline airfoil and for an airfoil developed by optimization of leading edge of the baseline airfoil. While the leading edge modification performed as expected, the upper surface re-design did not produce all of the expected performance improvements. Theoretical solutions computed using a full potential, transonic airfoil code corrected for viscosity were compared to experimental data for the baseline airfoil and the upper surface modification. These correlations showed that the theory predicted the aerodynamics of the baseline airfoil fairly well, but failed to accurately compute drag characteristics for the upper surface modification.
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
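The depth-from-flow geometry this abstract relies on can be sketched in a few lines: for pure forward translation, optical flow radiates from the focus of expansion (FOE), and a point's depth follows from its flow magnitude and the known vehicle speed. The sketch below is illustrative only; the focal length, speed, and scene point are invented numbers, and the paper's full epipolar machinery is not reproduced.

```python
import numpy as np

f = 1.0                       # focal length (arbitrary units, assumed)
Vz = 2.0                      # known forward speed from flight data (assumed)
foe = np.array([0.0, 0.0])    # FOE sits at the image centre for pure forward motion

# Synthesize one static scene point and its image motion over a short step.
P = np.array([3.0, 1.5, 20.0])            # world point (X, Y, Z), invented
dt = 0.1
p0 = f * P[:2] / P[2]                      # projection before the camera moves
p1 = f * P[:2] / (P[2] - Vz * dt)          # projection after moving forward Vz*dt
flow = (p1 - p0) / dt                      # optical flow, radiating from the FOE

# Depth from the radiating-flow constraint: Z ~ Vz * |p - foe| / |flow|
Z_est = Vz * np.linalg.norm(p0 - foe) / np.linalg.norm(flow)
```

With these invented numbers the recovered depth lands close to the true Z = 20, off only by the small camera displacement during the step.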
Simple and surprisingly accurate approach to the chemical bond obtained from dimensional scaling.
Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R
2005-08-19
We present a new dimensional scaling transformation of the Schrödinger equation for the two-electron bond. This yields, for the first time, a good description of the bond via D scaling. There also emerges, in the large-D limit, an intuitively appealing semiclassical picture, akin to a molecular model proposed by Bohr in 1913. In this limit, the electrons are confined to specific orbits in the scaled space, yet the uncertainty principle is maintained. A first-order perturbation correction, proportional to 1/D, substantially improves the agreement with the exact ground state potential energy curve. The present treatment is very simple mathematically, yet provides a strikingly accurate description of the potential curves for the lowest singlet, triplet, and excited states of H2. We find the modified D-scaling method also gives good results for other molecules. It can be combined advantageously with Hartree-Fock and other conventional methods.
Description du langage scientifique (Description of Scientific Language)
ERIC Educational Resources Information Center
Widdowson, H. G.
1977-01-01
A description of scientific language using three approaches: text, textualization, and discourse. Scientific discourse is analogous to universal deep structure; text, to surface variations in diverse languages; and textualization, to transformational processes. The relationship of the primary and secondary (scientific) cultures and their languages…
Auteur Description: From the Director's Creative Vision to Audio Description
ERIC Educational Resources Information Center
Szarkowska, Agnieszka
2013-01-01
In this report, the author follows the suggestion that a film director's creative vision should be incorporated into Audio description (AD), a major technique for making films, theater performances, operas, and other events accessible to people who are blind or have low vision. The author presents a new type of AD for auteur and artistic films:…
77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...
NASA Astrophysics Data System (ADS)
Beranek, T.; Merkel, H.; Vanderhaeghen, M.
2013-07-01
Motivated by anomalies in cosmic ray observations and by attempts to solve questions of the Standard Model of particle physics like the (g-2)μ discrepancy, U(1) extensions of the Standard Model have been proposed in recent years. Such U(1) extensions allow for the interaction of dark matter by exchange of a photonlike massive force carrier γ' not included in the Standard Model. In order to search for γ' bosons, various experimental programs have been started. One approach is the dedicated search at fixed-target experiments at modest energies, as performed at the Mainz Microtron (MAMI) or at the Jefferson Lab. In these experiments the process e(A,Z)→e(A,Z)l+l- is investigated, and a search for a very narrow resonance in the invariant mass distribution of the l+l- pair is performed. In this work we analyze this process in terms of signal and background in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions for exclusion limits in the γ' parameter space. We present a detailed theoretical analysis of the cross sections entering in the description of such processes.
Accurate free and forced rotational motions of rigid Venus
NASA Astrophysics Data System (ADS)
Cottereau, L.; Souchay, J.; Aljbaae, S.
2010-06-01
Context. The precise and accurate modelling of a terrestrial planet like Venus is an exciting and challenging topic, all the more interesting because it can be compared with that of the Earth, for which such modelling has already been achieved at the milli-arcsecond level. Aims: We aim to complete a previous study by determining the polhody at the milli-arcsecond level, i.e. the torque-free motion of the angular momentum axis of a rigid Venus in a body-fixed frame, as well as the nutation of its third axis of figure in space, which is fundamental from an observational point of view. Methods: We use the same theoretical framework as Kinoshita (1977, Celest. Mech., 15, 277) did to determine the precession-nutation motion of a rigid Earth. It is based on a representation of the rotation of a rigid Venus with the help of Andoyer variables and a set of canonical equations in Hamiltonian formalism. Results: In the first part we computed the polhody and showed that this motion is highly elliptical, with a very long period of 525 centuries, compared with 430 d for the Earth. This is due to the very small dynamical flattening of Venus in comparison with our planet. In the second part we precisely computed the Oppolzer terms, which allow us to represent the motion in space of the third Venus figure axis with respect to the Venus angular momentum axis under the influence of the solar gravitational torque. We determined the corresponding tables of the nutation coefficients of the third figure axis, both in longitude and in obliquity, due to the Sun, which are of the same order of amplitude as for the Earth. We showed that the nutation coefficients for the third figure axis differ significantly from those of the angular momentum axis, in contrast to the case of the Earth. Our analytical results have been validated by a numerical integration, which revealed the indirect planetary effects.
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity instead of quality in processing, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card in computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations.
Estimation of bone permeability using accurate microstructural measurements.
Beno, Thoma; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P
2006-01-01
While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.
Langley Atmospheric Information Retrieval System (LAIRS): System description and user's guide
NASA Technical Reports Server (NTRS)
Boland, D. E., Jr.; Lee, T.
1982-01-01
This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS Program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.
2017-01-01
Developing ab initio approaches able to provide accurate excited-state energies at a reasonable computational cost is one of the biggest challenges in theoretical chemistry. In that framework, the Bethe–Salpeter equation approach, combined with the GW exchange-correlation self-energy, which maintains the same scaling with system size as TD-DFT, has recently been the focus of a rapidly increasing number of applications in molecular chemistry. Using a recently proposed set encompassing excitation energies of many kinds [J. Phys. Chem. Lett. 2016, 7, 586–591], we investigate here the performances of BSE/GW. We compare these results to CASPT2, EOM-CCSD, and TD-DFT data and show that BSE/GW provides an accuracy comparable to the two wave function methods. It is particularly remarkable that the BSE/GW is equally efficient for valence, Rydberg, and charge-transfer excitations. In contrast, it provides a poor description of triplet excited states, for which EOM-CCSD and CASPT2 clearly outperform BSE/GW. This contribution therefore supports the use of the Bethe–Salpeter approach for spin-conserving transitions. PMID:28301726
Jacquemin, Denis; Duchemin, Ivan; Blase, Xavier
2017-03-16
NASA Astrophysics Data System (ADS)
Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François
2017-02-01
In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François
2017-02-15
WARP: accurate retrieval of shapes using phase of fourier descriptors and time warping distance.
Bartolini, Ilaria; Ciaccia, Paolo; Patella, Marco
2005-01-01
Effective and efficient retrieval of similar shapes from large image databases is still a challenging problem in spite of the high relevance that shape information can have in describing image contents. In this paper, we propose a novel Fourier-based approach, called WARP, for matching and retrieving similar shapes. The unique characteristics of WARP are the exploitation of the phase of Fourier coefficients and the use of the Dynamic Time Warping (DTW) distance to compare shape descriptors. While phase information provides a more accurate description of object boundaries than using only the amplitude of Fourier coefficients, the DTW distance permits us to accurately match images even in the presence of (limited) phase shiftings. In terms of classical precision/recall measures, we experimentally demonstrate that WARP can gain, say, up to 35 percent in precision at a 20 percent recall level with respect to Fourier-based techniques that use neither phase nor DTW distance.
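The two ingredients of WARP, phase-preserving Fourier descriptors compared under a Dynamic Time Warping distance, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the normalization choices and the toy circle/square contours are assumptions.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Phase-preserving Fourier descriptor of a closed 2-D contour (N x 2)."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    c = np.fft.fft(z)[1:n_coeffs + 1]        # drop DC term: translation invariance
    return c / np.abs(c[0])                  # normalise scale, keep the phases

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two complex descriptor sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # compares amplitude AND phase
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
square = np.c_[np.clip(1.5 * np.cos(t), -1, 1), np.clip(1.5 * np.sin(t), -1, 1)]

# Translation and scale leave the descriptor untouched ...
d_same = dtw_distance(fourier_descriptor(circle),
                      fourier_descriptor(2.0 * circle + np.array([5.0, 3.0])))
# ... while a genuinely different shape scores a clearly larger distance.
d_diff = dtw_distance(fourier_descriptor(circle), fourier_descriptor(square))
```

The warping in `dtw_distance` is what lets descriptors match despite limited phase shifts of the boundary starting point, which a plain Euclidean comparison would penalise.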
NASA Astrophysics Data System (ADS)
Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong
2009-09-01
To remedy the low speed and poor efficiency of current multilayer optical coating design when the number of layers is large, accurate and fast calculation of the merit function's gradient and Hessian matrix is proposed. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that this model is mathematically strict and accurate, and its precision can reach the floating-point limit of the computer, with short computation times. It is therefore well suited to improving the search speed and efficiency of local optimization methods based on the derivatives of the merit function, and it performs outstandingly in multilayer optical coating designs with a large number of layers.
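The matrix method the abstract builds on computes a stack's spectral response from per-layer characteristic matrices. A minimal normal-incidence sketch (in Python rather than the authors' Matlab; the quarter-wave stack and its indices are typical assumed values, not taken from the paper) is:

```python
import numpy as np

def reflectance(ns, ds, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a dielectric stack from per-layer
    characteristic matrices (layers listed from incidence medium to substrate)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        phi = 2.0 * np.pi * n * d / wavelength      # phase thickness of the layer
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    B = M[0, 0] + M[0, 1] * n_sub                   # (B, C) = M (1, n_sub)^T
    C = M[1, 0] + M[1, 1] * n_sub
    r = (n_in * B - C) / (n_in * B + C)             # amplitude reflection coefficient
    return float(abs(r) ** 2)

# Quarter-wave high/low stack at 550 nm; indices are typical assumed values.
lam = 550.0
nH, nL = 2.35, 1.38                                 # e.g. TiO2 / MgF2
pairs = [(nH, lam / (4 * nH)), (nL, lam / (4 * nL))] * 4
ns, ds = zip(*pairs)
R = reflectance(ns, ds, n_in=1.0, n_sub=1.52, wavelength=lam)
```

Because each layer contributes one analytic matrix factor, derivatives of a merit function built on such reflectances can be propagated layer by layer, which is the gradient/Hessian speed-up the abstract targets.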
Theoretical evaluation of high speed aerodynamics for arrow wing configurations
NASA Technical Reports Server (NTRS)
Dollyhigh, S. M.
1978-01-01
A limited study in the use of theoretical methods to calculate the high speed aerodynamics of arrow wing supersonic cruise configurations was conducted. The study consisted of correlations with existing wind tunnel data at Mach numbers from 0.8 to 2.7, using theoretical methods to extrapolate the wind tunnel data to full scale flight conditions, and presentation of a typical supersonic data package for an advanced supersonic transport application prepared using the theoretical methods. A brief description of the methods and their application was given. In general, all three methods had excellent correlation with wind tunnel data at supersonic speeds for drag and lift characteristics and fair to poor agreement with pitching moment characteristics. The VORLAX program had excellent correlation with wind tunnel data at subsonic speeds for lift and pitching moment characteristics and fair agreement in drag characteristics.
A Field-Theoretic Approach to the Wiener Sausage
NASA Astrophysics Data System (ADS)
Nekovar, S.; Pruessner, G.
2016-05-01
The Wiener Sausage, the volume traced out by a sphere attached to a Brownian particle, is a classical problem in statistics and mathematical physics. Initially motivated by a range of field-theoretic, technical questions, we present a single loop renormalised perturbation theory of a stochastic process closely related to the Wiener Sausage, which, however, proves to be exact for the exponents and some amplitudes. The field-theoretic approach is particularly elegant and very enjoyable to see at work on such a classic problem. While we recover a number of known, classical results, the field-theoretic techniques deployed provide a particularly versatile framework, which allows easy calculation with different boundary conditions even of higher momenta and more complicated correlation functions. At the same time, we provide a highly instructive, non-trivial example for some of the technical particularities of the field-theoretic description of stochastic processes, such as excluded volume, lack of translational invariance and immobile particles. The aim of the present work is not to improve upon the well-established results for the Wiener Sausage, but to provide a field-theoretic approach to it, in order to gain a better understanding of the field-theoretic obstacles to overcome.
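As a concrete, hedged illustration of the object under study (separate from the field-theoretic treatment itself), the support of a lattice random walk, a discrete stand-in for the Wiener sausage, can be estimated by direct simulation; the step count and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_support(steps, dim=2):
    """Number of distinct lattice sites visited by a simple random walk:
    a lattice stand-in for the volume of the Wiener sausage."""
    pos = np.zeros(dim, dtype=int)
    visited = {tuple(pos)}
    for _ in range(steps):
        axis = rng.integers(dim)          # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))  # step one lattice unit
        visited.add(tuple(pos))
    return len(visited)

# In 2-D the walk is recurrent, so the mean support grows sublinearly in the
# number of steps (classically ~ t / log t).
vols = [walk_support(2000) for _ in range(50)]
mean_vol = float(np.mean(vols))
```

The marked sublinearity of `mean_vol` relative to the step count is the classical behaviour whose exponents and amplitudes the field-theoretic calculation recovers exactly.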
Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry
NASA Astrophysics Data System (ADS)
Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.
2013-09-01
Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high temperature list of line positions and intensities for H2 32S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and includes 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable
A Universal Operator Theoretic Framework for Quantum Fault Tolerance.
NASA Astrophysics Data System (ADS)
Gilbert, Gerald; Calderbank, Robert; Aggarwal, Vaneet; Hamrick, Michael; Weinstein, Yaakov
2008-03-01
We introduce a universal operator theoretic framework for quantum fault tolerance. This incorporates a top-down approach that implements a system-level criterion based on specification of the full system dynamics, applied at every level of error correction concatenation. This leads to more accurate determinations of error thresholds than could previously be obtained. The basis for the approach is the Quantum Computer Condition (QCC), an inequality governing the evolution of a quantum computer. In addition to more accurate determination of error threshold values, we show that the QCC provides a means to systematically determine optimality (or non-optimality) of different choices of error correction coding and error avoidance strategies. This is possible because, as we show, all known coding schemes are actually special cases of the QCC. We demonstrate this by introducing a new, operator theoretic form of entanglement assisted quantum error correction.
Natural Language Description of Emotion
ERIC Educational Resources Information Center
Kazemzadeh, Abe
2013-01-01
This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…
Developmental Kindergarten: Definition and Description.
ERIC Educational Resources Information Center
Virginia State Dept. of Education, Richmond.
This paper sets forth a definition and operational description of a developmental program that should be of use as a guide, especially to Virginia's teachers and administrators. Also included in the paper are kindergarten curriculum objectives in the areas of language arts, mathematics, science, art, social studies, family life, health, mental…
High-accuracy theoretical thermochemistry of fluoroethanes.
Nagy, Balázs; Csontos, Botond; Csontos, József; Szakács, Péter; Kállay, Mihály
2014-07-03
A highly accurate coupled-cluster-based ab initio model chemistry has been applied to calculate the thermodynamic functions including enthalpies of formation and standard entropies for fluorinated ethane derivatives, C2HxF6-x (x = 0-5), as well as ethane, C2H6. The invoked composite protocol includes contributions up to quadruple excitations in coupled-cluster (CC) theory as well as corrections beyond the nonrelativistic and Born-Oppenheimer approximations. For species CH2F-CH2F, CH2F-CHF2, and CHF2-CHF2, where anti/gauche isomerism occurs due to the hindered rotation around the C-C bond, conformationally averaged enthalpies and entropies at 298.15 K are also calculated. The results obtained here are in reasonable agreement with previous experimental and theoretical findings, and for all fluorinated ethanes except CH2FCH3 and C2F6 this study delivers the best available theoretical enthalpy and entropy estimates.
NASA Technical Reports Server (NTRS)
Schwenke, David W.
1990-01-01
The dissociation and recombination of H2 over the temperature range 1000-5000 K are calculated in a nonempirical manner. The computation procedure involves the calculation of the state-to-state energy transfer rate coefficients, the solution of the 349 coupled equations which form the master equation, and the determination of the phenomenological rate coefficients. The nonempirical results presented here are in good agreement with experimental data at 1000 and 3000 K.
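The pipeline described, state-to-state rates assembled into a master equation whose solution yields phenomenological rate coefficients, can be illustrated on a toy three-level system. The 349-level H2 calculation is far larger; all energies and rates below are invented numbers constructed only to satisfy detailed balance.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-level master equation dn/dt = K n, standing in for the 349-equation
# system of the paper; rates are invented and obey detailed balance.
E = np.array([0.0, 1.0, 2.5])     # level energies in units of kT (assumed)
g = np.exp(-E)                    # Boltzmann weights
k0 = 0.1                          # hypothetical base transfer rate
K = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            K[i, j] = k0 * min(1.0, g[i] / g[j])   # transfer rate j -> i
for j in range(3):
    K[j, j] = -K[:, j].sum()      # columns sum to zero: population conserved

n0 = np.array([0.0, 0.0, 1.0])    # start entirely in the highest level
n = expm(K * 100.0) @ n0          # propagate the master equation
# n relaxes to the Boltzmann distribution g / g.sum(); the slowest decaying
# eigenmode of K plays the role of the phenomenological rate coefficient.
```

Extracting the smallest nonzero eigenvalue of `K` is the toy analogue of determining the phenomenological dissociation/recombination rate from the full master equation.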
Theoretical Dipole Moment for the X²Π State of NO
NASA Technical Reports Server (NTRS)
Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.; Partridge, Harry; Arnold, James O. (Technical Monitor)
1994-01-01
The dipole moment function for the X²Π state of NO is studied as a function of the completeness in both the one- and n-particle spaces. Einstein coefficients are presented that are significantly more accurate than previous tabulations for the higher vibrational levels. The theoretical values give considerable insight into the limitations of recently published ratios of Einstein coefficients measured by spectrally resolved infrared chemiluminescence.
Theoretical molecular studies of astrophysical interest
NASA Technical Reports Server (NTRS)
Flynn, George
1991-01-01
When work under this grant began in 1974 there was a great need for state-to-state collisional excitation rates for interstellar molecules observed by radio astronomers. These were required to interpret observed line intensities in terms of local temperatures and densities, but, owing to lack of experimental or theoretical values, estimates then being used for this purpose ranged over several orders of magnitude. A problem of particular interest was collisional excitation of formaldehyde; Townes and Cheung had suggested that the relative size of different state-to-state rates (propensity rules) was responsible for the anomalous absorption observed for this species. We believed that numerical molecular scattering techniques (in particular the close coupling or coupled channel method) could be used to obtain accurate results, and that these would be computationally feasible since only a few molecular rotational levels are populated at the low temperatures thought to prevail in the observed regions. Such calculations also require detailed knowledge of the intermolecular forces, but we thought that those could also be obtained with sufficient accuracy by theoretical (quantum chemical) techniques. Others, notably Roy Gordon at Harvard, had made progress in solving the molecular scattering equations, generally using semi-empirical intermolecular potentials. Work done under this grant generalized Gordon's scattering code, and introduced the use of theoretical interaction potentials obtained by solving the molecular Schroedinger equation. Earlier work had considered only the excitation of a diatomic molecule by collisions with an atom, and we extended the formalism to include excitation of more general molecular rotors (e.g., H2CO, NH2, and H2O) and also collisions of two rotors (e.g., H2-H2).
An effective method for accurate prediction of the first hyperpolarizability of alkalides.
Wang, Jia-Nan; Xu, Hong-Liang; Sun, Shi-Ling; Gao, Ting; Li, Hong-Zhi; Li, Hui; Su, Zhong-Min
2012-01-15
The proper theoretical calculation method for nonlinear optical (NLO) properties is a key factor in designing excellent NLO materials. Yet it is a difficult task to obtain the accurate NLO properties of large molecules. In the present work, an effective intelligent computing method, called the extreme learning machine neural network (ELM-NN), is proposed to accurately predict the first hyperpolarizability (β(0)) of alkalides from low-accuracy first hyperpolarizability values. Compared with a neural network (NN) and a genetic algorithm neural network (GANN), the root-mean-square deviations of the values predicted by ELM-NN, GANN, and NN from their MP2 counterparts are 0.02, 0.08, and 0.17 a.u., respectively. This suggests that the values predicted by ELM-NN are more accurate than those calculated by the NN and GANN methods. Another strength of ELM-NN is its ability to reach high-accuracy calculated values at less computing cost: experimental results show that the computing time of MP2 is 2.4-4 times that of ELM-NN. Thus, the proposed method is a potentially powerful tool in computational chemistry, as it may predict β(0) of large molecules, which is difficult to obtain with high-accuracy theoretical methods owing to the dramatically increasing computational cost.
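The extreme learning machine at the core of the method is simple to sketch: a fixed random hidden layer followed by a linear least-squares solve for the output weights only. The toy regression below is merely a stand-in for the paper's β(0) prediction task; the target function, feature count, and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    """ELM training: random fixed hidden layer, least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # the only trained parameters
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for "cheap descriptors in, accurate property out":
X = rng.uniform(-1.0, 1.0, size=(200, 2))         # invented inputs
y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1]         # invented smooth target
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

Because only the output weights are solved for, training reduces to a single linear algebra call, which is the low-computing-cost property the abstract emphasizes.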
Ab Initio Potential Energy Surfaces and the Calculation of Accurate Vibrational Frequencies
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Dateo, Christopher E.; Martin, Jan M. L.; Taylor, Peter R.; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
Due to advances in quantum mechanical methods over the last few years, it is now possible to determine ab initio potential energy surfaces in which fundamental vibrational frequencies are accurate to within plus or minus 8 cm(exp -1) on average, and molecular bond distances are accurate to within plus or minus 0.001-0.003 Angstroms, depending on the nature of the bond. That is, the potential energy surfaces have not been scaled or empirically adjusted in any way, showing that theoretical methods have progressed to the point of being useful in analyzing spectra that are not from a tightly controlled laboratory environment, such as vibrational spectra from the interstellar medium. Some recent examples demonstrating this accuracy will be presented and discussed. These include the HNO, CH4, C2H4, and ClCN molecules. The HNO molecule is interesting due to the very large H-N anharmonicity, while ClCN has a very large Fermi resonance. The ab initio studies for the CH4 and C2H4 molecules present the first accurate full quartic force fields of any kind (i.e., whether theoretical or empirical) for a five-atom and six-atom system, respectively.
Theoretical Principles of Distance Education.
ERIC Educational Resources Information Center
Keegan, Desmond, Ed.
This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…
Theoretical Foundations of Learning Communities
ERIC Educational Resources Information Center
Jessup-Anger, Jody E.
2015-01-01
This chapter describes the historical and contemporary theoretical underpinnings of learning communities and argues that there is a need for more complex models in conceptualizing and assessing their effectiveness.
An integrative nursing theoretical framework.
Schmieding, N J
1990-04-01
The use of an integrative nursing theoretical framework for both clinical and administrative practice has recently been suggested. The author developed a theoretical framework which incorporates key concepts from the writings of Ida J. Orlando and Virginia Henderson and proposes it to be used as an integrative framework. The rationale for using a framework is discussed along with clinical and administrative examples of how to integrate concepts from the proposed framework. The reasons for using an integrative theoretical framework are that it: serves as a guide for both clinical and administrative decisions; forms the basis of the nursing philosophy; facilitates communication with patients and colleagues; helps identify congruent supporting theories and concepts; provides a basis for educational programmes; helps to differentiate nursing from non-nursing activities; and enhances nurse unity and self-esteem. The premise of the article is that benefits are derived from the use of a nursing theoretical framework because it provides a specific vision of nursing.
Theoretical Studies of Nanocluster Formation
2016-05-26
Viewgraphs/briefing charts on theoretical studies of nanocluster formation; dates covered: 22 April 2016 - 25 May 2016.
NASA Astrophysics Data System (ADS)
Tamma, Vincenzo
2016-12-01
We describe a novel analogue algorithm that allows the simultaneous factorization of an exponential number of large integers with a polynomial number of experimental runs. It is the interference-induced periodicity of "factoring" interferograms measured at the output of an analogue computer that allows the selection of the factors of each integer. At the present stage, the algorithm manifests an exponential scaling which may be overcome by an extension of this method to correlated qubits emerging from nth-order quantum correlation measurements. We describe the conditions for a generic physical system to compute such an analogue algorithm. A particular example given by an "optical computer" based on optical interference will be addressed in the second paper of this series (Tamma in Quantum Inf Process 11128:1189, 2015).
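Tamma's scheme itself is in the cited papers; a minimal numerical stand-in for interference-based factoring is the standard truncated Gauss-sum criterion, where the normalized sum has magnitude ~1 exactly when the trial divisor l divides N, and destructive interference suppresses non-divisors:

```python
import cmath

def gauss_sum(N, l, M=20):
    """Magnitude of the truncated Gauss sum (1/M)*sum_m exp(-2*pi*i*m^2*N/l).
    All M terms interfere constructively (value ~1) iff l divides N."""
    s = sum(cmath.exp(-2j * cmath.pi * m * m * N / l) for m in range(M))
    return abs(s) / M

N = 693  # = 3 * 3 * 7 * 11
factors = [l for l in range(2, 30) if gauss_sum(N, l) > 0.9]
print(factors)  # trial divisors of N found in [2, 30)
```

Each candidate l needs its own "run" of the sum, which mirrors the abstract's point that a single-integer interferometric test scales poorly without exploiting higher-order correlations.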
Development of Theoretical Foundations for Description and Analysis of Discrete Information Systems
1975-05-07
Much less information is documented concerning the anatomy of the Clt... constructed, there was not much that could be done with it other than simulation. (See reference (4) for an example of optimization via simulation.)
The field-theoretic description of dynamics of interfaces in disordered media
NASA Astrophysics Data System (ADS)
Stepanow, S.
The time evolution of an interface in a disordered medium is described using the propagator method. The method enables one to represent the perturbation expansions of different quantities characterizing the interface by means of diagrams familiar from field theory. From an analysis of the divergences in the vicinity of the critical dimension d_c = 4, we find that regularization of the theory demands renormalization of the mobility and of all moments of the disorder correlator except the zeroth one. Renormalization group (RG) calculations of the average velocity of the interface and of the roughness, together with the functional RG equation for the disorder correlator, are presented to order ε = 4 − d. The latter coincides with the result obtained by D. S. Fisher in the equilibrium case. The RG equations have a pole at the value of the driving force that coincides with the threshold below which the interface becomes pinned, as predicted by Bruinsma and Aeppli. The behavior of the mobility in the vicinity of the pole is discussed.
ERIC Educational Resources Information Center
Calzada Pérez, María
2013-01-01
The present paper revolves around MaxiECPC, one of the various sub-corpora that make up ECPC (the European Comparable and Parallel Corpora), an electronic archive of speeches delivered at different parliaments (i.e. the European Parliament-EP; the Spanish Congreso de los Diputados-CD; and the British House of Commons-HC) from 1996 to 2009. In…
Theoretical description of laser melt pool dynamics, Task order number B239634, Quarter 3 report
Dykhne, A.
1995-05-10
Melting of solid matter under laser radiation occurs in almost every process of laser technology. The present paper addresses melted-material flows in cases when melt zones are shallow, i.e., when the zone width is appreciably greater than, or of the same order as, its depth. Such conditions are usually realized when hardening, doping, or perforating thin plates, or when using non-deep penetration. Melted material flowing under conditions of deep penetration, drilling of deep openings, and cutting depends on a number of additional factors (as compared to the shallow-pool case), namely, formation of a vapor and gas cavern in the sample and propagation of the laser beam through the cavern. These extra circumstances complicate the hydrodynamic treatment of the liquid bath and will be addressed in the paper to follow.
Tang, H; Mitragotri, S; Blankschtein, D; Langer, R
2001-05-01
Application of ultrasound enhances transdermal transport of drugs (sonophoresis). The enhancement may result from enhanced diffusion due to ultrasound-induced skin alteration and/or from forced convection. To understand the relative roles played by these two mechanisms in low-frequency sonophoresis (LFS, 20 kHz), a theory describing the transdermal transport of hydrophilic permeants in both the absence and the presence of ultrasound was developed using fundamental equations of membrane transport, hindered-transport theory, and electrochemistry principles. With mannitol as the model permeant, the role of convection in LFS was evaluated experimentally with two commonly used in vitro skin models: human cadaver heat-stripped skin (HSS) and pig full-thickness skin (FTS). Our results suggest that convection plays an important role during LFS of HSS, whereas its effect is negligible when FTS is utilized. The theory developed was utilized to characterize the transport pathways of hydrophilic permeants during both passive diffusion and LFS with mannitol and sucrose as two probe molecules. Our results show that the porous pathway theory can adequately describe the transdermal transport of hydrophilic permeants in both the presence and the absence of ultrasound. Ultrasound alters the skin porous pathways by two mechanisms: (1) enlarging the skin effective pore radii, or (2) creating more pores and/or making the pores less tortuous. During passive diffusion, both HSS and FTS exhibit the same skin effective pore radii (r = 28 ± 13 Å). In contrast, during LFS, r within HSS is greatly enlarged (r > 125 Å), whereas r within FTS does not change significantly (23 ± 10 Å). The observed different roles of convection during LFS across HSS and FTS can be attributed to the different degrees of structural alteration that these two types of skin undergo during LFS.
NASA Astrophysics Data System (ADS)
Xu, Sheng; Shen, Xiao; Hallman, Kent A.; Haglund, Richard F.; Pantelides, Sokrates T.
2017-03-01
The debate about whether the insulating phases of vanadium dioxide (VO2) can be described by band theory or whether it requires a theory of strong electron correlations remains unresolved even after decades of research. Energy-band calculations using hybrid exchange functionals or including self-energy corrections account for the insulating or metallic nature of different phases but have not yet successfully accounted for the observed magnetic orderings. Strongly correlated theories have had limited quantitative success. Here we report that by using hard pseudopotentials and an optimized hybrid exchange functional, the energy gaps and magnetic orderings of both monoclinic VO2 phases and the metallic nature of the high-temperature rutile phase are consistent with available experimental data, obviating an explicit role for strong correlations. We also identify a potential candidate for the newly found metallic monoclinic phase.
NASA Astrophysics Data System (ADS)
Freericks, J. K.; Matveev, O. P.; Shen, Wen; Shvaika, A. M.; Devereaux, T. P.
2017-03-01
In this review, we develop the formalism employed to describe charge-density-wave insulators in pump/probe experiments that use ultrashort driving pulses of light. The theory emphasizes exact results in the simplest model for a charge-density-wave insulator (given by a noninteracting system with two bands and a gap) and employs nonequilibrium dynamical mean-field theory to solve the Falicov–Kimball model in its ordered phase. We show how to develop the formalism and how the solutions behave. Care is taken to describe the details behind these calculations and to show how to verify their accuracy via sum-rule constraints.
NASA Astrophysics Data System (ADS)
Dreiling, Joan; Tupa, Dale; Norrgard, Eric; Gay, Timothy
2012-06-01
In optical pumping of alkali-metal vapors, the polarization of the atoms is typically determined by probing along the entire length of the pumping beam, resulting in an averaged value of polarization over the length of the cell. Such measurements do not give any information about spatial variations of the polarization along the pump beam axis. Using a D1 probe beam oriented perpendicular to the pumping beam, we have demonstrated a heuristic method for determining the polarization along the pump beam's axis. Adapting a previously developed theory [1], we provide an analysis of the experiment which explains why this method works. The model includes the effects of Rb density, buffer gas pressure, and pump detuning. [1] E.B. Norrgard, D. Tupa, J.M. Dreiling, and T.J. Gay, Phys. Rev. A 82, 033408 (2010).
NASA Astrophysics Data System (ADS)
Seto, Keita; Nagatomo, Hideo; Koga, James; Mima, Kunioki
In the near future, the intensity of ultra-short pulse lasers will reach 10^22 W/cm². When an electron is irradiated by such a laser, its motion is relativistic and accompanied by significant bremsstrahlung. This radiation represents an energy loss for the electron, so the electron's motion changes as its kinetic energy changes. This effect of the emitted radiation back on the charged particle is a self-interaction, called the "radiation reaction" or "radiation damping". For this reason, the radiation reaction appears in laser-electron interactions with ultra-short pulse lasers whose intensity exceeds 10^22 W/cm². In the classical theory, it is described by the Lorentz-Abraham-Dirac (LAD) equation. But this equation has a mathematical difficulty, which we call the "run-away", and there are many methods for avoiding this problem. However, Dirac's viewpoint is brilliant, based on the idea of quantum electrodynamics. We propose a new equation of motion in the quantum theory with radiation reaction in this paper.
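The run-away difficulty is easy to exhibit in the nonrelativistic Abraham-Lorentz reduction of the LAD equation, m a = F_ext + m τ (da/dt): with no external force, any nonzero initial acceleration grows as exp(t/τ). A short numerical sketch (illustrative units; τ is the electron's characteristic radiation time):

```python
import math

# Nonrelativistic Abraham-Lorentz equation: m*a = F_ext + m*tau*(da/dt).
# With F_ext = 0 it admits the unphysical run-away a(t) = a0*exp(t/tau).
tau = 6.26e-24          # s, characteristic time for an electron
a0 = 1.0                # arbitrary initial acceleration (units irrelevant)
dt = tau / 1000.0

a, t = a0, 0.0
while t < 10.0 * tau:   # integrate da/dt = a/tau with forward Euler
    a += dt * (a / tau)
    t += dt

# After 10*tau the force-free acceleration has blown up by ~e^10.
print(f"a(10*tau)/a0 ~ {a:.0f}  (analytic e^10 ~ {math.exp(10):.0f})")
```

Even with zero force the solution explodes on the timescale τ, which is why reformulations such as the one proposed in the paper are sought.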
Modern approaches for the theoretical description of multiparticle scattering and nuclear reactions
Kukulin, V. I.; Rubtsova, O. A.
2012-11-15
A review is given of novel approaches to the solution of multiparticle scattering problems in the region above the three-body breakup threshold, together with a review of new computational technologies that provide very effective and ultrafast realizations of these approaches on an ordinary PC. The novel direction presented here is based on two key points: a new formulation of quantum scattering theory in a discrete Hilbert space of stationary wave packets, and the massively parallel solution of the resulting matrix equations using ultrafast graphics processors (so-called GPU computing). For the reader's convenience, a short review of modern GPU computing for medicine, physics, military applications, etc. is presented.
Field-theoretical description of the formation of a crack tip process zone
NASA Astrophysics Data System (ADS)
Boulbitch, Alexei; Korzhenevskii, Alexander L.
2016-12-01
The crack tip process zone is regarded as a region where the solid physical properties are altered due to high stress. They are controlled by the solid degrees of freedom existing within the zone and vanishing outside, and can be divided into two classes: (1) zones always existing at the tip and (2) those emerging as soon as certain conditions are met. We focus on the zones of the second kind and argue that they can be described analogously to phase transitions taking place locally. We report both a numerical and an analytical solution for the process zone. We find that the zone can only exist within a limited domain of the dynamic phase diagram, at one side of the phase transition line. We describe this domain and establish its dependence on the crack velocity. We show the existence of a critical crack velocity above which the zone cannot exist.
Sampling Soil for Characterization and Site Description
NASA Technical Reports Server (NTRS)
Levine, Elissa
1999-01-01
The sampling scheme for soil characterization within the GLOBE program is uniquely different from the sampling methods of the other protocols. The strategy is based on an understanding of the 5 soil forming factors (parent material, climate, biota, topography, and time) at each study site, and how each of these interact to produce a soil profile with unique characteristics and unique input and control into the atmospheric, biological, and hydrological systems. Soil profile characteristics, as opposed to soil moisture and temperature, vegetative growth, and atmospheric and hydrologic conditions, change very slowly, depending on the parameter being measured, ranging from seasonally to many thousands of years. Thus, soil information, including profile description and lab analysis, is collected only one time for each profile at a site. These data serve two purposes: 1) to supplement existing spatial information about soil profile characteristics across the landscape at local, regional, and global scales, and 2) to provide specific information within a given area about the basic substrate to which elements within the other protocols are linked. Because of the intimate link between soil properties and these other environmental elements, the static soil properties at a given site are needed to accurately interpret and understand the continually changing dynamics of soil moisture and temperature, vegetation growth and phenology, atmospheric conditions, and chemistry and turbidity in surface waters. Both the spatial and specific soil information can be used for modeling purposes to assess and make predictions about global change.
The Theoretical Highest Frame Rate of Silicon Image Sensors
Etoh, Takeharu Goji; Nguyen, Anh Quang; Kamakura, Yoshinari; Shimonomura, Kazuhiro; Le, Thi Yen; Mori, Nobuya
2017-01-01
The frame rate of the digital high-speed video camera was 2000 frames per second (fps) in 1989 and has been increasing exponentially. A simulation study showed that a silicon image sensor made with a 130 nm process technology can achieve about 10^10 fps, so the frame rate seems to be approaching its upper bound. Rayleigh proposed an expression for the theoretical spatial resolution limit when the resolution of lenses approached that limit. In this paper, the temporal resolution limit of silicon image sensors is theoretically analyzed. It is revealed that the limit is mainly governed by the mixing of charges with different travel times caused by the distribution of the penetration depth of light. The derived expression for the limit is extremely simple, yet accurate. For example, the limit for green light of 550 nm incident on silicon image sensors at 300 K is 11.1 picoseconds. Therefore, the theoretical highest frame rate is 90.1 Gfps (about 10^11 fps). PMID:28264527
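The two quoted numbers are consistent with the frame rate being the reciprocal of the temporal resolution limit:

```python
# Temporal resolution limit quoted for 550 nm light in silicon at 300 K.
t_limit = 11.1e-12              # seconds
frame_rate = 1.0 / t_limit      # highest theoretical frame rate, fps
print(f"{frame_rate / 1e9:.1f} Gfps")   # ~90.1 Gfps
```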
Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.
Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J
2016-11-01
Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features.
Continuum description for jointed media
Thomas, R.K.
1982-04-01
A general three-dimensional continuum description is presented for a material containing regularly spaced and approximately parallel jointing planes within a representative elementary volume. Constitutive relationships are introduced for linear behavior of the base material and nonlinear normal and shear behavior across jointing planes. Furthermore, a fracture permeability tensor is calculated so that deformation induced alterations to the in-situ values can be measured. Examples for several strain-controlled loading paths are presented.
Groundwater Protection Management Program Description.
Paquette, D. E.; Bennett, D. B.; Dorsch, W. R.; Goode, G. A.; Lee, R. J.; Klaus, K.; Howe, R. F.; Geiger, K.
2002-05-31
The Department of Energy Order 5400.1, General Environmental Protection Program, requires the development and implementation of a groundwater protection program. The BNL Groundwater Protection Management Program Description provides an overview of how the Laboratory ensures that plans for groundwater protection, monitoring, and restoration are fully defined, integrated, and managed in a cost-effective manner that is consistent with federal, state, and local regulations.
Spacelab Mission 3 experiment descriptions
NASA Technical Reports Server (NTRS)
Hill, C. K. (Editor)
1982-01-01
The Spacelab 3 mission is the first operational flight of Spacelab aboard the shuttle transportation system. The primary objectives of this mission are to conduct application, science, and technology experimentation that requires the low gravity environment of Earth orbit and an extended duration, stable vehicle attitude with emphasis on materials processing. This document provides descriptions of the experiments to be performed during the Spacelab 3 mission.
A Multiscale Morphing Continuum Description for Turbulence
NASA Astrophysics Data System (ADS)
Chen, James; Wonnell, Louis
2015-11-01
Turbulence is a flow-physics phenomenon involving multiple length scales. The popular Navier-Stokes equations possess only one length/time scale; therefore, an extremely fine mesh is needed for DNS attempting to resolve the small-scale motion, which carries the burden of excessive computational cost. For practical applications with complex geometries, the research community relies on RANS and LES, which require a turbulence model or subgrid-scale (SGS) model for the closure problem. Different models not only lead to different results but are often invalid on solid physical grounds, such as objectivity and the entropy principle. The Morphing Continuum Theory (MCT) is a high-order continuum theory formulated in the framework of thermomechanics for physical phenomena involving microstructure. In this study, the multiscale nature of the Morphing Continuum Theory is connected with the multiscale nature of turbulence physics from a theoretical perspective. The kinematics, balance laws, constitutive equations, and a morphing-continuum description of turbulence are introduced. The equations were numerically implemented for a zero-pressure-gradient flat plate, and the simulations are compared with laminar, transitional, and turbulent cases.
A unifying description of dark energy
NASA Astrophysics Data System (ADS)
Gleyzes, Jérôme; Langlois, David; Vernizzi, Filippo
2014-01-01
We review and extend a novel approach that we recently introduced, to describe general dark energy or scalar-tensor models. Our approach relies on an Arnowitt-Deser-Misner (ADM) formulation based on the hypersurfaces where the underlying scalar field is uniform. The advantage of this approach is that it can describe in the same language and in a minimal way a vast number of existing models, such as quintessence, F(R) theories, scalar tensor theories, their Horndeski extensions and beyond. It also naturally includes Horava-Lifshitz theories. As summarized in this review, our approach provides a unified treatment of the linear cosmological perturbations about a Friedmann-Lemaître-Robertson-Walker (FLRW) universe, obtained by a systematic expansion of our general action up to quadratic order. This shows that the behavior of these linear perturbations is generically characterized by five time-dependent functions. We derive the full equations of motion in the Newtonian gauge. In the Horndeski case, we obtain the equation of state for dark energy perturbations in terms of these functions. Our unifying description thus provides the simplest and most systematic way to confront theoretical models with current and future cosmological observations.
NASA Technical Reports Server (NTRS)
Oliver, B. M.; Gower, J. F. R.
1977-01-01
A data acquisition system using a Litton LTN-51 inertial navigation unit (INU) was tested and used for aircraft track recovery and for location and tracking from the air of targets at sea. The characteristic position drift of the INU is compensated for by sighting landmarks of accurately known position at discrete time intervals using a visual sighting system in the transparent nose of the Beechcraft 18 aircraft used. For an aircraft altitude of about 300 m, theoretical and experimental tests indicate that calculated aircraft and/or target positions obtained from the interpolated INU drift curve will be accurate to within 10 m for landmarks spaced approximately every 15 minutes in time. For applications in coastal oceanography, such as surface current mapping by tracking artificial targets, the system allows a broad area to be covered without use of high altitude photography and its attendant needs for large targets and clear weather.
Accurate microfour-point probe sheet resistance measurements on small samples.
Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch H; Hansen, Torben Mikael; Kjaer, Daniel; Lin, Rong; Kim, Jang-Yong; Nielsen, Peter F; Hansen, Ole
2009-05-01
We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the "sweet spot," where sufficiently accurate sheet resistances result and show that even for very small samples it is feasible to do correction free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 microm (50 microm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 microm pitch microfour-point probe and assuming a probe alignment accuracy of +/-2.5 microm.
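The dual-configuration idea is closely related to the classic van der Pauw relation, exp(−πR_A/R_s) + exp(−πR_B/R_s) = 1, which combines two measured configuration resistances into a single sheet resistance. As an illustrative sketch (the van der Pauw relation for peripheral contacts, not the paper's correction-free microfour-point procedure), the relation can be solved numerically by bisection:

```python
import math

def sheet_resistance(Ra, Rb, tol=1e-12):
    """Solve exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for Rs by bisection.
    The left-hand side increases monotonically with Rs, from -1 to +1
    relative to the target, so bisection is guaranteed to converge."""
    f = lambda Rs: (math.exp(-math.pi * Ra / Rs)
                    + math.exp(-math.pi * Rb / Rs) - 1.0)
    lo, hi = 1e-6, 1e9
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid        # Rs too large
        else:
            lo = mid        # Rs too small
    return 0.5 * (lo + hi)

# Hypothetical measured configuration resistances, in ohms.
Ra = Rb = 0.693
print(f"Rs = {sheet_resistance(Ra, Rb):.3f} ohm/sq")
```

In the symmetric case R_A = R_B = R the relation reduces to R_s = πR/ln 2, a convenient check of the solver.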
Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology
NASA Astrophysics Data System (ADS)
Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan
2016-05-01
This paper concerns the problem of platform-vibration-induced band-to-band misregistration in acousto-optic imaging spectrometers in spaceborne applications. Registering images of different bands formed at different times or different positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion-detection method is presented that uses the polychromatic undiffracted beam of the AOTF. The factors affecting motion-detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectrum-restoration experiment using an additional motion-detection channel is presented for the first time, which demonstrates the accurate spectral-image registration capability of this technique.
Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter
Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.
2016-01-01
Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887
ERIC Educational Resources Information Center
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
Theoretical Foundations of Software Technology.
1983-02-14
individual concept [22], the clause following the noun group can not be completing the description. By "individual concept" Carnap means those... "Conceptual Analysis of Noun Group in English", IJCAI-77. [21] Levi, J. N., The Syntax and Semantics of Complex Nominals, Academic Press, 1978. [22] Carnap, R.
Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao
2015-10-20
Rapidly obtaining a land vehicle's accurate position, azimuth, and attitude is significant for the combat effectiveness of vehicle-based weapons. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth, and attitude from the rough position and orientation provided by vehicle-based navigation systems and from no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on a Global Navigation Satellite System (GNSS); it is therefore autonomous and difficult to interfere with. Meanwhile, it needs only a rough position and orientation as the algorithm's iterative initial value, so it does not place high performance requirements on the Inertial Navigation System (INS), odometer, or other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents a theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min.
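The position part of such a scheme can be sketched as a nonlinear least-squares resection: given surveyed target positions and measured azimuth/elevation angles, iterate from the rough initial position. The sketch below is a simplified illustration with hypothetical coordinates, the platform attitude assumed known and level, and noise-free angles; it is not the paper's full position-plus-attitude algorithm.

```python
import numpy as np

# Known cooperative-target positions (hypothetical survey coords, metres).
targets = np.array([[100.0, 0.0, 10.0],
                    [0.0, 120.0, 15.0],
                    [-80.0, -60.0, 5.0]])

def angles(p):
    """Azimuth and elevation from position p to each target (world frame)."""
    d = targets - p
    az = np.arctan2(d[:, 1], d[:, 0])
    el = np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1]))
    return np.concatenate([az, el])

true_p = np.array([3.0, -4.0, 1.5])
meas = angles(true_p)                 # noise-free measurements for the sketch

p = np.zeros(3)                       # rough initial position (e.g. from INS)
for _ in range(20):                   # Gauss-Newton with a numeric Jacobian
    r = angles(p) - meas
    J = np.empty((6, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = 1e-6
        J[:, j] = (angles(p + dp) - angles(p - dp)) / 2e-6
    p -= np.linalg.solve(J.T @ J, J.T @ r)

print(np.round(p, 3))   # should recover true_p = [3, -4, 1.5]
```

With three targets there are six angle equations for three position unknowns, which is why the abstract's requirement of "no fewer than three" targets leaves the full position-and-orientation problem solvable.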
Accurate Evaluation of Microwave-Leakage-Induced Frequency Shifts in Fountain Clocks
NASA Astrophysics Data System (ADS)
Fang, Fang; Liu, Kun; Chen, Wei-Liang; Liu, Nian-Feng; Suo, Rui; Li, Tian-Chun
2014-10-01
We report theoretical calculations of the transition probability errors introduced by microwave leakage in Cs fountain clocks, which shift the clock frequency. The results show that the transition probability errors are affected by the Ramsey pulse amplitude, the relative phase between the Ramsey field and the leakage field, and the asymmetry of the leakage fields for the upward and downward passages. This effect is quite different for leakage fields present below and above the Ramsey cavity. The leakage-field-induced frequency shifts of the NIM5 fountain clock in different cases are measured. The results are consistent with the theoretical calculations and give an accurate evaluation of the leakage-field-induced frequency shifts, distinguished from other microwave-power-related effects for the first time.
Pilot-scale trommel: experimental test descriptions and data
Bolczak, R.
1981-09-01
A pilot-scale trommel test at a laboratory in Upper Marlboro, Maryland, was initiated to support theoretical work on the development of a performance model and to supplement data collected in full-scale testing at Recovery 1 in New Orleans. Descriptions and summaries of the project through July 1981 are presented. The feedstocks were identical near-sized flakes and wooden blocks. Three groupings of results are provided. The first group, Feedstock Tests, contains data on feedstock properties; it includes descriptions of the feedstocks and results of tests on the probability of passage, the dynamic angle of repose, and the coefficient of friction for the test flakes. The second group, Residence Time and Impingement Tests, contains data on the movement of flakes and blocks through the trommel. The last group, Mass Split, Screening Efficiency, and Undersize Distribution, contains data on flake and block mass splits to the undersize and oversize products and on the axial and sectorial distribution in the undersize. (MCW)
Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey
2011-01-01
We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate and can easily be deployed on computational grids, providing an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450
Accurate formula for conversion of tunneling current in dynamic atomic force spectroscopy
NASA Astrophysics Data System (ADS)
Sader, John E.; Sugimoto, Yoshiaki
2010-07-01
Recent developments in frequency modulation atomic force microscopy enable simultaneous measurement of frequency shift and time-averaged tunneling current. Determination of the interaction force is facilitated using an analytical formula, valid for arbitrary oscillation amplitudes [Sader and Jarvis, Appl. Phys. Lett. 84, 1801 (2004)]. Here we present the complementary formula for evaluation of the instantaneous tunneling current from the time-averaged tunneling current. This simple and accurate formula is valid for any oscillation amplitude and current law. The resulting theoretical framework allows for simultaneous measurement of the instantaneous tunneling current and interaction force in dynamic atomic force microscopy.
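The quantity being inverted here can be illustrated numerically: for a tip oscillating as z(t) = z_c + A cos(ωt), the time-averaged tunneling current is simply the cycle average of the instantaneous current I(z(t)). A minimal sketch for the common exponential current law (the decay constant and amplitude below are illustrative values, not parameters from the paper, and this forward average is not the paper's inversion formula):

```python
import math

def averaged_current(i0, kappa, z_c, amplitude, n=2000):
    """Cycle-average an exponential tunneling current I(z) = i0*exp(-2*kappa*z)
    over one oscillation z(t) = z_c + amplitude*cos(phi)."""
    total = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        z = z_c + amplitude * math.cos(phi)
        total += i0 * math.exp(-2.0 * kappa * z)
    return total / n

# The average exceeds the current at the mean tip height because the
# exponential weights the closest-approach part of the cycle most heavily.
i_avg = averaged_current(i0=1.0, kappa=1.0, z_c=5.0, amplitude=1.0)
i_mid = 1.0 * math.exp(-2.0 * 5.0)
print(i_avg > i_mid)  # True
```

The formula presented in the paper solves the inverse problem: recovering the instantaneous current law from measured averages like this one, for any amplitude.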
Mathematical challenges from theoretical/computational chemistry
1995-12-31
The committee believes that this report has relevance and potentially valuable suggestions for a wide range of readers. Target audiences include: graduate departments in the mathematical and chemical sciences; federal and private agencies that fund research in the mathematical and chemical sciences; selected industrial and government research and development laboratories; developers of software and hardware for computational chemistry; and selected individual researchers. Chapter 2 of this report covers some history of computational chemistry for the nonspecialist, while Chapter 3 illustrates the fruits of some past successful cross-fertilization between mathematical scientists and computational/theoretical chemists. In Chapter 4 the committee has assembled a representative, but not exhaustive, survey of research opportunities. Most of these are descriptions of important open problems in computational/theoretical chemistry that could gain much from the efforts of innovative mathematical scientists, written so as to be accessible introductions to the nonspecialist. Chapter 5 is an assessment, necessarily subjective, of cultural differences that must be overcome if collaborative work is to be encouraged between the mathematical and the chemical communities. Finally, the report ends with a brief list of conclusions and recommendations that, if followed, could promote accelerated progress at this interface. Recognizing that bothersome language issues can inhibit prospects for collaborative research at the interface between distinctive disciplines, the committee has attempted throughout to maintain an accessible style, in part by using illustrative boxes, and has included at the end of the report a glossary of technical terms that may be familiar to only a subset of the target audiences listed above.
Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network
NASA Astrophysics Data System (ADS)
Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang
Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made it possible to understand neurological disorders at the whole-brain connectivity level. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.
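The per-ROI feature extraction described above can be sketched in a few lines: for each connectivity network, every ROI contributes the average of its connections to all remaining ROIs. A minimal illustration with a toy 3-ROI matrix (the 90-ROI parcellation, the six networks, feature selection, and SVM training are omitted):

```python
def roi_features(conn):
    """Average connectivity of each ROI (row) to all other ROIs,
    excluding the self-connection on the diagonal."""
    n = len(conn)
    feats = []
    for i, row in enumerate(conn):
        feats.append((sum(row) - row[i]) / (n - 1))
    return feats

# Toy symmetric connectivity matrix; in the study this is repeated for
# 90 ROIs and six networks (penetration count, FA, MD, λ1, λ2, λ3).
toy = [[0.0, 0.8, 0.2],
       [0.8, 0.0, 0.4],
       [0.2, 0.4, 0.0]]
print(roi_features(toy))  # approximately [0.5, 0.6, 0.3]
```

Concatenating these per-network feature vectors gives one feature vector per subject, which is what the feature-selection and SVM stages consume.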
Meneghetti, Chiara; Muffato, Veronica; Varotto, Diego; De Beni, Rossana
2017-03-01
Previous studies have found that mental representations of route descriptions are north-up oriented when the egocentric experience (given by the protagonist's initial view) is congruent with the global reference system. This study examines: (a) the development and maintenance of representations derived from descriptions when the egocentric and global reference systems are congruent or incongruent; and (b) how spatial abilities modulate these representations. Sixty participants (in two groups of 30) heard route descriptions of a protagonist's moves, starting from the bottom of a layout and heading mainly northwards (SN description) in one group, and heading south from the top (NS description, the egocentric view facing in the opposite direction to canonical north) in the other. Description recall was tested with map drawing (after hearing the description a first and second time, i.e. Time 1 and Time 2) and South-North (SN) or North-South (NS) pointing tasks; objective spatial tasks were also administered. The results showed that: (a) the drawings were more rotated for NS than for SN descriptions, and were drawn better at Time 2 than at Time 1 for both types of description; SN pointing was more accurate than NS pointing for the SN description, while SN and NS pointing accuracy did not differ for the NS description; (b) spatial (rotation) abilities were related to recall accuracy for both types of description, but more so for the NS ones. Overall, our results showed that the way in which spatial information is conveyed (with/without congruence between the egocentric and global reference systems) and spatial abilities influence the development and maintenance of mental representations.
Effects of a Training Package to Improve the Accuracy of Descriptive Analysis Data Recording
ERIC Educational Resources Information Center
Mayer, Kimberly L.; DiGennaro Reed, Florence D.
2013-01-01
Functional behavior assessment is an important precursor to developing interventions to address a problem behavior. Descriptive analysis, a type of functional behavior assessment, is effective in informing intervention design only if the gathered data accurately capture relevant events and behaviors. We investigated a training procedure to improve…
Technology Transfer Automated Retrieval System (TEKTRAN)
The utility of Ecological Site Descriptions (ESDs) and State-and-Transition Models (STMs) concepts in guiding rangeland management hinges on their ability to accurately describe and predict community dynamics and the associated consequences. For many rangeland ecosystems, plant community dynamics ar...
Theoretical design of lightning panel
NASA Astrophysics Data System (ADS)
Emetere, M. E.; Olawole, O. F.; Sanni, S. E.
2016-02-01
A light-trapping device (LTD) was theoretically designed to suggest the best way of harvesting the energy derived from natural lightning. Maxwell's equations were expanded via a virtual experiment in a MATLAB environment. Several parameters, such as lightning flash and temperature distribution, were considered to investigate the ability of the theoretical lightning panel to convert electricity efficiently. The results for the lightning strike angle on the surface of the LTD show the maximum power expected per unit time. The results for the microscopic thermal distribution show that if the LTD casing controls the transmission of heat energy, then thermal energy storage (TES) can be introduced to the lightning farm.
Theoretical Biology: Organisms and Mechanisms
NASA Astrophysics Data System (ADS)
Landauer, Christopher; Bellman, Kirstie L.
2002-09-01
The Theoretical Biology Program initiated by Robert Rosen is intended to identify the key theoretical characteristics of organisms, especially those that distinguish organisms from mechanisms, by looking for the proper abstractions and defining the appropriate relationships. There are strong claims about these distinctions in Rosen's book "Life Itself", along with some purported proofs of the assertions. Unfortunately, the mathematics is incorrect, and the assertions remain unproven (some of them are simply false). In this paper, we present the ideas of Rosen's approach, demonstrate that his mathematical formulations and proofs are wrong, and then show how they might be made more successful.
A predictable and accurate technique with elastomeric impression materials.
Barghi, N; Ontiveros, J C
1999-08-01
A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.
Tube dimpling tool assures accurate dip-brazed joints
NASA Technical Reports Server (NTRS)
Beuyukian, C. S.; Heisman, R. M.
1968-01-01
Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.
Orbiter active thermal control system description
NASA Technical Reports Server (NTRS)
Laubach, G. E.
1975-01-01
A brief description of the Orbiter Active Thermal Control System (ATCS) including (1) major functional requirements of heat load, temperature control and heat sink utilization, (2) the overall system arrangement, and (3) detailed description of the elements of the ATCS.
Standardizing the microsystems technology description
NASA Astrophysics Data System (ADS)
Liateni, Karim; Thomas, Gabriel; Hui Bon Hoa, Christophe; Bensaude, David
2002-04-01
The microsystems industry promises rapid and widespread growth in the coming years. The automotive, network, telecom and electronics industries take advantage of this technology by including it in their products, thereby achieving better integration and high energetic performance. Microsystems-related software and data exchange have inherited IC-technology experience and standards, which appear not to fit the advanced level of conception currently needed by microsystems designers. A typical design flow to validate a microsystem device involves several software tools from disconnected areas such as layout editors, FEM simulators, and HDL modeling and simulation tools. However, a fabricated microsystem is obtained through execution of a layered process, and process characteristics are used at each level of design and analysis. Basically, the designer has to customize each of his tools to the process. The project introduced here intends to unify the process description language and speed up the critical and tedious CAD customization task. We gather all the information related to the technology of a microsystem process in a single file. It is based on the standard XML format so as to receive worldwide attention. This format is called XML-MTD, standing for XML Microsystems Technology Description. Built around XML, it is an ASCII format which gives the ability to handle a comprehensive database of technology data. The format is open, released under a general public license, but the aim is to manage it within an XML-MTD consortium of leading, well-established EDA companies and foundries; in this way, it will benefit from their experience. For automated configuration of design and analysis tools with regard to process-dependent information, we ship the Technology Manager software. Technology Manager links foundries with a large panel of standard EDA and FEA packages used by design teams relying on the Microsystems Technology Description in XML-MTD format.
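A layered-process description of this kind is straightforward to consume from tools. The sketch below parses a hypothetical technology file with Python's standard library; the element and attribute names are invented for illustration and do not reflect the actual XML-MTD schema, which is defined by the consortium:

```python
import xml.etree.ElementTree as ET

# Hypothetical layered-process description in the spirit of XML-MTD;
# tag and attribute names are illustrative assumptions only.
doc = """<technology name="demo-mems">
  <layer name="substrate" material="silicon" thickness_um="500"/>
  <layer name="oxide" material="SiO2" thickness_um="2"/>
  <layer name="structural" material="polysilicon" thickness_um="10"/>
</technology>"""

root = ET.fromstring(doc)
# Extract the layer stack that a downstream EDA/FEA tool would need.
stack = [(l.get("name"), float(l.get("thickness_um"))) for l in root.iter("layer")]
total = sum(t for _, t in stack)
print(total)  # 512.0
```

A configuration tool like the Technology Manager described above would read such a file once and push the process-dependent values into each design and analysis package.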
SNF AGING SYSTEM DESCRIPTION DOCUMENT
L.L. Swanson
2005-04-06
The purpose of this system description document (SDD) is to establish requirements that drive the design of the spent nuclear fuel (SNF) aging system and associated bases, which will allow the design effort to proceed. This SDD will be revised at strategic points as the design matures. This SDD identifies the requirements and describes the system design as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This SDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This SDD is part of an iterative design process. It leads the design process with regard to the flow down of upper-tier requirements onto the system. Knowledge of these requirements is essential in performing the design process. The SDD follows the design with regard to the description of the system. The description provided in the SDD reflects the current results of the design process. Throughout this SDD, the term aging cask applies to vertical site-specific casks and to horizontal aging modules. The term overpack refers to a vertical site-specific cask that contains a dual-purpose canister (DPC) or a disposable canister. Functional and operational requirements applicable to this system were obtained from ''Project Functional and Operational Requirements'' (F&OR) (Curry 2004 [DIRS 170557]). Other requirements that support the design process were taken from documents such as ''Project Design Criteria Document'' (PDC) (BSC 2004 [DIRS 171599]), ''Site Fire Hazards Analyses'' (BSC 2005 [DIRS 172174]), and ''Nuclear Safety Design Bases for License Application'' (BSC 2005 [DIRS 171512]). These documents address requirements in the ''Project Requirements Document'' (PRD) (Canori and Leitner 2003 [DIRS 166275]). This SDD includes several appendices. Appendix A is a Glossary; Appendix B is a list of key system charts, diagrams, drawings, lists and additional supporting information; and Appendix C is a list of
HADL: HUMS Architectural Description Language
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.
2004-01-01
Specification of architectures is an important prerequisite for evaluation of architectures. With the increase in the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been on the increase. In this paper, we describe HADL, the HUMS Architectural Description Language, which we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details of our design methodology for HUMS architectures.
NASA Technical Reports Server (NTRS)
Jennings, J.
1977-01-01
The IUE/IRA rate sensor system designed to meet the requirements of the International Ultraviolet Explorer spacecraft mission is described. The system consists of the sensor unit, containing six rate sensor modules, and the electronic control unit, containing the rate sensor support electronics and the command/control circuitry. The inertial reference assembly formed by the combined units will provide spacecraft rate information for use in the stabilization and control system. The system is described in terms of functional description, operation, redundancy, performance, mechanical interface, and electrical interface. Test data obtained from the flight unit are summarized.
Descriptive Model of Generic WAMS
Hauer, John F.; DeSteese, John G.
2007-06-01
The Department of Energy’s (DOE) Transmission Reliability Program is supporting the research, deployment, and demonstration of various wide area measurement system (WAMS) technologies to enhance the reliability of the Nation’s electrical power grid. Pacific Northwest National Laboratory (PNNL) was tasked by the DOE National SCADA Test Bed Program to conduct a study of WAMS security. This report represents achievement of the milestone to develop a generic WAMS model description that will provide a basis for the security analysis planned in the next phase of this study.
The MUNU experiment, general description
NASA Astrophysics Data System (ADS)
Amsler, C.; Avenier, M.; Bagieu, G.; Barnoux, C.; Becker, H.-W.; Brissot, R.; Broggini, C.; Busto, J.; Cavaignac, J.-F.; Farine, J.; Filippi, D.; Gervasio, G.; Giarritta, P.; Grgić, G.; Guerre Chaley, B.; Joergens, V.; Koang, D. H.; Lebrun, D.; Luescher, R.; Mattioli, F.; Negrello, M.; Ould-Saada, F.; Paić, A.; Piovan, O.; Puglierin, G.; Schenker, D.; Stutz, A.; Tadsen, A.; Treichel, M.; Vuilleumier, J.-L.; Vuilleumier, J.-M.; MUNU Collaboration
1997-02-01
We are building a low-background detector based on a gas time projection chamber surrounded by an active anti-Compton shielding. The detector will be installed near a nuclear reactor at Bugey for the experimental study of ν̄e e⁻ scattering. We give here a general description of the experiment, and an estimate of the expected counting rate and background. The construction of the time projection chamber is described in detail. Results of first test measurements concerning the attenuation length and the spatial as well as energy resolution in the CF4 fill gas are reported.
Descriptive analyses of caregiver reprimands.
Sloman, Kimberly N; Vollmer, Timothy R; Cotnoir, Nicole M; Borrero, Carrie S W; Borrero, John C; Samaha, Andrew L; St Peter, Claire C
2005-01-01
We conducted descriptive observations of 5 individuals with developmental disabilities and severe problem behavior while they interacted with their caregivers in either simulated environments (an inpatient hospital facility) or in their homes. The focus of the study was on caregiver reprimands and child problem behavior. Thus, we compared the frequency of problem behavior that immediately preceded a caregiver reprimand to that immediately following a caregiver reprimand, and the results showed that the frequency of problem behavior decreased following a reprimand. It is possible that caregiver reprimands are negatively reinforced by the momentary attenuation of problem behavior, and the implications for long- and short-term effects on caregiver behavior are discussed.
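The before/after comparison in such a descriptive analysis amounts to counting problem-behavior events in fixed windows around each reprimand. A minimal sketch with entirely hypothetical event timestamps and window length (the study's actual coding scheme is not reproduced here):

```python
def pre_post_counts(behavior_times, reprimand_times, window=10.0):
    """Count problem-behavior events in the `window` seconds immediately
    before and immediately after each reprimand (times in seconds)."""
    pre = post = 0
    for r in reprimand_times:
        pre += sum(1 for t in behavior_times if r - window <= t < r)
        post += sum(1 for t in behavior_times if r < t <= r + window)
    return pre, post

# Hypothetical session data: behavior clusters before each reprimand and
# thins out afterwards, the pattern reported in the study.
behavior = [1, 3, 5, 8, 14, 31, 34, 37, 52]
reprimands = [9, 39]
print(pre_post_counts(behavior, reprimands))  # (7, 1)
```

A drop from the pre-count to the post-count is the momentary attenuation of problem behavior that, the authors suggest, may negatively reinforce caregiver reprimands.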
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, a laminar flat plate, a laminar cylinder, and a turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, a wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes.
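Nonequilibrium work methods of this kind build on Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩, which relates an equilibrium free energy difference to an exponential average over repeated nonequilibrium work measurements. A minimal estimator sketch (the work values are synthetic, and all of the QM/MM machinery that produces them is of course omitted):

```python
import math

def jarzynski_free_energy(works, kT=1.0):
    """Estimate ΔF from nonequilibrium work samples W via
    ΔF = -kT * ln( mean( exp(-W/kT) ) ).
    The minimum work is subtracted first for numerical stability."""
    w_min = min(works)
    mean_exp = sum(math.exp(-(w - w_min) / kT) for w in works) / len(works)
    return w_min - kT * math.log(mean_exp)

# Synthetic work values; the exponential average weights low-work
# trajectories most heavily, so by Jensen's inequality the estimate
# never exceeds the mean work.
works = [2.1, 1.8, 2.5, 1.2, 3.0]
dF = jarzynski_free_energy(works)
print(dF <= sum(works) / len(works))  # True
```

In practice the hard part, as the chapter emphasizes, is not the estimator but generating enough well-sampled work values at an affordable level of QM theory.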
Approaching nanoscale oxides: models and theoretical methods.
Bromley, Stefan T; Moreira, Ibério de P R; Neyman, Konstantin M; Illas, Francesc
2009-09-01
This tutorial review deals with the rapidly developing area of modelling oxide materials at the nanoscale. Top-down and bottom-up modelling approaches and currently used theoretical methods are discussed with the help of a selection of case studies. We show that the critical oxide nanoparticle size required to move beyond the scale where every atom counts to where structural and chemical properties are essentially bulk-like (the scalable regime) strongly depends on the structural and chemical parameters of the material under consideration. This oxide-dependent behaviour with respect to size has fundamental implications for modelling. Strongly ionic materials such as MgO and CeO2, for example, start to exhibit scalable-to-bulk crystallite-like characteristics for nanoparticles consisting of about 100 ions. For such systems there exists an overlap in nanoparticle size where both top-down and bottom-up theoretical techniques can be applied, and the main problem is the choice of the most suitable computational method. However, for more covalent systems such as TiO2 or SiO2 the onset of the scalable regime is still unclear, and for intermediate-sized nanoparticles there exists a gap where neither bottom-up nor top-down modelling is fully adequate. In such difficult cases new efforts to design adequate models are required. Further exacerbating these fundamental methodological concerns are oxide nanosystems exhibiting complex electronic and magnetic behaviour. Owing to the need for a simultaneous accurate treatment of the atomistic, electronic and spin degrees of freedom in such systems, the top-down vs. bottom-up separation is still large, and only a few studies currently exist.
Theoretical Particle Physics Research Program
Paz, Gil
2015-06-23
This is the final technical report for DOE grant DE-FG02-13ER41997. It contains a brief description of accomplishments: research projects that were completed during the period of the grant, research projects that were started during the period of the grant, and service to the scientific community. It also lists the publications in the funded period, travel related to the grant, and information about the personnel supported by the grant.
Staged description of the Finkelstein test.
Dawson, Courtney; Mudgal, Chaitanya S
2010-09-01
We have revisited the original description of the Finkelstein test and review the reasons for its subsequent erroneous description. We have also outlined a staged description of this test, which we have found to be reliable and minimally painful for the diagnosis of de Quervain's tendonitis within our clinical practice.
Pathways to Provenance: "DACS" and Creator Descriptions
ERIC Educational Resources Information Center
Weimer, Larry
2007-01-01
"Describing Archives: A Content Standard" breaks important ground for American archivists in its distinction between creator descriptions and archival material descriptions. Implementations of creator descriptions, many using Encoded Archival Context (EAC), are found internationally. "DACS"'s optional approach of describing…
UVBLUE: A New High-Resolution Theoretical Library of Ultraviolet Stellar Spectra
NASA Astrophysics Data System (ADS)
Rodríguez-Merino, L. H.; Chavez, M.; Bertone, E.; Buzzoni, A.
2005-06-01
We present an extended ultraviolet-blue (850-4700 Å) library of theoretical stellar spectral energy distributions computed at high resolution, λ/Δλ=50,000. The UVBLUE grid, as we named the library, is based on LTE calculations carried out with the ATLAS9 and SYNTHE codes developed by R. L. Kurucz and consists of nearly 1800 entries that cover a large volume of the parameter space. It spans a range in Teff from 3000 to 50,000 K, the surface gravity ranges from logg=0.0 to 5.0 with Δlogg=0.5 dex, and seven chemical compositions are considered: [M/H]=-2.0, -1.5, -1.0, -0.5, +0.0, +0.3, and +0.5 dex. For its coverage across the Hertzsprung-Russell diagram, this library is the most comprehensive one ever computed at high resolution in the short-wavelength spectral range, and useful applications can be foreseen both in the study of single stars and in population synthesis models of galaxies and other stellar systems. We briefly discuss some relevant issues for a safe application of the theoretical output to ultraviolet observations, and a comparison of our LTE models with the non-LTE (NLTE) ones from the TLUSTY code is also carried out. NLTE spectra are found, on average, to be slightly "redder" compared to the LTE ones for the same value of Teff, while a larger difference could be detected for weak lines, which are nearly wiped out by the enhanced core emission component in the case of NLTE atmospheres. These effects seem to be magnified at low metallicity (typically [M/H]<~-1). A match with a working sample of 111 stars from the IUE atlas, with available atmosphere parameters from the literature, shows that UVBLUE models provide an accurate description of the main mid- and low-resolution spectral features for stars along the whole sequence from the B to ~G5 type. The comparison degrades noticeably for later spectral types, with supergiant stars that are in general more poorly reproduced than dwarfs. As a possible explanation of this overall trend, we partly invoke the
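Using a grid library like this typically means snapping a star's atmospheric parameters to the nearest grid node before interpolation. The logg and [M/H] values below are those stated for UVBLUE; the Teff sampling is a coarse illustrative placeholder (the abstract does not state the actual Teff step, and the real grid is denser):

```python
def nearest(value, grid):
    """Return the grid value closest to the requested one."""
    return min(grid, key=lambda g: abs(g - value))

# Stated UVBLUE coverage: logg = 0.0 ... 5.0 in 0.5 dex steps, and the
# seven listed metallicities. The Teff list is an assumption for this demo.
logg_grid = [0.5 * i for i in range(11)]
mh_grid = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.3, 0.5]
teff_grid = list(range(3000, 50001, 1000))  # illustrative only

# Snap solar-like parameters (Teff ≈ 5777 K, logg ≈ 4.44, [M/H] ≈ 0.1)
params = (nearest(5777, teff_grid), nearest(4.44, logg_grid), nearest(0.1, mh_grid))
print(params)  # (6000, 4.5, 0.0)
```

Real applications would interpolate between neighboring nodes rather than take the single nearest spectrum, but the lookup logic is the same.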
Space Service Market (Theoretical Aspect)
NASA Astrophysics Data System (ADS)
Prisniakov, V. F.; Prisniakova, L. M.
The authors propose a mathematical model of demand and supply in a market economy, and in the market of space services in particular. A theoretical demand formula is compared with a real demand curve. The market equilibrium price is defined. The dynamics of the space market are studied. The calculations are carried out for parameters close to those of the market of space services.
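The equilibrium price mentioned above is the price at which demand equals supply. As a generic illustration only (linear demand and supply with invented coefficients, not the authors' formula), the computation is:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand Qd = a - b*P and supply Qs = c + d*P:
    setting Qd = Qs gives P* = (a - c) / (b + d)."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Illustrative coefficients only; the paper fits its own demand formula
# to a real demand curve, which is not reproduced here.
p_star, q_star = equilibrium(a=100.0, b=2.0, c=10.0, d=1.0)
print(p_star, q_star)  # 30.0 40.0
```

Market dynamics can then be studied by letting the coefficients, and hence the equilibrium point, move over time.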
Theoretical Perspectives for Developmental Education.
ERIC Educational Resources Information Center
Lundell, Dana Britt, Ed.; Higbee, Jeanne L., Ed.
This monograph from the University of Minnesota General College (GC) discusses theoretical perspectives on developmental education from both new and established standpoints. GC voluntarily eliminated its degree programs in order to focus on preparing under-prepared students for transfer to the university system. GC's curricular model includes a…
Theoretical Foundations of Learning Environments
ERIC Educational Resources Information Center
Jonassen, David H., Ed.; Land, Susan M., Ed.
1999-01-01
"Theoretical Foundations of Learning Environments" describes the most contemporary psychological and pedagogical theories that are foundations for the conception and design of open-ended learning environments and new applications of educational technologies. In the past decade, the cognitive revolution of the 60s and 70s has been…
Lightning Talks 2015: Theoretical Division
Shlachter, Jack S.
2015-11-25
This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.
Asking Research Questions: Theoretical Presuppositions
ERIC Educational Resources Information Center
Tenenberg, Josh
2014-01-01
Asking significant research questions is a crucial aspect of building a research foundation in computer science (CS) education. In this article, I argue that the questions that we ask are shaped by internalized theoretical presuppositions about how the social and behavioral worlds operate. And although such presuppositions are essential in making…
Data, Methods, and Theoretical Implications
ERIC Educational Resources Information Center
Hannagan, Rebecca J.; Schneider, Monica C.; Greenlee, Jill S.
2012-01-01
Within the subfields of political psychology and the study of gender, the introduction of new data collection efforts, methodologies, and theoretical approaches are transforming our understandings of these two fields and the places at which they intersect. In this article we present an overview of the research that was presented at a National…
Model Experiments and Model Descriptions
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian
1999-01-01
The Second Stratospheric Models and Measurements Workshop (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided in two: model experiments and model descriptions. In the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.
Theoretical studies on tone noise from a ducted fan rotor
NASA Technical Reports Server (NTRS)
Rao, C. V. R.; Chu, W. T.; Digumarthi, R. V.; Agarwal, R. K.
1974-01-01
Methods of computing the radiated noise from a ducted rotor due to inflow distortion and turbulence are examined. The analytical investigation includes an appropriate description of the sources, the cut-off conditions imposed on modal propagation of the pressure waves in the annular duct, and reflections at the upstream end of the duct. Far-field sound pressure levels at the blade passing frequency due to acoustic radiation from a small-scale, low-speed fan are computed. Theoretical predictions are in reasonable agreement with experimental measurements.
Some recent theoretical and experimental developments in fracture mechanics
NASA Technical Reports Server (NTRS)
Liebowitz, H.; Eftis, J.; Hones, D. L.
1978-01-01
Recent theoretical and experimental developments in four distinct areas of fracture mechanics research are described. These are as follows: experimental comparisons of different nonlinear fracture toughness measures, including the nonlinear energy, R curve, COD and J integral methods; the singular elastic crack-tip stress and displacement equations and the validity of the proposition of their general adequacy as indicated, for example, by the biaxially loaded infinite sheet with a flat crack; the thermodynamic nature of surface energy induced by propagating cracks in relation to a general continuum thermodynamic description of brittle fracture; and analytical and experimental aspects of Mode II fracture, with experimental data for certain aluminum, steel and titanium alloys.
Problems in publishing accurate color in IEEE journals.
Vrhel, Michael J; Trussell, H J
2002-01-01
To demonstrate the performance of color image processing algorithms, it is desirable to be able to accurately display color images in archival publications. In poster presentations, authors have substantial control of the printing process, although little control of the illumination. For journal publication, authors must rely on professional intermediaries (printers) to accurately reproduce their results. Our previous work describes requirements for accurately rendering images using one's own equipment. This paper discusses the problems of dealing with intermediaries and offers suggestions for improved communication and rendering.
Fabricating an Accurate Implant Master Cast: A Technique Report.
Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F
2015-12-01
The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.
Theoretical approximations and experimental extinction coefficients of biopharmaceuticals.
Miranda-Hernández, Mariana P; Valle-González, Elba R; Ferreira-Gómez, David; Pérez, Néstor O; Flores-Ortiz, Luis F; Medina-Rivero, Emilio
2016-02-01
UV spectrophotometric measurement is a widely accepted and standardized routine analysis for quantitation of highly purified proteins; however, the reliability of the results strictly depends on the accuracy of the employed extinction coefficients. In this work, an experimental estimation of the differential refractive index (dn/dc), based on dry weight measurements, was performed in order to determine accurate extinction coefficients for four biotherapeutic proteins and one synthetic copolymer after separation in a size-exclusion ultra-performance liquid chromatograph coupled to an ultraviolet, multiangle light scattering and refractive index (SE-UPLC-UV-MALS-RI) multidetection system. The results showed small deviations with respect to theoretical values, calculated from the specific amino acid sequences, for all the studied immunoglobulins. Nevertheless, for proteins like etanercept and glatiramer acetate, several factors, such as glycan content, partial specific volume, polarizability, and higher-order structure, should be taken into account to properly calculate theoretical extinction coefficient values. Herein, these values were assessed with simple approximations. The precision of the experimentally obtained extinction coefficients, and their convergence towards the theoretical values, makes them useful for characterization and comparability exercises. These values also provide insight into the absorbance and scattering properties of the evaluated proteins. Overall, this methodology is capable of providing accurate extinction coefficients useful for development studies.
Hayman, Matthew; Thayer, Jeffrey P
2012-04-01
Polarization measurements have become nearly indispensable in lidar cloud and aerosol studies. Despite polarization's widespread use in lidar, its theoretical description has varied widely in accuracy and completeness. Incomplete polarization lidar descriptions invariably result in poor accountability for scatterer properties and instrument effects, reducing data accuracy and preventing the intercomparison of polarization lidar data between different systems. We introduce here the Stokes vector lidar equation, which is a full description of polarization in lidar from laser output to detector. We then interpret this theoretical description in the context of forward polar decomposition of Mueller matrices, where the distinct polarization attributes of diattenuation, retardance, and depolarization are elucidated. This decomposition can be applied to scattering matrices, where volumes consisting of randomly oriented particles are strictly depolarizing, while oriented ice crystals can be diattenuating, retarding, and depolarizing. For instrument effects we provide a description of how different polarization attributes will impact lidar measurements. This includes coupling effects due to retarding and depolarizing attributes of the receiver, which have no description in scalar representations of polarization lidar. We also describe how the effects of polarizance in the receiver can result in nonorthogonal polarization detection channels. This violates one of the most common assumptions in polarization lidar operation.
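The Stokes-Mueller bookkeeping described above can be illustrated with a minimal numerical sketch. This is not the authors' Stokes vector lidar equation itself; the matrices below are textbook idealizations, and the cascade order and parameter values are hypothetical:

```python
import numpy as np

def dop(S):
    """Degree of polarization of a Stokes vector [I, Q, U, V]."""
    return float(np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0])

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer (a pure diattenuator) at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1, c,     s,     0],
                           [c, c * c, c * s, 0],
                           [s, c * s, s * s, 0],
                           [0, 0,     0,     0]])

def depolarizer(d):
    """Isotropic partial depolarizer: keeps a fraction d of the polarized part."""
    return np.diag([1.0, d, d, d])

laser = np.array([1.0, 1.0, 0.0, 0.0])        # fully polarized (horizontal) transmit
scattered = depolarizer(0.3) @ laser          # depolarizing volume of random particles
detected = linear_polarizer(0.0) @ scattered  # co-polarized receiver channel
```

Chaining further Mueller matrices for receiver retardance or polarizance in the same way shows how instrument effects couple into the measured intensities.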
Theoretical evaluation of high-speed aerodynamics for arrow-wing configurations
NASA Technical Reports Server (NTRS)
Dollyhigh, S. M.
1979-01-01
The use of the theoretical methods to calculate the high-speed aerodynamic characteristics of arrow-wing supersonic cruise configurations was studied. Included are correlations of theoretical predictions with wind-tunnel data at Mach numbers from 0.8 to 2.7, examples of the use of theoretical methods to extrapolate the wind-tunnel data to full-scale flight condition, and presentation of a typical supersonic data package for an advanced supersonic transport application. A brief description of the methods and their application is given.
Theoretical Thermodynamics of Mixtures at High Pressures
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1985-01-01
The development of an understanding of the chemistry of mixtures of metallic hydrogen and abundant, higher-Z material such as oxygen, carbon, etc., is important for understanding of fundamental processes of energy release, differentiation, and development of atmospheric abundances in the Jovian planets. It provides a significant theoretical base for the interpretation of atmospheric elemental abundances to be provided by atmospheric entry probes in coming years. Significant differences are found when non-perturbative approaches such as Thomas-Fermi-Dirac (TFD) theory are used. Mapping of the phase diagrams of such binary mixtures in the pressure range from approx. 10 Mbar to approx. 1000 Mbar, using results from three-dimensional TFD calculations, is undertaken. Derivation of a general and flexible thermodynamic model for such binary mixtures in the relevant pressure range was facilitated by the following breakthrough: there exists an accurate and fairly simple thermodynamic representation of a liquid two-component plasma (TCP) in which the Helmholtz free energy is represented as a suitable linear combination of terms dependent only on density and terms which depend only on the ion coupling parameter. It is found that the crystal energies of mixtures of H-He, H-C, and H-O can be satisfactorily reproduced by the same type of model, except that an effective, density-dependent ionic charge must be used in place of the actual total ionic charge.
ERIC Educational Resources Information Center
Veronneau, Marie-Helene; Vitaro, Frank
2007-01-01
This article reviews theoretical and empirical work on the relations between child and adolescent peer experiences and high school graduation. First, the different developmental models that guide research in this domain will be explained. Then, descriptions of peer experiences at the group level (peer acceptance/rejection, victimisation, and crowd…
Giorgi, Amedeo
2014-12-01
Rennie (2012) made the claim that, despite their diversity, all qualitative methods are essentially hermeneutical, and he attempted to back up that claim by demonstrating that certain core steps that he called hermeneutical are contained in all of the other methods despite their self-interpretation. In this article, I demonstrate that the method I developed based upon Husserlian phenomenology cannot be so interpreted despite Rennie's effort to do so. I claim that the undertaking of a psychological investigation at large can be considered interpretive but that when the phenomenological method based upon Husserl is employed, it is descriptive. I also object to the attempt to reduce varied theoretical perspectives to the methodical steps of one of the competing theories. Reducing theoretical perspectives to core steps distorts the full value of the theoretical perspective. The last point is demonstrated by showing how the essence of the descriptive phenomenological method is missed if one follows Rennie's core steps.
Flach, G.P.
1990-12-01
FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.
Flach, G.P.
1991-09-01
FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.
Lagrangian description of warm plasmas
NASA Technical Reports Server (NTRS)
Kim, H.
1970-01-01
Efforts are described to extend the averaged Lagrangian method of describing small signal wave propagation and nonlinear wave interaction, developed by earlier workers for cold plasmas, to the more general conditions of warm collisionless plasmas, and to demonstrate particularly the effectiveness of the method in analyzing wave-wave interactions. The theory is developed for both the microscopic description and the hydrodynamic approximation to plasma behavior. First, a microscopic Lagrangian is formulated rigorously, and expanded in terms of perturbations about equilibrium. Two methods are then described for deriving a hydrodynamic Lagrangian. In the first of these, the Lagrangian is obtained by velocity integration of the exact microscopic Lagrangian. In the second, the expanded hydrodynamic Lagrangian is obtained directly from the expanded microscopic Lagrangian. As applications of the microscopic Lagrangian, the small-signal dispersion relations and the coupled mode equations are derived for all possible waves in a warm infinite, weakly inhomogeneous magnetoplasma, and their interactions are examined.
Statistical description for survival data
2016-01-01
Statistical description is always the first step in data analysis, giving the investigator a general impression of the data at hand. Traditionally, data are described by central tendency and deviation. However, this framework does not fit survival data (also termed time-to-event data), which contain two components: the survival time and the status. Researchers are usually interested in the probability of the event at a given survival time point. The hazard function, cumulative hazard function, and survival function are commonly used to describe survival data. The survival function can be estimated using the Kaplan-Meier estimator, which is also the default method in most statistical packages. Alternatively, the Nelson-Aalen estimator is available to estimate the survival function. Survival functions of subgroups can be compared using the log-rank test. Furthermore, the article also introduces how to describe time-to-event data with parametric modeling. PMID:27867953
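As a concrete illustration of the Kaplan-Meier (product-limit) estimator mentioned above, here is a minimal sketch on toy data; a statistical package would additionally handle confidence intervals and plotting:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimate.
    times: follow-up times; events: 1 = event occurred, 0 = censored.
    Returns a list of (event time, S(t)) pairs, S stepping down at each event."""
    data = sorted(zip(times, events))
    n, s, out, at_risk, i = len(data), 1.0, [], len(data), 0
    while i < n:
        t = data[i][0]
        group = [e for tt, e in data if tt == t]
        d = sum(group)                 # events at time t
        if d > 0:
            s *= 1 - d / at_risk       # product-limit step
            out.append((t, s))
        at_risk -= len(group)          # events and censorings both leave the risk set
        i += len(group)
    return out

# toy data: events at t = 1, 3, 4; censoring at t = 2, 5
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
```

Note how the censored observation at t = 2 contributes no step but still shrinks the risk set for later event times.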
Theoretical issues in Spheromak research
Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.
1997-04-01
This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; and the physics of the transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall ("flux conserver"), to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, spheromak dynamo, and edge plasma in spheromaks.
Theoretical Problems in Materials Science
NASA Technical Reports Server (NTRS)
Langer, J. S.; Glicksman, M. E.
1985-01-01
Interactions between theoretical physics and materials science are presented, aimed at identifying problems of common interest in which some of the powerful theoretical approaches developed for other branches of physics may be applied to problems in materials science. A unique structure was identified in rapidly quenched Al-14% Mn. The material has long-range directed bonds with icosahedral symmetry which do not form a regular structure but instead form an amorphous-like quasiperiodic structure. A theory of finite volume fractions of second-phase material is advanced and coupled with nucleation theory to describe the formation and structure of precipitating phases in alloys. Application of the theory of pattern formation to the problem of dendrite formation is studied.
Theoretical Advanced Study Institute: 2014
DeGrand, Thomas
2016-08-17
The Theoretical Advanced Study Institute (TASI) was held at the University of Colorado, Boulder, during June 2-27, 2014. The topic was "Journeys through the Precision Frontier: Amplitudes for Colliders." The organizers were Professors Lance Dixon (SLAC) and Frank Petriello (Northwestern and Argonne). There were fifty-one students. Nineteen lecturers gave sixty 75-minute lectures. A Proceedings was published. This TASI was unique for its strong emphasis on methods for calculating amplitudes. This was embedded in a program describing recent theoretical and phenomenological developments in particle physics. Topics included introductions to the Standard Model, to QCD (both in a collider context and on the lattice), effective field theories, Higgs physics, neutrino interactions, an introduction to experimental techniques, and cosmology.
NASA Astrophysics Data System (ADS)
Kim, Chang-Beom; Lim, Jaeho; Hong, Hyobong; Kresh, J. Yasha; Wootton, David M.
2015-07-01
Detailed knowledge of the blood velocity distribution over the cross-sectional area of a microvessel is important for several reasons: (1) information about the flow field velocity gradients can suggest an adequate description of blood flow; (2) transport of blood components is determined by the velocity profiles and the concentration of the cells over the cross-sectional area; (3) the velocity profile is required to investigate the volume flow rate as well as the wall shear rate and shear stress, which are important parameters in describing the interaction between blood cells and the vessel wall. The present study demonstrates accurate measurement of non-Newtonian blood velocity profiles at different shear rates in a microchannel using a novel translating-stage optical method. The velocity profile of a Newtonian fluid is well known to be parabolic, but blood is a non-Newtonian fluid that has a plug-flow region at the centerline due to yield shear stress and has different viscosities depending on shear rate. The experimental results were compared, at the same flow conditions, with theoretical flow equations derived from the Casson non-Newtonian viscosity model in a rectangular capillary tube. Accurate wall shear rates and shear stresses were then estimated for different flow rates based on these velocity profiles. The velocity profiles were also modeled and compared with parabolic profiles, showing that the wall shear rates were at least 1.46-3.94 times higher than the parabolic distribution for the same volume flow rate.
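The contrast between the parabolic Newtonian profile and the Casson plug flow can be sketched for the simpler case of a circular tube (the study itself used a rectangular capillary, so this illustrates the rheology, not the paper's geometry; all parameter values below are made up):

```python
import math

def casson_velocity(r, R, dpdx, tau_y, mu):
    """Axial velocity of a Casson fluid in a circular tube of radius R.
    dpdx: pressure drop per unit length; tau_y: yield stress; mu: Casson viscosity.
    Obtained by integrating the Casson law gamma_dot = (sqrt(tau) - sqrt(tau_y))**2 / mu
    from the wall inward; inside the plug radius r_c the profile is flat."""
    tau_w = dpdx * R / 2.0            # wall shear stress
    r_c = R * tau_y / tau_w           # plug (yield) radius
    r = max(r, r_c)                   # the plug region moves at u(r_c)
    return (tau_w * (R**2 - r**2) / (2 * R)
            - (4.0 / 3.0) * math.sqrt(tau_w * tau_y / R) * (R**1.5 - r**1.5)
            + tau_y * (R - r)) / mu

# with tau_y = 0 the expression collapses to the Newtonian parabola dpdx*(R^2 - r^2)/(4*mu)
u_plug = casson_velocity(0.0, 1.0, 4.0, 0.2, 1.0)
```

The flat plug and the reduced centerline velocity relative to the tau_y = 0 parabola are exactly the qualitative features the measured blood profiles exhibit.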
Pearson, Barbara Zurer
2004-02-01
Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) Syntax, Semantics, and Phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups.
Controlling Hay Fever Symptoms with Accurate Pollen Counts
Pongdee, MD, FAAAAI
Seasonal allergic rhinitis, known as hay fever, is caused by pollen carried in the air ...
Digital system accurately controls velocity of electromechanical drive
NASA Technical Reports Server (NTRS)
Nichols, G. B.
1965-01-01
Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.
Migration, crisis and theoretical conflict.
Bach, R L; Schraml, L A
1982-01-01
The nature of the distinction between the equilibrium and historical-structuralist positions on migration is examined. Theoretical and political differences in the two positions are considered both historically and in the context of the current global economic crisis. The proposal of Wood to focus on households as a strategy for integrating the two perspectives and for achieving a better understanding of migration and social change is discussed.
The utility of accurate mass and LC elution time information in the analysis of complex proteomes
Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.
2005-08-01
Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (±5 ppm and ±1 ppm) and NET value (no constraint, ±0.05 and ±0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity, due to the lack of specific modifications, and overestimates the expected complexity, since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time constitute a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high-resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within ±1 ppm and elution time measurements within ±0.01 NET.
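The mass-plus-NET uniqueness test described above can be sketched as follows (toy fingerprints; the actual study evaluated whole theoretical tryptic digests):

```python
def is_unique(idx, peptides, ppm_tol=1.0, net_tol=0.01):
    """True if peptide idx has no neighbor within BOTH the mass tolerance
    (in ppm) and the normalized elution time tolerance (0-1 NET scale)."""
    m0, t0 = peptides[idx]
    for j, (m, t) in enumerate(peptides):
        if j != idx and abs(m - m0) / m0 * 1e6 <= ppm_tol and abs(t - t0) <= net_tol:
            return False
    return True

# hypothetical (monoisotopic mass in Da, NET) fingerprints
peps = [(1000.5000, 0.300), (1000.5005, 0.500), (1000.5002, 0.301), (2500.1000, 0.800)]
unique_fraction = sum(is_unique(i, peps) for i in range(len(peps))) / len(peps)
```

Here the first and third peptides are near-isobaric and co-eluting, so neither is a confident identification at ±1 ppm and ±0.01 NET, while the second is rescued by its distinct elution time; this is exactly why combining the two dimensions sharpens specificity.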
Accurate Electron Affinity of Iron and Fine Structures of Negative Iron ions
Chen, Xiaolin; Luo, Zhihong; Li, Jiaming; Ning, Chuangang
2016-01-01
Ionization potential (IP) is defined as the amount of energy required to remove the most loosely bound electron of an atom, while electron affinity (EA) is defined as the amount of energy released when an electron is attached to a neutral atom. Both IP and EA are critical for understanding chemical properties of an element. In contrast to accurate IPs and structures of neutral atoms, EAs and structures of negative ions are relatively unexplored, especially for the transition metal anions. Here, we report the accurate EA value of Fe and fine structures of Fe− using the slow electron velocity imaging method. These measurements yield a very accurate EA value of Fe, 1235.93(28) cm−1 or 153.236(34) meV. The fine structures of Fe− were also successfully resolved. The present work provides a reliable benchmark for theoretical calculations, and also paves the way for improving the EA measurements of other transition metal atoms to the sub cm−1 accuracy. PMID:27138292
Winters, Taylor M; Takahashi, Mitsuhiko; Lieber, Richard L; Ward, Samuel R
2011-01-04
An a priori model of the whole active muscle length-tension relationship was constructed utilizing only myofilament length and serial sarcomere number for rabbit tibialis anterior (TA), extensor digitorum longus (EDL), and extensor digitorum II (EDII) muscles. Passive tension was modeled with a two-element Hill-type model. Experimental length-tension relations were then measured for each of these muscles and compared to predictions. The model was able to accurately capture the active-tension characteristics of the experimentally measured data for all muscles (ICC=0.88 ± 0.03). Despite their varied architecture, no differences in predicted versus experimental correlations were observed among muscles. In addition, the model demonstrated that excursion, quantified by the full-width-at-half-maximum (FWHM) of the active length-tension relationship, scaled linearly (slope=0.68) with normalized muscle fiber length. Experimental and theoretical FWHM values agreed well, with an intraclass correlation coefficient of 0.99 (p<0.001). In contrast to active tension, the passive tension model deviated from experimentally measured values and thus was not an accurate predictor of passive tension (ICC=0.70 ± 0.07). These data demonstrate that modeling muscle as a scaled sarcomere provides accurate predictions of active but not passive function for rabbit TA, EDL, and EDII muscles and call into question the need for more complex modeling assumptions often proposed.
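The idea of predicting the active length-tension curve from filament overlap can be sketched with a piecewise-linear model. The breakpoints below are illustrative placeholders, not the rabbit myofilament lengths used in the study:

```python
def active_tension(sl, l1=1.3, l2=2.0, l3=2.4, l4=3.6):
    """Normalized active tension at sarcomere length sl (um), as a
    piecewise-linear overlap curve: ascending limb (l1-l2), plateau
    (l2-l3), descending limb (l3-l4). Breakpoints are illustrative."""
    if sl <= l1 or sl >= l4:
        return 0.0
    if sl < l2:
        return (sl - l1) / (l2 - l1)        # ascending limb
    if sl <= l3:
        return 1.0                          # plateau (optimal overlap)
    return (l4 - sl) / (l4 - l3)            # descending limb

# FWHM of the curve: distance between the two half-maximum crossings
fwhm = (3.6 - 0.5 * (3.6 - 2.4)) - (1.3 + 0.5 * (2.0 - 1.3))
```

Scaling the breakpoints by serial sarcomere number stretches the whole curve along the length axis, which is why the paper's FWHM scales linearly with normalized fiber length.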
Fraccarollo, Alberto; Canti, Lorenzo; Marchese, Leonardo; Cossi, Maurizio
2017-03-07
The force fields used to simulate gas adsorption in porous materials are strongly dominated by the van der Waals (vdW) terms. Here we discuss the delicate problem of estimating these terms accurately, analyzing the effect of different models. To this end, we simulated the physisorption of CH4, CO2, and Ar into various Al-free microporous zeolites (ITQ-29, SSZ-13, and silicalite-1), comparing the theoretical results with accurate experimental isotherms. The vdW terms in the force fields were parametrized against the free gas densities and high-level quantum mechanical (QM) calculations, comparing different methods to evaluate the dispersion energies. In particular, MP2 and DFT with semiempirical corrections, with suitable basis sets, were chosen to approximate the best QM calculations; either Lennard-Jones or Morse expressions were used to include the vdW terms in the force fields. The comparison of the simulated and experimental isotherms revealed that a strong interplay exists between the definition of the dispersion energies and the functional form used in the force field; these results are fairly general and reproducible, at least for the systems considered here. On this basis, the reliability of different models can be discussed, and a recipe can be provided to obtain accurate simulated adsorption isotherms.
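The two vdW functional forms compared in these force fields, Lennard-Jones and Morse, can be sketched with their generic textbook expressions (the parameters below are made up, not the fitted values from this work):

```python
import math

def lennard_jones(r, eps, sigma):
    """12-6 Lennard-Jones pair energy; minimum of -eps at r = 2**(1/6) * sigma."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def morse(r, De, a, re):
    """Morse pair energy De*(exp(-2a(r-re)) - 2*exp(-a(r-re)));
    minimum of -De at r = re, with well width controlled by a."""
    e = math.exp(-a * (r - re))
    return De * (e * e - 2.0 * e)
```

The two wells can be made to agree at the minimum yet differ in curvature and long-range tail, which is one way the "interplay between dispersion definition and functional form" noted in the abstract shows up in simulated isotherms.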
Robust air refractometer for accurate compensation of the refractive index of air in everyday use.
Kruger, O; Chetty, N
2016-11-10
The definition of the meter is based on the speed of light in a vacuum; however, most dimensional measurements performed by laser interferometry are made in air. A compensation for the velocity of light in air must therefore be applied so that measurements approximate vacuum conditions. Most practice uses a weather-station method, whereby the ambient conditions are measured; the modified Edlén equation is then used to calculate corrections to the wavelength of the laser. The theoretical calculation is, however, only accurate to 3×10⁻⁸, without taking into account the accuracy of the sensors. This work therefore focuses on velocity-of-light compensation, both to improve upon the accuracy of the Edlén-equation method in everyday use and to verify, through comparison with the refractometer, the accuracy of the weather-station systems currently in use. A refractometer that allows velocity-of-light compensation measurements was developed, tested, and verified. The system was designed to be simple and cost-effective for use in everyday dimensional measurements, but with high accuracy. The achieved results show that although simple in design, the refractometer is accurate to at least 1×10⁻⁸, which meets our initial design requirement.
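A minimal sketch of the weather-station approach: a modified-Edlén-type formula for dry air, keeping only the dispersion and density terms (the humidity and CO2 corrections a real implementation needs are omitted). The coefficients follow the commonly used Birch and Downs revision; treat this as illustrative rather than metrology-grade:

```python
def edlen_n_minus_1(wavelength_um, t_celsius, p_pa):
    """(n - 1) of dry air from a modified Edlen-type formula
    (dispersion + density scaling only; humidity/CO2 terms omitted)."""
    s2 = (1.0 / wavelength_um) ** 2            # inverse wavelength squared, um^-2
    # dispersion of standard air, giving (n - 1) x 1e8
    ns = 8342.54 + 2406147.0 / (130.0 - s2) + 15998.0 / (38.9 - s2)
    # temperature/pressure (air density) scaling relative to standard conditions
    scale = (p_pa * (1 + p_pa * (60.1 - 0.972 * t_celsius) * 1e-10)
             / (96095.43 * (1 + 0.003661 * t_celsius)))
    return ns * 1e-8 * scale

# HeNe laser in typical lab air: n - 1 is about 2.7e-4
n_minus_1 = edlen_n_minus_1(0.633, 20.0, 101325.0)
```

A 1 °C or 400 Pa error shifts n by roughly 1×10⁻⁶, which is why sensor accuracy, not the equation itself, usually limits the weather-station method.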
Accurate tracking of high dynamic vehicles with translated GPS
NASA Astrophysics Data System (ADS)
Blankshain, Kenneth M.
The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high-dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high-dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with a 1-sigma velocity measurement error of less than 1 ft/sec.
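The FFT side of such a tracking scheme can be sketched with a basic spectral peak search (a toy example; the actual TPS combines this with automatic frequency control loops and far finer Doppler resolution):

```python
import numpy as np

def fft_peak_frequency(samples, fs):
    """Estimate the dominant tone frequency from the FFT magnitude peak.
    Resolution is fs / len(samples); a tracker would refine this estimate
    with bin interpolation and an AFC loop between blocks."""
    spec = np.abs(np.fft.rfft(samples))
    k = int(np.argmax(spec[1:])) + 1   # skip the DC bin
    return k * fs / len(samples)

fs = 10000.0                            # hypothetical sample rate, Hz
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 1230.0 * t)   # hypothetical Doppler-shifted carrier
```

Re-running the search on successive blocks and feeding the peak back to steer a local oscillator is, in outline, how an AFC/FFT combination follows a rapidly accelerating carrier.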
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-03-23
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, accurate alignment can still be achieved by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
Binary and nonbinary description of hypointensity for search and retrieval of brain MR images
NASA Astrophysics Data System (ADS)
Unay, Devrim; Chen, Xiaojing; Ercil, Aytul; Cetin, Mujdat; Jasinschi, Radu; van Buchem, Marc A.; Ekin, Ahmet
2009-01-01
Diagnosis accuracy in the medical field is mainly affected either by a lack of sufficient understanding of some diseases or by inter/intra-observer variability of the diagnoses. We believe that mining of large medical databases can help improve the current status of disease understanding and decision making. In a previous study based on a binary description of hypointensity in the brain, it was shown that brain iron accumulation shape provides additional information to the shape-insensitive features, such as the total brain iron load, that are commonly used in clinics. This paper proposes a novel, nonbinary description of hypointensity in the brain based on principal component analysis. We compare the complementary and redundant information provided by the two descriptions using Kendall's rank correlation coefficient in order to better understand the individual descriptions of iron accumulation in the brain and obtain a more robust and accurate search and retrieval system.
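Kendall's rank correlation coefficient, used above to compare the binary and nonbinary descriptions, can be sketched in its basic tau-a form (no correction for ties):

```python
def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs divided by the
    total number of pairs n*(n-1)/2. No tie correction."""
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))

tau = kendall_tau([1, 2, 3, 4], [1, 3, 2, 4])  # one discordant pair out of six
```

A tau near 1 between two feature descriptions flags them as largely redundant for retrieval, while a low tau suggests complementary information.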
NASA Astrophysics Data System (ADS)
Tossell, J. A.
2005-12-01
For more than a decade the B isotopic compositions of marine carbonates have been used as paleo-pH proxies for seawater and to reconstruct paleo-[CO2] concentrations in the atmosphere. A necessary step in this process is the accurate determination of the equilibrium constant, K, for the reaction shown in the title above. This equilibrium constant has been recently calculated using ab initio quantum chemical methods applied to nanoclusters containing the solutes B(OH)3 and B(OH)4- coordinated by large numbers of explicit solvent molecules, a computationally difficult procedure. To obtain the most accurate possible value for K, the calculated vibrational frequencies were scaled to best fit the limited experimental data available. The value of K obtained (at 25 °C) was 1.027 (significantly larger than the long used value of 1.0194). Even more recently a purely experimental value of K = 1.0265 ± 0.0015 has been obtained through an accurate spectrophotometric determination of the difference in pKa values of commercially available bulk samples of >99% enriched 10B(OH)3(s) and 11B(OH)3(s). Since we now know the correct experimental value and have a calculation, admittedly a difficult and slightly parameterized one, which matches the experimental result (which was obtained after the calculation), it is worthwhile to analyze the steps in the theoretical calculation of K in more detail. We need to establish a general procedure which can yield accurate K values for other similar aqueous species even if we have no accurate experimental value for K and no vibrational spectral data. To this end we will examine the dependence of the calculated values of vibrational frequencies, isotopomer frequency differences and K values on a number of factors, including (a) the quantum mechanical level (basis set and treatment of electron correlation) used for the free solutes, (b) the incorporation of aqueous medium effects, (c) the effects of vibrational anharmonicity, (d) incorporation of the
Cryptobiosis: a new theoretical perspective.
Neuman, Yair
2006-10-01
The tardigrade is a microscopic creature that under environmental stress conditions undergoes cryptobiosis [Feofilova, E.P., 2003. Deceleration of vital activity as a universal biochemical mechanism ensuring adaptation of microorganisms to stress factors: A review. Appl. Biochem. Microbiol. 39, 1-18; Nelson, D.R., 2002. Current status of the tardigrada: Evolution and ecology. Integrative Comp. Biol. 42, 652-659]-a temporary metabolic depression-which is considered to be a third state between life and death [Clegg, J.S., 2001. Cryptobiosis-a peculiar state of biological organization. Comp. Biochem. Physiol. Part B 128, 613-624]. In contrast with death, cryptobiosis is a reversible state, and as soon as environmental conditions change, the tardigrade "returns to life." Cryptobiosis in general, and in the tardigrade in particular, is a poorly understood phenomenon [Guppy, M., 2004. The biochemistry of metabolic depression: a history of perceptions. Comp. Biochem. Physiol. Part B 139, 435-442; Schill, R.O., et al., 2004. Stress gene (hsp70) sequences and quantitative expression in Milnesium tardigradum (Tardigrada) during active and cryptobiotic stages. J. Exp. Biol. 207, 1607-1613; Watanabe, M., et al., 2002. Mechanism allowing an insect to survive complete dehydration and extreme temperatures. J. Exp. Biol. 205, 2799-2802; Wright, J.C., 2001. Cryptobiosis 300 years on from van Leeuwenhoek: what have we learned about tardigrades? Zool. Anz. 240, 563-582]. Moreover, the ability of the tardigrade to bootstrap itself and return to life seems paradoxical, like the legendary Baron von Munchausen, who pulled himself out of the swamp by grabbing his own hair. Two theoretical obstacles prevent us from advancing our knowledge of cryptobiosis. First, we lack appropriate theoretical understanding of reversible processes of biological computation in living systems. Second, we lack appropriate theoretical understanding of bootstrapping in living systems. In this short opinion
ROM Plus®: accurate point-of-care detection of ruptured fetal membranes
McQuivey, Ross W; Block, Jon E
2016-01-01
Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics for this point-of-care test, exceeding those of conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
NASA Astrophysics Data System (ADS)
Sangalli, Davide; Dal Conte, Stefano; Manzoni, Cristian; Cerullo, Giulio; Marini, Andrea
2016-05-01
The calculation of the equilibrium optical properties of bulk silicon by using the Bethe-Salpeter equation solved in the Kohn-Sham basis represents a cornerstone in the development of an ab-initio approach to the optical and electronic properties of materials. Nevertheless, calculations of the transient optical spectrum using the same efficient and successful scheme are scarce. We report, here, a joint theoretical and experimental study of the transient reflectivity spectrum of bulk silicon. Femtosecond transient reflectivity is compared to a parameter-free calculation based on the nonequilibrium Bethe-Salpeter equation. By providing an accurate description of the experimental results we disclose the different phenomena that determine the transient optical response of a semiconductor. We give a parameter-free interpretation of concepts such as bleaching, photoinduced absorption, and stimulated emission, beyond the Fermi golden rule. We also introduce the concept of optical gap renormalization, as a generalization of the known mechanism of band gap renormalization. The present scheme successfully describes the case of bulk silicon, showing its universality and accuracy.
Mathematical Description of Dendrimer Structure
NASA Technical Reports Server (NTRS)
Majoros, Istvan J.; Mehta, Chandan B.; Baker, James R., Jr.
2004-01-01
Characteristics of starburst dendrimers can be easily attributed to the multiplicity of the monomers used to synthesize them. The molecular weight, degree of polymerization, number of terminal groups and branch points for each generation of a dendrimer can be calculated using mathematical formulas incorporating these variables. Mathematical models for the calculation of degree of polymerization, molecular weight, and number of terminal groups and branching groups previously published were revised and elaborated on for poly(amidoamine) (PAMAM) dendrimers, and introduced for poly(propyleneimine) (POPAM) dendrimers and the novel POPAM-PAMAM hybrid, which we call the POMAM dendrimer. Experimental verification of the relationship between theoretical and actual structure for the PAMAM dendrimer was also established.
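The generation-dependent quantities the abstract describes follow simple recurrences. As a rough illustration only (these are the commonly used textbook relations, not necessarily the revised formulas of this paper), for a dendrimer with core multiplicity Nc and branch-cell multiplicity Nb:

```python
def terminal_groups(nc, nb, g):
    """Number of surface (terminal) groups at generation g:
    the core starts nc branches, and each branch point multiplies them by nb."""
    return nc * nb ** g

def branch_units(nc, nb, g):
    """Total number of interior branch junctures up to generation g,
    a geometric series: nc * (nb**g - 1) / (nb - 1)."""
    return nc * (nb ** g - 1) // (nb - 1)

# PAMAM-like parameters: tetra-functional core (nc = 4), AB2 branching (nb = 2).
print(terminal_groups(4, 2, 4))  # 64 surface groups at generation 4
```

Molecular weight then follows from the same counts by weighting the repeat units and terminal groups by their respective masses.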
Sustainable Nanotechnology: Opportunities and Challenges for Theoretical/Computational Studies.
Cui, Qiang; Hernandez, Rigoberto; Mason, Sara E; Frauenheim, Thomas; Pedersen, Joel A; Geiger, Franz
2016-08-04
For assistance in the design of the next generation of nanomaterials that are functional and have minimal health and safety concerns, it is imperative to establish causality, rather than correlations, in how properties of nanomaterials determine biological and environmental outcomes. Due to the vast design space available and the complexity of nano/bio interfaces, theoretical and computational studies are expected to play a major role in this context. In this minireview, we highlight opportunities and pressing challenges for theoretical and computational chemistry approaches to explore the relevant physicochemical processes that span broad length and time scales. We focus discussions on a bottom-up framework that relies on the determination of correct intermolecular forces, accurate molecular dynamics, and coarse-graining procedures to systematically bridge the scales, although top-down approaches are also effective at providing insights for many problems such as the effects of nanoparticles on biological membranes.
Theoretical computer science and the natural sciences
NASA Astrophysics Data System (ADS)
Marchal, Bruno
2005-12-01
I present some fundamental theorems in computer science and illustrate their relevance in Biology and Physics. I do not assume prerequisites in mathematics or computer science beyond the set N of natural numbers, functions from N to N, the use of some notational conveniences to describe functions, and at some point, a minimal amount of linear algebra and logic. I start with Cantor's transcendental proof by diagonalization of the non-enumerability of the collection of functions from natural numbers to natural numbers. I explain why this proof is not entirely convincing and show how, by restricting the notion of function in terms of discrete well-defined processes, we are led to the non-algorithmic enumerability of the computable functions, but also-through Church's thesis-to the algorithmic enumerability of partial computable functions. Such a notion of function constitutes, with respect to our purpose, a crucial generalization of that concept. This will make it easy to justify deep and astonishing (counter-intuitive) incompleteness results about computers and similar machines. The modified Cantor diagonalization will provide a theory of concrete self-reference, and I illustrate it by pointing toward an elementary theory of self-reproduction-in the Amoeba's way-and cellular self-regeneration-in the flatworm Planaria's way. To make it easier, I introduce a very simple and powerful formal system known as the Schoenfinkel-Curry combinators. I will use the combinators to illustrate in a more concrete way the notions introduced above. The combinators, thanks to their low-level fine-grained design, will also make it possible to give a rough but hopefully illuminating description of the main lessons gained by the careful observation of nature, and to describe some new relations, which should exist between computer science, the science of life and the science of inert matter, once some philosophical, if not theological, hypotheses are made in the cognitive sciences. In the
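The diagonal construction this abstract builds on can be made concrete in a few lines. The sketch below is our illustration, not the author's combinator-based treatment: given any purported enumeration of total functions from N to N, the diagonal function differs from the n-th function at argument n, so it cannot appear anywhere in the enumeration.

```python
def diagonal(enum):
    """Given enum: n -> (total function N -> N), return a function
    guaranteed to differ from enum(n) at argument n for every n."""
    return lambda n: enum(n)(n) + 1

# Example enumeration: the n-th function multiplies its argument by n.
enum = lambda n: (lambda m: n * m)
d = diagonal(enum)
print(d(3))  # 10, whereas enum(3)(3) == 9: d differs from enum(3) at 3
```

When "function" is restricted to programs, the same construction shows the diagonal of an enumeration of *partial* computable functions need not be total, which is where the incompleteness phenomena the abstract mentions enter.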
Morishige, Kunimitsu; Tateishi, Masayoshi
2006-04-25
To examine the theoretical and semiempirical relations proposed so far between pore size and the pressure of capillary condensation or evaporation, we constructed an accurate relation between the pore radius and the capillary condensation and evaporation pressure of nitrogen at 77 K for the cylindrical pores of the ordered mesoporous MCM-41 and SBA-15 silicas. Here, the pore size was determined from a comparison between the experimental and calculated X-ray diffraction patterns, using recently developed X-ray structural modeling. Among the many theoretical relations, which differ from each other in the degree of theoretical refinement, a macroscopic thermodynamic approach based on the Broekhoff-de Boer equations was found to be in fair agreement with the experimental relation obtained in the present study.
Theoretical Studies of Atomic Transitions
Charlotte Froese Fischer
2005-07-08
Atomic structure calculations were performed for properties such as energy levels, binding energies, transition probabilities, lifetimes, hyperfine structure, and isotope shifts. Accurate computational procedures were devised so that properties could be predicted even when they could not be obtained from experiment, and to assist in the identification of observed data. The method used was the multiconfiguration Hartree-Fock (MCHF) method, optionally corrected for relativistic effects in the Breit-Pauli approximation. Fully relativistic Dirac-Fock calculations were also performed using the GRASP code. A database of energy levels, lifetimes, and transition probabilities was designed and implemented and, at present, includes many results for Be-like to Ar-like ions.
Constructing the principles: Method and metaphysics in the progress of theoretical physics
NASA Astrophysics Data System (ADS)
Glass, Lawrence C.
This thesis presents a new framework for the philosophy of physics focused on methodological differences found in the practice of modern theoretical physics. The starting point for this investigation is the longstanding debate over scientific realism. Some philosophers have argued that it is the aim of science to produce an accurate description of the world including explanations for observable phenomena. These scientific realists hold that our best confirmed theories are approximately true and that the entities they propose actually populate the world, whether or not they have been observed. Others have argued that science achieves only frameworks for the prediction and manipulation of observable phenomena. These anti-realists argue that truth is a misleading concept when applied to empirical knowledge. Instead, focus should be on the empirical adequacy of scientific theories. This thesis argues that the fundamental distinction at issue, a division between true scientific theories and ones which are empirically adequate, is best explored in terms of methodological differences. In analogy with the realism debate, there are at least two methodological strategies. Rather than focusing on scientific theories as wholes, this thesis takes as units of analysis physical principles which are systematic empirical generalizations. The first possible strategy, the conservative, takes the assumption that the empirical adequacy of a theory in one domain serves as good evidence for such adequacy in other domains. This then motivates the application of the principle to new domains. The second strategy, the innovative, assumes that empirical adequacy in one domain does not justify the expectation of adequacy in other domains. New principles are offered as explanations in the new domain. The final part of the thesis is the application of this framework to two examples. On the first, Lorentz's use of the aether is reconstructed in terms of the conservative strategy with respect to
Theoretical studies of combustion dynamics
Bowman, J.M.
1993-12-01
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L^2 methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Theoretical Studies of Reaction Surfaces
2007-11-02
[Garbled report documentation page; recoverable details: AASERT award F49620-93-1-0556, "Theoretical Studies of Reaction Surfaces," reporting period ending 31 Aug 97, program manager Dr. Michael R. Berman, approved for public release. A surviving fragment of the text mentions applications to reactions and solvation of electrolytes, and notes one drawback of the EFP method described in the previous section: the repulsive potential relies on
Theoretical Studies on Cluster Compounds
NASA Astrophysics Data System (ADS)
Lin, Zhenyang
Available from UMI in association with The British Library. Requires signed TDF. The thesis describes some theoretical studies on ligated and bare clusters. Chapter 1 gives a review of the two theoretical models, Tensor Surface Harmonic Theory (TSH) and the Jellium Model, accounting for the electronic structures of ligated and bare clusters. The Polyhedral Skeletal Electron Pair Theory (PSEPT), which correlates the structures and electron counts (total number of valence electrons) of main group and transition metal ligated clusters, is briefly described. A structural jellium model is developed in Chapter 2 which accounts for the electronic structures of clusters using a crystal-field perturbation. The zero-order potential we derive is of central-field form, depends on the geometry of the cluster, and has a well-defined relationship to the full nuclear-electron potential. Qualitative arguments suggest that this potential produces different energy level orderings for clusters with a nucleus with a large positive charge at the centre of the cluster. Analysis of the effects of the non-spherical perturbation on the spherical jellium shell structures leads to the conclusion that for a cluster with a closed shell electronic structure a high symmetry arrangement which is approximately or precisely close packed will be preferred. It also provides a basis for rationalising the structures of clusters with incomplete shell electronic configurations. In Chapter 3, the geometric conclusions derived in the structural jellium model are developed in more detail. The group theoretical consequences of the Tensor Surface Harmonic Theory are developed in Chapter 4 for (ML_2)_n, (ML_4)_n and (ML_5)_n clusters where either the xz and yz or the x^2-y^2 and xy components of L_d^pi and L_d^delta do not contribute equally to the bonding. The closed shell requirements for such clusters are defined and the orbital symmetry constraints pertaining to the
Theoretical insights into interprofessional education.
Hean, Sarah; Craddock, Deborah; Hammick, Marilyn
2012-01-01
This article argues for the need for theory in the practice of interprofessional education. It highlights the range of theories available to interprofessional educators and promotes the practical application of these to interprofessional learning and teaching. It summarises the AMEE Guides in Medical Education publication entitled Theoretical Insights into Interprofessional Education: AMEE Guide No. 62, where the practical application of three theories, social capital, social constructivism and a sociological perspective of interprofessional education are discussed in-depth through the lens of a case study. The key conclusions of these discussions are presented in this article.
Some thoughts on theoretical physics
NASA Astrophysics Data System (ADS)
Tsallis, Constantino
2004-12-01
Some thoughts are presented on the inter-relation between beauty and truth in science in general and theoretical physics in particular. Some conjectural procedures that can be used to create new ideas, concepts and results are illustrated in both Boltzmann-Gibbs and nonextensive statistical mechanics. The sociological components of scientific progress and its unavoidable and beneficial controversies are briefly addressed as well, mainly through existing literary texts. Short essay based on the plenary talk given at the International Workshop on Trends and Perspectives in Extensive and Non-Extensive Statistical Mechanics, held November 19-21, 2003, in Angra dos Reis, Brazil.
Probabilistic description of traffic flow
NASA Astrophysics Data System (ADS)
Mahnke, R.; Kaupužs, J.; Lubashevsky, I.
2005-03-01
A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
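The one-step growth/shrinkage process above can be simulated directly with a kinetic Monte Carlo scheme. The sketch below assumes, for illustration only, a constant attachment rate w_plus and a constant detachment rate w_minus; the paper's central problem is precisely that realistic, density-dependent transition rates must be chosen to match empirical traffic data.

```python
import random

def simulate_cluster(w_plus, w_minus, steps, n0=0, seed=42):
    """Gillespie-style simulation of a one-step process n -> n +/- 1:
    cars attach to the jam at rate w_plus and detach at rate w_minus."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    for _ in range(steps):
        down = w_minus if n > 0 else 0.0   # an empty cluster cannot shrink
        total = w_plus + down
        t += rng.expovariate(total)        # exponential waiting time
        if rng.random() < w_plus / total:
            n += 1                         # attachment
        else:
            n -= 1                         # detachment
    return n, t
```

Averaging the stationary cluster-size distribution over such runs, for rates fitted to data, is one route to the flux-density (fundamental) diagram discussed in the abstract.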
XML Translator for Interface Descriptions
NASA Technical Reports Server (NTRS)
Boroson, Elizabeth R.
2009-01-01
A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
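The single-source translation described above can be sketched as follows. This is our illustrative reconstruction, not NASA's tool: register names, offsets, and bit fields come from one parsed description, and the C header (the Verilog and VHDL emitters would be analogous) is generated from it so both sides of the interface stay consistent.

```python
def to_c_header(registers, guard="REGMAP_H"):
    """Emit C #defines for each register offset and bit field.
    'registers' is a list of dicts: {"name", "offset", "fields": {field: (shift, width)}}."""
    lines = [f"#ifndef {guard}", f"#define {guard}", ""]
    for reg in registers:
        lines.append(f"#define {reg['name']}_OFFSET 0x{reg['offset']:04X}")
        for field, (shift, width) in reg.get("fields", {}).items():
            mask = ((1 << width) - 1) << shift
            lines.append(f"#define {reg['name']}_{field}_SHIFT {shift}")
            lines.append(f"#define {reg['name']}_{field}_MASK 0x{mask:08X}u")
    lines += ["", f"#endif /* {guard} */"]
    return "\n".join(lines)

# Hypothetical register layout, for demonstration only.
header = to_c_header([{"name": "CTRL", "offset": 0x04,
                       "fields": {"ENABLE": (0, 1), "MODE": (1, 2)}}])
print(header)
```

Running the generator as part of the make process, as the abstract notes, guarantees that any interface change forces a recompile of every consumer of the header.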
Accurately measuring dynamic coefficient of friction in ultraform finishing
NASA Astrophysics Data System (ADS)
Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.
2013-09-01
UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
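The two quantities at the heart of this abstract can be written down directly. As a hedged sketch (symbols and values are illustrative, not OptiPro's removal model): μ is the ratio of measured tangential to normal force from the triaxial load cell, and Preston's equation gives removed depth as k_p · P · v · t.

```python
def dynamic_cof(f_tangential, f_normal):
    """Dynamic coefficient of friction from triaxial force measurements."""
    return f_tangential / f_normal

def preston_depth(k_p, pressure, velocity, dwell_time):
    """Preston's equation: removed depth = k_p * P * v * t."""
    return k_p * pressure * velocity * dwell_time

mu = dynamic_cof(5.0, 10.0)  # e.g. 5 N tangential, 10 N normal -> mu = 0.5
```

Tracking how μ (and hence the effective k_p) drifts with belt wear is what lets the removal function be re-predicted between polishing iterations.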
Nonexposure accurate location K-anonymity algorithm in LBS.
Jia, Jinying; Zhang, Fengli
2014-01-01
This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.
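The grid-ID idea above can be sketched in a few lines. In this illustrative reconstruction (ours, not the paper's algorithm), users report only the ID of the grid cell they occupy, and the anonymizer grows a square of cells around the querier's cell until it covers at least K reported users:

```python
from collections import Counter

def cloak(reported_cells, querier_cell, k, grid_w, grid_h):
    """Return a cloaked region (list of cell IDs) containing >= k users.
    Only coarse cell IDs are used; exact coordinates are never revealed."""
    counts = Counter(reported_cells)
    cx, cy = querier_cell
    r = 0
    while True:
        region = [(x, y)
                  for x in range(max(0, cx - r), min(grid_w, cx + r + 1))
                  for y in range(max(0, cy - r), min(grid_h, cy + r + 1))]
        if sum(counts[c] for c in region) >= k:
            return region
        r += 1  # grow the square until K-anonymity holds
```

This sketch assumes at least k users exist on the grid (otherwise it loops), and omits the paper's refinements for minimizing the resulting ASR.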
Code of Federal Regulations, 2010 CFR
2010-01-01
... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGRICULTURE WATER RESOURCES RIVER BASIN INVESTIGATIONS AND SURVEYS Floodplain Management Assistance § 621.20 Description. Floodplain management studies provide needed information and assistance to local and...
Theoretical perspectives on strange physics
Ellis, J.
1983-04-01
Kaons are heavy enough to have an interesting range of decay modes available to them, and light enough to be produced in sufficient numbers to explore rare modes with satisfying statistics. Kaons and their decays have provided at least two major breakthroughs in our knowledge of fundamental physics. They have revealed to us CP violation, and their lack of flavor-changing neutral interactions warned us to expect charm. In addition, K0-anti-K0 mixing has provided us with one of our most elegant and sensitive laboratories for testing quantum mechanics. There is every reason to expect that future generations of kaon experiments with intense sources would add further to our knowledge of fundamental physics. This talk attempts to set future kaon experiments in a general theoretical context, and to indicate how they may bear upon fundamental theoretical issues. A survey is given of the different experiments which could be done with an Intense Medium Energy Source of Strangeness, including rare K decays, probes of the nature of CP violation, μ decays, hyperon decays and neutrino physics. (WHK)
Theoretical perspectives on narrative inquiry.
Emden, C
1998-04-01
Narrative inquiry is gaining momentum in the field of nursing. As a research approach it does not have any single heritage of methodology and its practitioners draw upon diverse sources of influence. Central to all narrative inquiry however, is attention to the potential of stories to give meaning to people's lives, and the treatment of data as stories. This is the first of two papers on the topic and addresses the theoretical influences upon a particular narrative inquiry into nursing scholars and scholarship. The second paper, Conducting a narrative analysis, describes the actual narrative analysis as it was conducted in this same study. Together, the papers provide sufficient detail for others wishing to pursue a similar approach to do so, or to develop the ideas and procedures according to their own way of thinking. Within this first theoretical paper, perspectives from Jerome Bruner (1987) and Wade Roof (1993) are outlined. These relate especially to the notion of stories as 'imaginative constructions' and as 'cultural narratives' and as such, highlight the profound importance of stories as being individually and culturally meaningful. As well, perspectives on narrative inquiry from nursing literature are highlighted. Narrative inquiry in this instance lies within the broader context of phenomenology.
Memory conformity affects inaccurate memories more than accurate memories.
Wright, Daniel B; Villalba, Daniella K
2012-01-01
After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.
Accurate Fiber Length Measurement Using Time-of-Flight Technique
NASA Astrophysics Data System (ADS)
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper accurate length measurements of different fibers are performed using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory (NPL) of the United Kingdom. Finally, a method is proposed for relative correction of the fiber refractive index to allow accurate fiber length measurement.
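The core conversion in a time-of-flight length measurement can be sketched as below. This is an illustrative computation, not the authors' setup: the delay and group-index values are assumed typical figures, not taken from the paper.

```python
# Sketch: fiber length from a time-of-flight measurement,
#   L = c * dt / n_g,
# where dt is the one-way propagation delay and n_g is the group
# refractive index of the fiber at the test wavelength.
C = 299_792_458.0  # speed of light in vacuum, m/s

def fiber_length(delay_s: float, group_index: float) -> float:
    """Return fiber length in metres from a one-way delay in seconds."""
    return C * delay_s / group_index

# Example: a ~10 km spool of standard single-mode fiber at 1550 nm
# (n_g ~ 1.4682 is an assumed typical value, not from the paper).
length_m = fiber_length(48.97e-6, 1.4682)
```

Traceability follows because the delay is measured by a counter referenced to a GPS-disciplined oscillator; the dominant residual uncertainty is then the group index, which motivates the paper's refractive-index correction.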
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
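The DEB idea can be illustrated with a toy response that is not from the paper: a cantilever tip deflection scaling as 1/I. Its sensitivity equation d(delta)/dI = -delta/I, read as a differential equation, integrates to a closed form that stays exact for large perturbations, while the linear Taylor approximation degrades.

```python
# Illustrative sketch of the DEB idea (not the paper's implementation):
# for a tip deflection delta proportional to 1/I, the sensitivity equation
#   d(delta)/dI = -delta / I
# integrates in closed form to delta(I) = delta0 * I0 / I, whereas the
# linear Taylor series keeps only the first-order term about I0.

def deb_approx(delta0, I0, I):
    # Closed-form solution of the sensitivity ODE d(delta)/dI = -delta/I
    return delta0 * I0 / I

def taylor_approx(delta0, I0, I):
    # Linear Taylor approximation about I0 with the same sensitivity
    return delta0 * (1.0 - (I - I0) / I0)

delta0, I0 = 1.0, 1.0
exact = lambda I: delta0 * I0 / I  # true response of this toy model
for I in (1.2, 1.5):
    print(I, exact(I), deb_approx(delta0, I0, I), taylor_approx(delta0, I0, I))
```

For a 50% perturbation the DEB-style closed form reproduces the toy response exactly, while the Taylor estimate is off by a quarter of the true value, mirroring the comparison reported in the abstract.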
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing impacts, high rotational velocity impacts, and direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be extracted.
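The structure of the problem can be sketched in a simplified planar (2-D) setting; the paper treats the full 3-D, six-accelerometer case, and the sensor placements and values below are made up for illustration. Each single-axis accelerometer at position r_i with axis n_i reads the projection of the rigid-body acceleration, which is linear in the unknowns once the rotational velocity omega is supplied (e.g. by a finite-difference estimate from the previous time step, as in the abstract's linearization).

```python
import numpy as np

# Planar rigid-body kinematics: sensor i at position r_i with unit axis n_i
# reads  s_i = n_i . (a_c + alpha x r_i - omega^2 * r_i),
# linear in the unknowns (a_cx, a_cy, alpha) for a given omega.

def solve_planar(readings, positions, axes, omega):
    rows, rhs = [], []
    for s, r, n in zip(readings, positions, axes):
        rx, ry = r
        # In 2-D, alpha x r = alpha * (-ry, rx)
        rows.append([n[0], n[1], n[0] * (-ry) + n[1] * rx])
        rhs.append(s + omega**2 * (n[0] * rx + n[1] * ry))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol  # (a_cx, a_cy, alpha)

# Synthetic check with hypothetical sensor placements.
a_c, alpha, omega = np.array([1.0, 2.0]), 3.0, 4.0
positions = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.05), (0.05, -0.1)]
axes = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8), (0.8, -0.6)]
readings = []
for r, n in zip(positions, axes):
    r, n = np.array(r), np.array(n)
    a_i = a_c + alpha * np.array([-r[1], r[0]]) - omega**2 * r
    readings.append(float(n @ a_i))
print(solve_planar(readings, positions, axes, omega))
```

With more sensors than unknowns the least-squares solve also averages down sensor noise, which is why redundant accelerometer arrangements help in practice.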
NASA Astrophysics Data System (ADS)
Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.
2016-10-01
In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
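The DSF pair potential referenced above has a simple closed form; a sketch is given below following the Fennell-Gezelter formulation, in reduced units where the charge prefactor is one (the damping parameter and cutoff values are arbitrary choices for illustration, not the paper's settings).

```python
import math

# Damped shifted force (DSF) Coulomb model (form follows Fennell &
# Gezelter, J. Chem. Phys. 2006); energies in units where
# q_i * q_j / (4 * pi * eps0) = 1 for brevity.

def dsf_potential(r, alpha, rc):
    """Pair energy, shifted so that both V(rc) = 0 and dV/dr(rc) = 0."""
    erfc = math.erfc
    shift = erfc(alpha * rc) / rc
    force_shift = (erfc(alpha * rc) / rc**2
                   + 2.0 * alpha / math.sqrt(math.pi)
                   * math.exp(-(alpha * rc)**2) / rc)
    return erfc(alpha * r) / r - shift + force_shift * (r - rc)

# Both the energy and the force vanish smoothly at the cutoff, which is
# what lets the short-ranged model replace a full long-range treatment.
alpha, rc = 0.2, 9.0
v_at_cut = dsf_potential(rc, alpha, rc)
dv = (dsf_potential(rc - 1e-6, alpha, rc)
      - dsf_potential(rc - 3e-6, alpha, rc)) / 2e-6
```

The smooth truncation is the property that makes the potential well suited to the H-AdResS embedding described in the abstract: no discontinuous forces appear at the resolution boundary.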
Accurate calculation of the intensity dependence of the refractive index using polarized basis sets.
Baranowska-Łączkowska, Angelika; Łączkowski, Krzysztof Z; Fernández, Berta
2012-01-14
Using the single and double excitation coupled cluster level of theory (CCSD) and the density functional theory/Becke three-parameter Lee-Yang-Parr (DFT/B3LYP) method, we test the performance of the Pol, ZPol, and LPol-n (n = ds, dl, fs, fl) basis sets in the accurate description of the intensity dependence of the refractive index in the Ne atom and the N2 and CO molecules. Additionally, we test the aug-pc-n (n = 1, 2) basis sets of Jensen, and the SVPD, TZVPD, and QZVPD bases of Rappoport and Furche. Tests involve calculations of dynamic polarizabilities and frequency-dependent second hyperpolarizabilities. The results are interpreted in terms of the medium constants entering the expressions for optically induced birefringences. In all achiral systems, the performance of the LPol-n sets is very good. The aug-pc-2 set also yields promising results. Accurate CCSD results available in the literature allow us to select the best basis sets in order to carry out DFT/B3LYP calculations of medium constants in larger molecules. As applications, we show results for (R)-fluoro-oxirane and (R)-methyloxirane.
Accurate Cross-section Calculations for Low-Energy Electron-Atom Collisions
Zatsarinny, Oleg; Bartschat, Klaus
2011-05-11
We describe a recently developed fully relativistic B-spline R-matrix method for atomic structure calculations as well as for electron and photon collisions with atoms and ions. The method is based on the solution of the many-electron Dirac-Fock equation and makes it possible to employ non-orthogonal sets of atomic orbitals. A B-spline basis is used to generate both the target description and the R-matrix basis functions in the inner region. Employing B-splines of different orders for the large and small components prevents the appearance of spurious states in the spectrum of the Dirac equation. Using term-dependent, and thus non-orthogonal, sets of one-electron functions enables us to generate accurate and flexible representations of the target states and the scattering function. Our method is based upon the Dirac-Coulomb Hamiltonian and thus may be employed for any complex atom or ion, without the use of phenomenological core potentials. Example results from recent applications of the method to accurate calculations of low-energy electron scattering from noble gases are presented. In most cases we obtained a substantial improvement over results obtained in previous Breit-Pauli R-matrix calculations.
NASA Astrophysics Data System (ADS)
Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L.; Perdew, John P.
2016-09-01
One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.
MOSAIK: a hash-based algorithm for accurate next-generation sequencing short-read mapping.
Lee, Wan-Ping; Stromberg, Michael P; Ward, Alistair; Stewart, Chip; Garrison, Erik P; Marth, Gabor T
2014-01-01
MOSAIK is a stable, sensitive and open-source program for mapping second and third-generation sequencing reads to a reference genome. Uniquely among current mapping tools, MOSAIK can align reads generated by all the major sequencing technologies, including Illumina, Applied Biosystems SOLiD, Roche 454, Ion Torrent and Pacific BioSciences SMRT. Indeed, MOSAIK was the only aligner to provide consistent mappings for all the generated data (sequencing technologies, low-coverage and exome) in the 1000 Genomes Project. To provide highly accurate alignments, MOSAIK employs a hash clustering strategy coupled with the Smith-Waterman algorithm. This method is well-suited to capture mismatches as well as short insertions and deletions. To support the growing interest in larger structural variant (SV) discovery, MOSAIK provides explicit support for handling known-sequence SVs, e.g. mobile element insertions (MEIs) as well as generating outputs tailored to aid in SV discovery. All variant discovery benefits from an accurate description of the read placement confidence. To this end, MOSAIK uses a neural-network based training scheme to provide well-calibrated mapping quality scores, demonstrated by a correlation coefficient between MOSAIK assigned and actual mapping qualities greater than 0.98. In order to ensure that studies of any genome are supported, a training pipeline is provided to ensure optimal mapping quality scores for the genome under investigation. MOSAIK is multi-threaded, open source, and incorporated into our command and pipeline launcher system GKNO (http://gkno.me).
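The hash-based seeding stage that underlies mappers of this kind can be sketched in a few lines. This toy shows only the seeding idea (MOSAIK itself couples hash clustering with a banded Smith-Waterman alignment); the reference string, read, and k-mer size are made up for illustration.

```python
from collections import defaultdict

# Toy k-mer seeding: index every k-mer of the reference, then let each
# k-mer of a read vote for the reference offset it implies. Clusters of
# consistent votes are the candidate placements a local aligner (e.g.
# Smith-Waterman) would then refine.

def build_index(reference: str, k: int = 4):
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def candidate_positions(read: str, index, k: int = 4):
    """Return candidate alignment starts, sorted by seed support."""
    votes = defaultdict(int)
    for j in range(len(read) - k + 1):
        for i in index.get(read[j:j + k], []):
            votes[i - j] += 1  # implied alignment start on the reference
    return sorted(votes, key=votes.get, reverse=True)

ref = "ACGTACGTTAGGCATTACGGA"
read = "TAGGCATT"  # exact substring of ref starting at offset 8
print(candidate_positions(read, build_index(ref))[0])
```

Mismatches and short indels weaken but rarely destroy the vote cluster, which is why the hash stage tolerates them and leaves the fine-grained work to the alignment stage.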
Taxonomy of Macromotettixoides with the description of a new species (Tetrigidae, Metrodorinae)
Zha, Ling-Sheng; Yu, Feng-Ming; Boonmee, Saranyaphat; Eungwanichayapant, Prapassorn D.; Wen, Ting-Chi
2017-01-01
Descriptions of the flying organs and generic characteristics of the genus Macromotettixoides Zheng, Wei & Jiang are currently imprecise. Macromotettixoides is reviewed and compared with allied genera. A re-description is undertaken and a determination key to Macromotettixoides is provided. Macromotettixoides parvula Zha & Wen, sp. n. from the Guizhou Karst Region, China, is described and illustrated with photographs. Observations on the ecology and habits of the new species are recorded. Four current species of Hyboella Hancock are transferred to Macromotettixoides. Variations of the flying organs and tegminal sinus in the Tetrigidae are discussed, which will help to describe these structures accurately. PMID:28228664
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to differ from those of plates. This is because the stresses over the thickness of the shell must be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
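The geometric origin of the difference can be sketched numerically: on a curved element the through-thickness integration carries a (1 + z/R) area factor, the "trapezoidal-like" element of the abstract. This is an illustrative one-dimensional quadrature with a made-up stress profile, not the paper's laminate equations.

```python
import numpy as np

def normal_resultant(stress, h, R=None, n=2001):
    """Integrate stress(z) over thickness h; R=None means a flat plate,
    otherwise the shell Jacobian (1 + z/R) is included."""
    z = np.linspace(-h / 2, h / 2, n)
    jac = np.ones_like(z) if R is None else 1.0 + z / R
    f = stress(z) * jac
    dz = z[1] - z[0]
    return float(((f[0] + f[-1]) / 2.0 + f[1:-1].sum()) * dz)  # trapezoid rule

h, R = 0.1, 0.5                          # thick shell: h/R = 0.2
stress = lambda z: 100.0 + 2000.0 * z    # linear through-thickness stress
n_plate = normal_resultant(stress, h)        # plate theory result
n_shell = normal_resultant(stress, h, R)     # shell result with (1 + z/R)
print(n_plate, n_shell)
```

For this linear profile the plate resultant is sigma0*h = 10, while the shell picks up an extra sigma1*h^3/(12R) term from the curvature, so the two differ by a few percent at h/R = 0.2 and the gap grows with thickness.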
Must Kohn-Sham oscillator strengths be accurate at threshold?
Yang Zenghui; Burke, Kieron; Faassen, Meta van
2009-09-21
The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.
Accurate torque-speed performance prediction for brushless dc motors
NASA Astrophysics Data System (ADS)
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
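The baseline MUSCL scheme discussed above can be sketched for linear advection. This is the standard piecewise-linear scheme with a minmod monotonicity limiter, not Huynh's upwind-monotone extensions, and the grid and CFL values are arbitrary choices for illustration.

```python
import numpy as np

# MUSCL for u_t + a u_x = 0 with a > 0, periodic boundaries:
# piecewise-linear reconstruction with a minmod-limited slope.

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
    # Upwind interface states (left-biased since a > 0), with the
    # second-order-in-time correction factor (1 - cfl)/2.
    u_face = u + 0.5 * (1.0 - cfl) * du
    return u - cfl * (u_face - np.roll(u_face, 1))

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square pulse
for _ in range(50):
    u = muscl_step(u, 0.5)
print(u.min(), u.max())
```

The limited scheme keeps the advected square pulse within its initial bounds (no spurious oscillations) while conserving the total, at the cost of the first-order clipping at extrema that motivates the abstract's upwind-monotone schemes.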
In-line sensor for accurate rf power measurements
NASA Astrophysics Data System (ADS)
Gahan, D.; Hopkins, M. B.
2005-10-01
An in-line sensor has been constructed with 50Ω characteristic impedance to accurately measure rf power dissipated in a matched or unmatched load with a view to being implemented as a rf discharge diagnostic. The physical construction and calibration technique are presented. The design is a wide band, hybrid directional coupler/current-voltage sensor suitable for fundamental and harmonic power measurements. A comparison with a standard wattmeter using dummy load impedances shows that this in-line sensor is significantly more accurate in mismatched conditions.
Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air
NASA Technical Reports Server (NTRS)
Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.
2007-01-01
The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
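The notion of a decision set (independent if-then rules) can be made concrete with a toy example. The rules, feature names, and tie-breaking below are made up for illustration; they are not learned with the paper's submodular objective.

```python
# Toy decision set: a list of independent if-then rules. Each rule can
# be read and applied on its own, which is what makes the model
# interpretable; prediction aggregates whichever rules fire.

rules = [
    # (predicate over a feature dict, predicted class) - hypothetical rules
    (lambda x: x["age"] < 30 and x["income"] > 50, "approve"),
    (lambda x: x["defaults"] > 0, "deny"),
    (lambda x: x["income"] <= 20, "deny"),
]

def predict(x, default="deny"):
    # Every rule is evaluated independently; ties among fired rules are
    # broken by a simple majority vote, with a default class when no
    # rule covers x.
    fired = [label for cond, label in rules if cond(x)]
    if not fired:
        return default
    return max(set(fired), key=fired.count)

print(predict({"age": 25, "income": 60, "defaults": 0}))
```

Because no rule depends on another, a user can verify any single prediction by checking only the rules that fired, which is the property the user study in the abstract exploits.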
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.
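The parametrization idea can be sketched as follows; this is a simplification of ISOFIT's actual model, with made-up axis and harmonic values. An isophote is described by its eccentric anomaly E, so a pure ellipse is x = a*cos(E), y = b*sin(E), and boxy/disky deviations enter as low-order harmonics in E rather than in the polar angle.

```python
import numpy as np

def isophote(a, b, a4=0.0, n=360):
    """Quasi-elliptical isophote sampled in eccentric anomaly E.
    a4 is a 4th-harmonic amplitude: a4 > 0 disky, a4 < 0 boxy."""
    E = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = a4 * np.cos(4.0 * E)          # harmonic perturbation in E
    return (a + d) * np.cos(E), (b + d) * np.sin(E)

x, y = isophote(10.0, 5.0)            # a4 = 0 recovers the exact ellipse
ellipse_residual = np.max(np.abs((x / 10.0) ** 2 + (y / 5.0) ** 2 - 1.0))
xb, yb = isophote(10.0, 5.0, a4=-0.5) # boxy variant of the same isophote
```

The point of using E rather than the polar angle is visible here: the unperturbed case traces the ellipse exactly, so any fitted harmonic amplitude measures a genuine departure from ellipticity instead of a parametrization artifact.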
Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S
2004-01-01
The electrocardiogram (ECG) signal is used extensively as a low cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular, reliable detection of the R wave peak, is essential in computer based ECG analysis. ECG data from PhysioNet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was achieved, and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincaré plots were then used as a visualization tool to highlight the detection of the missing beats in the EHT method. After validation of the EHT algorithm on ECG data from PhysioNet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep disordered breathing (SDB). Sensitive measures of accurate HRV signals were then derived to be used in detecting and diagnosing SDB in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in extraction of reliable measures to distinguish between periods of normal and sleep disordered breathing in children.
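A bare-bones Hilbert-transform QRS detector can be sketched as below; the EHT algorithm in the abstract adds missing-beat handling and other refinements on top of this idea, and the test signal, sampling rate, and thresholds here are made-up illustrative values.

```python
import numpy as np
from scipy.signal import hilbert

# Basic HT-style QRS detection: take the analytic-signal envelope of the
# differentiated ECG, then pick envelope peaks separated by a refractory
# period. RR intervals between detected R peaks form the HRV signal.

def detect_r_peaks(ecg, fs, refractory_s=0.25):
    envelope = np.abs(hilbert(np.diff(ecg)))
    threshold = 0.5 * envelope.max()
    peaks, last = [], -int(refractory_s * fs)
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold
                and envelope[i] >= envelope[i - 1]
                and envelope[i] >= envelope[i + 1]
                and i - last >= refractory_s * fs):
            peaks.append(i)
            last = i
    return np.array(peaks)

# Synthetic ECG stand-in: narrow "QRS" bumps once per second at 250 Hz.
fs, beats = 250, [1.0, 2.0, 3.0, 4.0]
t = np.arange(0, 5, 1 / fs)
ecg = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beats)
rr = np.diff(detect_r_peaks(ecg, fs)) / fs  # RR intervals -> HRV signal
```

A missed beat shows up as an RR interval roughly double its neighbors, which is the signature the EHT algorithm's missing-beat detection looks for before the HRV spectrum is modeled.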
Gas-Phase Theoretical Kinetics for Astrochemistry
NASA Astrophysics Data System (ADS)
Klippenstein, Stephen
2013-05-01
We will survey a number of our applications of ab initio theoretical kinetics to reactions of importance to astrochemistry. Illustrative examples will be taken from our calculations for (i) interstellar chemistry, (ii) Titan's atmospheric chemistry, and (iii) the chemistry of extrasolar giant planets. For low temperature interstellar chemistry, careful consideration of the long-range expansion of the potential allows for quantitative predictions of the kinetics. Our recent calculations for the reactions of H3+ with O(3P) and with CO suggest an increase of the predicted destruction rate of H3+ by a factor of 2.5 to 3.0 for temperatures that are typical of dense clouds. Further consideration of the interplay between spin-orbit and multipole terms for open-shell atomic fragments allows us to predict the kinetics for a number of the reactions that have been listed as important reactions for interstellar chemical modeling [V. Wakelam, I. W. M. Smith, E. Herbst, J. Troe, W. Geppert, et al. Space Science Rev., 156, 13-72, 2010]. Our calculations for Titan's atmosphere demonstrate the importance of radiative emission as a stabilization process in the low-pressure environment of Titan's upper atmosphere. Theory has also helped to illuminate the role of various reactions in both Titan's atmosphere and in extrasolar planetary atmospheres. Comparisons between theory and experiment have provided a more detailed understanding of the kinetics of PAH dimerization. High level predictions of thermochemical properties are remarkably accurate, and allow us to provide important data for studying P chemistry in planetary atmospheres. Finally, our study of O(3P) + C3 provides an example of a case where theory provides suggestive but not definitive results, and further experiments are clearly needed.
Gas Phase Theoretical Kinetics for Astrochemistry
NASA Astrophysics Data System (ADS)
Klippenstein, Stephen J.; Georgievskii, Y.; Harding, L. B.
2012-05-01
We will survey a number of our applications of ab initio theoretical kinetics to reactions of importance to astrochemistry. Illustrative examples will be taken from our calculations for (i) interstellar chemistry, (ii) Titan's atmospheric chemistry, and (iii) the chemistry of extrasolar giant planets. For low-temperature interstellar chemistry, careful consideration of the long-range expansion of the potential allows for quantitative predictions of the kinetics. Our recent calculations for the reactions of H3+ with O(3P) and with CO suggest an increase of the predicted destruction rate of H3+ by a factor of 2.5 to 3.0 for temperatures that are typical of dense clouds. Further consideration of the interplay between spin-orbit and multipole terms for open-shell atomic fragments allows us to predict the kinetics for a number of the reactions that have been listed as important reactions for interstellar chemical modeling [V. Wakelam, I. W. M. Smith, E. Herbst, J. Troe, W. Geppert, et al., Space Science Rev., 156, 13-72, 2010]. Our calculations for Titan's atmosphere demonstrate the importance of radiative emission as a stabilization process in the low-pressure environment of Titan's upper atmosphere. Theory has also helped to illuminate the role of various reactions in both Titan's atmosphere and in extrasolar planetary atmospheres. Comparisons between theory and experiment have provided a more detailed understanding of the kinetics of PAH dimerization. High-level predictions of thermochemical properties are remarkably accurate and allow us to provide important data for studying P chemistry in planetary atmospheres. Finally, our study of O(3P) + C3 provides an example of a case where theory provides suggestive but not definitive results, and further experiments are clearly needed.
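The role of the long-range expansion of the potential in low-temperature ion-neutral kinetics can be illustrated with the textbook Langevin capture model (a standard zeroth-order result, not the authors' refined adiabatic-capture treatment, which further accounts for dipole, spin-orbit, and multipole terms):

```latex
% Ion-induced-dipole long-range potential and Langevin capture rate
% (Gaussian units; q = ion charge, \alpha = neutral polarizability,
%  \mu = reduced mass of the colliding pair).
V(r) = -\frac{\alpha q^{2}}{2 r^{4}},
\qquad
k_{\mathrm{L}} = 2\pi q \sqrt{\frac{\alpha}{\mu}}
```

The Langevin rate is temperature-independent; a permanent dipole on the neutral (CO has a small one) and the spin-orbit/multipole structure of open-shell fragments such as O(3P) introduce the temperature dependence that the refined capture calculations described in the abstract are designed to capture.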
Theoretical spectra of floppy molecules
NASA Astrophysics Data System (ADS)
Chen, Hua
2000-09-01
Detailed studies of the vibrational dynamics of floppy molecules are presented. 6D bound-state calculations of the vibrations of the rigid water dimer based on several anisotropic site potentials (ASP) are presented. A new sequential diagonalization-truncation (SDT) approach was used to diagonalize the angular part of the Hamiltonian. A symmetrized angular basis and a potential-optimized discrete variable representation (DVR) for the intermonomer distance coordinate were used in the calculations. The converged results differ significantly from the results presented by Leforestier et al. [J. Chem. Phys. 106, 8527 (1997)]. It was demonstrated that the ASP-S potential yields more accurate tunneling splittings than the other ASP potentials used. Fully coupled 4D quantum mechanical calculations were performed for the carbon dioxide dimer using the potential energy surface given by Bukowski et al. [J. Chem. Phys. 110, 3785 (1999)]. The intermolecular vibrational frequencies and symmetry-adapted force constants were estimated and compared with experiments. The interconversion tunneling dynamics was studied using the calculated virtual tunneling splittings. Symmetrized Radau coordinates and the SDT approach were formulated for acetylene. A 6D calculation was performed with 5 DVR points for each stretch coordinate and an angular basis capable of converging the angular part of the Hamiltonian to 30 cm-1 for internal energies up to 14000 cm-1. The probability at the vinylidene configuration was evaluated. It was found that the eigenstates begin to extend to the vinylidene configuration from about 10000 cm-1, and that the ra coordinate is closely related to the vibrational dynamics at high energy. Finally, a direct-product DVR was defined for coupled angular momentum operators, and the SDT approach was formulated for it. These were applied in solving the angular part of the Hamiltonian for the carbon dioxide dimer problem. The results show the method is capable of giving very accurate results.
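The sequential diagonalization-truncation idea used throughout this work can be sketched in a minimal two-coordinate model (a hypothetical NumPy illustration of the general technique, not the thesis code): diagonalize the 1D Hamiltonian in one coordinate at each grid point of the other, keep only the lowest few eigenstates, and assemble the full Hamiltonian in that contracted basis.

```python
import numpy as np

# Toy model: two bilinearly coupled unit harmonic oscillators on uniform
# grids, H = -(1/2)(d2/dx2 + d2/dy2) + x^2/2 + y^2/2 + lam*x*y.
n = 40                      # grid points per coordinate
x = np.linspace(-6, 6, n)
dx = x[1] - x[0]

def kinetic(n, dx):
    """Second-order finite-difference kinetic energy, -(1/2) d^2/dx^2."""
    return -0.5 * (np.diag(np.ones(n - 1), 1)
                   + np.diag(np.ones(n - 1), -1)
                   - 2.0 * np.eye(n)) / dx**2

T = kinetic(n, dx)          # same grid, so T serves both coordinates
lam = 0.1                   # bilinear coupling strength
V = 0.5 * x[:, None]**2 + 0.5 * x[None, :]**2 + lam * x[:, None] * x[None, :]

# Step 1 (diagonalization): at each grid point y_j, solve the 1D
# x-Hamiltonian h(x; y_j) = T_x + V(x, y_j); truncate to the lowest m states.
m = 6
eps = np.empty((n, m))      # contracted 1D energies at each y_j
U = np.empty((n, n, m))     # kept eigenvectors at each y_j
for j in range(n):
    e, c = np.linalg.eigh(T + np.diag(V[:, j]))
    eps[j], U[j] = e[:m], c[:, :m]

# Step 2 (contraction): build H in the basis {phi_a(x; y_j) |y_j>}.
# Diagonal blocks carry the contracted energies; the y kinetic energy is
# dressed by overlaps of the truncated x-eigenvector sets.
H = np.zeros((n * m, n * m))
for j in range(n):
    H[j*m:(j+1)*m, j*m:(j+1)*m] = np.diag(eps[j])
    for k in range(n):
        S = U[j].T @ U[k]                  # overlap of contracted bases
        H[j*m:(j+1)*m, k*m:(k+1)*m] += T[j, k] * S

E = np.linalg.eigvalsh(H)   # size n*m instead of n*n
# Exact ground energy of the coupled model (normal modes sqrt(1 +/- lam)):
# E0 = (sqrt(1 + lam) + sqrt(1 - lam)) / 2
print(E[0])
```

The payoff is the basis-size reduction in step 2: the final matrix is n*m instead of n*n, because the truncated 1D eigenvectors already adapt to the potential at each grid point; the same contraction logic underlies the angular-basis treatments described in the abstract.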
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Description. 300.3 Section 300.3 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) GOVERNMENT NATIONAL MORTGAGE ASSOCIATION, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT GENERAL § 300.3 Description....
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Description. 300.3 Section 300.3 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) GOVERNMENT NATIONAL MORTGAGE ASSOCIATION, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT GENERAL § 300.3 Description....
Descriptive Summary of Georgia Tech's Semiotics Lab.
ERIC Educational Resources Information Center
Pearson, Charls
This document is a descriptive summary of the Georgia Institute of Technology's semiotics laboratory. A review of the goals and objectives of the laboratory is followed by a description of the facilities, including the computer software. The capabilities and uses of the laboratory are outlined for classroom experiments, instructional experiments,…
14 CFR 1259.300 - Description.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Description. 1259.300 Section 1259.300 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NATIONAL SPACE GRANT COLLEGE AND FELLOWSHIP PROGRAM National Needs Grants § 1259.300 Description. National needs awards may be awarded by...
37 CFR 1.435 - The description.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false The description. 1.435... COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES International Processing Provisions The International Application § 1.435 The description. (a) The application must meet the requirements as to the content and...
48 CFR 5416.203-1 - Description.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Description. 5416.203-1 Section 5416.203-1 Federal Acquisition Regulations System DEFENSE LOGISTICS AGENCY, DEPARTMENT OF DEFENSE TYPES OF CONTRACTS Fixed Price Contracts 5416.203-1 Description. (a)(S-90) Adjustments based...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Description. 100.2 Section 100.2 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade BUREAU OF THE CENSUS, DEPARTMENT OF COMMERCE SEAL § 100.2 Description. Seal: On a shield an open book beneath which is a lamp...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Description. 100.2 Section 100.2 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade BUREAU OF THE CENSUS, DEPARTMENT OF COMMERCE SEAL § 100.2 Description. Seal: On a shield an open book beneath which is a lamp...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Description. 100.2 Section 100.2 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade BUREAU OF THE CENSUS, DEPARTMENT OF COMMERCE SEAL § 100.2 Description. Seal: On a shield an open book beneath which is a lamp...
14 CFR 1259.600 - Panel description.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Panel description. 1259.600 Section 1259... AND FELLOWSHIP PROGRAM Space Grant Review Panel § 1259.600 Panel description. An independent committee, the Space Grant Review Panel, which is not subject to the Federal Advisory Committee Act, shall...
Descriptive Review of the Child: Aisha
ERIC Educational Resources Information Center
Crowley, Christopher B.
2008-01-01
This essay is a descriptive review of the child. It was written in preparation for a collaborative, inquiry-based protocol that was developed at the Prospect School in North Bennington, VT. Focusing on Aisha, a twelfth-grade student, this descriptive review discusses the following: physical presence and gesture, disposition and temperament,…
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Description. 300.3 Section 300.3 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) GOVERNMENT NATIONAL MORTGAGE ASSOCIATION, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT GENERAL § 300.3 Description....