Fundamental Physical Constants
National Institute of Standards and Technology Data Gateway
SRD 121 CODATA Fundamental Physical Constants (Web, free access) This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants, the International System of Units (SI), which is the modern metric system, and expressing the uncertainty of measurement results.
Are Fundamental Constants Really Constant?
ERIC Educational Resources Information Center
Swetman, T. P.
1972-01-01
Discusses Dirac's classical conclusion that the values of e², M, and m are constants while the quantity G decreases with time, which evoked considerable interest among researchers, and traces the historical development by which further experimental evidence indicates that both e and G are constant. (PS)
Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe
2011-02-15
We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilatonlike scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not show a variation in α relative to the terrestrial value, depending on our position relative to the wall. This wall could resolve the contradiction between claims of a variation of α based on Keck/HIRES data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants.
Quantum electrodynamics and fundamental constants
NASA Astrophysics Data System (ADS)
Wundt, Benedikt Johannes Wilhelm
The unprecedented precision achieved both in experimental measurements and in the theoretical description of atomic bound states makes them an ideal study object for fundamental physics and for the determination of fundamental constants. This requires a careful study of the effects of quantum electrodynamics (QED) on the interaction between the electron and the nucleus. The two theoretical approaches for the evaluation of QED corrections are presented and discussed. Due to the presence of two energy scales, from the binding potential and the radiation field, an overlapping parameter has to be used in both approaches in order to separate the energy scales. The different choices of overlapping parameter in the two methods are further illustrated in a model example. Within the nonrelativistic theory, relativistic corrections of order (Zα)² to the two-photon decay rate of ionic states are calculated, as well as the leading radiative corrections of order α(Zα)²ln[(Zα)⁻²]. It is shown that the corrections are gauge-invariant under a "hybrid" gauge transformation between Coulomb and Yennie gauge. Furthermore, QED corrections for Rydberg states in one-electron ions are investigated. The smallness of the corrections and the absence of nuclear-size corrections enable very accurate theoretical predictions. By measuring transition frequencies and comparing them to the theoretical predictions, QED theory can be tested more precisely. In turn, this could yield a more accurate value for the Rydberg constant. Using a transition in a nucleus with a well-determined mass as a reference, a comparison to transitions in other nuclei can even allow nuclear masses to be determined. Finally, in order to avoid an additional uncertainty in nuclei with nonzero nuclear spin, QED self-energy corrections to the hyperfine structure up to order α(Zα)²ΔE_HFS are determined for highly excited Rydberg states.
New Quasar Studies Keep Fundamental Physical Constant Constant
NASA Astrophysics Data System (ADS)
2004-03-01
fundamental constant at play here, alpha. However, the observed distribution of the elements is consistent with calculations assuming that the value of alpha at that time was precisely the same as the value today. Over those 2 billion years, the change of alpha must therefore have been smaller than about 2 parts per 100 million. If present at all, this is a rather small change indeed. But what about changes much earlier in the history of the Universe? To measure this we must find means to probe still further into the past. And this is where astronomy can help: even though astronomers can't generally do experiments, the Universe itself is a huge atomic physics laboratory. By studying very remote objects, astronomers can look back over a long time span. In this way it becomes possible to test the values of the physical constants when the Universe had only 25% of its present age, that is, about 10,000 million years ago. Very far beacons To do so, astronomers rely on spectroscopy - the measurement of the properties of light emitted or absorbed by matter. When the light from a flame is observed through a prism, a rainbow is visible. When salt is sprinkled on the flame, distinct yellow lines are superimposed on the usual colours of the rainbow - so-called emission lines. Putting a gas cell between the flame and the prism instead produces dark lines in the rainbow: these are absorption lines. The wavelength of these emission and absorption lines is directly related to the energy levels of the atoms in the salt or in the gas. Spectroscopy thus allows us to study atomic structure. The fine structure of atoms can be observed spectroscopically as the splitting of certain energy levels in those atoms. So if alpha were to change over time, the emission and absorption spectra of these atoms would change as well.
One way to look for any changes in the value of alpha over the history of the Universe is therefore to measure the spectra of distant quasars, and compare the wavelengths of
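The quasar comparison described above is usually quantified with the many-multiplet method, in which each transition frequency responds to a change in α through a sensitivity coefficient q: ω = ω₀ + q[(α/α₀)² − 1]. A minimal sketch, where the Fe II-like wavenumber and q value are illustrative assumptions, not data from this text:

```python
# Sketch of the many-multiplet relation omega = omega0 + q * x,
# with x = (alpha/alpha0)^2 - 1 ~ 2*dalpha/alpha to first order.
# The omega0 and q values below are illustrative, not measured data.

C = 299_792_458.0  # speed of light, m/s

def velocity_shift(omega0_cm, q_cm, dalpha_over_alpha):
    """Doppler-equivalent velocity shift (m/s) of a transition with
    laboratory wavenumber omega0_cm and sensitivity q_cm (both cm^-1)."""
    x = 2.0 * dalpha_over_alpha
    domega = q_cm * x                 # shifted wavenumber minus omega0
    return -C * domega / omega0_cm   # apparent velocity shift

# Hypothetical Fe II-like line: omega0 ~ 38660 cm^-1, q ~ 1500 cm^-1
shift = velocity_shift(38660.0, 1500.0, 1.0e-5)  # for dalpha/alpha = 1e-5
```

For a fractional change of 10⁻⁵ in α this gives a shift of a few hundred m/s, which is why precise wavelength comparisons across many transitions are sensitive probes.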
Man's Size in Terms of Fundamental Constants.
ERIC Educational Resources Information Center
Press, William H.
1980-01-01
Reviews calculations that derive an order-of-magnitude expression for the size of man in terms of fundamental constants, assuming that man satisfies these three properties: he is made of complicated molecules; he requires an atmosphere which is not hydrogen and helium; and he is as large as possible. (CS)
Search for a Variation of Fundamental Constants
NASA Astrophysics Data System (ADS)
Ubachs, W.
2013-06-01
Since the days of Dirac, scientists have speculated about the possibility that the laws of nature, and the fundamental constants appearing in those laws, are not rock-solid and eternal but may be subject to change in time or space. Such a scenario of evolving constants might provide an answer to the deepest puzzle of contemporary science, namely why the conditions in our local Universe allow for extreme complexity: the fine-tuning problem. In the past decade it has been established that spectral lines of atoms and molecules, which can currently be measured at ever-higher accuracies, form an ideal test ground for probing drifting constants. This has brought this subject from the realm of metaphysics to that of experimental science. In particular, the spectra of molecules are sensitive probes of a variation of the proton-electron mass ratio μ, either on a cosmological time scale or on a laboratory time scale. A comparison can be made between spectra of molecular hydrogen observed in the laboratory and at high redshift (z=2-3), using the Very Large Telescope (Paranal, Chile) and the Keck telescope (Hawaii). This puts a constraint on a varying mass ratio Δμ/μ at the 10^{-5} level. The optical work can also be extended to include CO molecules. Furthermore, a novel direction is discussed: it was discovered that molecules exhibiting hindered internal rotation have spectral lines in the radio spectrum that are extremely sensitive to a varying proton-electron mass ratio. Such lines in the spectrum of methanol were recently observed with the radio telescope at Effelsberg (Germany). F. van Weerdenburg, M.T. Murphy, A.L. Malec, L. Kaper, W. Ubachs, Phys. Rev. Lett. 106, 180802 (2011). A. Malec, R. Buning, M.T. Murphy, N. Milutinovic, S.L. Ellison, J.X. Prochaska, L. Kaper, J. Tumlinson, R.F. Carswell, W. Ubachs, Mon. Not. Roy. Astron. Soc. 403, 1541 (2010). E.J. Salumbides, M.L. Niu, J. Bagdonaite, N. de Oliveira, D. Joyeux, L. Nahon, W. Ubachs, Phys. Rev. A 86, 022510
Spatial and temporal variations of fundamental constants
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Agafonova, I. I.; Molaro, P.; Reimers, D.
2010-11-01
Spatial and temporal variations in the electron-to-proton mass ratio, μ, and in the fine-structure constant, α, are not present in the Standard Model of particle physics, but they arise quite naturally in grand unification theories, multidimensional theories, and in general whenever a coupling of light scalar fields to baryonic matter is considered. The light scalar fields are usually attributed to a negative-pressure substance permeating the entire visible Universe and known as dark energy. This substance is thought to be responsible for the cosmic acceleration at low redshifts, z < 1. A strong dependence of μ and α on the ambient matter density is predicted by chameleon-like scalar field models. Calculations of atomic and molecular spectra show that different transitions have different sensitivities to changes in fundamental constants. Thus, by measuring the relative line positions, ΔV, between such transitions, one can probe the hypothetical variability of physical constants. In particular, interstellar molecular clouds can be used to test the matter-density dependence of μ, since the gas density in these clouds is ~15 orders of magnitude lower than in the terrestrial environment. We use the best-quality radio spectra of the inversion transition of NH3 (J,K)=(1,1) and rotational transitions of other molecules to estimate the radial velocity offsets, ΔV ≡ V_rot − V_inv. The obtained value of ΔV shows a statistically significant positive shift of 23 ± 4(stat) ± 3(sys) m s⁻¹ (1σ). Interpreted in terms of a variation in the electron-to-proton mass ratio, this gives Δμ/μ = (22 ± 4(stat) ± 3(sys)) × 10⁻⁹. A strong constraint on variation of the quantity F = α²/μ in the Milky Way is found from comparison of the fine-structure transition J=1-0 in atomic carbon C I with the low-J rotational lines of carbon monoxide ¹³CO arising in interstellar molecular clouds: |ΔF/F| < 3 × 10⁻⁷. This yields |Δα/α| < 1.5 × 10⁻⁷ at z = 0. Since extragalactic absorbers have gas densities
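The conversion from the measured velocity offset to Δμ/μ follows from the different sensitivities of inversion and rotational transitions. A short check, assuming the commonly cited sensitivity difference of about 3.46 between the NH3 inversion line and rotational lines (an assumed literature value, not stated in the abstract), reproduces the quoted figure:

```python
# Conversion from a radial velocity offset to dmu/mu for the ammonia method.
# The sensitivity difference dK ~ 3.46 between the NH3 inversion transition
# and rotational transitions is an assumed literature value.

C = 299_792_458.0  # speed of light, m/s

def dmu_over_mu(delta_v_ms, d_sensitivity=3.46):
    """Fractional proton-electron mass-ratio change implied by a radial
    velocity offset delta_v_ms (m/s) between the two transition types."""
    return delta_v_ms / (C * d_sensitivity)

val = dmu_over_mu(23.0)  # the 23 m/s offset reported above -> ~2.2e-8
```

The result, about 22 × 10⁻⁹, matches the Δμ/μ value quoted in the abstract.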
Fundamental Approach to the Cosmological Constant Issue
NASA Astrophysics Data System (ADS)
Carmeli, Moshe
We use a Riemannian four-dimensional presentation for gravitation in which the coordinates are distances and velocities rather than the traditional space and time. We solve the field equations and show that there are three possibilities for the Universe to expand. The theory describes the Universe as having a three-phase evolution with a decelerating expansion, followed by a constant and then an accelerating expansion, and it predicts that the Universe is now in the latter phase. It is shown, assuming Ωm = 0.245, that the transition from a decelerating to an accelerating expansion occurred 8.5 Gyr ago, at which time the cosmic radiation temperature was 146 K. Recent observations show that the Universe's growth is accelerating. Our theory confirms these recent experimental results. The theory also predicts that there is now a positive pressure in the Universe. Although the theory has no cosmological constant, we extract from it its equivalent and show that Λ = 1.934 × 10⁻³⁵ s⁻². This value of Λ is in excellent agreement with measurements. It is also shown that the three-dimensional space of the Universe is Euclidean, as the Boomerang experiment shows.
Fundamental constants: The teamwork of precision
NASA Astrophysics Data System (ADS)
Myers, Edmund G.
2014-02-01
A new value for the atomic mass of the electron is a link in a chain of measurements that will enable a test of the standard model of particle physics with better than part-per-trillion precision. See Letter p.467
Systematic harmonic power laws inter-relating multiple fundamental constants
NASA Astrophysics Data System (ADS)
Chakeres, Donald; Buckhanan, Wayne; Andrianarijaona, Vola
2017-01-01
Power laws and harmonic systems are ubiquitous in physics. We hypothesize that 2, π, the electron, Bohr radius, Rydberg constant, neutron, fine structure constant, Higgs boson, top quark, kaons, pions, muon, tau, W, and Z, when scaled in a common single unit, are all inter-related by systematic harmonic power laws. This implies that if the power law is known, it is possible to derive a fundamental constant's scale in the absence of any direct experimental data for that constant. This is true in the case of the hydrogen constants. We created a power-law search engine: a computer program that randomly generates possible positive or negative powers and searches for cases where the product of a logical group of constants equals 1, confirming that they are physically valid. For 2, π, and the hydrogen constants the search engine found Planck's constant, Coulomb's energy law, and the kinetic energy law. The product of ratios defined by two constants each was the standard general format. The search engine found systematic resonant power laws based on partial harmonic-fraction powers of the neutron for all of the constants, with products near 1 within their known experimental precision, when utilized with appropriate hydrogen constants. We conclude that multiple fundamental constants are inter-related within a harmonic power law system.
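A toy version of such a search engine can be sketched in a few lines. This sketch is exhaustive over a small integer exponent range rather than random, and the particular constants chosen (SI values) are illustrative, not those of the paper; it recovers the familiar identity α = 2πe²/(4πε₀hc):

```python
import itertools
import math

# Toy "power-law search engine": exhaustively try small integer exponents on a
# few constants (SI values) and keep combinations whose product is 1. The
# paper's program was random and used different constants; this is a sketch.

consts = {
    "two_pi":   2.0 * math.pi,
    "h":        6.62607015e-34,                    # Planck constant, J s
    "e":        1.602176634e-19,                   # elementary charge, C
    "c":        299792458.0,                       # speed of light, m/s
    "4pi_eps0": 4.0 * math.pi * 8.8541878128e-12,  # F/m
    "alpha":    7.2973525693e-3,                   # fine-structure constant
}
names = list(consts)
logs = [math.log(consts[n]) for n in names]

hits = []
for powers in itertools.product(range(-2, 3), repeat=len(names)):
    if not any(powers):
        continue  # skip the trivial all-zero combination
    if abs(sum(p * lg for p, lg in zip(powers, logs))) < 1e-6:
        hits.append(dict(zip(names, powers)))

# Expect alpha = 2*pi*e^2 / (4*pi*eps0 * h * c) and its inverse as the hits.
```

Working in log space keeps the products numerically stable; larger constant sets or fractional exponents would only change the `consts` dictionary and the exponent grid.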
Quantum electrodynamics, high-resolution spectroscopy and fundamental constants
NASA Astrophysics Data System (ADS)
Karshenboim, Savely G.; Ivanov, Vladimir G.
2017-01-01
Recent progress in high-resolution spectroscopy has delivered a variety of accurate optical results, which can be used for the determination of the atomic fundamental constants and for constraining their possible time variation. We present a brief overview of the results, discussing in particular the determination of the Rydberg constant, the relative atomic weights of the electron and proton, their mass ratio, and the fine structure constant. Many individual results on those constants are obtained with the use of quantum electrodynamics, and we discuss which sectors of QED are involved. We derive constraints on a possible time variation of the fine structure constant and of m_e/m_p.
Differential Mobility Spectrometry: Preliminary Findings on Determination of Fundamental Constants
NASA Technical Reports Server (NTRS)
Limero, Thomas; Cheng, Patti; Boyd, John
2007-01-01
The electron capture detector (ECD) has been used for 40+ years (1) to derive fundamental constants such as a compound's electron affinity. Given this historical perspective, it is not surprising that differential mobility spectrometry (DMS) might be used in a like manner. This paper will present data from a gas chromatography (GC)-DMS instrument that illustrates the potential capability of this device to derive fundamental constants for electron-capturing compounds. Potential energy curves will be used to provide possible explanation of the data.
Redefinition of SI Units Based on Fundamental Physical Constants
NASA Astrophysics Data System (ADS)
Fujii, Kenichi
The definitions of some units of the International System are likely to be revised as early as 2011 by basing them on fixed values of fundamental constants of nature, provided experimental realizations are demonstrated with sufficiently small uncertainties. As regards the kilogram, experiments aiming at linking it to the Avogadro constant and the Planck constant are under way in several laboratories. Details are given on the experimental techniques developed to achieve the target. The other units likely to be redefined are the ampere, the kelvin and the mole. Advantages and disadvantages of different alternatives for revised definitions are discussed.
The determination of best values of the fundamental physical constants.
Taylor, Barry N
2005-09-15
The purpose of this paper is to provide an overview of how a self-consistent set of 'best values' of the fundamental physical constants for use worldwide by all of science and technology is obtained from all of the relevant data available at a given point in time. The basis of the discussion is the 2002 Committee on Data for Science and Technology (CODATA) least-squares adjustment of the values of the constants, the most recent such study available, which was carried out under the auspices of the CODATA Task group on fundamental constants. A detailed description of the 2002 CODATA adjustment, which took into account all relevant data available by 31 December 2002, plus selected data that became available by Fall of 2003, may be found in the January 2005 issue of the Reviews of Modern Physics. Although the latter publication includes the full set of CODATA recommended values of the fundamental constants resulting from the 2002 adjustment, the set is also available electronically at http://physics.nist.gov/constants.
Planck intermediate results. XXIV. Constraints on variations in fundamental constants
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombo, L. P. L.; Couchot, F.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J. M.; Dole, H.; Doré, O.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G. W.; Prunet, S.; Rachen, J. 
P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Ristorcelli, I.; Rocha, G.; Roudier, G.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Uzan, J.-P.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Yvon, D.; Zacchei, A.; Zonca, A.
2015-08-01
Any variation in the fundamental physical constants, in particular in the fine structure constant, α, or in the mass of the electron, m_e, affects the recombination history of the Universe and causes an imprint on the cosmic microwave background angular power spectra. We show that the Planck data allow one to improve the constraint on the time variation of the fine structure constant at redshift z ~ 10³ by about a factor of 5 compared to WMAP data, as well as to break the degeneracy with the Hubble constant, H0. In addition to α, we can set a constraint on the variation in the mass of the electron, m_e, and on the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other cosmological probes. We conclude that independent time variations of the fine structure constant and of the mass of the electron are constrained by Planck to Δα/α = (3.6 ± 3.7) × 10⁻³ and Δm_e/m_e = (4 ± 11) × 10⁻³ at the 68% confidence level. We also investigate the possibility of a spatial variation of the fine structure constant. The relative amplitude of a dipolar spatial variation in α (corresponding to a gradient across our Hubble volume) is constrained to be δα/α = (-2.4 ± 3.7) × 10⁻². Appendices are available in electronic form at http://www.aanda.org
Early universe constraints on time variation of fundamental constants
Landau, Susana J.; Mosquera, Mercedes E.; Scoccola, Claudia G.; Vucetich, Hector
2008-10-15
We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, cosmic microwave background, and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant α and the Higgs vacuum expectation value.
The Relation between Fundamental Constants and Particle Physics Parameters
NASA Astrophysics Data System (ADS)
Thompson, Rodger
2017-01-01
The observed constraints on the variability of the proton to electron mass ratio μ and the fine structure constant α are used to establish constraints on the variability of the Quantum Chromodynamic Scale and a combination of the Higgs Vacuum Expectation Value and the Yukawa couplings. Further model dependent assumptions provide constraints on the Higgs VEV and the Yukawa couplings separately. A primary conclusion is that limits on the variability of dimensionless fundamental constants such as μ and α provide important constraints on the parameter space of new physics and cosmologies.
Recommended Values of the Fundamental Physical Constants: A Status Report
Taylor, Barry N.; Cohen, E. Richard
1990-01-01
We summarize the principal advances made in the fundamental physical constants field since the completion of the 1986 CODATA least-squares adjustment of the constants and discuss their implications for both the 1986 set of recommended values and the next least-squares adjustment. In general, the new results lead to values of the constants with uncertainties 5 to 7 times smaller than the uncertainties assigned to the 1986 values. However, the changes in the values themselves are less than twice the 1986 assigned one-standard-deviation uncertainties and thus are not highly significant. Although much new data has become available since 1986, three new results dominate the analysis: a value of the Planck constant obtained from a realization of the watt; a value of the fine-structure constant obtained from the magnetic moment anomaly of the electron; and a value of the molar gas constant obtained from the speed of sound in argon. Because of their dominant role in determining the values and uncertainties of many of the constants, it is highly desirable that additional results of comparable uncertainty that corroborate these three data items be obtained before the next adjustment is carried out. Until then, the 1986 CODATA set of recommended values will remain the set of choice. PMID:28179787
Machine Shop Fundamentals: Part I.
ERIC Educational Resources Information Center
Kelly, Michael G.; And Others
These instructional materials were developed and designed for secondary and adult limited English proficient students enrolled in machine tool technology courses. Part 1 includes 24 lessons covering introduction, safety and shop rules, basic machine tools, basic machine operations, measurement, basic blueprint reading, layout, and bench tools.…
Dynamical dark energy and variation of fundamental "constants"
NASA Astrophysics Data System (ADS)
Stern, Steffen
2008-12-01
In this thesis we study the influence of a possible variation of fundamental "constants" on the process of Big Bang Nucleosynthesis (BBN). Our findings are combined with further studies on variations of constants in other physical processes to constrain models of grand unification (GUT) and quintessence. We will find that the 7Li problem of BBN can be ameliorated if one allows for varying constants, where especially varying light quark masses show a strong influence. Furthermore, we show that recent studies of varying constants are in contradiction with each other and BBN in the framework of six exemplary GUT scenarios, if one assumes monotonic variation with time. We conclude that there is strong tension between recent claims of varying constants, hence either some claims have to be revised, or there are much more sophisticated GUT relations (and/or non-monotonic variations) realized in nature. The methods introduced in this thesis prove to be powerful tools to probe regimes well beyond the Standard Model of particle physics or the concordance model of cosmology, which are currently inaccessible by experiments. Once the first irrefutable proofs of varying constants are available, our method will allow for probing the consistency of models beyond the standard theories like GUT or quintessence and also the compatibility between these models.
ESO Future Facilities to Probe Fundamental Physical Constants
NASA Astrophysics Data System (ADS)
Molaro, Paolo; Liske, Jochen
Following HARPS, two ESO projects aim at the ambitious goal of reaching the highest possible precision in measuring the radial velocity of astronomical sources. The ESPRESSO spectrograph, located at the incoherently combined focus of the four VLTs but able to work with either one or all of the VLT units, and CODEX for the E-ELT will mark the ESO roadmap towards the cm s⁻¹ level of precision and possibly an unlimited temporal baseline. By providing photon-noise-limited measurements, they promise to improve the present limits on the variability of fundamental physical constants by one and two orders of magnitude, respectively, thus allowing, for instance, verification of the claim discussed at this conference by John Webb of a possible spatial dipole in the variation of the fine structure constant.
Is there further evidence for spatial variation of fundamental constants?
NASA Astrophysics Data System (ADS)
Berengut, J. C.; Flambaum, V. V.; King, J. A.; Curran, S. J.; Webb, J. K.
2011-06-01
Indications of spatial variation of the fine-structure constant, α, based on study of quasar absorption systems have recently been reported [J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, and M. B. Bainbridge, arXiv:1008.3907.]. The physics that causes this α-variation should have other observable manifestations, which motivates us to look for complementary astrophysical effects. In this paper we propose a method to test whether spatial variation of fundamental constants existed during the epoch of big bang nucleosynthesis, and we study existing measurements of the deuterium abundance for a signal. We also examine existing quasar absorption spectra that are sensitive to variation of the electron-to-proton mass ratio μ and of x = α²μg_p for spatial variation.
Evaluation of uncertainty in the adjustment of fundamental constants
NASA Astrophysics Data System (ADS)
Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza
2016-02-01
Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
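The location-scale (Birge-ratio) approach mentioned above can be sketched in a few lines of weighted least squares. The three "measurements" below are invented, mutually inconsistent values used only to show the mechanics:

```python
import math

# Sketch of the weighted-least-squares / Birge-ratio procedure described above.
# The three "measurements" below are invented, mutually inconsistent values.

def birge_adjust(values, sigmas):
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    u_int = math.sqrt(1.0 / sum(w))                    # internal uncertainty
    chi2 = sum(wi * (x - mean)**2 for wi, x in zip(w, values))
    r_b = math.sqrt(chi2 / (len(values) - 1))          # Birge ratio
    u = u_int * max(r_b, 1.0)  # inflate uncertainty when scatter is excessive
    return mean, u, r_b

mean, u, r_b = birge_adjust([6.6740, 6.6743, 6.6754],
                            [0.0003, 0.0004, 0.0003])
```

When the Birge ratio exceeds 1, the stated uncertainties underpredict the observed scatter and are inflated accordingly; the random effects model handles the same inconsistency through an additional between-measurement variance instead.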
Laboratory Limits for Temporal Variations of Fundamental Constants: An Update
NASA Astrophysics Data System (ADS)
Peik, E.; Lipphardt, B.; Schnatz, H.; Tamm, C.; Weyers, S.; Wynands, R.
2008-09-01
Precision comparisons of different atomic frequency standards over a period of a few years can be used for a sensitive search for temporal variations of fundamental constants. We present recent frequency measurements of the 688 THz transition in the 171Yb+ ion. For this transition frequency a record over six years is now available, showing that a possible frequency drift relative to a cesium clock can be constrained to (-0.54 ± 0.97) Hz/yr, i.e. at the level of 2 × 10⁻¹⁵ per year. Combined with precision frequency measurements of an optical frequency in 199Hg+ and of the hyperfine ground state splitting in 87Rb, a stringent limit on temporal variations of the fine structure constant α: d ln α/dt = (-0.26 ± 0.39) × 10⁻¹⁵ yr⁻¹ and a model-dependent limit for variations of the proton-to-electron mass ratio μ in the present epoch can be derived: d ln μ/dt = (-1.2 ± 2.2) × 10⁻¹⁵ yr⁻¹. We discuss these results in the context of astrophysical observations that apparently indicate changes in both of these constants over the last 5-10 billion years.
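The quoted fractional level follows directly from dividing the measured drift by the optical transition frequency; a minimal arithmetic check using the numbers in the abstract:

```python
# Arithmetic behind the quoted fractional drift limit: divide the measured
# frequency drift (relative to Cs) by the 171Yb+ transition frequency.

NU_YB = 688.0e12            # 171Yb+ optical transition frequency, Hz (approximate)
DRIFT, SIGMA = -0.54, 0.97  # measured drift and 1-sigma uncertainty, Hz per year

frac_drift = DRIFT / NU_YB  # fractional drift per year, ~ -8e-16
frac_sigma = SIGMA / NU_YB  # its uncertainty, ~1.4e-15 per year
```

Both numbers come out at or below the 2 × 10⁻¹⁵ per year level stated above.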
Base units of the SI, fundamental constants and modern quantum physics.
Bordé, Christian J
2005-09-15
Over the past 40 years, a number of discoveries in quantum physics have completely transformed our vision of fundamental metrology. This revolution starts with the frequency stabilization of lasers using saturation spectroscopy and the redefinition of the metre by fixing the velocity of light c. Today, the trend is to redefine all SI base units from fundamental constants and we discuss strategies to achieve this goal. We first consider a kinematical frame, in which fundamental constants with a dimension, such as the speed of light c, the Planck constant h, the Boltzmann constant k_B or the electron mass m_e can be used to connect and redefine base units. The various interaction forces of nature are then introduced in a dynamical frame, where they are completely characterized by dimensionless coupling constants such as the fine structure constant α or its gravitational analogue α_G. This point is discussed by rewriting the Maxwell and Dirac equations with new force fields and these coupling constants. We describe and stress the importance of various quantum effects leading to the advent of this new quantum metrology. In the second part of the paper, we present the status of the seven base units and the prospects of their possible redefinitions from fundamental constants in an experimental perspective. The two parts can be read independently and they point to these same conclusions concerning the redefinitions of base units. The concept of rest mass is directly related to the Compton frequency of a body, which is precisely what is measured by the watt balance. The conversion factor between mass and frequency is the Planck constant, which could therefore be fixed in a realistic and consistent new definition of the kilogram based on its Compton frequency. We discuss also how the Boltzmann constant could be better determined and fixed to replace the present definition of the kelvin.
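The mass-frequency link mentioned above, the Compton frequency f_C = mc²/h, can be illustrated with a short sketch using the exact SI values of h and c (the electron mass is the CODATA value):

```python
# The Compton-frequency link between mass and frequency: f_C = m c^2 / h.
# h and c are exact in the revised SI; the electron mass is the CODATA value.

H = 6.62607015e-34   # Planck constant, J s
C = 299792458.0      # speed of light, m/s

def compton_frequency(mass_kg):
    """Frequency (Hz) whose photon energy h*f equals the rest energy m*c^2."""
    return mass_kg * C**2 / H

f_kilogram = compton_frequency(1.0)               # ~1.36e50 Hz
f_electron = compton_frequency(9.1093837015e-31)  # ~1.2e20 Hz
```

Fixing h, as in the revised SI, makes this conversion factor exact, which is what allows a watt-balance (Kibble balance) measurement to realize the kilogram.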
NASA Astrophysics Data System (ADS)
Berengut, J. C.; Flambaum, V. V.; Kava, E. M.
2011-10-01
Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper, we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark-mass variation, in a number of isotopes of experimental interest including 201,199Hg and 87,85Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions on variation in nuclear radius, which in turn is proportional to any change in quark mass. Our calculations of expectation values of proton and neutron spin in nuclei are also needed to interpret measurements of violations of fundamental symmetries.
The fundamental constants of nature from lattice gauge theory simulations
Mackenzie, Paul B.; /Fermilab
2005-01-01
The fundamental laws of nature as we now know them are governed by the fundamental parameters of the Standard Model. Some of these, such as the masses of the quarks, have been hidden from direct observation by the confinement of quarks. They are now being revealed through large scale numerical simulation of lattice gauge theory.
CODATA recommended values of the fundamental physical constants: 2014*
NASA Astrophysics Data System (ADS)
Mohr, Peter J.; Newell, David B.; Taylor, Barry N.
2016-07-01
This paper gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. Details of the data selection and methodology of the adjustment are described. The recommended values may also be found at physics.nist.gov/constants.
Search for variations of fundamental constants using atomic fountain clocks.
Marion, H; Pereira Dos Santos, F; Abgrall, M; Zhang, S; Sortais, Y; Bize, S; Maksimovic, I; Calonico, D; Grünert, J; Mandache, C; Lemonde, P; Santarelli, G; Laurent, Ph; Clairon, A; Salomon, C
2003-04-18
Over five years, we have compared the hyperfine frequencies of 133Cs and 87Rb atoms in their electronic ground state using several laser-cooled 133Cs and 87Rb atomic fountains with an accuracy of approximately 10^−15. These measurements set a stringent upper bound to a possible fractional time variation of the ratio between the two frequencies: d/dt ln(ν_Rb/ν_Cs) = (0.2 ± 7.0) × 10^−16 yr^−1 (1σ uncertainty). The same limit applies to a possible variation of the quantity (μ_Rb/μ_Cs)α^−0.44, which involves the ratio of nuclear magnetic moments and the fine structure constant.
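A bound of this kind is obtained by fitting the logarithm of the measured frequency ratio against time; the slope of the fit is d/dt ln(ν_Rb/ν_Cs). A hedged sketch with synthetic offsets (illustrative magnitudes only, not the actual fountain data):

```python
import numpy as np

# Synthetic offsets of ln(nu_Rb / nu_Cs) relative to the first measurement,
# one point per year; the ~1e-16 scatter mimics the quoted accuracy.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ln_ratio_offset = np.array([0.0, 2e-16, -1e-16, 1e-16, 0.0, 3e-16])

# Least-squares slope: the fractional drift of the ratio per year.
slope, intercept = np.polyfit(years, ln_ratio_offset, 1)
# A null result means |slope| is compatible with zero at the ~1e-16/yr level.
```

In practice each point also carries a systematic uncertainty budget, and the fit is weighted accordingly.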
Trapped Hydrogen Spectroscopy: Fundamental Constants and Atomic Clocks
NASA Astrophysics Data System (ADS)
Willmann, Lorenz
2002-05-01
Ultra high resolution spectroscopy was an essential ingredient in the realisation and observation of Bose-Einstein condensation of atomic hydrogen (D. G. Fried, T. Killian, L. Willmann, D. Landhuis, S. Moss, D. Kleppner, and T. Greytak, Phys. Rev. Lett. 81, 3807 (1998)). That experiment is a good starting point to explore the possibilities for future spectroscopy of trapped ultracold hydrogen. Of particular interest are two aspects: first, the exploitation of the intrinsically small linewidth of the 1S-2S transition, only 1.3 Hz, as an optical frequency standard; second, the precision determination of the 2S-nS energy splittings in hydrogen, which can be used to determine the Rydberg constant, the Lamb shift or the proton charge radius. We will combine these two aspects in the experiment. The absolute value of the hydrogen 1S-2S transition frequency (M. Niering, R. Holzwarth, J. Reichert, P. Pokasov, Th. Udem, M. Weitz, T. W. Hänsch, P. Lemonde, G. Santarelli, M. Abgrall, P. Laurent, C. Salomon, and A. Clairon, Phys. Rev. Lett. 84, 5496 (2000)) serves as an optical frequency standard for the measurements of the 2S-nS transition frequencies. The frequencies will be linked by a frequency comb generated by a mode-locked laser. Currently, a femtosecond laser is being set up in collaboration with the group of F. Kärtner at MIT. The source of trapped atoms in the metastable 2S state is laser excitation of the 1S-2S transition, thus the 2S-nS spectroscopy can be done at the same time and in the same trapping field to reduce systematic effects.
Constraints on alternate universes: stars and habitable planets with different fundamental constants
NASA Astrophysics Data System (ADS)
Adams, Fred C.
2016-02-01
This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. We focus on the fine structure constant α and the gravitational structure constant αG, and find the region in the α-αG plane that supports working stars and habitable planets. This work is motivated, in part, by the possibility that different versions of the laws of physics could be realized within other universes. The following constraints are enforced: [A] long-lived stable nuclear burning stars exist, [B] planetary surface temperatures are hot enough to support chemical reactions, [C] stellar lifetimes are long enough to allow biological evolution, [D] planets are massive enough to maintain atmospheres, [E] planets are small enough in mass to remain non-degenerate, [F] planets are massive enough to support sufficiently complex biospheres, [G] planets are smaller in mass than their host stars, and [H] stars are smaller in mass than their host galaxies. This paper delineates the portion of the α-αG plane that satisfies all of these constraints. The results indicate that viable universes—with working stars and habitable planets—can exist within a parameter space where the structure constants α and αG vary by several orders of magnitude. These constraints also provide upper bounds on the structure constants (α,αG) and their ratio. We find the limit αG/α ≲ 10^−34, which shows that habitable universes must have a large hierarchy between the strengths of the gravitational force and the electromagnetic force.
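The hierarchy quoted at the end can be checked directly from measured constants; here αG is defined with the proton mass, one common convention (an assumption of this sketch, since the abstract does not state which mass enters its definition):

```python
# Gravitational analogue of the fine-structure constant,
# alpha_G = G * m_p**2 / (hbar * c), using the proton mass (assumed convention).
G     = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
m_p   = 1.672_621_924e-27   # proton mass, kg
hbar  = 1.054_571_817e-34   # reduced Planck constant, J*s
c     = 299_792_458.0       # speed of light, m/s
alpha = 7.297_352_569e-3    # fine-structure constant (~1/137)

alpha_G = G * m_p**2 / (hbar * c)  # about 5.9e-39
ratio   = alpha_G / alpha          # about 8e-37, well below the 1e-34 bound
```

Our own universe therefore sits comfortably inside the habitable region delineated by the paper.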
Fundamental molecular physics and chemistry, part 1
NASA Astrophysics Data System (ADS)
Stehney, A. F.; Inokuti, M.
1983-12-01
Scientifically, the work of the program deals with aspects of the physics and chemistry of molecules related to their interactions with photons, electrons, and other external agents. These areas of study were chosen in view of our goals; that is to say, they were chosen so that the eventual outcome of the work meets some of the needs of the US Department of Energy (DOE) and of other government agencies that support the research. First, cross sections for electron and photon interactions with molecules were determined theoretically and experimentally, because those cross sections are indispensable for detailed microscopic analyses of the earliest processes of radiation action on any molecular substance, including biological materials. Those analyses in turn provide a sound basis for radiology and radiation dosimetry. Second, the spectroscopy of certain molecules and of small clusters of molecules was studied because this topic is fundamental to the full understanding of atmospheric-pollutant chemistry.
[Aerosinusitis: part 1: Fundamentals, pathophysiology and prophylaxis].
Weber, R; Kühnel, T; Graf, J; Hosemann, W
2014-01-01
The relevance of aerosinusitis stems from the high number of flight passengers and the impaired fitness for work of the flight personnel. The frontal sinus is more frequently affected than the maxillary sinus and the condition generally occurs during descent. Sinonasal diseases and anatomic variations leading to obstruction of paranasal sinus ventilation favor the development of aerosinusitis. This Continuing Medical Education (CME) article is based on selective literature searches of the PubMed database (search terms: "aerosinusitis", "barosinusitis", "barotrauma" AND "sinus", "barotrauma" AND "sinusitis", "sinusitis" AND "flying" OR "aviator"). Additionally, currently available monographs and further articles that could be identified based on the publication reviews were also included. Part 1 presents the pathophysiology, symptoms, risk factors, epidemiology and prophylaxis of aerosinusitis. In part 2, diagnosis, conservative and surgical treatment will be discussed.
Quasar searches for variations in fundamental constants: the need for laboratory spectroscopy
NASA Astrophysics Data System (ADS)
Murphy, Michael Thomas
2015-08-01
I will briefly review the main advances in the search for cosmological variations in the fundamental constants of Nature using quasars that rely on, and have sometimes driven, improvements in laboratory spectroscopy. These focus on just two main fundamental parameters - the fine-structure constant and the proton-electron mass ratio - but require laboratory measurements, from the radio through to the ultraviolet, of molecules, atoms and their ions. Although many limitations have been removed by concerted laboratory efforts, some still remain. Still greater precision may be required by frequency-comb calibration of future astronomical spectrographs (astrocombs) and the Atacama Large Millimeter/submillimeter Array (ALMA).
NASA Technical Reports Server (NTRS)
Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.
2013-01-01
The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B_0 and C_0 rotational constants are within 6 MHz of their experimental values, and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH+ is 3330.91 cm^−1, and the vibrational configuration interaction computed result is 3330.9 cm^−1. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH+ and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.
Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+
Biesheuvel, J.; Karr, J.-Ph.; Hilico, L.; Eikema, K. S. E.; Ubachs, W.; Koelemeij, J. C. J.
2016-01-01
The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886
Gyromagnetic factors and atomic clock constraints on the variation of fundamental constants
NASA Astrophysics Data System (ADS)
Luo, Feng; Olive, Keith A.; Uzan, Jean-Philippe
2011-11-01
We consider the effect of the coupled variations of fundamental constants on the nucleon magnetic moment. The nucleon g-factor enters into the interpretation of the measurements of variations in the fine-structure constant, α, in both the laboratory (through atomic clock measurements) and in astrophysical systems (e.g. through measurements of the 21 cm transitions). A null result can be translated into a limit on the variation of a set of fundamental constants, that is usually reduced to α. However, in specific models, particularly unification models, changes in α are always accompanied by corresponding changes in other fundamental quantities such as the QCD scale, ΛQCD. This work tracks the changes in the nucleon g-factors induced from changes in ΛQCD and the light quark masses. In principle, these coupled variations can improve the bounds on the variation of α by an order of magnitude from existing atomic clock and astrophysical measurements. Unfortunately, the calculation of the dependence of g-factors on fundamental parameters is notoriously model-dependent.
Stadnik, Y V; Flambaum, V V
2015-04-24
Any slight variations in the fundamental constants of nature, which may be induced by dark matter or some yet-to-be-discovered cosmic field, would characteristically alter the phase of a light beam inside an interferometer, which can be measured extremely precisely. Laser and maser interferometry may be applied to searches for the linear-in-time drift of the fundamental constants, detection of topological defect dark matter through transient-in-time effects, and for a relic, coherently oscillating condensate, which consists of scalar dark matter fields, through oscillating effects. Our proposed experiments require either minor or no modifications of existing apparatus, and offer extensive reach into important and unconstrained spaces of physical parameters.
Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?
Stadnik, Y V; Flambaum, V V
2015-11-13
We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ, which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ, produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial ⁴He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial ⁴He abundance.
Dependence of macrophysical phenomena on the values of the fundamental constants
NASA Astrophysics Data System (ADS)
Press, W. H.; Lightman, A. P.
1983-12-01
Using simple arguments, it is considered how the fundamental constants determine the scales of various macroscopic phenomena, including the properties of solid matter; the distinction between rocks, asteroids, planets, and stars; the conditions on habitable planets; the length of the day and year; and the size and athletic ability of human beings. Most of the results, where testable, are accurate to within a couple of orders of magnitude.
Truppe, S.; Hendricks, R.J.; Tokunaga, S.K.; Lewandowski, H.J.; Kozlov, M.G.; Henkel, Christian; Hinds, E.A.; Tarbutt, M.R.
2013-01-01
Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α = (0.3 ± 1.1) × 10^−7, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ = (−0.7 ± 2.2) × 10^−7. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439
Competing bounds on the present-day time variation of fundamental constants
Dent, Thomas; Stern, Steffen; Wetterich, Christof
2009-04-15
We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The 'winner in sensitivity' depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.
A Different Look at Dark Energy and the Time Variation of Fundamental Constants
Weinstein, Marvin; /SLAC
2011-02-07
This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lemaître-Robertson-Walker (FRW) cosmology implies very different things than for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to a Planck volume) and the relationship of the vacuum-energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints knowing the difference between w and minus one allows one to predict ẇ. It is argued that this is a robust result and if this prediction fails the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-moving cutoff implies a predictable time variation of fundamental constants. This is accompanied by a general discussion of why this is so, what the strongest phenomenological limits upon this predicted variation are, and which limits are in tension with the idea of a co-moving cutoff. It is pointed out, however, that a careful comparison of the predicted time variation of fundamental constants is not possible without restricting to a particular model field-theory, and that is not done in this paper.
High Resolution Microwave Spectroscopy of CH as a Search for Variation of Fundamental Constants
NASA Astrophysics Data System (ADS)
Truppe, S.; Hendricks, R. J.; Tokunaga, S. K.; Hinds, E. A.; Tarbutt, M. R.
2013-06-01
The Standard Model of particle physics assumes that fundamental, dimensionless constants like the fine-structure constant, α, or the ratio of the proton to electron mass, μ, remain constant through time and space. Laboratory experiments have set tight bounds on variations of such constants on a short time scale. Astronomical observations, however, provide vital information about possible changes on long time scales. Recent measurements using quasar absorption spectra provide some evidence for a space-time variation of the fine-structure constant α. It is thus important to verify this discovery by using an entirely different method. Recently the prospect of using rotational microwave spectra of molecules as a probe of fundamental constants variation has attracted much attention. Generally these spectra depend on μ, but if fine and hyperfine structure is involved they also become sensitive to variations of α and the nuclear g-factor. Recent calculations show that the Λ-doublet and rotational spectra of CH are particularly sensitive to possible variations of μ and α. We present recent laboratory based high-resolution spectra of the Λ-doublet transition frequencies of the F_2, J = 1/2 and F_1, J = 3/2 states of CH, X²Π (v = 0), at 3.3 GHz and 0.7 GHz respectively, with F labelling the different spin-orbit manifolds of CH. We also present a measurement of the transition frequency between the two spin-orbit manifolds F_2, J = 1/2 and F_1, J = 3/2 at 530 GHz. By using a molecular beam of CH in combination with a laser-microwave double-resonance technique and Ramsey's method of separated oscillatory fields, we have measured these transition frequencies to unprecedented accuracy. Hence CH can now be used as a sensitive probe to detect changes in fundamental constants by comparing lab based frequencies to radio-astronomical observations from distant gas clouds. References: T. Rosenband et al., Science 319(5871), 1808 (2008); J. K. Webb et al., Phys. Rev. Lett. 107
Spectroscopy of antiprotonic helium atoms and its contribution to the fundamental physical constants
Hayano, Ryugo S.
2010-01-01
Antiprotonic helium atom, a metastable neutral system consisting of an antiproton, an electron and a helium nucleus, was serendipitously discovered, and has been studied at CERN's antiproton decelerator facility. Its transition frequencies have recently been measured to nine digits of precision by laser spectroscopy. By comparing these experimental results with three-body QED calculations, the antiproton-to-electron mass ratio was determined as 1836.152674(5). This result contributed to the CODATA recommended values of the fundamental physical constants. PMID:20075605
Placing constraints on the time-variation of fundamental constants using atomic clocks
NASA Astrophysics Data System (ADS)
Nisbet-Jones, Peter
2015-05-01
Optical atomic frequency standards, such as those based on a single trapped ion of 171Yb+, now demonstrate systematic frequency uncertainties in the 10^−17 to 10^−18 range. These standards rely on the principle that the unperturbed energy levels in atoms are fixed and can thus provide absolute frequency references. A frequency standard's uncertainty is therefore limited by the uncertainty in realising the idealized unperturbed environment. There exists the possibility, however, that the unperturbed level spacing is not fixed. Some theories that go beyond the Standard Model involve a time-variation of the fundamental "constants" - such as the fine structure constant - which determine these energy levels. Measurements of spectral lines in radiation emitted from distant galaxies around 10^10 years ago are inconclusive, with some results suggesting the existence of a time-variation, and others observing nothing. By virtue of their very small measurement uncertainty, atomic-clock experiments can, in timescales of only a few years, perform tests of present-day variation that are complementary to astrophysical data. Comparisons of frequency measurements between two or more atomic "clock" transitions that have different sensitivities to these constants enable us to directly measure any present-day time-variation. Combining recent results from the NPL 171Yb+ clock with measurements from other experiments worldwide places upper limits on the present-day time-variation of the proton-to-electron mass ratio μ and the fine-structure constant α of μ̇/μ = 0.2(1.1) × 10^−16 yr^−1 and α̇/α = −0.7(2.1) × 10^−17 yr^−1.
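Extracting separate limits on α̇/α and μ̇/μ from several clock comparisons amounts to solving a small linear system: each measured ratio drift is a known linear combination of the two. The sensitivity coefficients and drift values below are illustrative placeholders, not the published numbers:

```python
import numpy as np

# d/dt ln(R_i) = K_alpha[i] * (alpha_dot/alpha) + K_mu[i] * (mu_dot/mu)
# Rows: two hypothetical clock comparisons with different sensitivities.
K = np.array([
    [-6.0, -1.0],   # e.g. an optical/microwave ratio, strongly alpha-sensitive
    [ 0.5, -1.0],   # e.g. a different optical/microwave ratio
])
measured_drifts = np.array([1.0e-17, -2.0e-17])  # per year, synthetic values

alpha_dot_over_alpha, mu_dot_over_mu = np.linalg.solve(K, measured_drifts)
```

With more than two comparisons the system is overdetermined and a weighted least-squares fit replaces the exact solve.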
An upper limit to the variation in the fundamental constants at redshift z = 5.2
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Combes, F.; Boone, F.; Agafonova, I. I.; Reimers, D.; Kozlov, M. G.
2012-04-01
Aims: We constrain a hypothetical variation in the fundamental physical constants over the course of cosmic time. Methods: We use unique observations of the CO(7-6) rotational line and the [C I] ³P₂-³P₁ fine-structure line towards a lensed galaxy at redshift z = 5.2 to constrain temporal variations in the constant F = α²/μ, where μ is the electron-to-proton mass ratio and α is the fine-structure constant. The relative change in F between z = 0 and z = 5.2, ΔF/F = (F_obs − F_lab)/F_lab, is estimated from the radial velocity offset, ΔV = V_rot − V_fs, between the rotational transitions in carbon monoxide and the fine-structure transition in atomic carbon. Results: We find a conservative value ΔV = (1 ± 5) km s^−1 (1σ C.L.), which when interpreted in terms of ΔF/F gives ΔF/F < 2 × 10^−5. Independent methods restrict the μ-variations at the level of Δμ/μ < 1 × 10^−7 at z = 0.7 (look-back time t_z=0.7 = 6.4 Gyr). Assuming that temporal variations in μ, if any, are linear, this leads to an upper limit of Δμ/μ < 2 × 10^−7 at z = 5.2 (t_z=5.2 = 12.9 Gyr). From both constraints on ΔF/F and Δμ/μ, one obtains for the relative change in α the estimate Δα/α < 8 × 10^−6, which is at present the tightest limit on Δα/α at early cosmological epochs.
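The chain of numbers in the abstract can be reproduced to first order: a velocity offset ΔV maps to ΔF/F ≈ ΔV/c, and since F = α²/μ, the α limit follows as Δα/α = (ΔF/F + Δμ/μ)/2. A sketch with the quoted inputs (rounding explains the small difference from the published 8 × 10^−6):

```python
c_kms = 299_792.458  # speed of light, km/s

dV = 5.0                      # 1-sigma velocity offset from the abstract, km/s
dF_over_F = dV / c_kms        # ~1.7e-5, quoted as < 2e-5

# F = alpha**2 / mu  =>  dF/F = 2*(dalpha/alpha) - dmu/mu
dmu_over_mu = 2e-7            # independent bound quoted for z = 5.2
dalpha_over_alpha = (dF_over_F + dmu_over_mu) / 2   # ~8.4e-6
```
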
Fundamentals of Physics, Part 1 (Chapters 1-11)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-12-01
10-8 Torque. 10-9 Newton's Second Law for Rotation. 10-10 Work and Rotational Kinetic Energy. Review & Summary. Questions. Problems. Chapter 11. Rolling, Torque, and Angular Momentum. When a jet-powered car became supersonic in setting the land-speed record, what was the danger to the wheels? 11-1 What Is Physics? 11-2 Rolling as Translation and Rotation Combined. 11-3 The Kinetic Energy of Rolling. 11-4 The Forces of Rolling. 11-5 The Yo-Yo. 11-6 Torque Revisited. 11-7 Angular Momentum. 11-8 Newton's Second Law in Angular Form. 11-9 The Angular Momentum of a System of Particles. 11-10 The Angular Momentum of a Rigid Body Rotating About a Fixed Axis. 11-11 Conservation of Angular Momentum. 11-12 Precession of a Gyroscope. Review & Summary. Questions. Problems. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
Fundamentals of Trapped Ion Mobility Spectrometry Part II: Fluid Dynamics.
Silveira, Joshua A; Michelmann, Karsten; Ridgeway, Mark E; Park, Melvin A
2016-04-01
Trapped ion mobility spectrometry (TIMS) is a new high resolution (R up to ~300) separation technique that utilizes an electric field to hold ions stationary against a moving gas. Recently, an analytical model for TIMS was derived and, in part, experimentally verified. A central, but not yet fully explored, component of the model involves the fluid dynamics at work. The present study characterizes the fluid dynamics in TIMS using simulations and ion mobility experiments. Results indicate that subsonic laminar flow develops in the analyzer, with pressure-dependent gas velocities between ~120 and 170 m/s measured at the position of ion elution. One of the key philosophical questions addressed is: how can mobility be measured in a dynamic system wherein the gas is expanding and its velocity is changing? We noted previously that the analytically useful work is primarily done on ions as they traverse the electric field gradient plateau in the analyzer. In the present work, we show that the position-dependent change in gas velocity on the plateau is balanced by a change in pressure and temperature, ultimately resulting in near position-independent drag force. That the drag force, and related variables, are nearly constant allows for the use of relatively simple equations to describe TIMS behavior. Nonetheless, we derive a more comprehensive model, which accounts for the spatial dependence of the flow variables. Experimental resolving power trends were found to be in close agreement with the theoretical dependence of the drag force, thus validating another principal component of TIMS theory.
Broeckhoven, K; Verstraeten, M; Choikhet, K; Dittmann, M; Witt, K; Desmet, G
2011-02-25
We report on a general theoretical assessment of the potential kinetic advantages of running LC gradient elution separations in the constant-pressure mode instead of in the customarily used constant-flow rate mode. Analytical calculations as well as numerical simulation results are presented. It is shown that, provided both modes are run with the same volume-based gradient program, the constant-pressure mode can potentially offer an identical separation selectivity (except for some small differences induced by the difference in pressure and viscous heating trajectory), but in a significantly shorter time. For a gradient running between 5 and 95% of organic modifier, the decrease in analysis time can be expected to be of the order of some 20% for both water-methanol and water-acetonitrile gradients, depending only weakly on the value of V_G/V₀ (or equivalently t_G/t₀). Obviously, the gain will be smaller when the start and end composition lie closer to the viscosity maximum of the considered water-organic modifier system. The assumptions underlying the obtained results (no effects of pressure and temperature on the viscosity or retention coefficient) are critically reviewed, and can be inferred to have only a small effect on the general conclusions. It is also shown that, under the adopted assumptions, the kinetic plot theory also holds for operations where the flow rate varies with time, as is the case for constant-pressure operation. Comparing both operation modes in a kinetic plot representing the maximal peak capacity versus time, it is theoretically predicted here that both modes can be expected to perform equally well in the fully C-term dominated regime (where H varies linearly with the flow rate), while the constant-pressure mode is advantageous for all lower flow rates. Near the optimal flow rate, and for linear gradients running from 5 to 95% organic modifier, time gains of the order of some 20% can be expected (or 25-30% when accounting for
NASA Technical Reports Server (NTRS)
Inostroza, Natalia; Fortenberry, Ryan C.; Huang, Xinchuan; Lee, Timothy J.
2013-01-01
Through established, highly accurate ab initio quartic force fields (QFFs), a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants has been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within 0.1-3.2 cm⁻¹. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as a reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule X ³A′ HCCN.
Inostroza, Natalia; Fortenberry, Ryan C.; Lee, Timothy J.; Huang, Xinchuan
2013-12-01
Through established, highly accurate ab initio quartic force fields, a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants has been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A′ HCCN.
Fundamentals of Physics, Part 2 (Chapters 12-20)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-12-01
Engines. 20-8 A Statistical View of Entropy. Review & Summary. Questions. Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
NASA Astrophysics Data System (ADS)
Molaro, Paolo
An ideal instrument to probe fundamental constants such as the fine-structure constant and the electron-to-proton mass ratio by means of absorption lines in QSO spectra is a spectrograph that combines high throughput, high resolution, and high stability, attached to a telescope with a large photon-collecting area. Both the ESPRESSO proposal for the incoherent combined VLT focus and CODEX for the E-ELT follow this recipe and, although they are not optimized for this purpose, they hold the promise of improving the present limits by about two orders of magnitude. Thus either these physical constants vary within this range or they will likely escape astronomical detection.
Quinn, Terry; Burnett, Keith
2005-09-15
This is a short introductory note to the texts of lectures presented at a Royal Society Discussion meeting held on 14-15 February 2005 and now published in this issue of Philosophical Transactions A. It contains a brief résumé of the papers in the order they were presented at the meeting. This issue contains the texts of all of the presentations except those of Christophe Salomon, on cold atom clocks and tests of fundamental theory, and Francis Everitt, on Gravity Probe B, which were, unfortunately, not available.
Fundamentals of Physics, Part 3 (Chapters 22-33)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-03-01
magnetic field used in an MRI scan cause a patient to be burned? 30-1 What Is Physics? 30-2 Two Experiments. 30-3 Faraday's Law of Induction. 30-4 Lenz's Law. 30-5 Induction and Energy Transfers. 30-6 Induced Electric Fields. 30-7 Inductors and Inductance. 30-8 Self-Induction. 30-9 RL Circuits. 30-10 Energy Stored in a Magnetic Field. 30-11 Energy Density of a Magnetic Field. 30-12 Mutual Induction. Review & Summary. Questions. Problems. Chapter 31. Electromagnetic Oscillations and Alternating Current. How did a solar eruption knock out the power-grid system of Quebec? 31-1 What Is Physics? 31-2 LC Oscillations, Qualitatively. 31-3 The Electrical-Mechanical Analogy. 31-4 LC Oscillations, Quantitatively. 31-5 Damped Oscillations in an RLC Circuit. 31-6 Alternating Current. 31-7 Forced Oscillations. 31-8 Three Simple Circuits. 31-9 The Series RLC Circuit. 31-10 Power in Alternating-Current Circuits. 31-11 Transformers. Review & Summary. Questions. Problems. Chapter 32. Maxwell's Equations; Magnetism of Matter. How can a mural painting record the direction of Earth's magnetic field? 32-1 What Is Physics? 32-2 Gauss' Law for Magnetic Fields. 32-3 Induced Magnetic Fields. 32-4 Displacement Current. 32-5 Maxwell's Equations. 32-6 Magnets. 32-7 Magnetism and Electrons. 32-8 Magnetic Materials. 32-9 Diamagnetism. 32-10 Paramagnetism. 32-11 Ferromagnetism. Review & Summary. Questions. Problems. Appendices. A. The International System of Units (SI). B. Some Fundamental Constants of Physics. C. Some Astronomical Data. D. Conversion Factors. E. Mathematical Formulas. F. Properties of the Elements. G. Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
New Limits on Coupling of Fundamental Constants to Gravity Using ⁸⁷Sr Optical Lattice Clocks
Blatt, S.; Ludlow, A. D.; Campbell, G. K.; Thomsen, J. W.; Zelevinsky, T.; Boyd, M. M.; Ye, J.; Baillard, X.; Fouche, M.; Le Targat, R.; Brusch, A.; Lemonde, P.; Takamoto, M.; Hong, F.-L.; Katori, H.; Flambaum, V. V.
2008-04-11
The ¹S₀-³P₀ clock transition frequency ν_Sr in neutral ⁸⁷Sr has been measured relative to the Cs standard by three independent laboratories in Boulder, Paris, and Tokyo over the last three years. The agreement at the 1×10⁻¹⁵ level makes ν_Sr the best agreed-upon optical atomic frequency. We combine periodic variations in the ⁸⁷Sr clock frequency with ¹⁹⁹Hg⁺ and H-maser data to test local position invariance by obtaining the strongest limits to date on gravitational-coupling coefficients for the fine-structure constant α, electron-proton mass ratio μ, and light quark mass. Furthermore, after ¹⁹⁹Hg⁺, ¹⁷¹Yb⁺, and H, we add ⁸⁷Sr as the fourth optical atomic clock species to enhance constraints on yearly drifts of α and μ.
Limits on variations in fundamental constants from 21-cm and ultraviolet Quasar absorption lines.
Tzanavaris, P; Webb, J K; Murphy, M T; Flambaum, V V; Curran, S J
2005-07-22
Quasar absorption spectra at 21-cm and UV rest wavelengths are used to estimate the time variation of x ≡ α²g_p μ, where α is the fine-structure constant, g_p the proton g factor, and μ ≡ m_e/m_p the electron-to-proton mass ratio. Over a redshift range 0.24 ≤ z_abs ≤ 2.04, the total weighted value is Δx/x = (1.17 ± 1.01) × 10⁻⁵. A linear fit gives ẋ/x = (-1.43 ± 1.27) × 10⁻¹⁵ yr⁻¹. Two previous results on varying α yield the strong limits Δμ/μ = (2.31 ± 1.03) × 10⁻⁵ and Δμ/μ = (1.29 ± 1.01) × 10⁻⁵. Our sample, eight times larger than any previous one, provides the first direct estimate of the intrinsic 21-cm and UV velocity differences, ~6 km s⁻¹.
Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G
2013-07-07
We have measured gas phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm⁻¹. In addition to the strong fundamental OH-stretching transition we have also detected the weak second overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be -35.8 ± 3.9 and -38.2 ± 3.3 kJ mol⁻¹ for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (K_p) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second overtone transition. The transition intensities were calculated using an anharmonic oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm⁻¹, corresponding to a ΔG value of about 4.0 kJ mol⁻¹.
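As a consistency check, the reported K_p and ΔG are related by the standard thermodynamic identity ΔG = -RT ln K_p (assuming T ≈ 298 K and a 1 atm standard state; these assumptions are this sketch's, not stated in the abstract):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # temperature, K (assumed room temperature)
Kp = 0.2       # measured equilibrium constant, atm^-1 (1 atm standard state)

dG = -R * T * math.log(Kp)   # J/mol; positive because Kp < 1
print(f"dG = {dG/1000:.1f} kJ/mol")  # → dG = 4.0 kJ/mol
```

This reproduces the quoted ΔG of about 4.0 kJ mol⁻¹.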
NASA Astrophysics Data System (ADS)
Pašteka, L. F.; Borschevsky, A.; Flambaum, V. V.; Schwerdtfeger, P.
2015-07-01
We investigate a number of diatomic molecular ions to search for strongly enhanced effects of variation of fundamental constants important for physics beyond the standard model. The relative enhancements due to variation of the fine structure and the electron-to-proton mass ratio occur in transitions between nearly degenerate levels of different nature. Since trapping techniques for molecular ions have already been developed, the proposed molecules HBr⁺, HI⁺, Br₂⁺, I₂⁺, IBr⁺, ICl⁺, and IF⁺ are very promising candidates for future high-resolution experiments.
Identification of Parts Failures. FOS: Fundamentals of Service.
ERIC Educational Resources Information Center
John Deere Co., Moline, IL.
This parts failures identification manual is one of a series of power mechanics texts and visual aids covering theory of operation, diagnosis of trouble problems, and repair of automotive and off-the-road construction and agricultural equipment. Materials provide basic information with many illustrations for use by vocational students and teachers…
Writing biomedical manuscripts part I: fundamentals and general rules.
Ohwovoriole, A E
2011-01-01
It is a professional obligation for health researchers to investigate and communicate their findings to the medical community. Writing a publishable scientific manuscript can be a daunting task for the beginner and even for some established researchers. Many manuscripts fail to get off the ground and/or are rejected. The writing task can be made easier, and the quality improved, by following simple rules and leads that apply to general scientific writing. The manuscript should follow a standard structure: (Abstract plus) Introduction, Methods, Results, and Discussion/Conclusion, the IMRAD model. The authors must also follow well-established fundamentals of good communication in science and be systematic in approach. The manuscript must move from what is currently known to what was unknown, investigated using a hypothesis, research question, or problem statement. Each section has its own structure and language of presentation. Writing a good manuscript begins with a good study design and attention to detail at every stage. Many manuscripts are rejected because of errors that can be avoided if the authors follow simple guidelines and rules. One good way to avoid disappointment in manuscript writing is to follow the established general rules along with those of the journal in which the paper is to be published. An important injunction is to make the writing precise, clear, parsimonious, and comprehensible to the intended audience. The purpose of this article is to arm and encourage potential biomedical authors with tools and rules that will enable them to write contemporary manuscripts that can stand the rigorous peer-review process. The expectations of standard journals, common pitfalls, and the major elements of a manuscript are covered.
Higgs potential from extended Brans–Dicke theory and the time-evolution of the fundamental constants
NASA Astrophysics Data System (ADS)
Solà, Joan; Karimkhani, Elahe; Khodam-Mohammadi, A.
2017-01-01
Despite the enormous significance of the Higgs potential in the context of the standard model of electroweak interactions and in grand unified theories, its ultimate origin is fundamentally unknown and must be introduced by hand in accordance with the underlying gauge symmetry and the requirement of renormalizability. Here we propose a more physical motivation for the structure of the Higgs potential, which we derive from a generalized Brans–Dicke (BD) theory containing two interacting scalar fields. One of these fields is coupled to curvature as in the BD formulation, whereas the other is coupled to gravity both derivatively and non-derivatively through the curvature scalar and the Ricci tensor. By requiring that the cosmological solutions of the model are consistent with observations, we show that the effective scalar field potential adopts the Higgs potential form with a mildly time-evolving vacuum expectation value. This residual vacuum dynamics could be responsible for the possible time variation of the fundamental constants, and is reminiscent of earlier ideas of Bjorken on the cosmological constant problem.
NASA Astrophysics Data System (ADS)
Tobar, M. E.; Stanwix, P. L.; McFerran, J. J.; Guéna, J.; Abgrall, M.; Bize, S.; Clairon, A.; Laurent, Ph.; Rosenbusch, P.; Rovera, D.; Santarelli, G.
2013-06-01
The frequencies of three separate Cs fountain clocks and one Rb fountain clock have been compared to various hydrogen masers to search for periodic changes correlated with the changing solar gravitational potential at the Earth and boost with respect to the cosmic microwave background rest frame. The data sets span more than 8 yr. The main sources of long-term noise in such experiments are the offsets and linear drifts associated with the various H-masers. The drift can vary from nearly immeasurable to as high as 1.3×10⁻¹⁵ per day. To circumvent these effects, we apply a numerical derivative to the data, which significantly reduces the standard error when searching for periodic signals. We determine a standard error for the putative local position invariance coefficient with respect to gravity of |β_H - β_Cs| ≤ 4.8×10⁻⁶ for a Cs-fountain-H-maser comparison and |β_H - β_Rb| ≤ 10⁻⁵ for a Rb-fountain-H-maser comparison. From the same data, the putative boost local position invariance coefficients were measured to a precision of up to parts in 10¹¹ with respect to the cosmic microwave background rest frame. By combining these boost invariance experiments with a cryogenic sapphire oscillator vs H-maser comparison, independent limits on all nine coefficients of the boost-violation vector with respect to fundamental constant invariance, B_α, B_e, and B_q (fine-structure constant, electron mass, and quark mass, respectively), were determined to a precision of up to parts in 10¹⁰.
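The differencing trick described above (removing maser offsets and linear drifts before searching for periodic signals) can be illustrated on synthetic data; all signal parameters below are invented for illustration and are not the paper's values:

```python
import numpy as np

# Synthetic clock-comparison residuals: maser offset + linear drift + an
# annual periodic term + white noise (all values invented for illustration).
rng = np.random.default_rng(0)
t = np.arange(0.0, 3000.0)                       # days
offset, drift = 2.0e-14, 1.0e-15                 # fractional offset, drift/day
A, period = 5.0e-16, 365.25                      # injected periodic amplitude
y = (offset + drift * t
     + A * np.sin(2 * np.pi * t / period)
     + rng.normal(0.0, 1e-17, t.size))

# First-differencing turns offset + linear drift into a constant, while a
# sinusoid survives with its amplitude scaled by 2*sin(pi/period).
dy = np.diff(y) - np.diff(y).mean()

# Least-squares fit of a sine/cosine pair to the differenced series.
tm = t[:-1]
X = np.column_stack([np.sin(2 * np.pi * tm / period),
                     np.cos(2 * np.pi * tm / period)])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
A_rec = np.hypot(*coef) / (2 * np.sin(np.pi / period))  # undo diff scaling
print(f"recovered amplitude: {A_rec:.2e}")       # close to the injected 5e-16
```

The point of the sketch is that the fitted periodic amplitude is recovered without ever estimating the offset or drift explicitly.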
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, R.
2014-04-01
It is shown that for any α, β ∈ ℝ and k ∈ ℤ, the Hamiltonian H_k = p₁p₂ - α q₂^(2k+1) q₁^(-2k-3) - (β/2) q₂^k q₁^(-k-2) is superintegrable, possessing fundamental constants of motion of degrees 2 and 2k + 2 in the momenta.
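As a numerical sanity check (not a proof of superintegrability), one can integrate Hamilton's equations for H_k and verify that the Hamiltonian itself is conserved along a trajectory. The parameter values and initial condition below are arbitrary illustrations:

```python
# Hamilton's equations for H_k = p1*p2 - a*q2**(2k+1)*q1**(-2k-3)
#                               - (b/2)*q2**k*q1**(-k-2), integrated by RK4.
a, b, k = 0.01, 0.01, 1          # illustrative alpha, beta, k

def H(s):
    q1, q2, p1, p2 = s
    return p1*p2 - a*q2**(2*k+1)*q1**(-2*k-3) - 0.5*b*q2**k*q1**(-k-2)

def rhs(s):
    q1, q2, p1, p2 = s
    dq1 = p2                                          # dH/dp1
    dq2 = p1                                          # dH/dp2
    dp1 = -(a*(2*k+3)*q2**(2*k+1)*q1**(-2*k-4)        # -dH/dq1
            + 0.5*b*(k+2)*q2**k*q1**(-k-3))
    dp2 = (a*(2*k+1)*q2**(2*k)*q1**(-2*k-3)           # -dH/dq2
           + 0.5*b*k*q2**(k-1)*q1**(-k-2))
    return [dq1, dq2, dp1, dp2]

def rk4_step(s, h):
    shift = lambda s, v, c: [si + c*vi for si, vi in zip(s, v)]
    k1 = rhs(s)
    k2 = rhs(shift(s, k1, h/2))
    k3 = rhs(shift(s, k2, h/2))
    k4 = rhs(shift(s, k3, h))
    return [si + h/6*(a1 + 2*a2 + 2*a3 + a4)
            for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4)]

s = [1.0, 1.0, 0.1, 0.2]        # arbitrary initial condition with q1 > 0
E0 = H(s)
for _ in range(1000):
    s = rk4_step(s, 1e-3)
print(abs(H(s) - E0))           # energy drift; tiny (~1e-12 or smaller)
```

Checking the full superintegrability claim would require the degree-(2k+2) constant of motion, which the abstract does not spell out.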
Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts
Tamosiunaite, Minija; Sutterlütti, Rahel M.; Stein, Simon C.; Wörgötter, Florentin
2015-01-01
Objects usually consist of parts and the question arises whether there are perceptual features which allow breaking down an object into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex to concave surface transitions. Here we are using machine vision to produce convex segments from 3D-scenes. We assume that a fundamental part is one, which we can easily name while at the same time there is no natural subdivision possible into smaller parts. Hence in this experiment we presented the computer vision generated segments to our participants and asked whether they can identify and name them. Additionally we control against segmentation reliability and we find a clear trend that reliable convex segments have a high degree of name-ability. In addition, we observed that using other image-segmentation methods will not yield nameable entities. This indicates that convex-concave surface transition may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantical link to our everyday language as there are no names for them. PMID:26441797
NASA Astrophysics Data System (ADS)
Atanasov, Atanas Todorov
2016-12-01
The hypothesis is developed here that the cell parameters of unicellular organisms (prokaryotes and eukaryotes) are determined by the gravitational constant (G, N·m²/kg²), the Planck constant (h, J·s), and the growth rate of cells. Scaling analysis shows that the growth rate v_gr (m/s) of unicellular bacteria and protozoa is a relatively constant parameter, lying in a narrow window of 10⁻¹² - 10⁻¹⁰ m/s, in contrast to cell mass, which spans 10 orders of magnitude from 10⁻¹⁷ kg in bacteria to 10⁻⁷ kg in amoebas. Dimensional analysis shows that combining the growth rate of cells with the gravitational and Planck constants gives expressions with the dimension of mass, M(v_gr) = (h·v_gr/G)^(1/2) in kg; length, L(v_gr) = (h·G/v_gr³)^(1/2) in m; time, T(v_gr) = (h·G/v_gr⁵)^(1/2) in s; and density, ρ(v_gr) = v_gr⁵/(h·G²) in kg/m³. For growth rates v_gr in the range 1×10⁻¹¹ - 1×10⁻⁹·⁵ m/s, the calculated numerical values for mass (3×10⁻¹⁸ - 1×10⁻¹⁶ kg), length (5×10⁻⁸ - 1×10⁻⁵ m), time (1×10² - 1×10⁶ s), and density (1×10⁻¹ - 1×10⁴ kg/m³) overlap with the range of experimentally measured values for cell mass (3×10⁻¹⁸ - 1×10⁻¹⁵ kg), volume-to-surface ratio (1×10⁻⁷ - 1×10⁻⁴ m), doubling time (1×10³ - 1×10⁷ s), and density (1050 - 1300 kg/m³) in bacteria and protozoa. These equations suggest that the appearance of the first living cells could be mutually connected to the physical constants.
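A quick numerical check of these dimensional combinations (a sketch; the mid-range value v_gr = 10⁻¹⁰ m/s is this example's choice, and the density expression is taken in the dimensionally consistent form v_gr⁵/(h·G²)):

```python
# Evaluate the dimensional combinations M, L, T, rho for a mid-range
# growth rate; constants are CODATA values rounded to four digits.
h = 6.626e-34      # Planck constant, J*s
G = 6.674e-11      # gravitational constant, N*m^2/kg^2
v = 1e-10          # growth rate, m/s (illustrative mid-range value)

M = (h * v / G) ** 0.5        # mass scale, kg       (~3e-17)
L = (h * G / v**3) ** 0.5     # length scale, m      (~2e-7)
T = (h * G / v**5) ** 0.5     # time scale, s        (~2e3)
rho = v**5 / (h * G**2)       # density scale, kg/m^3 (~3e3)
print(f"M={M:.1e} kg, L={L:.1e} m, T={T:.1e} s, rho={rho:.1e} kg/m^3")
```

All four values land inside the ranges quoted in the abstract.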
NASA Astrophysics Data System (ADS)
Stadnik, Y. V.; Flambaum, V. V.
2016-06-01
We outline laser interferometer measurements to search for variation of the electromagnetic fine-structure constant α and particle masses (including a nonzero photon mass). We propose a strontium optical lattice clock—silicon single-crystal cavity interferometer as a small-scale platform for these measurements. Our proposed laser interferometer measurements, which may also be performed with large-scale gravitational-wave detectors, such as LIGO, Virgo, GEO600, or TAMA300, may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.
The fundamental nature of life as a chemical system: the part played by inorganic elements.
Williams, Robert J P
2002-02-01
In this article we show why inorganic metal elements from the environment were an essential part of the origin of living aqueous systems of chemicals in flow. Unavoidably such systems have many closely fixed parameters, related to thermodynamic binding constants, for the interaction of the essential exchangeable inorganic metal elements with both inorganic and organic non-metal materials. The binding constants give rise to fixed free metal ion concentration profiles for different metal ions and ligands in the cytoplasm of all cells closely related to the Irving-Williams series. The amounts of bound elements depend on the organic molecules present as well as these free ion concentrations. This system must have predated coding which is probably only essential for reproductive life. Later evolution in changing chemical environments became based on the development of extra cytoplasmic compartments containing quite different energised free (and bound) element contents but in feed-back communication with the central primitive cytoplasm which changed little. Hence species multiplied late in evolution in large part due to the coupling with the altered inorganic environment.
Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.
2006-04-15
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed.
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Dateo, Christopher E.
2005-01-01
The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, denoted CCSD(T), has been used, in conjunction with approximate integral techniques, to compute highly accurate rovibrational spectroscopic constants of cyclopropenylidene, C3H2. The approximate integral technique was proposed in 1994 by Rendell and Lee in order to avoid disk storage and input/output bottlenecks, and today it will also significantly aid in the development of algorithms for distributed memory, massively parallel computer architectures. It is shown in this study that use of approximate integrals does not impact the accuracy of CCSD(T) calculations. In addition, the most accurate spectroscopic data yet for C3H2 is presented based on a CCSD(T)/cc-pVQZ quartic force field that is modified to include the effects of core-valence electron correlation. Cyclopropenylidene is of great astronomical and astrobiological interest because it is the smallest aromatic ringed compound to be positively identified in the interstellar medium, and is thus involved in the prebiotic processing of carbon and hydrogen.
Reduction of iron-oxide-carbon composites: part I. Estimation of the rate constants
Halder, S.; Fruehan, R.J.
2008-12-15
A new ironmaking concept using iron-oxide-carbon composite pellets has been proposed, which involves the combination of a rotary hearth furnace (RHF) and an iron bath smelter. This part of the research focuses on studying the two primary chemical kinetic steps. Efforts have been made to experimentally measure the kinetics of the carbon gasification by CO2 and wustite reduction by CO by isolating them from the influence of heat- and mass-transport steps. A combined reaction model was used to interpret the experimental data and determine the rate constants. Results showed that the reduction is likely to be influenced by the chemical kinetics of both carbon oxidation and wustite reduction at the temperatures of interest. Devolatilized wood-charcoal was observed to be a far more reactive form of carbon in comparison to coal-char. Sintering of the iron-oxide at the high temperatures of interest was found to exert a considerable influence on the reactivity of wustite by virtue of altering the internal pore surface area available for the reaction. Sintering was found to be predominant for highly porous oxides and less of an influence on the denser ores. It was found using an indirect measurement technique that the rate constants for wustite reduction were higher for the porous iron-oxide than dense hematite ore at higher temperatures (> 1423 K). Such an indirect mode of measurement was used to minimize the influence of sintering of the porous oxide at these temperatures.
NASA Astrophysics Data System (ADS)
Baldacci, A.; Stoppa, P.; Visinoni, R.; Wugt Larsen, R.
2012-09-01
The high resolution infrared absorption spectrum of CH2D81Br has been recorded by Fourier transform spectroscopy in the range 550-1075 cm-1, with an unapodized resolution of 0.0025 cm-1, employing a synchrotron radiation source. This spectral region is characterized by the ν6 (593.872 cm-1), ν5 (768.710 cm-1) and ν9 (930.295 cm-1) fundamental bands. The ground state constants up to sextic centrifugal distortion terms have been obtained for the first time by ground-state combination differences from the three bands and subsequently employed for the evaluation of the excited state parameters. Watson's A-reduced Hamiltonian in the Ir representation has been used in the calculations. The ν6 = 1 level is essentially free from perturbation whereas the ν5 = 1 and ν9 = 1 states are mutually interacting through a-type Coriolis coupling. Accurate spectroscopic parameters of the three excited vibrational states and a high-order coupling constant which takes into account the interaction between ν5 and ν9 have been determined.
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Air Pollution Training Inst.
This workbook is part five of a self-instructional course prepared for the United States Environmental Protection Agency. The student proceeds at his own pace and when questions are asked, after answering, he either turns to the next page to check his response or refers to the previously covered material. The purpose of this course is to prepare…
40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X
Code of Federal Regulations, 2014 CFR
2014-07-01
Excerpts from the appendix table (compound name, CAS No.): Acetaldol 107-89-1; Acetamide 60-35-5; …; … chloride 79-44-7; Dimethyldisulfide 624-92-0; Dimethylformamide 68-12-2; 1,1-Dimethylhydrazine 57-14-7; …
NASA Technical Reports Server (NTRS)
Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.
1987-01-01
An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub- and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, which extends parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.
On decay constants and orbital distance to the Sun—part I: alpha decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
Claims that proximity to the Sun causes variation of decay constants at the permille level have been investigated for alpha decaying nuclides. Repeated decay rate measurements of 209Po, 226Ra, 228Th, 230U, and 241Am sources were performed over periods of 200 d up to two decades at various nuclear metrology institutes around the globe. Residuals from the exponential decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ in amplitude and phase from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. The most stable activity measurements of α decaying sources set an upper limit between 0.0006% and 0.006% on the amplitude of annual oscillations in the decay rate. There are no apparent indications of systematic oscillations on timescales of weeks or months. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within a precision range of 10^-5 to 10^-6.
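The residual analysis described above can be sketched as follows: fit a pure exponential to the activity series, then least-squares fit an annual sine/cosine pair to the relative residuals and read off the oscillation amplitude. The series below is synthetic (the known 241Am half-life with an assumed 1e-5 relative measurement noise), not the institutes' data.

```python
import numpy as np

rng = np.random.default_rng(0)
half_life = 432.6 * 365.25            # days, 241Am
lam = np.log(2) / half_life
t = np.arange(0.0, 3 * 365.25, 5.0)   # ~3 years, one point every 5 d
activity = np.exp(-lam * t) * (1 + 1e-5 * rng.standard_normal(t.size))

# Exponential fit via linear regression on log(activity).
slope, intercept = np.polyfit(t, np.log(activity), 1)
resid = activity / np.exp(intercept + slope * t) - 1.0

# Annual oscillation: resid ~ a*cos(w t) + b*sin(w t), w = 2*pi / 1 yr.
w = 2 * np.pi / 365.25
X = np.column_stack([np.cos(w * t), np.sin(w * t)])
(a, b), *_ = np.linalg.lstsq(X, resid, rcond=None)
amplitude = np.hypot(a, b)            # fractional amplitude of annual term
```

With purely instrumental noise, the fitted annual amplitude is consistent with zero and serves as the upper limit.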
On decay constants and orbital distance to the Sun—part II: beta minus decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
Claims that proximity to the Sun causes variations of decay constants at the permille level have been investigated for beta-minus decaying nuclides. Repeated activity measurements of 3H, 14C, 60Co, 85Kr, 90Sr, 124Sb, 134Cs, 137Cs, and 154Eu sources were performed over periods of 259 d up to five decades at various nuclear metrology institutes. Residuals from the exponential decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ in amplitude and phase from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within a precision range of 10^-4 to 10^-5. The most stable activity measurements of β− decaying sources set an upper limit of 0.003%-0.007% on the amplitude of annual oscillations in the decay rate. There are no apparent indications of systematic oscillations on timescales of weeks or months.
NASA Astrophysics Data System (ADS)
Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho; Song, Wonkyoung; Kim, Changryol; Ki, Jungseok
2014-05-01
Measurement of the elastic constants of the formation is one of the most important physical-property measurements in the evaluation of shale gas. Normally, elastic constants from geophysical well logging and laboratory tests are used in the design of hydraulic fracturing. A three-inch-diameter borehole, 505 m deep, was drilled for shale gas evaluation and fully cored at the Haenam Basin, in the southwestern part of the Korean Peninsula. We performed various laboratory tests and geophysical well logging using a slimhole logging system. The geophysical well logs include radioactive logs such as the natural gamma log, density log, and neutron log, as well as monopole and dipole sonic logs and image logs. The laboratory tests comprise the axial compression test, elastic wave velocity and density measurements, and static elastic constant measurements for 21 shale and sandstone cores. We analyzed the relationships between the physical properties from well logs and laboratory tests, as well as the static elastic constants from laboratory tests. With a sonic log using a monopole source with a main frequency of 23 kHz, P-wave velocity was measured reliably. When using low-frequency dipole excitation, the signal-to-noise ratio of the measured shear wave was very low, but when measuring in time mode at a fixed depth, the signal-to-noise ratio improved enough to discriminate the shear wave. P-wave velocities from the laboratory tests and sonic logging agreed well overall, but S-wave velocities did not. The discrepancy is mainly due to the low signal-to-noise ratio of the sonic log data from the low-frequency dipole source; measuring S-waves in a small-diameter borehole remains a challenge. The relationship between P-wave velocity and the two dynamic elastic constants, Young's modulus and Poisson's ratio, shows good correlation, and the relationship between the static and dynamic elastic constants was also examined.
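The dynamic elastic constants discussed above follow from the standard isotropic relations between velocities and density: G = ρVs², ν = (Vp² − 2Vs²)/(2(Vp² − Vs²)), E = 2G(1 + ν). The input values below are hypothetical shale-like numbers, not the logged data.

```python
def dynamic_constants(vp, vs, rho):
    """Vp, Vs in m/s, rho in kg/m^3 -> (Young's modulus E [Pa], Poisson's ratio nu)."""
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))  # Poisson's ratio
    g = rho * vs**2                                   # shear modulus
    e = 2 * g * (1 + nu)                              # Young's modulus
    return e, nu

# Hypothetical shale-like values:
e, nu = dynamic_constants(vp=3800.0, vs=2200.0, rho=2550.0)
```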
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and the variability of environmental conditions, uncertainty affects their application. This study focuses on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through two companion papers. This paper (Part I) is dedicated to a formal mathematical proof. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork for probabilistic models of raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
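The random variable at issue above can be sampled directly, which is the kind of Monte Carlo check the paper describes: draw correlated circularly-symmetric complex normal pairs and form their ratio, as for a raw transmissibility T = X1/X2. The covariance matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative Hermitian positive-definite covariance of the pair (X1, X2):
cov = np.array([[2.0, 0.8 + 0.3j],
                [0.8 - 0.3j, 1.0]])
Lc = np.linalg.cholesky(cov)

# Circularly-symmetric complex normal: i.i.d. real/imag parts of variance 1/2,
# colored by the Cholesky factor so that E[x x^H] = cov.
z = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
x = Lc @ z
ratio = x[0] / x[1]                  # samples of the complex ratio (transmissibility)
median_mag = np.median(np.abs(ratio))
```

Empirical histograms of `ratio` are what one would compare against the paper's closed-form density.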
Singh, Bhupinder; Kumar, Rajiv; Ahuja, Naveen
2005-01-01
, postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology. The broad topic of DoE optimization methodology is covered in two parts. Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and judicious selection of designs and models.
Morgera, S D
1987-01-01
Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to O(N^3) operations for a conventional, or general, rotation-based eigensystem solver and even the O(2N^2) operations using an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure, since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
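The centrosymmetric splitting underlying the parallel architecture can be sketched as follows: for a symmetric centrosymmetric matrix A (J A J = A, with J the exchange matrix), the orthogonal transform Q = [[I, I], [J, −J]]/√2 block-diagonalizes A into the two half-size blocks A11 ± A12·J, so the N×N eigenproblem splits into two independent N/2 problems. The matrix here is random, not a classifier covariance.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4                                    # half-dimension, N = 2m
S = rng.standard_normal((2 * m, 2 * m))
S = (S + S.T) / 2                        # symmetric
J = np.eye(2 * m)[::-1]                  # exchange (flip) matrix
A = (S + J @ S @ J) / 2                  # symmetric AND centrosymmetric

A11, A12 = A[:m, :m], A[:m, m:]
Jm = np.eye(m)[::-1]
block_plus = A11 + A12 @ Jm              # "symmetric" eigenvector subspace
block_minus = A11 - A12 @ Jm             # "antisymmetric" eigenvector subspace

split_eigs = np.sort(np.concatenate([np.linalg.eigvalsh(block_plus),
                                     np.linalg.eigvalsh(block_minus)]))
full_eigs = np.sort(np.linalg.eigvalsh(A))
```

Each half-size block can be assigned to its own processing channel, which is the parallel form the abstract refers to.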
Windberger, A; Crespo López-Urrutia, J R; Bekker, H; Oreshkina, N S; Berengut, J C; Bock, V; Borschevsky, A; Dzuba, V A; Eliav, E; Harman, Z; Kaldor, U; Kaul, S; Safronova, U I; Flambaum, V V; Keitel, C H; Schmidt, P O; Ullrich, J; Versolato, O O
2015-04-17
We measure optical spectra of Nd-like W, Re, Os, Ir, and Pt ions of particular interest for studies of a possibly varying fine-structure constant. Exploiting characteristic energy scalings we identify the strongest lines, confirm the predicted 5s-4f level crossing, and benchmark advanced calculations. We infer two possible values for optical M2/E3 and E1 transitions in Ir^{17+} that have the highest predicted sensitivity to a variation of the fine-structure constant among stable atomic systems. Furthermore, we determine the energies of proposed frequency standards in Hf^{12+} and W^{14+}.
Greenbury, S. F.; Ahnert, S. E.
2015-01-01
Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype–phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into ‘constrained' and ‘unconstrained' sequences, in the broadest possible sense. As ‘constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. ‘Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with ‘coding' and ‘non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps. PMID:26609063
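A minimal sketch of the toy map, under one simple reading of the construction (an assumption on our part, not necessarily the paper's exact definition): the phenotype is the prefix of a binary genotype up to and including the first occurrence of a stop marker "11", and the remaining bits are non-coding; genotypes with no stop marker are undefined. Even this little map shows the heavily biased genotype-per-phenotype distribution, and the number of undefined genotypes is a Fibonacci number.

```python
from collections import Counter
from itertools import product

L = 10
genotypes = ["".join(bits) for bits in product("01", repeat=L)]

def phenotype(g):
    """Coding part: prefix through the first '11' stop marker (assumed rule)."""
    i = g.find("11")
    return g[: i + 2] if i != -1 else None  # None: no stop marker found

neutral = Counter(phenotype(g) for g in genotypes)
undefined = neutral.pop(None)   # genotypes with no '11' anywhere: F(L+2) of them

# A phenotype with coding length k has exactly 2**(L-k) genotypes (the
# non-coding tail is free), so the distribution is heavily biased.
freqs = list(neutral.values())
bias = max(freqs) / min(freqs)  # 2**(L-2) / 1
```

In this toy map the robustness of a phenotype of coding length k is (L−k)/L while its frequency is 2^(L−k), which reproduces the logarithmic robustness-frequency scaling exactly.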
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.
1990-01-01
The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.
Zhou, Chong-Wen; Simmie, John M.; Pitz, William J.; Curran, Henry J.
2016-08-25
Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. We present calculated thermodynamic and kinetic data for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. Furthermore, these radicals can be formed via H atom abstraction reactions by H and Ö atoms and OH, HO2, and CH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when OH is involved, but the reverse holds true for HO2 radicals. We also determined the subsequent β-scission of the radicals formed, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.
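Rate constants such as those calculated above are conventionally expressed in modified Arrhenius form, k(T) = A·T^n·exp(−Ea/RT), and the branching between competing abstraction channels follows from the ratio of the channel rate constants. The parameters below are purely hypothetical placeholders, not the paper's fitted values.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def k_mod_arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate constant, k = A * T**n * exp(-Ea/(R*T)); Ea in kcal/mol."""
    return A * T**n * math.exp(-Ea / (R * T))

# Two hypothetical H-abstraction channels by OH (placeholder parameters):
T = 1000.0
k_alpha = k_mod_arrhenius(A=1.0e6, n=2.0, Ea=1.5, T=T)  # alpha-H channel
k_beta = k_mod_arrhenius(A=2.0e6, n=2.0, Ea=0.5, T=T)   # beta-H channel
branching_beta = k_beta / (k_alpha + k_beta)            # share of the beta channel
```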
Nifant'eva, T I; Shkinev, V M; Spivakov, B Y; Burba, P
1999-02-01
The assessment of conditional stability constants of aquatic humic substance (HS) metal complexes is reviewed with special emphasis on the application of ultrafiltration methods. Fundamentals and limitations of stability functions in the case of macromolecular and polydisperse metal-HS species in aquatic environments are critically discussed. The review summarizes the advantages and applications of ultrafiltration for metal-HS complexation studies and discusses the comparability and reliability of stability constants. The potential of ultrafiltration procedures for characterizing the lability of metal-HS species is also stressed.
Fundamental ecology is fundamental.
Courchamp, Franck; Dunne, Jennifer A; Le Maho, Yvon; May, Robert M; Thébaud, Christophe; Hochberg, Michael E
2015-01-01
The primary reasons for conducting fundamental research are satisfying curiosity, acquiring knowledge, and achieving understanding. Here we develop why we believe it is essential to promote basic ecological research, despite increased impetus for ecologists to conduct and present their research in the light of potential applications. This includes the understanding of our environment, for intellectual, economical, social, and political reasons, and as a major source of innovation. We contend that we should focus less on short-term, objective-driven research and more on creativity and exploratory analyses, quantitatively estimate the benefits of fundamental research for society, and better explain the nature and importance of fundamental ecology to students, politicians, decision makers, and the general public. Our perspective and underlying arguments should also apply to evolutionary biology and to many of the other biological and physical sciences.
Weiss, A.; Henkel, C.; Menten, K. M.; Walter, F.; Downes, D.; Cox, P.; Carrili, C. L.
2012-07-10
We report on sensitive observations of the CO(J = 7→6) and C I(3P2→3P1) transitions in the z = 2.79 QSO host galaxy RXJ0911.4+0551 using the IRAM Plateau de Bure interferometer. Our extremely high signal-to-noise spectra combined with the narrow CO line width of this source (FWHM = 120 km s^-1) allow us to estimate sensitive limits on spacetime variations of the fundamental constants using two emission lines. Our observations show that the C I and CO line shapes are in good agreement with each other but that the C I line profile is of the order of 10% narrower, presumably due to the lower opacity in the latter line. Both lines show faint wings with velocities up to ±250 km s^-1, indicative of a molecular outflow. As such, the data provide direct evidence for negative feedback in the molecular gas phase at high redshift. Our observations allow us to determine the observed frequencies of both transitions with so far unmatched accuracy at high redshift. The redshift difference between the CO and C I lines is sensitive to variations of ΔF/F with F = α²/μ, where α is the fine structure constant and μ is the electron-to-proton mass ratio. We find ΔF/F = (6.9 ± 3.7) × 10^-6 at a look-back time of 11.3 Gyr, which, within the uncertainties, is consistent with no variation of the fundamental constants.
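The constraint described above reduces to a simple computation: each transition yields its own redshift from laboratory rest and observed sky frequencies, and the redshift difference maps onto ΔF/F ≈ (z_CO − z_CI)/(1 + z). The rest frequencies below are the known laboratory values; the observed frequencies are illustrative stand-ins, not the measured ones.

```python
# Laboratory rest frequencies (known values, MHz):
NU_REST_CO76 = 806651.8    # CO J = 7-6
NU_REST_CI21 = 809341.97   # C I 3P2 -> 3P1

def redshift(nu_rest, nu_obs):
    return nu_rest / nu_obs - 1.0

# Hypothetical observed frequencies near z = 2.79 (illustrative only):
z_co = redshift(NU_REST_CO76, 212836.9)
z_ci = redshift(NU_REST_CI21, 213546.7)
z_mean = 0.5 * (z_co + z_ci)
dF_over_F = (z_co - z_ci) / (1.0 + z_mean)   # F = alpha**2 / mu
```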
Methanol as A Tracer of Fundamental Constants
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Kozlov, M. G.; Reimers, D.
2011-09-01
The methanol molecule CH3OH has a complex microwave spectrum with a large number of very strong lines. This spectrum includes purely rotational transitions as well as transitions with contributions of the internal degree of freedom associated with the hindered rotation of the OH group. The latter takes place due to the tunneling of hydrogen through the potential barriers between three equivalent potential minima. Such transitions are highly sensitive to changes in the electron-to-proton mass ratio, μ = m_e/m_p, and have different responses to μ-variations. The highest sensitivity is found for the mixed rotation-tunneling transitions at low frequencies. Observing methanol lines provides more stringent limits on the hypothetical variation of μ than ammonia observations with the same velocity resolution. We show that the best-quality radio astronomical data on methanol maser lines constrain the variability of μ in the Milky Way at the level of |Δμ/μ| < 28 × 10^-9 (1σ), which is in line with the previously obtained ammonia result, |Δμ/μ| < 29 × 10^-9 (1σ). This estimate can be further improved if the rest frequencies of the CH3OH microwave lines are measured more accurately.
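The method described above can be sketched as follows: each transition i shifts as Δν/ν = K_i·Δμ/μ, so the apparent velocity offset between two lines with different sensitivity coefficients is Δv/c = (K_i − K_j)·Δμ/μ. The K values and velocity offset below are illustrative assumptions, not the measured sensitivities.

```python
C_KM_S = 299792.458  # speed of light, km/s

def dmu_over_mu(dv_km_s, K_i, K_j):
    """Invert dv/c = (K_i - K_j) * dmu/mu for the mass-ratio variation."""
    return (dv_km_s / C_KM_S) / (K_i - K_j)

# Hypothetical pair: a mixed rotation-tunneling line with a large assumed
# sensitivity, against a nearly purely rotational line (K ~ -1), with an
# assumed apparent velocity offset of 0.001 km/s:
est = dmu_over_mu(dv_km_s=0.001, K_i=-7.0, K_j=-1.0)
```

A larger spread in K between the two lines makes the same velocity offset a tighter constraint on Δμ/μ, which is why the mixed rotation-tunneling transitions are valuable.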
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
NASA Technical Reports Server (NTRS)
Nelson, C. C.; Childs, D. W.; Nicks, C.; Elrod, D.
1985-01-01
The leakage and rotordynamic coefficients of constant-clearance and convergent-tapered annular gas seals were measured in an experimental test facility. The results are presented along with the theoretically predicted values. Of particular interest is the prediction that optimally tapered seals have significantly larger direct stiffness than straight seals. The experimental results verify this prediction. Generally the theory does quite well, but it fails to predict the large increase in direct stiffness when the fluid is pre-rotated.
Seethapathy, Suresh; Górecki, Tadeusz
2010-12-10
Polydimethylsiloxane (PDMS) has low permeability towards water vapour and a low activation energy of permeation towards volatile organic compounds (VOCs) compared to many other polymers. The suitability of the material for use in permeation-type passive air samplers was tested, as it should theoretically reduce uptake-rate variations due to temperature changes and eliminate or reduce complications arising from sorbent saturation by water vapour. The calibration constants of a simple autosampler vial-based permeation passive sampler equipped with a PDMS membrane (Waterloo Membrane Sampler®) were determined for various analytes at different temperatures. From these data, the activation energy of permeation of PDMS towards the analytes was determined. The analytes studied belonged to various classes of compounds with wide-ranging polarities, including n-alkanes, aromatic hydrocarbons, esters and alcohols. The results confirmed an Arrhenius-type relationship between temperature and calibration constant, and the activation energy of permeation for PDMS ranged from −5 kJ/mol for butylbenzene to −17 kJ/mol for sec-butyl acetate. Calibration constants of the samplers towards n-alkanes and aromatic hydrocarbons determined at humidities between 30% and 91% indicated no statistically significant variation in the uptake rate with changes in humidity for 9 of the 11 analytes studied. The results confirmed the suitability of the sampler for deployment in high-humidity areas and under varying temperature conditions.
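The Arrhenius-type analysis described above can be sketched as follows: ln k is linear in 1/T, so the slope of that line recovers the activation energy of permeation via k(T) = k0·exp(−Ep/RT). The data points are synthetic, generated with an assumed Ep of −10 kJ/mol, not the sampler's calibration data.

```python
import numpy as np

R = 8.314                    # gas constant, J/(mol*K)
Ep_true = -10_000.0          # assumed activation energy, J/mol (negative, as for PDMS)
T = np.array([278.0, 288.0, 298.0, 308.0, 318.0])  # K
k = 1.0e-3 * np.exp(-Ep_true / (R * T))            # synthetic calibration constants

# ln k vs 1/T is a straight line; the slope recovers -Ep/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ep_fit = -slope * R          # recovered activation energy, J/mol
```

A negative Ep means the calibration constant decreases with temperature, which is the temperature-insensitivity argument made for PDMS-based samplers.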
Geraedts, K; Maes, A
2008-09-01
The interaction between colloidal Tc(IV) species and colloidal Gorleben humic substances (HS) was quantified by applying the La-precipitation method to supernatant solutions obtained under various experimental conditions but at the constant ionic strength of the Gorleben groundwater (0.04 M). The determined interaction constant log K_HS (2.3 ± 0.3) remained unchanged over a large range of Tc(IV) and HS concentrations and was independent of the pH of the original supernatant solution (pH range 6-10), the Tc(IV)-HS loading (10^-3 to 10^-6 mol Tc g^-1 HS) and the nature of the reducing surface (magnetite, pyrite and Gorleben sand) used for the pertechnetate reduction. The log K_HS value determined by the La-precipitation method is lower than the log K value obtained from a previous study in which the interaction between colloidal Tc(IV) species and Gorleben humic substances was quantified using a modified Schubert approach (2.6 ± 0.3). The La-precipitation method allows accurate determination of the amount of Tc(IV) associated with HS but leads to a (small) overestimation of the free inorganic Tc(IV) species.
Jakovljević, Miro
2013-09-01
Psychopharmacotherapy is a fascinating field that can be understood in many different ways. It is both a science and an art of communication, with a heavily subjective dimension. The advent of a significant number of effective and well-tolerated mental health medicines during and after the 1990s, the "decade of the brain", has increased our ability to treat major mental disorders more successfully, with much better treatment outcomes including full recovery. However, there is a huge gap between the high treatment effectiveness that is achievable and the unsatisfying results of day-to-day clinical practice. A creative approach to psychopharmacotherapy could advance everyday clinical practice and bridge this gap. Creative psychopharmacotherapy is a concept that incorporates creativity as its fundamental tool. Creativity involves the intention and ability to transcend limiting traditional ideas, rules, patterns and relationships and to create meaningful new ideas, interpretations, contexts and methods in clinical psychopharmacology.
NASA Technical Reports Server (NTRS)
Collins, D. J.; Coles, D. E.; Hicks, J. W.
1978-01-01
Experiments were carried out to test the accuracy of laser Doppler instrumentation for measuring Reynolds stresses in turbulent boundary layers in supersonic flow. Two facilities were used to study flow at constant pressure. In one facility, data were obtained on a flat plate at M_e = 0.1, with Re_θ up to 8,000. In the other, data were obtained on an adiabatic nozzle wall at M_e = 0.6, 0.8, 1.0, 1.3, and 2.2, with Re_θ = 23,000 and 40,000. The mean flow as observed using Pitot-tube, Preston-tube, and floating-element instrumentation is described. Emphasis is on the use of similarity laws with Van Driest scaling and on the inference of the shearing-stress profile and the normal velocity component from the equations of mean motion. The experimental data are tabulated.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
A previously developed life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant-stress-rate and preload testing at ambient and elevated temperatures. The fit of the data to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. The preloading technique was also found to be equally applicable for slow-crack-growth (SCG) parameters n greater than 30 under both the power-law and exponential formulations. The major limitation of the exponential crack-velocity formulation, however, is that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack-velocity formulation.
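Under the conventional power-law formulation mentioned above, constant stress rate ("dynamic fatigue") data follow log(strength) = (1/(n+1))·log(stress rate) + const, so n falls out of a straight-line fit. The sketch below uses synthetic data; it illustrates the power-law extraction, not the paper's exponential formulation or its measurements.

```python
import math

def scg_parameter_n(stress_rates, strengths):
    """Estimate the slow-crack-growth parameter n from constant
    stress-rate data via the power-law relation
    log10(sigma_f) = (1/(n+1)) * log10(sigma_dot) + const.
    Plain least-squares on the log-log data; illustrative only."""
    xs = [math.log10(r) for r in stress_rates]
    ys = [math.log10(s) for s in strengths]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return 1.0 / slope - 1.0  # slope = 1/(n+1)

# Synthetic data generated with n = 19 (slope = 1/20)
rates = [0.1, 1.0, 10.0, 100.0]          # MPa/s, hypothetical
strengths = [400.0 * r ** (1.0 / 20.0) for r in rates]
print(round(scg_parameter_n(rates, strengths)))  # recovers 19
```

Note the contrast with the exponential formulation discussed in the abstract, where the inert strength must be known in advance before n can be evaluated.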
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran
1996-01-01
This final report summarizes the major findings of the project "Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors", performed from 1 April 1994 to 30 June 1996. Both the experimental results from Task 1 and the theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work; it describes the test facility setup, the data reduction techniques employed, and the results of the test firings, including the effects of operating conditions and fuel additives on solid-fuel regression rate and on thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes, including gas/surface coupling and the effect of radiation on regression rate. The numerical solutions for the flowfield structure and condensed-phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.
On decay constants and orbital distance to the Sun—part III: beta plus and electron capture decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
The hypothesis that seasonal changes in proximity to the Sun cause variations of decay constants at the permille level has been tested for radionuclides disintegrating through electron capture and beta-plus decay. Activity measurements of 22Na, 54Mn, 55Fe, 57Co, 65Zn, 82+85Sr, 90Sr, 109Cd, 124Sb, 133Ba, 152Eu, and 207Bi sources were repeated over periods from 200 d up to more than four decades at 14 laboratories across the globe. Residuals from the exponential nuclear decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within a 10⁻⁴-10⁻⁵ range of precision. The most stable activity measurements of β⁺ and EC decaying sources set an upper limit of 0.006% or less on the amplitude of annual oscillations in the decay rate. There are no apparent indications of systematic oscillations on the scale of weeks or months.
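The search described above amounts to fitting an annual sinusoid to the residuals of an exponential decay fit. A minimal least-squares sketch follows; the sampling scheme and the injected 10⁻⁴ amplitude are synthetic, chosen only to show the bookkeeping.

```python
import math

def annual_oscillation_amplitude(times_d, residuals):
    """Least-squares amplitude of an annual (365.25 d) oscillation in
    the relative residuals of an exponential decay fit. Fits
    a*cos(w t) + b*sin(w t) and returns the amplitude sqrt(a^2 + b^2).
    Cross terms are neglected, which is adequate for a long, evenly
    sampled record."""
    w = 2.0 * math.pi / 365.25
    c = [math.cos(w * t) for t in times_d]
    s = [math.sin(w * t) for t in times_d]
    a = sum(ci * ri for ci, ri in zip(c, residuals)) / sum(ci * ci for ci in c)
    b = sum(si * ri for si, ri in zip(s, residuals)) / sum(si * si for si in s)
    return math.hypot(a, b)

# Synthetic four-year weekly record carrying a 0.01% annual oscillation
ts = list(range(0, 1461, 7))
res = [1e-4 * math.cos(2.0 * math.pi * t / 365.25) for t in ts]
amp = annual_oscillation_amplitude(ts, res)
print(amp)  # recovers an amplitude close to 1e-4
```

In the actual analysis, an upper limit like the quoted 0.006% corresponds to the largest such amplitude compatible with the residual scatter.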
Webber, D M; Tishchenko, V; Peng, Q; Battu, S; Carey, R M; Chitwood, D B; Crnkovic, J; Debevec, P T; Dhamija, S; Earle, W; Gafarov, A; Giovanetti, K; Gorringe, T P; Gray, F E; Hartwig, Z; Hertzog, D W; Johnson, B; Kammel, P; Kiburg, B; Kizilgul, S; Kunkle, J; Lauss, B; Logashenko, I; Lynch, K R; McNabb, R; Miller, J P; Mulhauser, F; Onderwater, C J G; Phillips, J; Rath, S; Roberts, B L; Winter, P; Wolfe, B
2011-01-28
We report a measurement of the positive muon lifetime to a precision of 1.0 ppm; it is the most precise particle lifetime ever measured. The experiment used a time-structured, low-energy muon beam and a segmented plastic scintillator array to record more than 2×10¹² decays. Two different stopping-target configurations were employed in independent data-taking periods. The combined results give τ(μ⁺)(MuLan) = 2 196 980.3(2.2) ps, more than 15 times as precise as any previous experiment. The muon lifetime gives the most precise value for the Fermi constant: G_F(MuLan) = 1.166 378 8(7)×10⁻⁵ GeV⁻² (0.6 ppm). It is also used to extract the μ⁻p singlet capture rate, which determines the proton's weak induced pseudoscalar coupling g_P.
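The connection between the lifetime and the Fermi constant can be checked at tree level with the standard relation 1/τ = G_F² m_μ⁵/(192 π³). The sketch below deliberately omits the electron-mass and radiative corrections used in the real extraction, so it lands within about half a percent of the quoted value rather than on it.

```python
import math

HBAR_GEV_S = 6.582119569e-25   # reduced Planck constant in GeV*s
M_MU_GEV = 0.1056583755        # muon mass in GeV

def fermi_constant(tau_mu_s):
    """Tree-level extraction of G_F (GeV^-2) from the muon lifetime via
    1/tau = G_F^2 m_mu^5 / (192 pi^3). Phase-space and QED corrections
    (~0.5% in G_F) are ignored in this sketch."""
    gamma = HBAR_GEV_S / tau_mu_s  # decay width in GeV
    return math.sqrt(192.0 * math.pi ** 3 * gamma / M_MU_GEV ** 5)

gf = fermi_constant(2.1969803e-6)  # MuLan lifetime in seconds
print(f"{gf:.4e}")  # within ~0.5% of the quoted 1.1663788e-5 GeV^-2
```

The sub-percent gap between this naive number and the published one is exactly the size of the known corrections, which is why the ppm-level lifetime translates into a ppm-level G_F only after they are applied.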
Takács-Novák, K; Tam, K Y
2000-01-01
The acid-base equilibria of several diprotic amphoteric drugs, namely niflumic acid, norfloxacin, piroxicam, pyridoxine and 2-methyl-4-oxo-3H-quinazoline-3-acetic acid, have been characterized in terms of microconstants and tautomeric ratios. A previously developed multiwavelength spectrophotometric (WApH) titration method for the determination of acid dissociation constants (pKa values) of ionizable compounds was applied for this purpose. Microspeciation was investigated by three approaches: (1) selective monitoring of an ionizable group by spectrophotometry, (2) a deductive method and (3) the k(z) method for determining the tautomeric ratio from co-solvent mixtures. The formulation for (3) has been derived and found to invoke fewer assumptions than a reported procedure (K. Takács-Novák, A. Avdeef, K. J. Box, B. Podányi, G. Szász, J. Pharm. Biomed. Anal., 12 (1994) 1369-1377). It has been shown that the WApH technique, for such ampholytes, can deduce microconstants and tautomeric ratios that are in good agreement with literature data.
Sierra-Ramírez, Rocío; Garcia, Laura A; Holtzapple, Mark Thomas
2011-07-01
Kinetic models applied to oxygen bleaching of paper pulp focus on the degradation of polymers, either lignin or carbohydrates. Traditionally, they separately model different moieties that degrade at three different rates: rapid, medium, and slow. These models were successfully applied to lignin and carbohydrate degradation of poplar wood subjected to oxidative pretreatment with lime under the following conditions: temperature 110-180 °C, total pressure 7.9-21.7 bar, and excess lime loading of 0.5 g Ca(OH)2 per gram dry biomass. These conditions were held constant for 1-6 h. The models fit the experimental data well and were used to determine pretreatment selectivity in two ways: differential and integral. Assessing selectivity revealed the detrimental effect of pretreatment on carbohydrates at high temperatures and at low lignin content. The models can be used to identify pretreatment conditions that selectively remove lignin while preserving carbohydrates. Lignin removal ≥50% with glucan preservation ≥90% was observed for differential glucan selectivities between ~10 and ~30 g lignin degraded per gram glucan degraded. Pretreatment conditions meeting these reference values were mostly observed at 140 °C, total pressure ≥14.7 bar, and pretreatment times between 2 and 6 h depending on the total pressure (the higher the pressure, the shorter the time). They were also observed at 160 °C, total pressures of 14.7 and 21.7 bar, and a pretreatment time of 2 h. Generally, at 110 °C lignin removal is insufficient, and at 180 °C carbohydrates are not well preserved.
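The rapid/medium/slow three-pool structure described above is a sum of first-order decays. A minimal sketch follows; the pool fractions and rate constants are hypothetical placeholders, not fitted values from the study.

```python
import math

def residual_lignin(t_h, pools):
    """Three-pool first-order kinetic model: fraction of lignin
    remaining after t_h hours. 'pools' is a list of (fraction,
    rate_per_h) pairs for the rapid, medium, and slow moieties;
    fractions sum to 1. Parameter values here are illustrative only."""
    return sum(f * math.exp(-k * t_h) for f, k in pools)

# Hypothetical pools: 40% rapid, 40% medium, 20% slow
pools = [(0.4, 2.0), (0.4, 0.3), (0.2, 0.02)]
for t in (1, 2, 4, 6):
    print(t, round(residual_lignin(t, pools), 3))
```

Fitting the same functional form to glucan data and comparing the two degradation rates is what yields the differential selectivity (g lignin degraded per g glucan degraded) discussed in the abstract.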
NASA Technical Reports Server (NTRS)
Carroll, J. A.
1986-01-01
Some fundamental aspects of tethers are presented and briefly discussed. Among these fundamentals are the effects of gravity gradients, dumbbell libration in circular orbits, tether control strategies, and impact hazards for tethers. Also considered are aerodynamic drag, constraints in momentum-transfer applications, and constraints with permanently deployed tethers. The theoretical feasibility of these concepts is reviewed.
NASA Astrophysics Data System (ADS)
Razoumny, Yury N.
2016-11-01
This paper opens a series of articles expounding the fundamentals of the route theory for satellite constellation design for discontinuous Earth coverage. In Part 1 of the series, the analytical model for Earth coverage by satellite swaths, conforming to the essentials of discontinuous (as opposed to continuous) coverage, is presented. Analytic relations are consecutively derived for the calculation of single- and multi-satellite latitude coverage of the Earth's surface, as well as for generating the full set of typical satellite-visibility-zone time streams realized in the repeating latitude coverage pattern for a given arbitrary satellite constellation. These analytic relations are used to develop a method for analyzing the discontinuous coverage of a fixed arbitrary Earth region by a given satellite constellation, using both deterministic and stochastic approaches. The method analyzes the revisit time for a given constellation through fast (fractions of a second to seconds) computer calculations over a wide range of possible revisit-time variations for different practical purposes, with an accuracy at least on par with, and in a number of cases superior to, that of known numerical simulation methods based on direct modeling of the satellite observation mission.
Redmond, W H
2001-01-01
This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined.
Field Theory of Fundamental Interactions
NASA Astrophysics Data System (ADS)
Wang, Shouhong; Ma, Tian
2017-01-01
First, we present two basic principles: the principle of interaction dynamics (PID) and the principle of representation invariance (PRI). Intuitively, PID takes the variation of the action under an energy-momentum conservation constraint. We show that PID is required by the presence of dark matter and dark energy, the Higgs field, and quark confinement. PRI requires that an SU(N) gauge theory be independent of the representations of SU(N); it is thus a logical requirement of any gauge theory. With PRI, we demonstrate that the coupling constants for the strong and weak interactions are the main sources of these two interactions, reminiscent of the electric charge. Second, we emphasize that symmetry principles (the principle of general relativity and the principles of Lorentz invariance and gauge invariance), together with the simplicity of the laws of nature, dictate the actions for the four fundamental interactions. Finally, we show that PID and PRI, together with the symmetry principles, give rise to a unified field model for the fundamental interactions that is consistent with current experimental observations and offers some new physical predictions. The research is supported in part by the National Science Foundation (NSF) grant DMS-1515024 and by the Office of Naval Research (ONR) grant N00014-15-1-2662.
ERIC Educational Resources Information Center
Marine Corps Inst., Washington, DC.
Developed as part of the Marine Corps Institute (MCI) correspondence training program, this course on food service fundamentals is designed to provide a general background in the basic aspects of the food service program in the Marine Corps; it is adaptable for nonmilitary instruction. Introductory materials include specific information for MCI…
Fundamentals of Library Instruction
ERIC Educational Resources Information Center
McAdoo, Monty L.
2012-01-01
Being a great teacher is part and parcel of being a great librarian. In this book, veteran instruction services librarian McAdoo lays out the fundamentals of the discipline in easily accessible language. Succinctly covering the topic from top to bottom, he: (1) Offers an overview of the historical context of library instruction, drawing on recent…
Cosmology with varying constants.
Martins, Carlos J A P
2002-12-15
The idea of possible time or space variations of the 'fundamental' constants of nature, although not new, is only now beginning to be actively considered by large numbers of researchers in the particle physics, cosmology and astrophysics communities. This revival is mostly due to claims of possible detection of such variations, in various contexts and by several groups. I present the current theoretical motivations and expectations for such variations, review the current observational status and discuss the impact of a possible confirmation of these results on our views of cosmology and physics as a whole.
NASA Technical Reports Server (NTRS)
Gupta, P. K.; Tessarzik, J. M.; Cziglenyi, L.
1974-01-01
Dynamic properties of a commercial polybutadiene compound were determined at a constant temperature of 32 °C using a forced-vibration resonant-mass type of apparatus. The constant thermal state of the elastomer was ensured by keeping the ambient temperature constant and by limiting the power dissipation in the specimen. Experiments were performed with both compression and shear specimens at several preloads (nominal strain varying from 0 to 5 percent), and the results are reported in terms of a complex stiffness as a function of frequency. Very weak frequency dependence is observed, and a simple power-law correlation is shown to represent the data well. Variations in the complex stiffness as a function of preload are also found to be small for both compression and shear specimens.
NASA Technical Reports Server (NTRS)
Wang, Jai-Ching; Watring, Dale A.; Lehoczky, Sandor L.; Su, Ching-Hua; Gillies, Don; Szofran, Frank
1999-01-01
Infrared detector materials such as Hg(1-x)Cd(x)Te and Hg(1-x)Zn(x)Te have energy gaps almost linearly proportional to their composition. Because of the wide separation of the liquidus and solidus curves of their phase diagrams, compositional segregation occurs in both the axial and radial directions of these crystals when grown unidirectionally at constant growth rate in a Bridgman system. It is important to understand the mechanisms that affect lateral segregation so that crystals with large, uniform radial composition become possible. Following the treatment of Coriell et al., we have developed a theory of the effect of a curved melt-solid interface shape on the lateral composition distribution. The system is treated as cylindrical and azimuthally symmetric, with a curved melt-solid interface shape expressed as a linear combination of a series of Bessel functions. The results show that the melt-solid interface shape has a dominant effect on the lateral composition distribution in these systems. For small values of b, the solute concentration at the melt-solid interface scales linearly with the interface shape, with a proportionality constant equal to the product of b and (1 - k), where b = VR/D, with V the growth velocity, R the sample radius, D the diffusion constant, and k the distribution coefficient. A detailed theory will be presented. A computer code has been developed, and simulations have been performed and compared with experimental results; these will be published in another paper.
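The small-b scaling stated above can be turned into a quick order-of-magnitude estimate. The numbers below (growth velocity, radius, diffusivity, distribution coefficient, interface deflection) are hypothetical illustration values, and the dimensionless shape measure deflection/R is an assumption of this sketch, not the paper's full Bessel-series treatment.

```python
def lateral_concentration_perturbation(V, R, D, k, deflection):
    """Small-b estimate of the relative lateral solute-concentration
    variation at a curved melt-solid interface:
        delta_C/C ~ b * (1 - k) * (deflection / R),  with b = V*R/D,
    following the linear scaling stated in the abstract. All inputs in
    SI units; values used below are hypothetical."""
    b = V * R / D
    return b * (1.0 - k) * deflection / R

# e.g. V = 1e-6 m/s, R = 5 mm, D = 5e-9 m^2/s, k = 0.3,
# interface bowed by 0.5 mm -> b = 1, so a ~7% lateral variation
print(lateral_concentration_perturbation(1e-6, 5e-3, 5e-9, 0.3, 5e-4))
```

The estimate makes the design trade-off concrete: slower growth or a flatter interface (smaller b or smaller deflection) directly suppresses radial segregation.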
Webb, R.A.
1995-12-01
The need to have accurate petroleum measurement is obvious. Petroleum measurement is the basis of commerce between oil producers, royalty owners, oil transporters, refiners, marketers, the Department of Revenue, and the motoring public. Furthermore, petroleum measurements are often used to detect operational problems or unwanted releases in pipelines, tanks, marine vessels, underground storage tanks, etc. Therefore, consistent, accurate petroleum measurement is an essential part of any operation. While there are several methods and different types of equipment used to perform petroleum measurement, the basic process stays the same. The basic measurement process is the act of comparing an unknown quantity, to a known quantity, in order to establish its magnitude. The process can be seen in a variety of forms; such as measuring for a first-down in a football game, weighing meat and produce at the grocery, or the use of an automobile odometer.
NASA Astrophysics Data System (ADS)
Krykunov, Mykhaylo; Seth, Michael; Ziegler, Tom; Autschbach, Jochen
2007-12-01
A time-dependent density functional theory (TDDFT) formalism with damping for the calculation of magnetic optical rotatory dispersion and magnetic circular dichroism (MCD) from the complex Verdet constant is presented. To justify this approach, we have derived the TDDFT analog of the sum-over-states formula for the Verdet constant. The results of MCD calculations by this method for ethylene, furan, thiophene, selenophene, tellurophene, and pyrrole are in good agreement with our previous theoretical sum-over-states MCD spectra. For the π → π* transition of propene, we have obtained a positive Faraday B term, located between the two negative B terms. This finding is in agreement with experiment in the range of 6-8 eV.
Bou Malham, I; Letellier, P; Turmine, M
2007-04-15
The autoprotolysis constants (K_s) of water/1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF4) mixtures were determined at 298 K over the composition range 0 to 77.43 vol.% bmimBF4 using a potentiometric method with a glass electrode. A slight increase in the autoprotolysis constant was observed when the salt was added to the water. The value of the ionic product of the medium then decreases as the bmimBF4 content increases beyond about 20 vol.%. The acid-base properties of these media are well described by Bahe's approach, as extended by Varela et al. for structured electrolyte solutions with large short-range interactions.
Dielectric Constant of Suspensions
NASA Astrophysics Data System (ADS)
Mendelson, Kenneth S.; Ackmann, James J.
1997-03-01
We have used a finite element method to calculate the dielectric constant of a cubic array of spheres. Extensive calculations support preliminary conclusions reported previously (K. Mendelson and J. Ackmann, Bull. Am. Phys. Soc. 41, 657 (1996)). At frequencies below 100 kHz the real part of the dielectric constant (ɛ') shows oscillations as a function of the volume fraction of the suspension. These oscillations disappear at low conductivities of the suspending fluid. Measurements of the dielectric constant (J. Ackmann et al., Ann. Biomed. Eng. 24, 58 (1996); H. Fricke and H. Curtis, J. Phys. Chem. 41, 729 (1937)) are not sufficiently sensitive to show oscillations but appear to be consistent with the theoretical results.
Energy conservation and constants variation.
NASA Astrophysics Data System (ADS)
Kraiselburd, L.; Miller Bertolami, M. M.; Sisterna, P.; Vucetich, H.
If fundamental constants vary, the internal energy of macroscopic bodies should change, producing observable effects. It is shown that those effects can yield upper bounds on the variation of the constants much lower than those coming from Eötvös experiments.
Balfour, Susan
2012-02-01
This article, Part 1 of a 2-part series, provides an overview of the current Medicare compliance climate and its implications for hospice providers. Content focuses on the 7 elements of a comprehensive compliance framework as defined by the Health and Human Services Office of the Inspector General in its 1999 Compliance Guidance for Hospices. A brief case example is provided and serves to set the stage for Part 2, which will explore hospice-specific risk areas and specific risk-reduction strategies.
Fundamental Physics and Precision Measurements
NASA Astrophysics Data System (ADS)
Hänsch, T. W.
2006-11-01
"Very high precision physics has always appealed to me. The steady improvement in technologies that afford higher and higher precision has been a regular source of excitement and challenge during my career. In science, as in most things, whenever one looks at something more closely, new aspects almost always come into play …" With these word from the book "How the Laser happened", Charles H. Townes expresses a passion for precision that is now shared by many scientists. Masers and lasers have become indispensible tools for precision measurements. During the past few years, the advent of femtosecond laser frequency comb synthesizers has revolutionized the art of directly comparing optical and microwave frequencies. Inspired by the needs of precision laser spectroscopy of the simple hydrogen atom, such frequency combs are now enabling ultra-precise spectroscopy over wide spectral ranges. Recent laboratory experiments are already setting stringent limits for possible slow variations of fundamental constants. Laser frequency combs also provide the long missing clockwork for optical atomic clocks that may ultimately reach a precision of parts in 1018 and beyond. Such tools will open intriguing new opportunities for fundamental experiments including new tests of special and general relativity. In the future, frequency comb techniques may be extended into the extreme ultraviolet and soft xray regime, opening a vast new spectral territory to precision measurements. Frequency combs have also become a key tool for the emerging new field of attosecond science, since they can control the electric field of ultrashort laser pulses on an unprecedented time scale. The biggest surprise in these endeavours would be if we found no surprise.
Peng, Ya; Jiang, Zhong'an; Chen, Jushi
2017-03-23
The mechanism and kinetics of gas-phase hydrogen abstraction by O(³P) from methane are investigated using ab initio calculations and dynamical methods. The electronic-structure properties, including the optimized geometries, relative energies, and vibrational frequencies of all stationary points, are obtained from state-averaged complete active space self-consistent field calculations; in addition, the single-point energies for all points on the intrinsic reaction coordinate are evaluated using the internally contracted multireference configuration interaction approach with modified optimized cc-pCVDZ basis sets. Our calculations give a fairly accurate description of the regions around the ³A″ transition state, in which O(³P) attacks along a near-collinear H-CH3 direction with a barrier height of 12.53 kcal/mol, lower than those reported before. Subsequently, thermal rate constants for this hydrogen abstraction are calculated using the canonical unified statistical theory method over the temperature range 298 K to 1000 K. The calculated rate constants are in agreement with experiments. The present work reveals the reaction mechanism of hydrogen abstraction by O(³P) from methane and is helpful for understanding methane combustion.
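The strong temperature dependence implied by a 12.53 kcal/mol barrier can be illustrated with a bare Eyring-form estimate. This is a crude stand-in for the canonical unified statistical theory used in the paper: partition-function ratios and tunneling corrections are omitted, so only the temperature trend, not the absolute magnitude, is meaningful.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R_KCAL = 1.987204e-3   # gas constant, kcal/(mol K)

def eyring_rate(T, barrier_kcal):
    """Bare Eyring-form rate estimate k = (kB*T/h) * exp(-Ea/(R*T)).
    Partition functions and tunneling are omitted (assumption of this
    sketch), so this shows only how steeply k grows with T."""
    return KB * T / H * math.exp(-barrier_kcal / (R_KCAL * T))

for T in (298.0, 600.0, 1000.0):
    print(T, f"{eyring_rate(T, 12.53):.3e}")
```

Between 298 K and 1000 K the Boltzmann factor alone changes by several orders of magnitude, which is why combustion modeling needs rate constants across this whole range.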
Li, Aihua; Meyre, David
2014-01-01
With the decrease in sequencing costs, personalized genome sequencing will eventually become common in medical practice. We therefore write this series of three reviews to help non-geneticist clinicians to jump into the fast-moving field of personalized medicine. In the first article of this series, we reviewed the fundamental concepts in molecular genetics. In this second article, we cover the key concepts and methods in genetic epidemiology including the classification of genetic disorders, study designs and their implementation, genetic marker selection, genotyping and sequencing technologies, gene identification strategies, data analyses and data interpretation. This review will help the reader critically appraise a genetic association study. In the next article, we will discuss the clinical applications of genetic epidemiology in the personalized medicine area. PMID:25598767
Integrable Cosmological Models in DD and Variations of Fundamental Constants
NASA Astrophysics Data System (ADS)
Melnikov, V. N.
The discovery of the present acceleration of the Universe and the problems of dark matter and dark energy are great challenges to modern physics, which may lead to a new revolution. Integrable multidimensional models of gravitation and cosmology constitute one of the proper approaches to studying basic issues, in particular strong-field objects, the early and present Universe, and black hole physics [1,2]. Problems of absolute G measurements and its possible time and range variations, which are reflections of the unification problem, are discussed. A need for further measurements of G and of its possible variations (also in space) is pointed out.
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.
2014-01-01
Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 deg is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.
2015-01-01
Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.
Li, Aihua; Meyre, David
2014-01-01
With the decrease in sequencing cost and the rise of companies providing sequencing services, it is likely that personalized whole-genome sequencing will eventually become an instrument of common medical practice. We write this series of three reviews to help non-geneticist clinicians get ready for the major breakthroughs that are likely to occur in the coming years in the fast-moving field of personalized medicine. This first paper focuses on the fundamental concepts of molecular genetics. We review how recombination occurs during meiosis, how de novo genetic variations including single nucleotide polymorphisms (SNPs), insertions and deletions are generated and how they are inherited from one generation to the next. We detail how genetic variants can impact protein expression and function, and summarize the main characteristics of the human genome. We also explain how the achievements of the Human Genome Project, the HapMap Project, and more recently, the 1000 Genomes Project, have boosted the identification of genetic variants contributing to common diseases in human populations. The second and third papers will focus on genetic epidemiology and clinical applications in personalized medicine. PMID:25132812
NASA Astrophysics Data System (ADS)
Razoumny, Yury N.
2016-12-01
Continuing the series of papers describing the fundamentals of the Route Theory for satellite constellation design, this paper presents a general method for minimizing the satellite swath width required under a given constraint on the maximum revisit time (MRT), the main quality characteristic of discontinuous coverage by a satellite constellation. The interrelation between MRT and the multiplicity of periodic coverage (the minimum number of observation sessions realized for the points of the observation region during the satellite tracks' repetition period) is revealed and described. In particular, it is shown that a change in MRT can occur only at points where the coverage multiplicity changes. Basic elements of multifold Earth coverage theory are presented and used to obtain analytical relations for the minimum swath width providing a given multifold coverage. A procedure for calculating the satellite swath width for multifold coverage of the rotating Earth, using iterations on the sphere of stationary coverage, is developed. Numerical results for discontinuous coverage with minimal satellite swath, including comparisons with some known particular cases and implementations of the method, are presented.
NASA Technical Reports Server (NTRS)
Dimotakis, P. E.; Collins, D. J.; Lang, D. B.
1979-01-01
A description of both the mean and the fluctuating components of the flow, and of the Reynolds stress, as observed using a dual forward-scattering laser-Doppler velocimeter, is presented. A detailed description of the instrument and of the data analysis techniques is included in order to fully document the data. A detailed comparison is made between the laser-Doppler results and those presented in Part 1, and an assessment is made of the ability of the laser-Doppler velocimeter to measure the details of the flows involved.
NASA Astrophysics Data System (ADS)
Freedman, Wendy; Madore, Barry; Mager, Violet; Persson, Eric; Rigby, Jane; Sturch, Laura
2008-12-01
We present a plan to measure a value of the Hubble constant having a final systematic uncertainty of only 3% by taking advantage of Spitzer's unique mid-infrared capabilities. This involves using IRAC to undertake a fundamental recalibration of the Cepheid distance scale and progressively moving it out to pure Hubble flow by an application of a revised mid-IR Tully-Fisher relation. The calibration and application, in one coherent and self-consistent program, will go continuously from distances of parsecs to several hundred megaparsecs. It will provide a first-ever mid-IR calibration of Cepheids in the Milky Way, LMC, and Key Project spiral galaxies; a first-ever measurement and calibration of the TF relation at mid-infrared wavelengths; and finally a calibration of Type Ia SNe. Most importantly, this program will be undertaken with a single instrument, on a single telescope, working exclusively at mid-infrared wavelengths that are far removed from the obscuring effects of dust extinction. Using Spitzer in this focused way will effectively eliminate all of the major systematics in the Cepheid and TF distance scales that have been the limiting factors in all previous applications, including the HST Key Project. By executing this program, based exclusively on Spitzer data, we will deliver a value of the Hubble constant having a statistical precision better than 11%, with all currently known systematics quantified and constrained to a level of less than 3%. A value of Ho determined to this level of systematic accuracy is required for upcoming cosmology experiments, including Planck. A more accurate value of the Hubble constant will directly result in other contingently measured cosmological parameters (e.g., Omega_m, Omega_L, & w) having their covariant uncertainties reduced significantly. Any further improvements using this route will have to await JWST, for which this study is designed to provide a lasting and solid foundation, and ultimately a value of Ho
Water dimer equilibrium constant of saturated vapor
NASA Astrophysics Data System (ADS)
Malomuzh, N. P.; Mahlaichuk, V. N.; Khrapatyi, S. V.
2014-08-01
The value and temperature dependence of the dimerization constant for saturated water vapor are determined. A general expression that links the second virial coefficient and the dimerization constant is obtained. It is shown that the attraction between water monomers and dimers is fundamental, especially at T > 350 K. The range of application for the obtained results is determined.
Harmonic undulator radiations with constant magnetic field
NASA Astrophysics Data System (ADS)
Jeevakhan, Hussain; Mishra, G.
2015-01-01
Harmonic undulators have been analysed in the presence of a constant magnetic field along the direction of the main undulator field. The spectral modifications of harmonic undulator radiation and the intensity degradation at the fundamental and third harmonics, as functions of the constant magnetic field magnitude, have been evaluated with a numerical integration method and generalised Bessel functions. The role of the harmonic field in overcoming the intensity reduction due to the constant magnetic field and to energy spread in the electron beam has also been demonstrated.
Varying Constants, Gravitation and Cosmology.
Uzan, Jean-Philippe
2011-01-01
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
NASA Astrophysics Data System (ADS)
Mangano, Gianpiero; Lizzi, Fedele; Porzio, Alberto
2015-12-01
Motivated by the Dirac idea that fundamental constants are dynamical variables and by conjectures on the quantum structure of space-time at small distances, we consider the possibility that the Planck constant ℏ is a time-dependent quantity, undergoing random Gaussian fluctuations around its measured constant mean value, with variance σ² and a typical correlation timescale Δt. We consider the case of propagation of a free particle and a one-dimensional harmonic oscillator coherent state, and show that the time evolution in both cases differs from the standard behavior. Finally, we discuss how interferometric experiments or experiments exploiting coherent electromagnetic fields in a cavity may put effective bounds on the value of τ = σ²Δt.
Nuclei and Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Haxton, Wick
2016-09-01
Nuclei provide marvelous laboratories for testing fundamental interactions, often enhancing weak processes through accidental degeneracies among states, and providing selection rules that can be exploited to isolate selected interactions. I will give an overview of current work, including the use of parity violation to probe unknown aspects of the hadronic weak interaction; nuclear electric dipole moment searches that may shed light on new sources of CP violation; and tests of lepton number violation made possible by the fact that many nuclei can only decay by rare second-order weak interactions. I will point to opportunities in both theory and experiment to advance the field. Based upon work supported in part by the US Department of Energy, Office of Science, Office of Nuclear Physics and SciDAC under Awards DE-SC00046548 (Berkeley), DE-AC02-05CH11231 (LBNL), and KB0301052 (LBNL).
The Reciprocal of the Fundamental Theorem of Riemannian Geometry
NASA Astrophysics Data System (ADS)
Calderon, Hector
2008-05-01
The fundamental theorem of Riemannian geometry is inverted for analytic Christoffel symbols. The inversion formula, henceforth dubbed Ricardo's formula, is obtained without ancillary assumptions and it is well suited to compute the uncertainty in the metric that arises from the uncertainty in the measurement of positions. The solution is given up to a constant conformal factor, in part, because there are no experiments that can fix such factor without probing the whole universe. Ricardo's formula excludes some pathological examples and works for manifolds of any dimension and metrics of any signature.
Dielectric Constant and Loss Data. Part 4
1980-12-01
A technique for measuring low-loss liquids is described: a glass or plastic capillary tube mounted in a plane one-quarter wavelength from the short in the standing-wave system. The materials index includes glass (Corning), "Cervit" Glass-1 (Owens-Illinois), silicon carbide (Vesuvius Crucible), and silicon carbide + glass matrix (ITT Gilfillan). An excerpt of the tabulated dielectric constant K and loss tangent tan δ versus temperature:

T (°C)   K      tan δ
22       6.38   .050
100      6.87   .066
200      7.37   .081
300      7.85   .097
400      8.52   .145
444      8.88   .184
538      9.12   .346
Dielectric Constant and Loss Data, Part 3
1977-05-01
Approved for release to the general public, including foreign nations. This technical report has been reviewed and is approved (John C. Olson, Project Engineer). The work was performed at the Laboratory for Insulation Research, Massachusetts, by W. B. Westphal, between 1 July 1974 and 31 December 1976, under Contract F33615-75-C-5020, Project No. 7371, Task No.
Dielectric Constant and Loss Data Part 2
1975-12-01
The materials index includes entries such as: fluoride, single crystal; melamine-formaldehyde resins (Columbia Univ.); manganese-magnesium ferrite; polybutadiene-Astroquartz 3.164-11 and polybutadiene-Kevlar 3.164-10 (Whittaker Corp.); polyether sulfone; and polyphenylquinoxaline resin. A legible data excerpt: polyether sulfone (dry sample), Whittaker Corporation SN 300-P, at 24 GHz and 24 °C: K = 3.26, tan δ = .0108.
Combustion Fundamentals Research
NASA Technical Reports Server (NTRS)
1983-01-01
Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with fewer systems development efforts. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long-term technical needs of the aerospace industry. The main thrusts of this combustion fundamentals program are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.
Exchange Rates and Fundamentals.
ERIC Educational Resources Information Center
Engel, Charles; West, Kenneth D.
2005-01-01
We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…
Millikan's measurement of Planck's constant
NASA Astrophysics Data System (ADS)
Franklin, Allan
2013-12-01
Robert Millikan is famous for measuring the charge of the electron. His result was better than any previous measurement, and his method established that there was a fundamental unit of charge, or charge quantization. He is less well known for his measurement of Planck's constant, although, as discussed below, he is often mistakenly given credit for providing significant evidence in support of Einstein's photon theory of light. His Nobel Prize citation was "for his work on the elementary charge of electricity and on the photoelectric effect," an indication of the significance of his work on the photoelectric effect.
Assessing uncertainty in physical constants
NASA Astrophysics Data System (ADS)
Henrion, Max; Fischhoff, Baruch
1986-09-01
Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.
Mass spectrometry at and below 0.1 parts per billion
Bradley, M.; Palmer, F.; Pritchard, D.E.
1994-12-31
The single-ion Penning trap mass spectrometer at M.I.T. can compare masses to within 0.1 parts per billion. We have created a short table of fundamental atomic masses and made measurements useful for calibrating the X-ray standard and for determining Avogadro's number, the molar Planck constant, and the fine structure constant.
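The chain from such mass measurements to the fine structure constant runs through the standard relation α² = 2R∞·h/(m_e·c); a minimal numerical sketch with CODATA-style values (the values below are illustrative, not taken from the paper):

```python
import math

# CODATA-style constants (illustrative values, not from the paper)
R_inf = 10973731.568160   # Rydberg constant, 1/m
h     = 6.62607015e-34    # Planck constant, J s
m_e   = 9.1093837015e-31  # electron mass, kg
c     = 299792458.0       # speed of light, m/s

# alpha^2 = 2 * R_inf * h / (m_e * c)
alpha = math.sqrt(2.0 * R_inf * h / (m_e * c))
print(1.0 / alpha)  # ~137.036, the inverse fine-structure constant
```

In practice the ratio h/m_e is what mass spectrometry helps pin down, via the molar Planck constant N_A·h and atomic mass ratios.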
Cosmological constant, fine structure constant and beyond
NASA Astrophysics Data System (ADS)
Wei, Hao; Zou, Xiao-Bo; Li, Hong-Yu; Xue, Dong-Ze
2017-01-01
In the present work, we consider the cosmological constant model Λ ∝ α ^{-6}, which is well motivated from three independent approaches. As is well known, the hint of varying fine structure constant α was found in 1998. If Λ ∝ α ^{-6} is right, it means that the cosmological constant Λ should also be varying. Here, we try to develop a suitable framework to model this varying cosmological constant Λ ∝ α ^{-6}, in which we view it from an interacting vacuum energy perspective. Then we consider the observational constraints on these models by using the 293 Δ α /α data from the absorption systems in the spectra of distant quasars. We find that the model parameters can be tightly constrained to the very narrow ranges of O(10^{-5}) typically. On the other hand, we can also view the varying cosmological constant model Λ ∝ α ^{-6} from another perspective, namely it can be equivalent to a model containing "dark energy" and "warm dark matter", but there is no interaction between them. We find that this is also fully consistent with the observational constraints on warm dark matter.
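Because Λ ∝ α^{-6}, a fractional drift in α maps to first order onto a six-times-larger, opposite-signed fractional drift in Λ; a quick numerical check of this scaling (the Δα/α magnitude is illustrative, of the O(10^{-5}) order seen in quasar absorption data):

```python
# Lambda ∝ alpha^(-6)  =>  dLambda/Lambda ≈ -6 * dalpha/alpha to first order
def lambda_of_alpha(alpha, k=1.0):
    return k * alpha**-6

alpha0 = 7.2973525693e-3   # fine-structure constant (illustrative value)
dalpha = 1e-5 * alpha0     # Delta-alpha/alpha of O(1e-5)

lam0 = lambda_of_alpha(alpha0)
lam1 = lambda_of_alpha(alpha0 + dalpha)
frac = (lam1 - lam0) / lam0
print(frac)  # ≈ -6e-5, i.e. -6 times Delta-alpha/alpha
```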
Fundamental properties of PTCDI-C8 semiconductor for optoelectronic and photonic applications
NASA Astrophysics Data System (ADS)
Erdoǧan, Erman; Gündüz, Bayram
2017-02-01
In this study, we investigated fundamental properties such as electrical and optical properties of the N,N'-Dioctyl-3,4,9,10 perylenedicarboximide (PTCDI-C8) Organic Semiconductor (OSC) material for optoelectronic and photonic applications. The important spectral parameters such as mass extinction coefficient and transmittance spectrum of the PTCDI-C8 molecule were calculated. Optical properties such as refractive index, optical band gap, real and imaginary parts of dielectric constants of the PTCDI-C8 were obtained. The electrical and optical conductance properties were also investigated. The advantages and disadvantages of obtained fundamental parameters were determined for optoelectronic and photonic applications.
NASA Astrophysics Data System (ADS)
Gitlin, M. S.
2017-02-01
The first part of the review is presented which is dedicated to the time-resolved method of imaging and measuring the spatial distribution of the intensity of millimeter waves by using visible continuum (VC) emitted by the positive column (PC) of a dc discharge in a mixture of cesium vapor with xenon. The review focuses on the operating principles, fundamentals, and applications of this new technique. The design of the discharge tube and experimental setup used to create a wide homogeneous plasma slab with the help of the Cs-Xe discharge at a gas pressure of 45 Torr are described. The millimeter-wave effects on the plasma slab are studied experimentally. The mechanism of microwave-induced variations in the VC brightness and the causes of violation of the local relation between the VC brightness and the intensity of millimeter waves are discussed. Experiments on the imaging of the field patterns of horn antennas and quasi-optical beams demonstrate that this technique can be used for good-quality imaging of millimeter-wave beams in the entire millimeter-wavelength band. The method has a microsecond temporal resolution and a spatial resolution of about 2 mm. Energy sensitivities of about 10 μJ/cm2 in the Ka-band and about 200 μJ/cm2 in the D-band have been demonstrated.
(In)validity of the constant field and constant currents assumptions in theories of ion transport.
Syganow, A; von Kitzing, E
1999-01-01
Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-01-01
No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor)
1995-01-01
We prove known identities for the Khinchin constant and develop new identities for the more general Hölder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results.
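One such rapidly converging series (due to Bailey, Borwein and Crandall) expresses ln K₀ through even zeta values and rational coefficients; a minimal sketch, with ζ(2n) evaluated by direct summation plus an Euler-Maclaurin tail correction:

```python
import math

def zeta(s, M=10000):
    """Riemann zeta via direct summation with an Euler-Maclaurin tail correction."""
    return sum(m**-s for m in range(1, M + 1)) + M**(1 - s) / (s - 1) + 0.5 * M**-s

# ln(K0) * ln(2) = sum_{n>=1} (zeta(2n) - 1)/n * sum_{k=1}^{2n-1} (-1)^(k+1)/k
total = 0.0
for n in range(1, 30):
    inner = sum((-1) ** (k + 1) / k for k in range(1, 2 * n))
    total += (zeta(2 * n) - 1) / n * inner

K0 = math.exp(total / math.log(2))
print(K0)  # ~2.685452, Khinchin's constant
```

The terms fall off roughly like 4^{-n}, so a few dozen terms already give many correct digits.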
Fundamentals of Condensed Matter Physics
NASA Astrophysics Data System (ADS)
Cohen, Marvin L.; Louie, Steven G.
2016-05-01
Part I. Basic Concepts: Electrons and Phonons: 1. Concept of a solid: qualitative introduction and overview; 2. Electrons in crystals; 3. Electronic energy bands; 4. Lattice vibrations and phonons; Part II. Electron Interactions, Dynamics and Responses: 5. Electron dynamics in crystals; 6. Many-electron interactions: the interacting electron gas and beyond; 7. Density functional theory; 8. The dielectric function for solids; Part III. Optical and Transport Phenomena: 9. Electronic transitions and optical properties of solids; 10. Electron-phonon interactions; 11. Dynamics of crystal electrons in a magnetic field; 12. Fundamentals of transport phenomena in solids; Part IV. Superconductivity, Magnetism, and Lower Dimensional Systems: 13. Using many-body techniques; 14. Superconductivity; 15. Magnetism; 16. Reduced-dimensional systems and nanostructures; Index.
Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1989-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented of the optical constants of solid methane for the 0.4 to 2.6 micron region. The imaginary index k is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4.
Universal constants and equations of turbulent motion
NASA Astrophysics Data System (ADS)
Baumert, Helmut
2011-11-01
For turbulence at high Reynolds number we present an analogy with the kinetic theory of gases, with dipoles made of vortex tubes as frictionless, incompressible but deformable quasi-particles. Their movements are governed by Helmholtz' elementary vortex rules applied locally. A contact interaction or ``collision'' leads either to random scatter of a trajectory or to the formation of two likewise rotating, fundamentally unstable whirls forming a dissipative patch slowly rotating around its center of mass, the latter almost at rest. This approach predicts von Karman's constant as 1/sqrt(2 pi) = 0.399 and the spatio-temporal dynamics of energy-containing time and length scales controlling turbulent mixing [Baumert 2005, 2009]. A link to turbulence spectra was missing so far. In the present contribution it is shown that the above image of dipole movements is compatible with Kolmogorov's spectra if dissipative patches, beginning as two likewise rotating eddies, evolve locally into a space-filling bearing in the sense of Herrmann [1990], i.e. into an ``Apollonian gear.'' Its parts and pieces are frictionless, excepting the dissipative scale of size zero. Our approach predicts the dimensionless pre-factor in the 3D Eulerian wavenumber spectrum (in terms of pi) as 1.8, and in the Lagrangian frequency spectrum as the integer number 2. Our derivations are free of empirical relations and rest on geometry, methods from many-particle physics, and on elementary conservation laws only. Department of the Navy Grant, ONR Global
Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1990-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micrometer region. Deposition onto a substrate at 10 K produces glassy (semi-amorphous) material. Annealing this material at approximately 33 K for approximately 1 hour results in a crystalline material as seen by sharper, more structured bands and negligible background extinction due to scattering. The constant k is reported for both the amorphous and the crystalline (annealed) states. Typical values (at absorption maxima) are in the .001 to .0001 range. Below lambda = 1.1 micrometers the bands are too weak to be detected by transmission through the films less than or equal to 215 micrometers in thickness, employed in the studies to date. Using previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for
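The Lorentz-Lorenz step can be sketched as follows: the specific refraction (n² − 1)/((n² + 2)ρ) is taken to be the same in both phases, so n for the solid follows from n of the liquid and the density ratio (the n and ρ values below are illustrative placeholders, not the paper's data):

```python
import math

def lorentz_lorenz(n):
    # refraction factor L = (n^2 - 1)/(n^2 + 2)
    return (n * n - 1.0) / (n * n + 2.0)

def n_from_L(L):
    # invert L = (n^2 - 1)/(n^2 + 2)  =>  n = sqrt((1 + 2L)/(1 - L))
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))

n_liquid   = 1.27    # refractive index of liquid CH4 near 110 K (illustrative)
rho_liquid = 0.424   # g/cm^3 (illustrative)
rho_solid  = 0.49    # g/cm^3 (illustrative)

# specific refraction L/rho is assumed equal in both phases
L_solid = lorentz_lorenz(n_liquid) * rho_solid / rho_liquid
n_solid = n_from_L(L_solid)
print(n_solid)  # ~1.32 with these inputs
```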
"Recognizing Numerical Constants"
NASA Technical Reports Server (NTRS)
Bailey, David H.; Craw, James M. (Technical Monitor)
1995-01-01
The advent of inexpensive, high-performance computers and new, efficient algorithms has made possible the automatic recognition of numerically computed constants. In other words, techniques now exist for determining, within certain limits, whether a computed real or complex number can be written as a simple expression involving the classical constants of mathematics. In this presentation, some of the recently discovered techniques for constant recognition, notably integer relation detection algorithms, will be presented. As an application of these methods, the author's recent work in recognizing "Euler sums" will be described in some detail.
Astronomical reach of fundamental physics
NASA Astrophysics Data System (ADS)
Burrows, Adam S.; Ostriker, Jeremiah P.
2014-02-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
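The flavor of this dimensional analysis can be sketched numerically: the Planck mass is (ħc/G)^{1/2}, and the Chandrasekhar mass scale is of order M_Pl³/m_H², which lands near a solar mass up to dimensionless factors (a rough order-of-magnitude sketch, not the paper's derivation):

```python
import math

# SI values of the fundamental constants
hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # Newton constant, m^3 kg^-1 s^-2
m_H   = 1.6735575e-27     # hydrogen atom mass, kg
M_sun = 1.989e30          # solar mass, kg, for comparison

M_planck = math.sqrt(hbar * c / G)       # ~2.18e-8 kg
M_chandra_scale = M_planck**3 / m_H**2   # a few x 10^30 kg

print(M_planck)                  # Planck mass
print(M_chandra_scale / M_sun)   # order unity: the Chandrasekhar scale in solar masses
```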
Astronomical reach of fundamental physics.
Burrows, Adam S; Ostriker, Jeremiah P
2014-02-18
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
Fundamental studies of polymer filtration
Smith, B.F.; Lu, M.T.; Robison, T.W.; Rogers, Y.C.; Wilson, K.V.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The objectives of this project were (1) to develop an enhanced fundamental understanding of the coordination chemistry of hazardous-metal-ion complexation with water-soluble metal-binding polymers, and (2) to exploit this knowledge to develop improved separations for analytical methods, metals processing, and waste treatment. We investigated features of water-soluble metal-binding polymers that affect their binding constants and selectivity for selected transition metal ions. We evaluated backbone polymers using light scattering and ultrafiltration techniques to determine the effect of pH and ionic strength on the molecular volume of the polymers. The backbone polymers were incrementally functionalized with a metal-binding ligand. A procedure and analytical method to determine the absolute level of functionalization was developed and the results correlated with the elemental analysis, viscosity, and molecular size.
Astronomical reach of fundamental physics
Burrows, Adam S.; Ostriker, Jeremiah P.
2014-01-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law. PMID:24477692
The time constant of the somatogravic illusion.
Correia Grácio, B J; de Winkel, K N; Groen, E L; Wentink, M; Bos, J E
2013-02-01
Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, the dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and were used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constant found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles. Motion profiles with higher frequency content revealed shorter time constants which cannot be explained by self-motion perception models that assume a fixed time constant. Therefore, these models need to be improved with a mechanism that deals with this variable time constant. Apart from the fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
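A common way to capture such dynamics is a first-order lag driven by the gravito-inertial tilt angle; the sketch below (a generic illustration with an assumed fixed τ = 2 s, not the authors' fitted model) shows the tilt percept settling toward arctan(a/g):

```python
import math

def simulate_tilt_percept(a_lat, tau=2.0, t_end=10.0, dt=0.001):
    """First-order lag model: theta' = (theta_gif - theta)/tau,
    where theta_gif = atan2(a_lat, g) is the gravito-inertial tilt angle."""
    g = 9.81
    theta_gif = math.atan2(a_lat, g)
    theta = 0.0
    t = 0.0
    while t < t_end:
        theta += (theta_gif - theta) / tau * dt  # Euler step of the lag
        t += dt
    return theta, theta_gif

theta, theta_gif = simulate_tilt_percept(a_lat=3.0)  # 3 m/s^2 centripetal
print(math.degrees(theta), math.degrees(theta_gif))  # percept approaches ~17 deg
```

A frequency-dependent τ, as the results above suggest, would replace the fixed `tau` with a function of the stimulus frequency content.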
The cosmological constant problem
Dolgov, A.D.
1989-05-01
A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.
ERIC Educational Resources Information Center
Eichinger, John
1996-01-01
Presents an activity in which students attempt to keep water at a constant temperature. Helps students in grades three to six hone their skills in prediction, observation, measurement, data collection, graphing, data analysis, and communication. (JRH)
History and progress on accurate measurements of the Planck constant
NASA Astrophysics Data System (ADS)
Steiner, Richard
2013-01-01
The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10^-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10^8 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved
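As a quick illustration of how h underpins the volt and ohm standards mentioned above, the Josephson and von Klitzing constants follow directly from h and e. CODATA 2010-era values are assumed here for illustration; they are not data from this paper.

```python
# CODATA 2010 values (assumed here for illustration)
h = 6.62606957e-34   # Planck constant, J s
e = 1.602176565e-19  # elementary charge, C

K_J = 2 * e / h      # Josephson constant (volt standard), Hz/V
R_K = h / e**2       # von Klitzing constant (ohm standard), ohm

print(f"K_J = {K_J:.6e} Hz/V")
print(f"R_K = {R_K:.4f} ohm")
```

Fixing h and e exactly, as the proposed SI redefinition does, makes both electrical standards exact by construction.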
The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate Henry’s Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aq...
Arguing against fundamentality
NASA Astrophysics Data System (ADS)
McKenzie, Kerry
This paper aims to open up discussion on the relationship between fundamentality and naturalism, and in particular on the question of whether fundamentality may be denied on naturalistic grounds. A historico-inductive argument for an anti-fundamentalist conclusion, prominent within the contemporary metaphysical literature, is examined; finding it wanting, an alternative 'internal' strategy is proposed. By means of an example from the history of modern physics, namely S-matrix theory, it is demonstrated that (1) this strategy can generate similar (though not identical) anti-fundamentalist conclusions on more defensible naturalistic grounds, and (2) fundamentality questions can be empirical questions. Some implications and limitations of the proposed approach are discussed.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
NASA Technical Reports Server (NTRS)
Zuk, J.
1976-01-01
The fundamentals of fluid sealing, including seal operating regimes, are discussed and the general fluid-flow equations for fluid sealing are developed. Seal performance parameters such as leakage and power loss are presented. Included in the discussion are the effects of geometry, surface deformations, rotation, and both laminar and turbulent flows. The concept of pressure balancing is presented, as are differences between liquid and gas sealing. Mechanisms of seal surface separation, fundamental friction and wear concepts applicable to seals, seal materials, and pressure-velocity (PV) criteria are discussed.
Fundamentals of fluid lubrication
NASA Technical Reports Server (NTRS)
Hamrock, Bernard J.
1991-01-01
The aim is to coordinate the topics of design, engineering dynamics, and fluid dynamics in order to aid researchers in the area of fluid film lubrication. The lubrication principles that are covered can serve as a basis for the engineering design of machine elements. The fundamentals of fluid film lubrication are presented clearly so that students that use the book will have confidence in their ability to apply these principles to a wide range of lubrication situations. Some guidance on applying these fundamentals to the solution of engineering problems is also provided.
Unification of Fundamental Forces
NASA Astrophysics Data System (ADS)
Salam, Abdus; Taylor, John C. (foreword)
2005-10-01
Foreword John C. Taylor; 1. Unification of fundamental forces Abdus Salam; 2. History unfolding: an introduction to the two 1968 lectures by W. Heisenberg and P. A. M. Dirac Abdus Salam; 3. Theory, criticism, and a philosophy Werner Heisenberg; 4. Methods in theoretical physics Paul Adrien Maurice Dirac.
Basic Publication Fundamentals.
ERIC Educational Resources Information Center
Savedge, Charles E., Ed.
Designed for students who produce newspapers and newsmagazines in junior high, middle, and elementary schools, this booklet is both a scorebook and a fundamentals text. The scorebook provides realistic criteria for judging publication excellence at these educational levels. All the basics for good publications are included in the text of the…
ERIC Educational Resources Information Center
Smithsonian Institution, Washington, DC. National Reading Is Fundamental Program.
Reading Is Fundamental (RIF) is a national, nonprofit organization designed to motivate children to read by making a wide variety of inexpensive books available to them and allowing the children to choose and keep books that interest them. This annual report for 1977 contains the following information on the RIF project: an account of the…
Laser Fundamentals and Experiments.
ERIC Educational Resources Information Center
Van Pelt, W. F.; And Others
As a result of work performed at the Southwestern Radiological Health Laboratory with respect to lasers, this manual was prepared in response to the increasing use of lasers in high schools and colleges. It is directed primarily toward the high school instructor who may use the text for a short course in laser fundamentals. The definition of the…
Homeschooling and Religious Fundamentalism
ERIC Educational Resources Information Center
Kunzman, Robert
2010-01-01
This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…
The Fundamental Property Relation.
ERIC Educational Resources Information Center
Martin, Joseph J.
1983-01-01
Discusses a basic equation in thermodynamics (the fundamental property relation), focusing on a logical approach to the development of the relation where effects other than thermal, compression, and exchange of matter with the surroundings are considered. Also demonstrates erroneous treatments of the relation in three well-known textbooks. (JN)
Technology Transfer Automated Retrieval System (TEKTRAN)
This study guide provides comments and references for professional soil scientists who are studying for the soil science fundamentals exam needed as the first step for certification. The performance objectives were determined by the Soil Science Society of America's Council of Soil Science Examiners...
Fundamentals of Electromagnetic Phenomena
NASA Astrophysics Data System (ADS)
Lorrain, Paul; Corson, Dale R.; Lorrain, Francois
Based on the classic Electromagnetic Fields and Waves by the same authors, Fundamentals of Electromagnetic Phenomena capitalizes on the older text's traditional strengths--solid physics, inventive problems, and an experimental approach--while offering a briefer, more accessible introduction to the basic principles of electromagnetism.
Fundamentals of Solid Lubrication
2012-03-01
During this program, we have worked to develop a fundamental understanding of the chemical and tribological issues related to … approach, tribological measurements performed over a range of length scales, and the correlation of the two classes of information. Research activities … correlated measurements of surface composition and environmentally specific tribological performance of thin-film solid lubricants.
Fundamentals of Diesel Engines.
ERIC Educational Resources Information Center
Marine Corps Inst., Washington, DC.
This student guide, one of a series of correspondence training courses designed to improve the job performance of members of the Marine Corps, deals with the fundamentals of diesel engine mechanics. Addressed in the three individual units of the course are the following topics: basic principles of diesel mechanics; principles, mechanics, and…
ERIC Educational Resources Information Center
Taylor, Kelley R.
2009-01-01
The 21st century has brought many technological, social, and economic changes--nearly all of which have affected schools and the students, administrators, and faculty members who are in them. Luckily, as some things change, other things remain the same. Such is true with the fundamental legal principles that guide school administrators' actions…
Peselnick, L.; Robie, R.A.
1962-01-01
The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc/s and 25 °C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11 = 13.7, C33 = 8.11, C44 = 3.50, C12 = 4.82, C13 = 5.68, and C14 = -2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
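As a sketch of how such elastic constants connect to the measured wave velocities: for propagation along the c-axis of a trigonal crystal like calcite, the longitudinal speed is sqrt(C33/rho). The density below is an assumed typical value for calcite, not a number from the abstract.

```python
import math

C33 = 8.11e11   # dyn/cm^2, from the abstract
rho = 2.71      # g/cm^3, assumed density of calcite (not from the abstract)

# Longitudinal wave speed along the c-axis, in cm/s, then km/s.
v = math.sqrt(C33 / rho)
print(v / 1e5)  # km/s
```

Inverting this relation (measure v, compute C) is essentially what the ultrasonic pulse technique does, direction by direction.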
On the role of the Avogadro constant in redefining SI units for mass and amount of substance
NASA Astrophysics Data System (ADS)
Leonard, B. P.
2007-02-01
There is a common misconception that the Avogadro constant is one of the fundamental constants of nature, in the same category as the speed of light, the Planck constant and the invariant masses of atomic-scale particles. Although the absolute mass of any specified atomic-scale entity is an invariant universal constant of nature, the Avogadro constant relating this to a macroscopic quantity is not. Rather, it is a man-made construct, designed by convention to define a convenient unit relating the atomic and macroscopic scales. The misportrayal seems to stem from the widespread use of the term 'fixed-Avogadro-constant' for describing a redefinition of the kilogram that is, in fact, based on a fixed atomic-scale particle mass. This paper endeavours to clarify the role of the Avogadro constant in current definitions of SI units for mass and amount of substance as well as recently proposed redefinitions of these units—in particular, those based on fixing the numerical values of the Planck and Avogadro constants, respectively. Precise definitions lead naturally to a rational, straightforward and intuitively obvious construction of appropriate (exactly defined) atomic-scale units for these quantities. And this, in turn, suggests a direct and easily comprehended two-part statement of the fixed-Planck-constant kilogram definition involving a well-understood and physically meaningful de Broglie-Compton frequency.
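The paper's point that the Avogadro constant is a scale-conversion convention can be made concrete: combining the (exact, pre-redefinition) molar mass of carbon-12 with N_A recovers the invariant atomic-scale mass. CODATA 2010-era values are assumed here for illustration.

```python
N_A = 6.02214129e23   # Avogadro constant, 1/mol (CODATA 2010-era value, assumed)
M_C12 = 0.012         # molar mass of carbon-12, kg/mol (exact in the pre-2019 SI)

# The invariant atomic-scale mass recovered from the macroscopic convention:
m_C12 = M_C12 / N_A   # mass of one 12C atom, kg
u = m_C12 / 12        # unified atomic mass unit, kg
print(u)
```

Fixing N_A exactly (as in the proposed mole redefinition) makes this conversion exact, while the atomic mass itself remains the underlying invariant.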
Measuring Boltzmann's Constant with Carbon Dioxide
ERIC Educational Resources Information Center
Ivanov, Dragia; Nikolov, Stefan
2013-01-01
In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…
System Engineering Fundamentals
2001-01-01
The document is divided into four parts: Introduction; Systems Engineering Process; System Analysis and Control; and Planning, Organizing, and Managing. Part 3, System Analysis and Control, includes a chapter (Chapter 9) on the Work Breakdown Structure.
Fundamental properties of resonances.
Ceci, S; Hadžimehmedović, M; Osmanović, H; Percan, A; Zauner, B
2017-03-27
All resonances, from hydrogen nuclei excited by high-energy gamma rays in deep space to newly discovered particles produced in the Large Hadron Collider, should be described by the same fundamental physical quantities. However, two distinct sets of properties are used to describe resonances: the pole parameters (complex pole position and residue) and the Breit-Wigner parameters (mass, width, and branching fractions). There is an ongoing, decades-old debate on which of them should be abandoned. In this study of nucleon resonances appearing in elastic pion-nucleon scattering we discover an intricate interplay of the parameters from both sets, and realize that neither set is completely independent or fundamental on its own.
Fundamentals of Polarized Light
NASA Technical Reports Server (NTRS)
Mishchenko, Michael
2003-01-01
The analytical and numerical basis for describing scattering properties of media composed of small discrete particles is formed by the classical electromagnetic theory. Although there are several excellent textbooks outlining the fundamentals of this theory, it is convenient for our purposes to begin with a summary of those concepts and equations that are central to the subject of this book and will be used extensively in the following chapters. We start by formulating Maxwell's equations and constitutive relations for time-harmonic macroscopic electromagnetic fields and derive the simplest plane-wave solution that underlies the basic optical idea of a monochromatic parallel beam of light. This solution naturally leads to the introduction of such fundamental quantities as the refractive index and the Stokes parameters. Finally, we define the concept of a quasi-monochromatic beam of light and discuss its implications.
Greg Hall, D
2011-01-01
Session 1 of the 2010 STP/IFSTP Joint Symposium on Toxicologic Neuropathology, titled "Fundamentals of Neurobiology," was organized to provide a foundation for subsequent sessions by presenting essential elements of neuroanatomy and nervous system function. A brief introduction to the session titled "Introduction to Correlative Neurobiology" was provided by Dr. Greg Hall (Eli Lilly and Company, Indianapolis, IN). Correlative neurobiology refers to considerations of the relationships between the highly organized and compartmentalized structure of nervous tissues and the functioning within this system.
Fundamental studies in geodynamics
NASA Technical Reports Server (NTRS)
Anderson, D. L.; Hager, B. H.; Kanamori, H.
1981-01-01
Research in fundamental studies in geodynamics continued in a number of fields including seismic observations and analysis, synthesis of geochemical data, theoretical investigation of geoid anomalies, extensive numerical experiments in a number of geodynamical contexts, and a new field, seismic volcanology. Summaries of work in progress or completed during this report period are given. Abstracts of publications submitted from work in progress during this report period are attached as an appendix.
Fundamentals of petroleum maps
Mc Elroy, D.P.
1986-01-01
This is a complete guide to the fundamentals of reading, using, and making petroleum maps. The topics covered are well spotting, lease posting, contouring, hanging cross sections, and ink drafting. The book not only tells the how of petroleum mapping but also the why, for a better understanding of the principles and techniques. It does not teach "drafting," but it does describe the proper care and use of drafting equipment for those who are totally new to the task.
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
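The "simple example: estimating π" mentioned in the outline can be sketched as follows (a minimal illustration of the method, not the presentation's actual code): sample points uniformly in the unit square and count the fraction landing inside the quarter disc.

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: 4 times the fraction of uniform random
    points in the unit square that satisfy x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(1 for _ in range(n)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * inside / n

# Statistical error shrinks like 1/sqrt(n), per the Law of Large Numbers
# and Central Limit Theorem mentioned in the outline.
print(estimate_pi(100_000))
```

The 1/sqrt(n) convergence is exactly why the Central Limit Theorem appears in the outline: the estimator is an average of independent indicator variables.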
Neutrons and Fundamental Symmetries
Plaster, Bradley
2016-01-11
The research supported by this project addressed fundamental open physics questions via experiments with subatomic particles. In particular, neutrons constitute an especially ideal “laboratory” for fundamental physics tests, as their sensitivities to the four known forces of nature permit a broad range of tests of the so-called “Standard Model”, our current best physics model for the interactions of subatomic particles. Although the Standard Model has been a triumphant success for physics, it does not provide satisfactory answers to some of the most fundamental open questions in physics, such as: are there additional forces of nature beyond the gravitational, electromagnetic, weak nuclear, and strong nuclear forces?, or why does our universe consist of more matter than anti-matter? This project also contributed significantly to the training of the next generation of scientists, of considerable value to the public. Young scientists, ranging from undergraduate students to graduate students to post-doctoral researchers, made significant contributions to the work carried out under this project.
NASA Astrophysics Data System (ADS)
Burov, Alexey
Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of the fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by the technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the "mystery of the universe". Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?
Rare Isotopes and Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Brown, B. Alex; Engel, Jonathan; Haxton, Wick; Ramsey-Musolf, Michael; Romalis, Michael; Savard, Guy
2009-01-01
Experiments searching for new interactions in nuclear beta decay / Klaus P. Jungmann -- The beta-neutrino correlation in sodium-21 and other nuclei / P. A. Vetter ... [et al.] -- Nuclear structure and fundamental symmetries/ B. Alex Brown -- Schiff moments and nuclear structure / J. Engel -- Superallowed nuclear beta decay: recent results and their impact on V[symbol] / J. C. Hardy and I. S. Towner -- New calculation of the isospin-symmetry breaking correction to superallowed Fermi beta decay / I. S. Towner and J. C. Hardy -- Precise measurement of the [symbol]H to [symbol]He mass difference / D. E. Pinegar ... [et al.] -- Limits on scalar currents from the 0+ to 0+ decay of [symbol]Ar and isospin breaking in [symbol]Cl and [symbol]Cl / A. Garcia -- Nuclear constraints on the weak nucleon-nucleon interaction / W. C. Haxton -- Atomic PNC theory: current status and future prospects / M. S. Safronova -- Parity-violating nucleon-nucleon interactions: what can we learn from nuclear anapole moments? / B. Desplanques -- Proposed experiment for the measurement of the anapole moment in francium / A. Perez Galvan ... [et al.] -- The Radon-EDM experiment / Tim Chupp for the Radon-EDM collaboration -- The lead radius experiment (PREX) and parity violating measurements of neutron densities / C. J. Horowitz -- Nuclear structure aspects of Schiff moment and search for collective enhancements / Naftali Auerbach and Vladimir Zelevinsky -- The interpretation of atomic electric dipole moments: Schiff theorem and its corrections / C. -P. Liu -- T-violation and the search for a permanent electric dipole moment of the mercury atom / M. D. Swallows ... [et al.] -- The new concept for FRIB and its potential for fundamental interactions studies / Guy Savard -- Collinear laser spectroscopy and polarized exotic nuclei at NSCL / K. Minamisono -- Environmental dependence of masses and coupling constants / M. Pospelov.
Can compactifications solve the cosmological constant problem?
Hertzberg, Mark P.; Masoumi, Ali
2016-06-30
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
Redshift in Hubble's constant.
NASA Astrophysics Data System (ADS)
Temple-Raston, M.
1997-01-01
A topological field theory with Bogomol'nyi solitons is examined. The Bogomol'nyi solitons have much in common with the instanton in Yang-Mills theory; consequently the author called them 'topological instantons'. When periodic boundary conditions are imposed, the field theory comments indirectly on the speed of light within the theory. In this particular model the speed of light is not a universal constant. This may or may not be relevant to the current debate in astronomy and cosmology over the large values of the Hubble constant obtained by the latest generation of ground- and space-based telescopes. An experiment is proposed to detect spatial variation in the speed of light.
Percolation with Constant Freezing
NASA Astrophysics Data System (ADS)
Mottram, Edward
2014-06-01
We introduce and study a model of percolation with constant freezing (PCF), in which edges open at a constant rate and clusters freeze at a constant rate, independently of their size. Our main result is that the infinite-volume process can be constructed on any amenable vertex-transitive graph. This is in sharp contrast to previously introduced models of percolation with freezing, where the limit is known not to exist. Our interest is in the percolative properties of the final configuration as a function of the freezing rate. We also obtain more precise results in the case of trees. Surprisingly, the algebraic exponent for the cluster size depends on the degree, suggesting that there is no lower critical dimension for the model. Moreover, it is shown that finite clusters have algebraic tail decay, which is a signature of self-organised criticality. Partial results are obtained in further settings, and many open questions are discussed.
NASA Technical Reports Server (NTRS)
Sorensen, E
1940-01-01
Conventional axial blowers operate on the high-pressure principle. One drawback of this type of blower is its relatively low pressure head, which one attempts to overcome with axial blowers producing very high pressure at a given circumferential speed. The Schicht constant-pressure blower affords pressure ratios considerably higher than those of axial blowers of conventional design, with approximately the same efficiency.
NASA Astrophysics Data System (ADS)
Yongquan, Han
2016-10-01
The ideal gas state equation is not applicable to ordinary gas; it should instead be applied to the electromagnetic "gas" of radiation. Radiation should be the ultimate (or initial) state of changes of matter, and the universe is filled with radiation; that is, the ideal gas equation of state is suitable for the singular point and for the universe. One might object that no vessel can accommodate radiation, but that is only because ordinary containers are too small: if the radius of a container were the distance light travels in an hour, would one still think it cannot accommodate radiation? Modern science determines the present radius of the universe to be about 10^27 m. Assuming the universe is a sphere, its volume is approximately V = 4.19 × 10^81 cubic meters. Taking the temperature of the universal radiation (the cosmic microwave background temperature, which should be closest to the average temperature of the universe) as T = 3.15 K and the radiation pressure as P = 5 × 10^-6 N/m^2, the ideal gas law gives PV/T = constant ≈ 6 × 10^75; this is the constant's value for the universe, and the singular point should equal the same constant.
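The arithmetic quoted in this abstract can be checked directly; all input values below are taken from the abstract itself, and the check only confirms the quoted multiplication, not the physical claim.

```python
import math

# Values as quoted in the abstract
R_universe = 1e27                        # m, quoted radius of the universe
V = 4.0 / 3.0 * math.pi * R_universe**3  # ~4.19e81 m^3, as quoted
T = 3.15                                 # K, quoted CMB temperature
P = 5e-6                                 # N/m^2, quoted radiation pressure

# The abstract's "constant" PV/T, quoted as ~6e75
print(P * V / T)
```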
Jackson, Neal
2015-01-01
I review the current state of determinations of the Hubble constant, which sets the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H0 values of around 72-74 km s^-1 Mpc^-1, with typical errors of 2-3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s^-1 Mpc^-1 and typical errors of 1-2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and on which datasets are combined in the case of the all-sky methods.
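One way to appreciate the scale of the discrepancy discussed above is to convert each H0 value into a Hubble time, 1/H0, a rough age scale for the Universe. The conversion factors below are standard; the two H0 values are representative of the ranges quoted in the abstract.

```python
MPC_KM = 3.0857e19   # kilometres per megaparsec
YR_S = 3.156e7       # seconds per year

def hubble_time_gyr(H0_km_s_Mpc):
    """Hubble time 1/H0 in gigayears, for H0 in km/s/Mpc."""
    return MPC_KM / H0_km_s_Mpc / YR_S / 1e9

print(hubble_time_gyr(73.0))  # representative object-based value
print(hubble_time_gyr(67.5))  # representative Planck CMB value
```

The roughly 1 Gyr spread in the implied timescale is why the few-percent tension in H0 attracts so much attention.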
Beyond lensing by the cosmological constant
NASA Astrophysics Data System (ADS)
Faraoni, Valerio; Lapierre-Léonard, Marianne
2017-01-01
The long-standing problem of whether the cosmological constant directly affects the deflection of light caused by a gravitational lens is reconsidered. We use a new approach based on the Hawking quasilocal mass of a sphere grazed by light rays and on its splitting into local and cosmological parts. Previous literature, restricted to the cosmological constant, is extended to any form of dark energy accelerating the universe in which the gravitational lens is embedded.
Fundamental experiments in velocimetry
Briggs, Matthew Ellsworth; Hull, Larry; Shinas, Michael
2009-01-01
One can understand what velocimetry does and does not measure by understanding a few fundamental experiments. Photon Doppler Velocimetry (PDV) is an interferometer that will produce fringe shifts when the length of one of the legs changes, so we might expect the fringes to change whenever the distance from the probe to the target changes. However, by making PDV measurements of tilted moving surfaces, we have shown that fringe shifts from diffuse surfaces are actually measured only from the changes caused by the component of velocity along the beam. This is an important simplification in the interpretation of PDV results, arising because surface roughness randomizes the scattered phases.
NASA Astrophysics Data System (ADS)
Pisacane, Vincent L.
2005-06-01
Fundamentals of Space Systems was developed to satisfy two objectives: the first is to provide a text suitable for use in an advanced undergraduate or beginning graduate course in both space systems engineering and space system design. The second is to be a primer and reference book for space professionals wishing to broaden their capabilities to develop, manage the development of, or operate space systems. The authors of the individual chapters are practicing engineers who have had extensive experience in developing sophisticated experimental and operational spacecraft systems, in addition to experience teaching the subject material. The text presents the fundamentals of all the subsystems of a spacecraft mission and includes illustrative examples drawn from actual experience to enhance the learning experience. It includes a chapter on each of the relevant major disciplines and subsystems, including space systems engineering, space environment, astrodynamics, propulsion and flight mechanics, attitude determination and control, power systems, thermal control, configuration management and structures, communications, command and telemetry, data processing, embedded flight software, survivability and reliability, integration and test, mission operations, and the initial conceptual design of a typical small spacecraft mission.
Testing Our Fundamental Assumptions
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-06-01
Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests!
Explaining Different Arrival Times
[Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones]
Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics:
Intrinsic delay: The photons may simply have been emitted at two different times by the astrophysical source.
Delay due to Lorentz invariance violation: Perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect.
Special-relativistic delay: Maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent.
Delay due to gravitational potential: Perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect.
If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these
Thermodynamics fundamentals of energy conversion
NASA Astrophysics Data System (ADS)
Dan, Nicolae
The work reported in chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics and electrical phenomena related to the conversion of one form of energy to another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat-generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of the windings must not exceed a specified level, the power output is lower and the efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increase. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of the chapter the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in this chapter is based on exergy analysis and represents a departure from the line of inquiry that was followed in chapters 3-4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways. In the "growth" method the
Lubowitz, James H; Provencher, Matthew T; Brand, Jefferson C; Rossi, Michael J; Poehling, Gary G
2015-06-01
In 2015, Henry P. Hackett, Managing Editor, Arthroscopy, retires, and Edward A. Goss, Executive Director, Arthroscopy Association of North America (AANA), retires. Association is a positive constant, in a time of change. With change comes a need for continuing education, research, and sharing of ideas. While the quality of education at AANA and ISAKOS is superior and most relevant, the unique reason to travel and meet is the opportunity to interact with innovative colleagues. Personal interaction best stimulates new ideas to improve patient care, research, and teaching. Through our network, we best create innovation.
Division i: Fundamental Astronomy
NASA Astrophysics Data System (ADS)
McCarthy, Dennis D.; Klioner, Sergei A.; Vondrák, Jan; Evans, Dafydd Wyn; Hohenkerk, Catherine Y.; Hosokawa, Mizuhiko; Huang, Cheng-Li; Kaplan, George H.; Knežević, Zoran; Manchester, Richard N.; Morbidelli, Alessandro; Petit, Gérard; Schuh, Harald; Soffel, Michael H.; Zacharias, Norbert
2012-04-01
The goal of the division is to address the scientific issues that were developed at the 2009 IAU General Assembly in Rio de Janeiro. These are:
• Astronomical constants - Gaussian gravitational constant, Astronomical Unit, GM_Sun, geodesic precession-nutation
• Astronomical software
• Solar System ephemerides - pulsar research - comparison of dynamical reference frames
• Future optical reference frame
• Future radio reference frame
• Exoplanets - detection - dynamics
• Predictions of Earth orientation
• Units of measurement for astronomical quantities in a relativistic context
• Astronomical units in the relativistic framework
• Time-dependent ecliptic in the GCRS
• Asteroid masses
• Review of space missions
• Detection of gravitational waves
• VLBI on the Moon
• Real-time electronic access to UT1-UTC
In pursuit of these goals Division I members have made significant scientific and organizational progress, and are organizing a Joint Discussion on Space-Time Reference Systems for Future Research at the 2012 IAU General Assembly. The details of Division activities and references are provided in the individual Commission and Working Group reports in this volume. A comprehensive list of references related to the work of the Division is available at the IAU Division I website at http://maia.usno.navy.mil/iaudiv1/.
Fundamentals of zoological scaling
NASA Astrophysics Data System (ADS)
Lin, Herbert
1982-01-01
Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.
Fundamentals of gel dosimeters
NASA Astrophysics Data System (ADS)
McAuley, K. B.; Nasr, A. T.
2013-06-01
Fundamental chemical and physical phenomena that occur in Fricke gel dosimeters, polymer gel dosimeters, micelle gel dosimeters and genipin gel dosimeters are discussed. Fricke gel dosimeters are effective even though their radiation sensitivity depends on oxygen concentration. Oxygen contamination can cause severe problems in polymer gel dosimeters, even when THPC is used. Oxygen leakage must be prevented between manufacturing and irradiation of polymer gels, and internal calibration methods should be used so that contamination problems can be detected. Micelle gel dosimeters are promising due to their favourable diffusion properties. The introduction of micelles to gel dosimetry may open up new areas of dosimetry research wherein a range of water-insoluble radiochromic materials can be explored as reporter molecules.
Jackson, Neal
2007-01-01
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. In the last 20 years, much progress has been made and estimates now range between 60 and 75 km s^-1 Mpc^-1, with most now between 70 and 75 km s^-1 Mpc^-1, a huge improvement over the factor-of-2 uncertainty which used to prevail. Further improvements which gave a generally agreed margin of error of a few percent rather than the current 10% would be vital input to much other interesting cosmology. There are several programmes which are likely to lead us to this point in the next 10 years.
Uniaxial constant velocity microactuator
McIntyre, Timothy J.
1994-01-01
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling, causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment.
NASA Technical Reports Server (NTRS)
Stevens, F W
1924-01-01
This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship pv = RT for gaseous equilibrium conditions to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.
Beiu, V.
1997-04-01
In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but few things are known as to the links among them. They start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e. precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff.
Tully, R B
1993-01-01
Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that Hubble constant H0 = 90 +/- 10 km s^-1 Mpc^-1 [1 parsec (pc) = 3.09 × 10^16 m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391
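The tension with stellar ages mentioned in this abstract can be made concrete: H0 = 90 km s^-1 Mpc^-1 implies a Hubble time 1/H0 shorter than the ages then estimated for the oldest stars. A minimal sketch, using the parsec conversion given in the abstract:

```python
# H0 = 90 km/s/Mpc implies a short expansion timescale 1/H0
# (the "Hubble time"), which is why it was hard to reconcile with
# stellar ages. Uses the parsec conversion from the abstract.
PC_M = 3.09e16                    # metres per parsec
YEAR_S = 3.156e7                  # seconds per year

H0 = 90.0                         # km/s/Mpc
H0_si = H0 * 1e3 / (PC_M * 1e6)   # convert to s^-1
t_hubble = 1.0 / H0_si / YEAR_S   # Hubble time in years

print(f"1/H0 = {t_hubble / 1e9:.1f} Gyr")  # ~10.9 Gyr
```

A Hubble time of roughly 11 Gyr sits below typical age estimates for globular cluster stars, which is the difficulty the abstract alludes to.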
Uniaxial constant velocity microactuator
McIntyre, T.J.
1994-06-07
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment is disclosed. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 10 figs.
NASA Astrophysics Data System (ADS)
1984-01-01
The 1984 CPEM—the world's leading international biennial conference for electromagnetic metrology and related fundamental constants—will be held on 20-24 August 1984, at Delft University of Technology, The Netherlands. Papers are requested for CPEM 84 which describe original work, not published or previously presented, covering the design, performance or application of electromagnetic measurements, techniques, instruments or systems. In cooperation with the relevant commission of the International Union of Pure and Applied Physics (IUPAP), the Conference Committee has decided that topics on fundamental constants related to electromagnetic measurements will also be part of CPEM 84. All papers concerned with EM measurements and related fundamental constants will be considered. Papers in the following fields are regarded as particularly appropriate for this conference: EM-based fundamental constants and standards; direct current and low frequency; time and frequency; antennas and fields; microwaves and millimeter waves; (micro)computer-aided measurements; infrared, visible and ultraviolet radiation; electro-optics, fibre optics; lasers; cryo-electronics; technical calibration services. The conference language will be English. Authors are requested to submit a summary (500-1000 words) along with an abstract (maximum 50 words) to facilitate paper selection by the programme committee. The summary must describe clearly what new and significant results have been obtained and why the results are important. Summaries must be received on or before 1 February 1984 and must be sent to Prof. dr. H Postma, Technical Programme Chairman CPEM 84, Delft University of Technology, PO Box 5046, NL-2600 GA Delft, The Netherlands. Authors will be notified before 15 May 1984 whether their papers are accepted and informed of the manner of presentation and possible publication in the IEEE Trans. Instrum. Meas. conference issue.
NASA Astrophysics Data System (ADS)
Steele, A. G.; Meija, J.; Sanchez, C. A.; Yang, L.; Wood, B. M.; Sturgeon, R. E.; Mester, Z.; Inglis, A. D.
2012-02-01
The next revision to the International System of Units will emphasize the relationship between the base units (kilogram, metre, second, ampere, kelvin, candela and mole) and fundamental constants of nature (the speed of light, c, the Planck constant, h, the elementary charge, e, the Boltzmann constant, kB, the Avogadro constant, NA, etc). The redefinition cannot proceed without consistency between two complementary metrological approaches to measuring h: a 'physics' approach, using watt balances and the equivalence principle between electrical and mechanical force, and a 'chemistry' approach that can be viewed as determining the mass of a single atom of silicon. We report the first high precision physics and chemistry results that agree within 12 parts per billion: h (watt balance) = 6.626 070 63(43) × 10^-34 J s and h (silicon) = 6.626 070 55(21) × 10^-34 J s. When combined with values determined by other metrology laboratories, this work helps to constrain our knowledge of h to 20 parts per billion, moving us closer to a redefinition of the metric system used around the world.
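The quoted 12-parts-per-billion agreement between the two routes to h can be verified from the two central values given in the abstract:

```python
# Agreement between the two Planck-constant determinations quoted above:
# watt balance ("physics") vs silicon sphere ("chemistry").
h_watt = 6.62607063e-34   # J s, watt-balance value from the abstract
h_si   = 6.62607055e-34   # J s, silicon value from the abstract

rel_diff_ppb = abs(h_watt - h_si) / h_si * 1e9
print(f"relative difference = {rel_diff_ppb:.0f} ppb")  # ~12 ppb, as stated
```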
Tatara, T; Tsuzaki, K
2000-07-01
A study is conducted to determine whether the extracellular fluid (ECF) volume fraction and equivalent dielectric constant of the cell membrane epsilon m, derived from the dielectric properties of the human body can track the progression of surgical tissue injury. Frequency-dependent dielectric constants and electrical conductivities of body segments are obtained at surgical (trunk) and non-surgical sites (arm and leg) from five patients who have undergone oesophageal resections, before and at the end of surgery and on the day after the operation. The ECF volume fraction and the equivalent epsilon m of body segments are estimated by fitting the dielectric data for body segments to the cell suspension model incorporating fat tissue, and their time-course changes are compared between body segments. By the day after the operation, the estimated ECF volume fraction has increased in all body segments compared with that before surgery, by 0.13 in the arm, 0.16 in the trunk and 0.14 in the leg (p < 0.05), indicating postoperative fluid accumulation in the extracellular space. In contrast, the estimated equivalent epsilon m shows a different time course between body segments on the day after the operation, characterised by a higher change ratio of epsilon m of the trunk (1.34 +/- 0.66, p < 0.05), from that of the arm (0.66 +/- 0.34) and leg (0.61 +/- 0.11). The results suggest that the equivalent epsilon m of a body segment at a surgical site can track pathophysiological cell changes following surgical tissue injury.
NASA Astrophysics Data System (ADS)
Petitjean, Patrick; Wang, F. Y.; Wu, X. F.; Wei, J. J.
2016-12-01
Gamma-ray bursts (GRBs) are short, intense flashes at cosmological distances, and the most luminous explosions in the Universe. The high luminosities of GRBs make them detectable out to the edge of the visible universe, so they are unique tools to probe the properties of the high-redshift universe, including the cosmic expansion and dark energy, the star formation rate, the reionization epoch and the metal evolution of the Universe. First, they can be used to constrain the history of cosmic acceleration and the evolution of dark energy in a redshift range hardly achievable by other cosmological probes. Second, long GRBs are believed to be formed by the collapse of massive stars, so they can be used to derive the high-redshift star formation rate, which cannot be probed by current observations. Moreover, the use of GRBs as cosmological tools could unveil the reionization history and metal evolution of the Universe, the properties of the intergalactic medium (IGM) and the nature of the first stars in the early universe. Beyond that, GRB high-energy photons can be applied to constrain Lorentz invariance violation (LIV) and to test Einstein's Equivalence Principle (EEP). In this paper, we review the progress on GRB cosmology and fundamental physics probed by GRBs.
NASA Astrophysics Data System (ADS)
Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina
2012-03-01
Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.
NASA Technical Reports Server (NTRS)
Kuehl, H.
1947-01-01
After defining the aims and requirements to be set for a control system of gas-turbine power plants for aircraft, the report will deal with devices that prevent the quantity of fuel supplied per unit of time from exceeding the value permissible at a given moment. The general principles of the actuation of the adjustable parts of the power plant are also discussed.
Gravitational clock: A proposed experiment for the measurement of the gravitational constant G
NASA Technical Reports Server (NTRS)
Smalley, L. L.
1975-01-01
The increased importance and the fundamental significance of accurately measuring the gravitational constant G are discussed, along with recent or proposed experimental measurements of G. The method of using mutually gravitating bodies in the clock mode in a drag-free satellite is described. The proposed satellite experiment, consisting of a flat-plate spherical-mass oscillator, combines the mathematical and experimental conveniences most simply. It is estimated that accuracies of 1 part in 10^6 are easily obtainable by careful fabrication of parts. The use of cryogenic techniques, thin films, and superconductors allows increased accuracies of two or three orders of magnitude or better. These measurements can be increased to the level of 1 part in 10^11, at which point time variations, and other variations, in G can be observed.
Fundamental constraints on two-time physics
NASA Astrophysics Data System (ADS)
Piceno, E.; Rosado, A.; Sadurní, E.
2016-10-01
We show that generalizations of classical and quantum dynamics with two times lead to a fundamentally constrained evolution. At the level of classical physics, Newton's second law is extended and exactly integrated in a (1+2)-dimensional space, leading to effective single-time evolution for any initial condition. The cases 2+2 and 3+2 are also analyzed. In the domain of quantum mechanics, we follow strictly the hypothesis of probability conservation by extending the Heisenberg picture to unitary evolution with two times. As a result, the observability of two temporal axes is constrained by a generalized uncertainty relation involving level spacings, total duration of the effect and Planck's constant.
Deuteron charge radius and Rydberg constant from spectroscopy data in atomic deuterium
NASA Astrophysics Data System (ADS)
Pohl, Randolf; Nez, François; Udem, Thomas; Antognini, Aldo; Beyer, Axel; Fleurbaey, Hélène; Grinin, Alexey; Hänsch, Theodor W.; Julien, Lucile; Kottmann, Franz; Krauth, Julian J.; Maisenbacher, Lothar; Matveev, Arthur; Biraben, François
2017-04-01
We give a pedagogical description of the method to extract the charge radii and Rydberg constant from laser spectroscopy in regular hydrogen (H) and deuterium (D) atoms, that is part of the CODATA least-squares adjustment (LSA) of the fundamental physical constants. We give a deuteron charge radius r_d from D spectroscopy alone of 2.1415(45) fm. This value is independent of the measurements that lead to the proton charge radius, and five times more accurate than the value found in the CODATA Adjustment 10. The improvement is due to the use of a value for the 1S → 2S transition in atomic deuterium which can be inferred from published data or found in a PhD thesis.
Interfaces at equilibrium: A guide to fundamentals.
Marmur, Abraham
2016-05-20
The fundamentals of the thermodynamics of interfaces are reviewed and concisely presented. The discussion starts with a short review of the elements of bulk thermodynamics that are also relevant to interfaces. It continues with the interfacial thermodynamics of two-phase systems, including the definition of interfacial tension and adsorption. Finally, the interfacial thermodynamics of three-phase (wetting) systems is discussed, including the topic of non-wettable surfaces. A clear distinction is made between equilibrium conditions, in terms of minimizing energies (internal, Gibbs or Helmholtz), and equilibrium indicators, in terms of measurable, intrinsic properties (temperature, chemical potential, pressure). It is emphasized that the equilibrium indicators are the same whatever energy is minimized, if the boundary conditions are properly chosen. Also, to avoid a common confusion, a distinction is made between systems of constant volume and systems with drops of constant volume.
TASI Lectures on the cosmological constant
Bousso, Raphael; Bousso, Raphael
2007-08-30
The energy density of the vacuum, Lambda, is at least 60 orders of magnitude smaller than several known contributions to it. Approaches to this problem are tightly constrained by data ranging from elementary observations to precision experiments. Absent overwhelming evidence to the contrary, dark energy can only be interpreted as vacuum energy, so the venerable assumption that Lambda=0 conflicts with observation. The possibility remains that Lambda is fundamentally variable, though constant over large spacetime regions. This can explain the observed value, but only in a theory satisfying a number of restrictive kinematic and dynamical conditions. String theory offers a concrete realization through its landscape of metastable vacua.
Topological Quantization in Units of the Fine Structure Constant
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H. Dennis; Zhang, Shou-Cheng (Stanford U., Phys. Dept.; Stanford U., Materials Sci. Dept.; SLAC)
2011-11-11
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α = e^2/ħc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
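The fine structure constant appearing in this abstract (written there in the Gaussian form e^2/ħc) can be evaluated numerically from the SI expression α = e^2/(4πε0ħc) using CODATA values:

```python
import math

# Fine-structure constant in SI form: alpha = e^2 / (4*pi*eps0*hbar*c).
# (The abstract uses the Gaussian form e^2/(hbar*c).) CODATA values:
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 299792458.0        # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6e}, 1/alpha = {1 / alpha:.3f}")  # 1/alpha ~ 137.036
```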
Fundamentals of Space Medicine
NASA Astrophysics Data System (ADS)
Clément, Gilles
2005-03-01
A total of more than 240 human space flights have been completed to date, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This readable text presents the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardio-vascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination of both the
Fundamentals of Space Medicine
NASA Astrophysics Data System (ADS)
Clément, G.
2003-10-01
As of today, a total of more than 240 human space flights have been completed, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This book presents, in a readable text, the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardiovascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination
Precision Measurement of the Newtonian Gravitational Constant by Atom Interferometry
NASA Astrophysics Data System (ADS)
Rosi, G.; D'Amico, G.; Tino, G. M.; Cacciapuoti, L.; Prevedelli, M.; Sorrentino, F.
We report on the latest determination of the Newtonian gravitational constant G using our atom interferometry gravity gradiometer. After a short introduction on the G measurement issue we will provide a description of the experimental method employed, followed by a discussion of the experimental results in terms of sensitivity and systematic effects. Finally, prospects for future cold atom-based experiments devoted to the measurement of this fundamental constant are reported.
Improving Estimated Optical Constants With MSTM and DDSCAT Modeling
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.
2015-12-01
We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function are relatively less than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope, thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long
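The forward-scattering behavior discussed above is often summarized by the asymmetry parameter g of a model phase function. As a minimal illustration (this is not the MSTM/DDSCAT machinery itself, just a standard stand-in), the Henyey-Greenstein phase function below reproduces g as its intensity-weighted mean scattering cosine:

```python
import numpy as np

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function p(mu), normalized over 4*pi sr."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

def asymmetry_parameter(g, n=200001):
    """Recover g = <cos theta> by trapezoidal integration over mu = cos theta."""
    mu = np.linspace(-1.0, 1.0, n)
    integrand = 2.0 * np.pi * henyey_greenstein(mu, g) * mu  # dOmega = 2*pi dmu
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(mu)) / 2.0)
```

In a radiative-transfer fit, g (together with the single scattering albedo) is allowed to vary with wavelength; that wavelength dependence is what the abstract's MSTM/DDSCAT experiments set out to constrain.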
Astronomers Gain Clues About Fundamental Physics
NASA Astrophysics Data System (ADS)
2005-12-01
An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions," said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light, and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever," said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as
Fundamentals and Techniques of Nonimaging Optics
O'Gallagher, J. J.; Winston, R.
2003-07-10
This is the final report describing a long term basic research program in nonimaging optics that has led to major advances in important areas, including solar energy, fiber optics, illumination techniques, light detectors, and a great many other applications. The term "nonimaging optics" refers to the optics of extended sources in systems for which image forming is not important, but effective and efficient collection, concentration, transport, and distribution of light energy is. Although some of the most widely known developments of the early concepts have been in the field of solar energy, a broad variety of other uses have emerged. Most important, under the auspices of this program in fundamental research in nonimaging optics established at the University of Chicago with support from the Office of Basic Energy Sciences at the Department of Energy, the field has become very dynamic, with new ideas and concepts continuing to develop, while applications of the early concepts continue to be pursued. While the subject began as part of classical geometrical optics, it has been extended subsequently to the wave optics domain. Particularly relevant to potential new research directions are recent developments in the formalism of statistical and wave optics, which may be important in understanding energy transport on the nanoscale. Nonimaging optics permits the design of optical systems that achieve the maximum possible concentration allowed by physical conservation laws. The earliest designs were constructed by optimizing the collection of the extreme rays from a source to the desired target: the so-called "edge-ray" principle. Later, new concentrator types were generated by placing reflectors along the flow lines of the "vector flux" emanating from lambertian emitters in various geometries. A few years ago, a new development occurred with the discovery that making the design edge-ray a functional of some other system parameter permits the construction of whole
An Evaluation of Fundamental Schools.
ERIC Educational Resources Information Center
Weber, Larry J.; And Others
1984-01-01
When compared with regular schools in the same district, fundamental school students performed as well as or better than regular school students; fundamental schools rated better on learning climate, discipline, and suspensions; and there were no differences in student self-concept. (Author/BW)
Is Planck's quantization constant unique?
NASA Astrophysics Data System (ADS)
Livadiotis, George
2016-07-01
A cornerstone of Quantum Mechanics is the existence of a non-zero least action, the Planck constant. However, the basic concepts and theoretical developments of Quantum Mechanics are independent of its specific numerical value. A different constant h*, similar to the Planck constant h but ~12 orders of magnitude larger, characterizes plasmas. The study of more than 50 different geophysical, space, and laboratory plasmas provided the first evidence for the universality and the quantum nature of h*, revealing that it is a new quantization constant. Recent results show the diagnostics for determining whether plasmas are characterized by the Planck constant or by the new quantization constant, compounding the challenge of reconciling both quantization constants in quantum mechanics.
Rosen, M D
2005-09-30
On the Nova Laser at LLNL, we demonstrated many of the key elements required for assuring that the next laser, the National Ignition Facility (NIF), will drive an Inertial Confinement Fusion (ICF) target to ignition. The indirect drive (sometimes referred to as "radiation drive") approach converts laser light to x-rays inside a gold cylinder, which then acts as an x-ray "oven" (called a hohlraum) to drive the fusion capsule in its center. On Nova we've demonstrated good understanding of the temperatures reached in hohlraums and of the ways to control the uniformity with which the x-rays drive the spherical fusion capsules. In these lectures we will be reviewing the physics of these laser-heated hohlraums, recent attempts at optimizing their performance, and then return to the ICF problem in particular to discuss scaling of ICF gain with scale size, and to compare indirect vs. direct drive gains. In ICF, spherical capsules containing Deuterium and Tritium (DT)--the heavy isotopes of hydrogen--are imploded, creating conditions of high temperature and density similar to those in the cores of stars required for initiating the fusion reaction. When DT fuses, an alpha particle (the nucleus of a helium atom) and a neutron are created, releasing large amounts of energy. If the surrounding fuel is sufficiently dense, the alpha particles are stopped and can heat it, allowing a self-sustaining fusion burn to propagate radially outward and a high-gain fusion micro-explosion ensues. To create those conditions the outer surface of the capsule is heated (either directly by a laser or indirectly by laser-produced x-rays) to cause rapid ablation and outward expansion of the capsule material. A rocket-like reaction to that outward flowing heated material leads to an inward implosion of the remaining part of the capsule shell. The pressure generated on the outside of the capsule can reach nearly 100 megabar (100 million times atmospheric pressure [1 bar = 10^6 cgs
The Search for Universal Constants and the Birth of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Robotti, Nadia; Badino, Massimiliano
The origin of quantum theory and Max Planck's theoretical work are without doubt two of the most frequently quoted episodes in the history of quantum physics, for the obvious reason that they represented the first steps in its formulation. Paradoxically, however, there are relatively few specific studies of Planck, and those differ on a range of questions. In our opinion this is due to the extremely synthetic nature of some of Planck's papers, and especially the "fundamentals" of October and December 1900. Faced with such brevity, a number of historians of science and philosophers have preferred to give a comprehensive analysis of the landmarks in Planck's work, often resorting to a more or less retrospective reconstruction process rather than attempting to build an all-embracing vision of Planck's work as a whole. In this paper we have therefore attempted to rebuild Planck's steps from 1899 to 1900. An analysis of this type shows that Planck's work has a profound internal unity throughout the entire period leading up to the discovery of the "quantum of energy". In our opinion a key to interpreting the mutual relationships between the various parts and stages of the theory in an intelligible manner is provided by Planck's interest in universal constants. This interest was grounded in two factors: 1) universal constants gave the entire theory a precise physical meaning, 2) they could be used to build a universal system of units of measurement. In particular we show that various pairs of constants are a clear feature of Planck's treatment of the blackbody problem throughout the period in question, and that for Planck the appearance of these constants in the distribution law represented a fundamental criterion, so much so that it inevitably played a key role in what has been defined as the crucial moment of the entire process: the decision to use a probabilistic definition of entropy.
QCD coupling constants and VDM
Erkol, G.; Ozpineci, A.; Zamiralov, V. S.
2012-10-23
QCD sum rules for coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, with the use of the vector-meson-dominance model, related to the coupling constants. The role of VDM as a criterion of the mutual validity of the sum rules is considered.
Constant-Pressure Hydraulic Pump
NASA Technical Reports Server (NTRS)
Galloway, C. W.
1982-01-01
Constant output pressure in a gas-driven hydraulic pump would be assured in a new design for a gas-to-hydraulic power converter. With a force-multiplying ring attached to the gas piston, the expanding gas would apply constant force on the hydraulic piston even though gas pressure drops. As a result, the pressure of the hydraulic fluid remains steady, and the power output of the pump does not vary.
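The principle is simple enough to check with a few lines of arithmetic. In this sketch (function names and numbers are hypothetical, not from the NASA design), the ring's mechanical advantage is chosen to rise exactly as the gas pressure falls, so the hydraulic output stays constant:

```python
def gas_pressure(p0, v0, v):
    """Isothermal expansion: gas pressure falls as the piston advances."""
    return p0 * v0 / v

def required_multiplier(p0, p_gas):
    """Mechanical advantage the ring must supply to offset the pressure drop."""
    return p0 / p_gas

def hydraulic_pressure(p_gas, area_gas, area_hyd, multiplier):
    """Hydraulic pressure from a gas piston acting through a force multiplier."""
    return p_gas * area_gas * multiplier / area_hyd
```

Doubling the gas volume halves the gas pressure but doubles the required multiplier, so the product, and hence the hydraulic pressure, is unchanged.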
Fundamental principles of particle detectors
Fernow, R.C.
1988-01-01
This paper goes through the fundamental physics of particle-matter interactions that is necessary for detecting these particles with detectors. A listing of 41 concepts and detector principles is given. 14 refs., 11 figs.
Ablative Thermal Protection System Fundamentals
NASA Technical Reports Server (NTRS)
Beck, Robin A. S.
2013-01-01
This is the presentation for a short course on the fundamentals of ablative thermal protection systems. It covers the definition of ablation, description of ablative materials, how they work, how to analyze them and how to model them.
Fundamental physics at the threshold of discovery
NASA Astrophysics Data System (ADS)
Toro, Natalia
This thesis is divided into two parts: one driven by theory, the other by experiment. The first two chapters consider two model-building challenges: the little hierarchy of supersymmetry and the slowness of confinement in Randall-Sundrum models. In the third chapter, we turn to the question of determining the nature of fundamental physics at the TeV scale from LHC data. Crucial to this venture is a characterization for models of new physics. We present On-Shell Effective Theories (OSETs), a characterization of hadron collider data in terms of masses, production cross sections, and decay modes of new particles. We argue that such a description can likely be obtained from ≲ 1 year of LHC data, and in many scenarios is an essential intermediate step in describing fundamental physics at the TeV scale.
Fundamental mechanisms of micromachine reliability
De Boer, Maarten P.; Sniegowski, Jeffry J.; Knapp, James A.; Redmond, James M.; Michalske, Terry A.; Mayer, Thomas K.
2000-01-01
Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is independent of
Hydrogenlike highly charged ions for tests of the time independence of fundamental constants.
Schiller, S
2007-05-04
Hyperfine transitions in the electronic ground state of cold, trapped hydrogenlike highly charged ions have attractive features for use as frequency standards because the majority of systematic frequency shifts are smaller by orders of magnitude compared to many microwave and optical frequency standards. Frequency measurements of these transitions hold promise for significantly improved laboratory tests of local position invariance of the electron and quark masses.
Fundamental ignition study for material fire safety improvement, part 1
NASA Technical Reports Server (NTRS)
Paciorek, K. L.; Zung, L. B.
1970-01-01
The investigation of preignition, ignition, and combustion characteristics of Delrin (acetate terminated polyformaldehyde) and Teflon (polytetrafluoroethylene) resins in air and oxygen is presented. The determination of ignition limits and their dependence on temperature and the oxidizing media, as well as the analyses of the volatiles produced, were studied. Tests were conducted in argon, an inert medium in which only purely pyrolytic reactions can take place, using the stagnation burner arrangement designed and constructed for this purpose. A theoretical treatment of the ignition and combustion phenomena was devised. In the case of Delrin, the ignition and ignition delays are apparently independent of the gas (air, oxygen) temperatures. The results indicate that hydrogen is the ignition triggering agent. Teflon ignition limits were established in oxygen only.
Fundamental performance differences of CMOS and CCD imagers: part V
NASA Astrophysics Data System (ADS)
Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff
2013-02-01
Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on an ultra large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS/CCD proton and electron radiation damage data for dose levels up to 10 Mrad, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.
Fundamental ignition study for material fire safety improvement, part 2
NASA Technical Reports Server (NTRS)
Paciorek, K. L.; Kratzer, R. H.; Kaufman, J.
1971-01-01
The autoignition behavior of polymeric compositions in oxidizing media was investigated, as well as the nature and relative concentration of the volatiles produced during oxidative decomposition culminating in combustion. The materials investigated were Teflon, Fluorel KF-2140 raw gum and its compounded versions Refset and Ladicote, 45B3 intumescent paint, and Ames isocyanurate foam. The majority of the tests were conducted using a stagnation burner arrangement which provided a laminar gas flow and allowed the sample block and gas temperatures to be varied independently. The oxidizing atmospheres were essentially air and oxygen, although in the case of the Fluorel family of materials, due to partial blockage of the gas inlet system, some tests were performed unintentionally in enriched air (not oxygen). The 45B3 paint was not amenable to sampling in a dynamic system, due to its highly intumescent nature. Consequently, selected experiments were conducted using a sealed tube technique in both air and oxygen media.
Distributed detection with multiple sensors: Part I - fundamentals
Viswanathan, R.; Varshney, P.K.
1997-01-01
In this paper, basic results on distributed detection are reviewed. In particular, the authors consider the parallel and the serial architectures in some detail and discuss the decision rules obtained from their optimization based on the Neyman-Pearson (NP) criterion and the Bayes formulation. For conditionally independent sensor observations, the optimality of the likelihood ratio test (LRT) at the sensors is established. General comments on several important issues are made including the computational complexity of obtaining the optimal solutions, the design of detection networks with more general topologies, and applications to different areas.
Fluid property programs. Part 3. Program determines gas constants
Meehan, D.N.
1980-11-24
A calculator program written for the HP 67/97 programmable calculator uses gas-gravity data to quickly determine the pseudocritical properties of a reservoir gas, with corrections for the presence of N2, CO2, and H2S. The program is based on equations for pressure and temperature developed by Standing and Katz and by Wichert and Aziz.
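The calculation the program performs can be sketched in a few lines. The coefficients below are the commonly quoted Standing correlations and the Wichert-Aziz sour-gas correction; treat them as illustrative stand-ins rather than the exact fits coded in the HP 67/97 program:

```python
def pseudocritical(gas_gravity):
    """Standing-style correlations for a hydrocarbon gas.
    Returns pseudocritical temperature (deg R) and pressure (psia)."""
    tpc = 168.0 + 325.0 * gas_gravity - 12.5 * gas_gravity**2
    ppc = 677.0 + 15.0 * gas_gravity - 37.5 * gas_gravity**2
    return tpc, ppc

def wichert_aziz(tpc, ppc, y_co2, y_h2s):
    """Wichert-Aziz correction for sour gas (CO2 and H2S mole fractions)."""
    a = y_co2 + y_h2s
    b = y_h2s
    eps = 120.0 * (a**0.9 - a**1.6) + 15.0 * (b**0.5 - b**4)
    tpc_c = tpc - eps
    ppc_c = ppc * tpc_c / (tpc + b * (1.0 - b) * eps)
    return tpc_c, ppc_c
```

For a 0.7-gravity gas the uncorrected values come out near 389 deg R and 669 psia; adding CO2 and H2S lowers both, which is the direction the Wichert-Aziz correction is designed to capture.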
Oxygen Michaelis constants for tyrosinase.
Rodríguez-López, J N; Ros, J R; Varón, R; García-Cánovas, F
1993-01-01
The Michaelis constant of tyrosinase for oxygen in the presence of monophenols and o-diphenols, which generate a cyclizable o-quinone, has been studied. This constant depends on the nature of the monophenol and o-diphenol and is always lower in the presence of the former than of the latter. From the mechanism proposed for tyrosinase and from its kinetic analysis [Rodríguez-López, J. N., Tudela, J., Varón, R., García-Carmona, F. and García-Cánovas, F. (1992) J. Biol. Chem. 267, 3801-3810] a quantitative ratio has been established between the Michaelis constants for oxygen in the presence of monophenols and their o-diphenols. This ratio is used for the determination of the Michaelis constant for oxygen with monophenols when its value cannot be calculated experimentally. PMID:8352753
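The strategy in the abstract (measure the oxygen Michaelis constant with the o-diphenol, then scale by a mechanistically derived ratio to obtain the monophenol value) reduces to simple Michaelis-Menten bookkeeping. A minimal sketch; the ratio used in the test is a placeholder, not the paper's value:

```python
def mm_velocity(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def km_monophenol(km_diphenol, ratio):
    """Infer the experimentally inaccessible Km(O2) with monophenols from the
    measured o-diphenol value via a mechanistic ratio Km(mono)/Km(di).
    'ratio' is a hypothetical stand-in for the paper's derived quantity."""
    return km_diphenol * ratio
```

Consistent with the abstract, a ratio below 1 yields a lower Km(O2) in the presence of the monophenol than of the o-diphenol.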
Avogadro's Number and Avogadro's Constant
ERIC Educational Resources Information Center
Davies, R. O.
1973-01-01
Discusses three possible methods of thinking about the implications of the definitions of the Avogadro constant and number. Indicates that there is only one way to arrive at a simple and standard conclusion. (CC)
Fundamental Physics from Observations of White Dwarf Stars
NASA Astrophysics Data System (ADS)
Bainbridge, M. B.; Barstow, M. A.; Reindl, N.; Barrow, J. D.; Webb, J. K.; Hu, J.; Preval, S. P.; Holberg, J. B.; Nave, G.; Tchang-Brillet, L.; Ayres, T. R.
2017-03-01
Variations in fundamental constants provide an important test of theories of grand unification. Potentially, white dwarf spectra allow us to directly observe variation in fundamental constants at locations of high gravitational potential. We study hot, metal-polluted white dwarf stars, combining far-UV spectroscopic observations, atomic physics, atmospheric modelling and fundamental physics, in the search for variation in the fine structure constant. This registers as small but measurable shifts in the observed wavelengths of highly ionized Fe and Ni lines when compared to laboratory wavelengths. Measurements of these shifts were performed by Berengut et al (2013) using high-resolution STIS spectra of G191-B2B, demonstrating the validity of the method. We have extended this work by: (a) using new (high precision) laboratory wavelengths, (b) refining the analysis methodology (incorporating robust techniques from previous studies towards quasars), and (c) enlarging the sample of white dwarf spectra. A successful detection would be the first direct measurement of a gravitational field effect on a bare constant of nature. We describe our approach and present preliminary results.
The fundamental plane correlations for globular clusters
NASA Technical Reports Server (NTRS)
Djorgovski, S.
1995-01-01
In the parameter space whose axes include a radius (core, or half-light), a surface brightness (central, or average within the half-light radius), and the central projected velocity dispersion, globular clusters lie on a two-dimensional surface (a plane, if the logarithmic quantities are used). This is analogous to the 'fundamental plane' of elliptical galaxies. The implied bivariate correlations are the best now known for globular clusters. The derived scaling laws for the core properties imply that cluster cores are fully virialized, homologous systems, with a constant (M/L) ratio. The corresponding scaling laws on the half-light scale are different, but are nearly identical to those derived from the 'fundamental plane' of ellipticals. This may be due to the range of cluster concentrations, which are correlated with other parameters. A similar explanation for elliptical galaxies may be viable. These correlations provide new empirical constraints for models of globular cluster formation and evolution, and may also be usable as rough distance-indicator relations for globular clusters.
Geophysics Fatally Flawed by False Fundamental Philosophy
NASA Astrophysics Data System (ADS)
Myers, L. S.
2004-05-01
For two centuries scientists have failed to realize Laplace's nebular hypothesis (1796) of Earth's creation is false. As a consequence, geophysicists today are misinterpreting and miscalculating many fundamental aspects of the Earth and Solar System. Why scientists have deluded themselves for so long is a mystery. The greatest error is the assumption Earth was created 4.6 billion years ago as a molten protoplanet in its present size, shape and composition. This assumption ignores daily accretion of more than 200 tons/day of meteorites and dust, plus unknown volumes of solar insolation that created coal beds and other biomass that increased Earth's mass and diameter over time! Although the volume added daily is minuscule compared with Earth's total mass, logic and simple addition mandate an increase in mass, diameter and gravity. Increased diameter from accretion is proved by Grand Canyon stratigraphy that shows a one kilometer increase in depth and planetary radius at a rate exceeding three meters (10 ft) per Ma from start of the Cambrian (540 Ma) to end of the Permian (245 Ma), each layer deposited onto Earth's surface. This is unequivocal evidence of passive external growth by accretion, part of a dual growth and expansion process called "Accreation" (creation by accretion). Dynamic internal core expansion, the second stage of Accreation, did not commence until the protoplanet reached spherical shape at 500-600 km diameter. At that point, gravity-powered compressive heating initiated core melting and internal expansion. Expansion quickly surpassed the external accretion growth rate and produced surface volcanoes to relieve explosive internal tectonic pressure and transfer excess mass (magma) to the surface. Then, 200-250 Ma, expansion triggered Pangaea's breakup, first sundering Asia and Australia to form the Pacific Ocean, followed by North and South America to form the Atlantic Ocean, by the mechanism of midocean ridges, linear underwater
Effect of Fundamental Frequency on Judgments of Electrolaryngeal Speech
ERIC Educational Resources Information Center
Nagle, Kathy F.; Eadie, Tanya L.; Wright, Derek R.; Sumida, Yumi A.
2012-01-01
Purpose: To determine (a) the effect of fundamental frequency (f0) on speech intelligibility, acceptability, and perceived gender in electrolaryngeal (EL) speakers, and (b) the effect of known gender on speech acceptability in EL speakers. Method: A 2-part study was conducted. In Part 1, 34 healthy adults provided speech recordings using…
Constant fields and constant gradients in open ionic channels.
Chen, D P; Barcilon, V; Eisenberg, R S
1992-01-01
Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between an ion and the charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID:1376159
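The "traditional constant field equations" recovered in the small-induced-charge limit are the Goldman-Hodgkin-Katz relations. A minimal single-species sketch for comparison (SI units; the permeability and concentrations in the usage below are illustrative, not from the paper):

```python
import math

F = 96485.332   # Faraday constant, C/mol
R = 8.314462    # gas constant, J/(mol K)

def ghk_current(v, z, p, c_in, c_out, temp=298.15):
    """GHK 'constant field' current density (A/m^2) for one ion species.
    v in volts, permeability p in m/s, concentrations in mol/m^3."""
    u = z * F * v / (R * temp)
    if abs(u) < 1e-9:                      # v -> 0 limit of the GHK flux
        return p * z * F * (c_in - c_out)
    return p * z * F * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

def reversal_potential(z, c_in, c_out, temp=298.15):
    """Nernst potential: the voltage at which the GHK current vanishes."""
    return (R * temp / (z * F)) * math.log(c_out / c_in)
```

At the reversal potential the GHK current is zero by construction, which is one of the plots (reversal potential vs. concentration) the paper compares against its dielectric theory.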
Astrophysical probes of fundamental physics
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.
2009-10-01
I review the motivation for varying fundamental couplings and discuss how these measurements can be used to constrain fundamental physics scenarios that would otherwise be inaccessible to experiment. I highlight the current controversial evidence for varying couplings and present some new results. Finally I focus on the relation between varying couplings and dark energy, and explain how varying coupling measurements might be used to probe the nature of dark energy, with some advantages over standard methods. In particular I discuss what can be achieved with future spectrographs such as ESPRESSO and CODEX.
Effective cosmological constant induced by stochastic fluctuations of Newton's constant
NASA Astrophysics Data System (ADS)
de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi
2016-09-01
We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.
Frequency-constant Q, unity and disorder
Hargreaves, N.D.
1995-12-31
In exploration geophysics we obtain information about the earth by observing its response to different types of applied force. The response can cover the full range of possible Q values (where Q, the quality factor, is a measure of energy dissipation), from close to infinity in the case of deep crustal seismic to close to 0 in the case of many electromagnetic methods. When Q is frequency-constant, however, the various types of response have a common scaling behavior and can be described as being self-affine. The wave equation then takes on a generalised form, changing from the standard wave equation at Q = ∞ to the diffusion equation at Q = 0, via lossy, diffusive propagation at intermediate Q values. Solutions of this wave-diffusion equation at any particular Q value can be converted to an equivalent set of results for any other Q value. In particular it is possible to convert from diffusive to wave propagation by a mapping from Q < ∞ to Q = ∞. In the context of seismic sounding this is equivalent to applying inverse Q-filtering; in a more general context the mapping integrates different geophysical observations by referencing them to the common result at Q = ∞. The self-affinity of the observations for frequency-constant Q is an expression of scale invariance in the fundamental physical properties of the medium of propagation, whether the mechanism of diffusive propagation is scattering or intrinsic attenuation. Scale invariance, or fractal scaling, is a general property of disordered systems; the assumption of frequency-constant Q not only implies a unity between different geophysical observations, but also suggests that it is the disordered nature of the earth's sub-surface that is the unifying factor.
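The frequency-constant-Q amplitude decay underlying this unification is A(f) = exp(−πft/Q), which interpolates between lossless wave propagation (Q = ∞) and strong diffusion-like dissipation (Q → 0). A minimal amplitude-only sketch (the dispersive phase term of a causal Q model is omitted, and the function name is my own illustration):

```python
# Frequency-constant-Q amplitude attenuation, A(f) = exp(-pi * f * t / Q).
import numpy as np

def constant_q_attenuation(trace, dt, traveltime, Q):
    """Attenuate a sampled trace as if propagated for `traveltime` seconds
    through a medium with frequency-independent quality factor Q."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)                 # frequencies, Hz
    spectrum = np.fft.rfft(trace)
    spectrum *= np.exp(-np.pi * freqs * traveltime / Q)
    return np.fft.irfft(spectrum, n=n)
```

Setting Q = np.inf reproduces the input trace (the lossless wave limit); inverse Q-filtering, as mentioned in the abstract, amounts to dividing by the same operator instead of multiplying.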
Brake Fundamentals. Automotive Articulation Project.
ERIC Educational Resources Information Center
Cunningham, Larry; And Others
Designed for secondary and postsecondary auto mechanics programs, this curriculum guide contains learning exercises in seven areas: (1) brake fundamentals; (2) brake lines, fluid, and hoses; (3) drum brakes; (4) disc brake system and service; (5) master cylinder, power boost, and control valves; (6) parking brakes; and (7) trouble shooting. Each…
Light as a Fundamental Particle
ERIC Educational Resources Information Center
Weinberg, Steven
1975-01-01
Presents two arguments concerning the role of the photon. One states that the photon is just another particle distinguished by a particular value of charge, spin, mass, lifetime, and interaction properties. The second states that the photon plays a fundamental role with a deep relation to ultimate formulas of physics. (GS)
Environmental Law: Fundamentals for Schools.
ERIC Educational Resources Information Center
Day, David R.
This booklet outlines the environmental problems most likely to arise in schools. An overview provides a fundamental analysis of environmental issues rather than comprehensive analysis and advice. The text examines the concerns that surround superfund cleanups, focusing on the legal framework, and furnishes some practical pointers, such as what to…
Fundamentals of the Slide Library.
ERIC Educational Resources Information Center
Boerner, Susan Zee
This paper is an introduction to the fundamentals of the art (including architecture) slide library, with some emphasis on basic procedures of the science slide library. Information in this paper is particularly relevant to the college, university, and museum slide library. Topics addressed include: (1) history of the slide library; (2) duties of…
Fundamentals of Environmental Education. Report.
ERIC Educational Resources Information Center
1976
An outline of fundamental definitions, relationships, and human responsibilities related to environment provides a basis from which a variety of materials, programs, and activities can be developed. The outline can be used in elementary, secondary, higher education, or adult education programs. The framework is based on principles of the science…
Fundamentals of Welding. Teacher Edition.
ERIC Educational Resources Information Center
Fortney, Clarence; And Others
These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…
Fundamentals of Microelectronics Processing (VLSI).
ERIC Educational Resources Information Center
Takoudis, Christos G.
1987-01-01
Describes a 15-week course in the fundamentals of microelectronics processing in chemical engineering, which emphasizes the use of very large scale integration (VLSI). Provides a listing of the topics covered in the course outline, along with a sample of some of the final projects done by students. (TW)
FUNdamental Movement in Early Childhood.
ERIC Educational Resources Information Center
Campbell, Linley
2001-01-01
Noting that the development of fundamental movement skills is basic to children's motor development, this booklet provides a guide for early childhood educators in planning movement experiences for children between 4 and 8 years. The booklet introduces a wide variety of appropriate practices to promote movement skill acquisition and increased…
The Fundamental Manifold of Spheroids
NASA Astrophysics Data System (ADS)
Zaritsky, Dennis; Gonzalez, Anthony H.; Zabludoff, Ann I.
2006-02-01
We present a unifying empirical description of the structural and kinematic properties of all spheroids embedded in dark matter halos. We find that the intracluster stellar spheroidal components of galaxy clusters, which we call cluster spheroids (CSphs) and which are typically 100 times the size of normal elliptical galaxies, lie on a ``fundamental plane'' as tight as that defined by elliptical galaxies (rms in effective radius of ~0.07) but having a different slope. The slope, as measured by the coefficient of the logσ term, declines significantly and systematically between the fundamental planes of ellipticals, brightest cluster galaxies (BCGs), and CSphs. We attribute this decline primarily to a continuous change in Me/Le, the mass-to-light ratio within the effective radius re, with spheroid scale. The magnitude of the slope change requires that it arise principally from differences in the relative distributions of luminous and dark matter, rather than from stellar population differences such as in age and metallicity. By expressing the Me/Le term as a function of σ in the simple derivation of the fundamental plane and requiring the behavior of that term to mimic the observed nonlinear relationship between logMe/Le and logσ, we simultaneously fit a two-dimensional manifold to the measured properties of dwarf elliptical and elliptical galaxies, BCGs, and CSphs. The combined data have an rms scatter in logre of 0.114 (0.099 for the combination of ellipticals, BCGs, and CSphs), which is modestly larger than each fundamental plane has alone, but which includes the scatter introduced by merging different studies done in different filters by different investigators. This ``fundamental manifold'' fits the structural and kinematic properties of spheroids that span a factor of 100 in σ and 1000 in re. While our mathematical form is neither unique nor derived from physical principles, the tightness of the fit leaves little room for improvement by other unification
Distributed Low Temperature Combustion: Fundamental Understanding of Combustion Regime Transitions
2016-09-07
study is to bring fundamental understanding of the impact of the chemical (Tau_c) and flow (Tau_f) timescales on combustion regime transitions in...reaction zone regime. The choice of DME is partly due to the potential practical relevance, but also due to the fundamentally different chemical ... chemical mechanisms for the considered fuels (e.g. DME) to establish their ability to reproduce laminar flame and auto-ignition properties. The
Cosmological constant from quantum spacetime
NASA Astrophysics Data System (ADS)
Majid, Shahn; Tao, Wen-Qing
2015-06-01
We show that a hypothesis that spacetime is quantum with coordinate algebra [x_i, t] = λ_P x_i, and spherical symmetry under rotations of the x_i, essentially requires in the classical limit that the spacetime metric is the Bertotti-Robinson metric, i.e., a solution of Einstein's equations with a cosmological constant and a non-null electromagnetic field. Our arguments do not give the value of the cosmological constant or the Maxwell field strength, but they cannot both be zero. We also describe the quantum geometry and the full moduli space of metrics that can emerge as classical limits from this algebra.
On flows having constant vorticity
NASA Astrophysics Data System (ADS)
Roberts, Paul H.; Wu, Cheng-Chin
2011-10-01
Constant vorticity flows of a uniform fluid in a rigid ellipsoidal container rotating at a variable rate are considered. These include librationally driven and precessionally driven flows. The well-known Poincaré solution for precessionally driven flow in a spheroid is generalized to an ellipsoid with unequal principal axes. The dynamic stability of these flows is investigated, and of other flows in which the angular velocity of the container is constant in time. Solutions for the Chandler wobble are discussed. The role of an invariant, called here the Helmholtzian, is examined.
Vibrational force constants for acetaldehyde
NASA Astrophysics Data System (ADS)
Nikolova, B.
1990-05-01
The vibrational force field of ethanal (acetaldehyde), CH3CHO, is refined by using procedures with differential increments for the force constants (Commun. Dep. Chem., Bulg. Acad. Sci., 21/3 (1988) 433). The characteristic general valence force constants of the high-dimensional symmetry classes of ethanal, A' of tenth and A″ of fifth order, are determined from the experimental assignment of bands. The low barrier to hindered internal rotation about the single carbon-carbon bond is quantitatively estimated on the grounds of a normal vibrational analysis.
Cosmologies with variable gravitational constant
Narlikar, J.V.
1983-03-01
In 1937 Dirac presented an argument, based on the so-called large dimensionless numbers, which led him to the conclusion that the Newtonian gravitational constant G changes with epoch. Towards the end of the 19th century Ernst Mach had given plausible arguments linking the property of inertia of matter to the large-scale structure of the universe. Mach's principle also leads to cosmological models with a variable gravitational constant. Three cosmologies which predict a variable G are discussed in this paper from both theoretical and observational points of view.
van Gemert, M J; Lucassen, G W; Welch, A J
1996-08-01
The thermal response of a semi-infinite medium in air, irradiated by laser light in a cylindrical geometry, cannot accurately be approximated by single radial and axial time constants for heat conduction. This report presents an analytical treatment of heat conduction in which the thermal response is expressed in terms of distributions over radial and axial time constants. The source term for heat production is written as the product of a Gaussian-shaped radial term and an exponentially shaped axial term. The two terms are expanded in integrals over eigenfunctions of the radial and axial parts of the Laplace heat conduction operator. The result is a double integral over the coupled distributions of the two time constants to compute the temperature rise as a function of time and of axial and radial positions. The distribution of axial time constants is a homogeneous, slowly decreasing function of spatial frequency (v), indicating that one single axial time constant cannot reasonably characterize axial heat conduction. The distribution of radial time constants is a function centred around a distinguished maximum in the spatial frequency (lambda) close to the single radial time constant value used previously. This suggests that one radial time constant to characterize radial heat conduction may be a useful concept. Special cases have been evaluated analytically, such as short and long irradiation times, axial or radial heat conduction (shallow or deep penetrating laser beams) and, especially, thermal relaxation (cooling) of the tissue. For shallow penetrating laser beams the asymptotic cooling rate is confirmed to be proportional to [t^0.5 − (t−tL)^0.5], which approaches 1/t^0.5 for t ≫ tL, where t is the time and tL is the laser pulse duration. For deep penetrating beams it is proportional to 1/(t−tL). For intermediate penetration, i.e. penetration depths about equal to spot-size diameters, it is proportional to 1/(t−tL)^1.5. The double integral has been evaluated
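The asymptotic cooling laws in this abstract (shallow beams ∝ t^0.5 − (t−tL)^0.5, deep beams ∝ 1/(t−tL), intermediate penetration ∝ 1/(t−tL)^1.5) can be encoded directly; a small sketch with my own function names, checking the shallow-beam 1/√t limit numerically:

```python
# Asymptotic cooling laws quoted in the abstract, encoded directly.
import math

def shallow_cooling(t, t_L):
    """Shallow-penetration cooling factor: t^0.5 - (t - t_L)^0.5."""
    return math.sqrt(t) - math.sqrt(t - t_L)

def deep_cooling(t, t_L):
    """Deep-penetration cooling factor: 1 / (t - t_L)."""
    return 1.0 / (t - t_L)

def intermediate_cooling(t, t_L):
    """Penetration depth comparable to spot size: 1 / (t - t_L)^1.5."""
    return (t - t_L) ** -1.5

# For t >> t_L, sqrt(t) - sqrt(t - t_L) ~ t_L / (2 sqrt(t)): a 1/sqrt(t) decay.
```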
Fundamentals of Managing Reference Collections
ERIC Educational Resources Information Center
Singer, Carol A.
2012-01-01
Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…
ERIC Educational Resources Information Center
Ford, T. A.
1979-01-01
In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…
Variations of the solar constant
Sofia, S.
1981-12-01
The variations in data received from rocket-borne and balloon-borne instruments are discussed. Indirect techniques to measure and monitor the solar constant are presented. Emphasis is placed on the correlation of data from the Solar Maximum Mission and the Nimbus 7 satellites. Abstracts of individual items from the workshop were prepared separately for the data base.
Astrophysical Probes of Fundamental Physics
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.
I review the theoretical motivation for varying fundamental couplings and discuss how these measurements can be used to constrain a number of fundamental physics scenarios that would otherwise be inaccessible to experiment. As a case study I will focus on the relation between varying couplings and dark energy, and explain how varying coupling measurements can be used to probe the nature of dark energy, with important advantages over the standard methods. Assuming that the current observational evidence for varying α and μ is correct, a several-sigma detection of dynamical dark energy is feasible within a few years, using currently operational ground-based facilities. With forthcoming instruments like CODEX, a high-accuracy reconstruction of the equation of state may be possible all the way up to redshift z ˜ 4.
Fundamental neutron physics at LANSCE
Greene, G.
1995-10-01
Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter, with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously from the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.
DOE Fundamentals Handbook: Classical Physics
Not Available
1992-06-01
The Classical Physics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of physical forces and their properties. The handbook includes information on the units used to measure physical properties; vectors, and how they are used to show the net effect of various forces; Newton's Laws of motion, and how to use these laws in force and motion applications; and the concepts of energy, work, and power, and how to measure and calculate the energy involved in various applications. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility systems and equipment.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... Regulation Supplement: Release of Fundamental Research Information (DFARS Case 2012-D054) AGENCY: Defense... relating to the release of fundamental research information. This rule was previously published as part of... fundamental research projects and not safeguarding. This rule was initiated to implement guidance provided...
Microplasmas: from applications to fundamentals
NASA Astrophysics Data System (ADS)
Nguon, Olivier; Huang, Sisi; Gauthier, Mario; Karanassios, Vassili
2014-05-01
Microplasmas are receiving increasing attention in the scientific literature and in recent conferences. Yet, few analytical applications of microplasmas for elemental analysis using liquid samples have been described in the literature. To address this, we describe two applications: one involves the determination of Zn in microsamples of the metallo-enzyme Super Oxide Dismutase. The other involves determination of Pd-concentration in microsamples of Pd nanocatalysts. These applications demonstrate the potential of microplasmas and point to the need for future fundamental studies.
Constant-bandwidth constant-temperature hot-wire anemometer.
Ligeza, P
2007-07-01
A constant-temperature anemometer (CTA) enables the measurement of fast-changing velocity fluctuations. In the classical CTA, the transmission band is a function of flow velocity. This is a minor drawback when the mean flow velocity does not change significantly, but it can lead to dynamic errors when flow velocity varies over a considerable range. A modification is outlined whereby an adaptive controller is incorporated in the CTA system so that the anemometer's transmission band remains constant as a function of flow velocity. For that purpose, a second feedback loop is provided, and the output signal from the anemometer regulates the controller's parameters so that the transmission bandwidth remains constant. The mathematical model of a CTA that has been developed, together with model testing data, allows a thorough evaluation of the proposed solution. A modified anemometer can be used in measurements of high-frequency variable flows over a wide range of velocities. The proposed modification allows the minimization of dynamic measurement errors.
The spectroscopic constants and anharmonic force field of AgSH: An ab initio study.
Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang
2016-07-05
The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91 and MP2 methods employing two basis sets, TZP and QZP. The calculated geometries, ground-state rotational constants, harmonic vibrational wavenumbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP results are in good agreement with experimental observation and that MP2/TZP is an advisable choice for studying the anharmonic force field of AgSH.
The Not so Constant Gravitational "Constant" G as a Function of Quantum Vacuum
NASA Astrophysics Data System (ADS)
Maxmilian Caligiuri, Luigi
Gravitation is still the least understood of the fundamental forces of Nature. The ultimate physical origin of its ruling constant G could give key insights into this understanding. According to Einstein's Theory of General Relativity, a massive body determines a gravitational potential that alters the speed of light, the clock rate, and the particle size as a function of the distance from its own center. On the other hand, it has been shown that the presence of mass determines a modification of the Zero-Point Field (ZPF) energy density within its volume and in the space surrounding it. All these considerations strongly suggest that the constant G could also be expressed as a function of the quantum vacuum energy density, somehow depending on the distance from the mass whose presence modifies the ZPF energy structure. In this paper, starting from a constitutive medium-based picture of space, a model is formulated of the gravitational constant G as a function of Planck's time and the quantum vacuum energy density, in turn depending on the radial distance from the center of the mass originating the gravitational field, supposed spherically symmetric. According to this model, in which gravity arises from the unbalanced physical vacuum pressure, the gravitational "constant" G is not truly unchanging but varies slightly as a function of the distance from the mass source of the gravitational potential itself. An approximate analytical form of this dependence is discussed. The proposed model, apart from potentially having deep theoretical consequences for the commonly accepted picture of physical reality (from cosmology to matter stability), could also give the theoretical basis for hitherto unthinkable applications related, for example, to the field of gravity control and space propulsion.
Omura, Yoshiaki; Lu, Dominic P; Jones, Marilyn; O'Young, Brian; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu
2011-01-01
The expression of the longevity gene, Sirtuin 1, was non-invasively measured using the Electro-Magnetic Field (EMF) resonance phenomenon between a known amount of polyclonal antibody of the C-terminal of Sirtuin 1 and the Sirtuin 1 molecule inside the body. Our measurement of over 100 human adult males and females, ranging in age from 20 to 122 years, indicated that the majority of subjects had Sirtuin 1 levels of 5-10 pg BDORT units in most parts of the body. When Sirtuin 1 was less than 1 pg, the majority of the people had various degrees of tumors or other serious diseases. When Sirtuin 1 levels were less than 0.25 pg BDORT units, a high incidence of AIDS was also detected. Very few people had Sirtuin 1 levels of over 25 pg BDORT units in most parts of the body. We selected 7 internationally recognized supercentenarians who lived between 110 and 122 years. To our surprise, most of their body Sirtuin 1 levels were between 2.5-10 pg BDORT units. However, by evaluating different parts of the brain, we found that both sides of the Hippocampus had a much higher amount of Sirtuin 1, between 25-100 pg BDORT units. With most subjects, Sirtuin 1 was found to be higher in the Hippocampus than in the rest of the body and remains relatively constant regardless of age. We found that Aspartame, plastic eye contact lenses, and asbestos in dental apparatuses, which reduce normal cell telomeres, also significantly reduce Sirtuin 1. In addition, we found that increasing normal cell telomere by electrical or mechanical stimulation of True ST-36 increases the expression of the Sirtuin 1 gene in people in whom expression is low. This measurement of Sirtuin 1 in the Hippocampus has become a reliable indicator for detecting potential longevity of an individual.
How does Planck’s constant influence the macroscopic world?
NASA Astrophysics Data System (ADS)
Yang, Pao-Keng
2016-09-01
In physics, Planck’s constant is a fundamental physical constant accounting for the energy-quantization phenomenon in the microscopic world. The value of Planck’s constant also determines at which length scale quantum phenomena become conspicuous. Some students think that if Planck’s constant had a larger value than it has now, the quantum effect would only become observable in a world with a larger size, whereas the macroscopic world might remain almost unchanged. After reasoning from some basic physical principles and theories, we found that doubling Planck’s constant might result in a radical change in the geometric sizes and apparent colors of macroscopic objects, the solar spectrum and luminosity, the climate and gravity on Earth, as well as energy conversion between light and materials such as the efficiency of solar cells and light-emitting diodes. From the discussions in this paper, students can appreciate how Planck’s constant affects various aspects of the world in which we are living now.
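The ħ² scaling of atomic size noted here follows from the Bohr radius a0 = 4πε0ħ²/(m_e e²): doubling Planck's constant quadruples a0, and with it the size of atoms and of macroscopic objects built from them. A short illustrative calculation (CODATA 2018 values; not code from the paper):

```python
# Bohr radius a0 = 4*pi*eps0*hbar^2 / (m_e * e^2) scales as hbar^2,
# so doubling Planck's constant quadruples atomic sizes.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA 2018)
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C (exact in the revised SI)

def bohr_radius(hbar_value):
    return 4 * math.pi * eps0 * hbar_value**2 / (m_e * e**2)

a0_now     = bohr_radius(hbar)      # ~5.29e-11 m
a0_doubled = bohr_radius(2 * hbar)  # four times larger
```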
Low uncertainty Boltzmann constant determinations and the kelvin redefinition.
Fischer, J
2016-03-28
At its 25th meeting, the General Conference on Weights and Measures (CGPM) approved Resolution 1 'On the future revision of the International System of Units, the SI', which sets the path towards redefinition of four base units at the next CGPM in 2018. This constitutes a decisive advance towards the formal adoption of the new SI and its implementation. Kilogram, ampere, kelvin and mole will be defined in terms of fixed numerical values of the Planck constant, elementary charge, Boltzmann constant and Avogadro constant, respectively. The effect of the new definition of the kelvin referenced to the value of the Boltzmann constant k is that the kelvin is equal to the change of thermodynamic temperature T that results in a change of thermal energy kT by 1.380 65×10⁻²³ J. A value of the Boltzmann constant suitable for defining the kelvin is determined by fundamentally different primary thermometers such as acoustic gas thermometers, dielectric constant gas thermometers, noise thermometers and the Doppler broadening technique. Progress to date of the measurements and further perspectives are reported. Necessary conditions to be met before proceeding with changing the definition are given. The consequences of the new definition of the kelvin on temperature measurement are briefly outlined.
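Under the fixed-k definition, temperature follows directly from a measured thermal energy, T = E/k. A trivial sketch using the exact value later adopted in the revised SI (k = 1.380 649×10⁻²³ J/K; the abstract, written earlier, quotes a rounded figure):

```python
# T = E / k with the Boltzmann constant fixed by definition.
k = 1.380649e-23  # J/K, exact in the revised SI adopted by the 26th CGPM

def temperature_from_thermal_energy(E):
    """Thermodynamic temperature implied by a thermal energy kT = E (joules)."""
    return E / k

# A 1 K change in T corresponds to a change of kT by exactly 1.380649e-23 J.
```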
Dielectric-constant gas thermometry
NASA Astrophysics Data System (ADS)
Gaiser, Christof; Zandt, Thorsten; Fellmuth, Bernd
2015-10-01
The principles, techniques and results from dielectric-constant gas thermometry (DCGT) are reviewed. Primary DCGT with helium has been used for measuring T-T90 below the triple point of water (TPW), where T is the thermodynamic temperature and T90 is the temperature on the international temperature scale of 1990 (ITS-90), and, in an inverse regime with T as input quantity, for determining the Boltzmann constant at the TPW. Furthermore, DCGT allows the determination of several important material properties including the polarizability of neon and argon as well as the virial coefficients of helium, neon, and argon. With interpolating DCGT (IDCGT), the ITS-90 has been approximated in the temperature range from 4 K to 25 K. An overview and uncertainty budget for each of these applications of DCGT is provided, accompanied by corroborating evidence from the literature or, for IDCGT, a CIPM key comparison.
Three pion nucleon coupling constants
NASA Astrophysics Data System (ADS)
Ruiz Arriola, E.; Amaro, J. E.; Navarro Pérez, R.
2016-08-01
There exist four pion-nucleon coupling constants, f_{π⁰pp}, −f_{π⁰nn}, f_{π⁺pn}/√2 and f_{π⁻np}/√2, which coincide when up and down quark masses are identical and the electron charge is zero. While there is no reason why the pion-nucleon-nucleon coupling constants should be identical in the real world, one expects that the small differences might be pinned down from a sufficiently large number of independent and mutually consistent data. Our discussion provides a rationale for our recent determination f²_p = 0.0759(4), f²_0 = 0.079(1), f²_c = 0.0763(6), based on a partial wave analysis of the 3σ self-consistent nucleon-nucleon Granada-2013 database comprising 6713 published data in the period 1950-2013.
Renormalization constants from string theory.
NASA Astrophysics Data System (ADS)
di Vecchia, P.; Magnea, L.; Lerda, A.; Russo, R.; Marotta, R.
The authors review some recent results on the calculation of renormalization constants in Yang-Mills theory using open bosonic strings. The technology of string amplitudes, supplemented with an appropriate continuation off the mass shell, can be used to compute the ultraviolet divergences of dimensionally regularized gauge theories. The results show that the infinite tension limit of string amplitudes corresponds to the background field method in field theory.
Fundamental Limits to Cellular Sensing
NASA Astrophysics Data System (ADS)
ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew
2016-03-01
In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
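One standard quantification of the fundamental limit discussed above is the Poisson counting bound for a perfectly absorbing spherical sensor, which collects on average N = 4πDacT molecules in time T and therefore cannot do better than a fractional error of 1/√N. A minimal sketch of this textbook bound (my own illustration, not code from the review):

```python
# Poisson counting bound for a perfectly absorbing spherical sensor:
# mean count N = 4*pi*D*a*c*T molecules in time T, relative error 1/sqrt(N).
import math

def counting_error(D, a, c, T):
    """Fractional concentration error delta_c / c.

    D : ligand diffusion constant, m^2/s
    a : sensor radius, m
    c : background ligand concentration, molecules / m^3
    T : integration time, s
    """
    N = 4 * math.pi * D * a * c * T   # mean number of absorbed molecules
    return 1.0 / math.sqrt(N)
```

As the review emphasizes, lengthening the integration time T (one of the three resource classes) reduces the error, here as T^(-1/2).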
Short-range Fundamental forces
Antoniadis, I.; Baessler, Stefan; Buechner, M.; Fedorov, V. V.; Hoedl, S.; Lambrecht, A.; Nesvizhevsky, V.; Pignol, G.; Reynaud, S.; Sobolev, Yu.
2011-01-01
We consider theoretical motivations to search for extra short-range fundamental forces as well as experiments constraining their parameters. The forces could be of two types: (1) spin-independent forces; and (2) spin-dependent axion-like forces. Different experimental techniques are sensitive in respective ranges of characteristic distances. The techniques include measurements of gravity at short distances, searches for extra interactions on top of the Casimir force, precision atomic and neutron experiments. We focus on neutron constraints, thus the range of characteristic distances considered here corresponds to the range accessible for neutron experiments.
Fundamental Characteristics of Breather Hydrodynamics
NASA Astrophysics Data System (ADS)
Chabchoub, Amin
2014-05-01
The formation of oceanic rogue waves can be explained by the modulation instability of deep-water Stokes waves. In particular, being doubly-localized and amplifying the background wave amplitude by a factor of three or higher, the class of Peregrine-type breather solutions of the nonlinear Schrödinger equation (NLS) are considered to be appropriate models to describe extreme ocean wave dynamics. Here, we present an experimental validation of fundamental properties of the NLS within the context of Peregrine breather dynamics and we discuss the long-term behavior of such in time and space localized structures.
Reconstruction of fundamental SUSY parameters
P. M. Zerwas et al.
2003-09-25
We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e⁺e⁻ linear colliders in the TeV energy range, combined with results from LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.
Solid Lubrication Fundamentals and Applications
NASA Technical Reports Server (NTRS)
Miyoshi, Kazuhisa
2001-01-01
Solid Lubrication Fundamentals and Applications provides a description of the adhesion, friction, abrasion, and wear behavior of solid film lubricants and related tribological materials, including diamond and diamond-like solid films. The book details the properties of solid surfaces, clean surfaces, and contaminated surfaces, as well as discussing the structures and mechanical properties of natural and synthetic diamonds; chemical-vapor-deposited diamond film; and surface design and engineering toward wear-resistant, self-lubricating diamond films and coatings. The author provides selection and design criteria as well as applications for synthetic and natural coatings in the commercial, industrial, and aerospace industries.
Dielectric constant of liquid alkanes and hydrocarbon mixtures
NASA Technical Reports Server (NTRS)
Sen, A. D.; Anicich, V. G.; Arakelian, T.
1992-01-01
The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mosotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.
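The Clausius-Mossotti mixing rule invoked above for the pentane-octane mixture can be sketched directly. The permittivity values below are assumed round numbers for illustration, not the measured data:

```python
def clausius_mossotti_mixture(eps_components, vol_fractions):
    """Effective real dielectric constant of a nonpolar liquid mixture.
    Volume-fraction-weighted Clausius-Mossotti mixing:
        (eps - 1)/(eps + 2) = sum_i phi_i * (eps_i - 1)/(eps_i + 2),
    solved back for the mixture's eps."""
    s = sum(phi * (e - 1.0) / (e + 2.0)
            for e, phi in zip(eps_components, vol_fractions))
    return (1.0 + 2.0 * s) / (1.0 - s)

# Hypothetical static permittivities for an equal-volume
# pentane-octane mixture (illustrative values)
eps_mix = clausius_mossotti_mixture([1.84, 1.95], [0.5, 0.5])
```

The predicted mixture value falls between the pure-component permittivities, as the abstract's slotted-line measurements found.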
WHY IS THE SOLAR CONSTANT NOT A CONSTANT?
Li, K. J.; Xu, J. C.; Gao, P. X.; Yang, L. H.; Liang, H. F.; Zhan, L. S.
2012-03-10
In order to probe the mechanism of variations of the solar constant on the inter-solar-cycle scale, the total solar irradiance (TSI; the so-called solar constant) in the time interval of 1978 November 7 to 2010 September 20 is decomposed into three components through empirical mode decomposition and time-frequency analyses. The first component is the rotation signal, accounting for 42.31% of the total variation of TSI, which is understood to be mainly caused by large magnetic structures, including sunspot groups. The second is an annual-variation signal, accounting for 15.17% of the total variation, the origin of which is not yet known. Finally, the third is the inter-solar-cycle signal, accounting for 42.52%, which is inferred to be caused by the network magnetic elements in quiet regions, whose magnetic flux ranges from (4.27–38.01) × 10¹⁹ Mx.
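The percentage attribution above can be illustrated by computing variance shares of additive components. This sketch substitutes simple sinusoids for the EMD-extracted modes; the amplitudes are chosen only to mimic the quoted proportions:

```python
import numpy as np

def variance_shares(components):
    """Fraction of total variance carried by each additive component,
    a simplified stand-in for the EMD-based attribution: each quoted
    percentage is a component's variance relative to the summed
    component variances."""
    variances = np.array([np.var(c) for c in components])
    return variances / variances.sum()

# Synthetic stand-ins for the rotation, annual, and solar-cycle signals
t = np.linspace(0.0, 32.0, 12000)               # ~32 years of data
rotation = 0.5 * np.sin(2 * np.pi * t * 13.5)   # ~27-day rotation signal
annual   = 0.3 * np.sin(2 * np.pi * t)          # 1-year period
cycle    = 0.5 * np.sin(2 * np.pi * t / 11.0)   # ~11-year cycle
shares = variance_shares([rotation, annual, cycle])
```

With these assumed amplitudes the shares come out near 42%, 15%, and 42%, echoing the decomposition reported in the abstract.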
Stability of fundamental couplings: A global analysis
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.; Pinho, A. M. M.
2017-01-01
Astrophysical tests of the stability of fundamental couplings are becoming an increasingly important probe of new physics. Motivated by the recent availability of new and stronger constraints we update previous works testing the consistency of measurements of the fine-structure constant α and the proton-to-electron mass ratio μ =mp/me (mostly obtained in the optical/ultraviolet) with combined measurements of α , μ and the proton gyromagnetic ratio gp (mostly in the radio band). We carry out a global analysis of all available data, including the 293 archival measurements of Webb et al. and 66 more recent dedicated measurements, and constraining both time and spatial variations. While nominally the full data sets show a slight statistical preference for variations of α and μ (at up to two standard deviations), we also find several inconsistencies between different subsets, likely due to hidden systematics and implying that these statistical preferences need to be taken with caution. The statistical evidence for a spatial dipole in the values of α is found at the 2.3 sigma level. Forthcoming studies with facilities such as ALMA and ESPRESSO should clarify these issues.
An Alcohol Test for Drifting Constants
NASA Astrophysics Data System (ADS)
Jansen, P.; Bagdonaite, J.; Ubachs, W.; Bethlem, H. L.; Kleiner, I.; Xu, L.-H.
2013-06-01
The Standard Model of physics is built on the fundamental constants of nature, however without providing an explanation for their values, nor requiring their constancy over space and time. Molecular spectroscopy can address this issue. Recently, we found that microwave transitions in methanol are extremely sensitive to a variation of the proton-to-electron mass ratio μ, due to a fortuitous interplay between classically forbidden internal rotation and rotation of the molecule as a whole. In this talk, we will explain the origin of this effect and how the sensitivity coefficients in methanol are calculated. In addition, we set a limit on a possible cosmological variation of μ by comparing transitions in methanol observed in the early Universe with those measured in the laboratory. Based on radio-astronomical observations of PKS 1830-211, we deduce a constraint of Δμ/μ = (0.0 ± 1.0) × 10⁻⁷ at redshift z = 0.89, corresponding to a look-back time of 7 billion years. While this limit is more constraining and systematically more robust than previous ones, the methanol method opens a new search territory for probing μ-variation on cosmological timescales. P. Jansen, L.-H. Xu, I. Kleiner, W. Ubachs, and H. L. Bethlem, Phys. Rev. Lett. 106, 100801 (2011). J. Bagdonaite, P. Jansen, C. Henkel, H. L. Bethlem, K. M. Menten, and W. Ubachs, Science 339, 46 (2013).
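The underlying idea, that the absorber redshift cancels in a ratio of two transition frequencies with different sensitivity coefficients K, can be sketched as follows. The frequencies and K values in the example are illustrative, not the actual methanol lines used:

```python
def delta_mu_over_mu(nu_lab_1, nu_lab_2, nu_obs_1, nu_obs_2, K1, K2):
    """Estimate Delta-mu/mu from two transitions with different
    sensitivity coefficients K = d(ln nu)/d(ln mu), observed toward
    the same absorber. The common redshift cancels in the frequency
    ratio R = nu_1/nu_2, leaving to first order:
        R_obs / R_lab - 1 = (K1 - K2) * (Delta mu / mu)
    (sign conventions for K vary in the literature)."""
    R_lab = nu_lab_1 / nu_lab_2
    R_obs = nu_obs_1 / nu_obs_2
    return (R_obs / R_lab - 1.0) / (K1 - K2)

# Round-trip check with a hypothetical injected variation of 1e-7
dmu = 1e-7
est = delta_mu_over_mu(48.372, 12.178,
                       48.372 * (1 - 1.0 * dmu),
                       12.178 * (1 - 33.0 * dmu),
                       K1=-1.0, K2=-33.0)
```

The large spread in |K| among methanol lines is what gives the method its leverage compared with transitions where all K are near unity.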
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
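The link between a measured Doppler width and Boltzmann's constant can be sketched with the textbook Doppler relation (the paper's actual analysis fits a far more refined lineshape model than this):

```python
def boltzmann_from_doppler(delta_nu_e, nu0, T, m):
    """Infer Boltzmann's constant from the 1/e half-width of a
    Doppler-broadened line, using the textbook relation
        delta_nu / nu0 = sqrt(2 kB T / (m c^2)),
    inverted as  kB = m c^2 (delta_nu/nu0)^2 / (2 T)."""
    c = 299_792_458.0  # speed of light, m/s
    return m * c**2 * (delta_nu_e / nu0) ** 2 / (2.0 * T)

# Round-trip sanity check with CODATA kB and approximate Cs D1 numbers
kB_ref = 1.380649e-23          # J/K
m_cs, nu0, T = 2.2069e-25, 3.35116e14, 300.0   # kg, Hz, K
c = 299_792_458.0
delta = nu0 * (2 * kB_ref * T / (m_cs * c**2)) ** 0.5
kB_est = boltzmann_from_doppler(delta, nu0, T, m_cs)
```

Because kB scales with the square of the fractional linewidth, a p.p.m.-level kB determination demands sub-p.p.m. control of the lineshape, which is why the Voigt-profile breakdown matters.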
Chandra Independently Determines Hubble Constant
NASA Astrophysics Data System (ADS)
2006-08-01
A critically important number that specifies the expansion rate of the Universe, the so-called Hubble constant, has been independently determined using NASA's Chandra X-ray Observatory. This new value matches recent measurements using other methods and extends their validity to greater distances, thus allowing astronomers to probe earlier epochs in the evolution of the Universe. "The reason this result is so significant is that we need the Hubble constant to tell us the size of the Universe, its age, and how much matter it contains," said Max Bonamente from the University of Alabama in Huntsville and NASA's Marshall Space Flight Center (MSFC) in Huntsville, Ala., lead author on the paper describing the results. "Astronomers absolutely need to trust this number because we use it for countless calculations." [Illustration: Sunyaev-Zeldovich Effect] The Hubble constant is calculated by measuring the speed at which objects are moving away from us and dividing by their distance. Most of the previous attempts to determine the Hubble constant have involved using a multi-step, or distance ladder, approach in which the distance to nearby galaxies is used as the basis for determining greater distances. The most common approach has been to use a well-studied type of pulsating star known as a Cepheid variable, in conjunction with more distant supernovae to trace distances across the Universe. Scientists using this method and observations from the Hubble Space Telescope were able to measure the Hubble constant to within 10%. However, only independent checks would give them the confidence they desired, considering that much of our understanding of the Universe hangs in the balance. [Chandra X-ray Image: MACS J1149.5+223] By combining X-ray data from Chandra with radio observations of galaxy clusters, the team determined the distances to 38 galaxy clusters ranging from 1.4 billion to 9.3 billion light years.
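The "speed divided by distance" estimate described above amounts to a zero-intercept regression over the cluster sample. A sketch with synthetic data, assuming a true slope of 77 km/s/Mpc purely for illustration:

```python
import numpy as np

def hubble_constant(velocities_km_s, distances_mpc):
    """Zero-intercept least-squares slope of recession velocity vs
    distance, H0 = sum(v*d) / sum(d*d), pooled over many clusters --
    the 'speed divided by distance' estimate in the press release."""
    v = np.asarray(velocities_km_s, dtype=float)
    d = np.asarray(distances_mpc, dtype=float)
    return float(np.sum(v * d) / np.sum(d * d))

# 38 synthetic clusters obeying v = 77 km/s/Mpc * d, with scatter
rng = np.random.default_rng(1)
d = rng.uniform(400.0, 2900.0, size=38)          # distances, Mpc
v = 77.0 * d + rng.normal(0.0, 500.0, size=38)   # velocities, km/s
H0 = hubble_constant(v, d)
```

Pooling many clusters beats the scatter down, which is how a single-step (non-ladder) method can reach a competitive uncertainty.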
Detector Fundamentals for Reachback Analysts
Karpius, Peter Joseph; Myers, Steven Charles
2016-08-03
This presentation is a part of the DHS LSS spectroscopy course and provides an overview of the following concepts: detector system components, intrinsic and absolute efficiency, resolution and linearity, and operational issues and limits.
Cosmological constant and local gravity
Bernabeu, Jose; Espinoza, Catalina; Mavromatos, Nick E.
2010-04-15
We discuss the linearization of Einstein equations in the presence of a cosmological constant, by expanding the solution for the metric around a flat Minkowski space-time. We demonstrate that one can find consistent solutions to the linearized set of equations for the metric perturbations, in the Lorentz gauge, which are not spherically symmetric but rather exhibit a cylindrical symmetry. We find that the components of the gravitational field satisfying the appropriate Poisson equations have the property of ensuring that a scalar potential can be constructed, in which both contributions, from ordinary matter and Λ > 0, are attractive. In addition, there is a novel tensor potential, induced by the pressure density, in which the effect of the cosmological constant is repulsive. We also linearize the Schwarzschild-de Sitter exact solution of Einstein's equations (due to a generalization of Birkhoff's theorem) in the domain between the two horizons. We manage to transform it first to a gauge in which the 3-space metric is conformally flat and, then, make an additional coordinate transformation leading to the Lorentz gauge conditions. We compare our non-spherically symmetric solution with the linearized Schwarzschild-de Sitter metric, when the latter is transformed to the Lorentz gauge, and we find agreement. The resulting metric, however, does not acquire a proper Newtonian form in terms of the unique scalar potential that solves the corresponding Poisson equation. Nevertheless, our solution is stable, in the sense that the physical energy density is positive.
Fundamental plant biology enabled by the space shuttle.
Paul, Anna-Lisa; Wheeler, Ray M; Levine, Howard G; Ferl, Robert J
2013-01-01
The relationship between fundamental plant biology and space biology was especially synergistic in the era of the Space Shuttle. While all terrestrial organisms are influenced by gravity, the impact of gravity as a tropic stimulus in plants has been a topic of formal study for more than a century. And while plants were parts of early space biology payloads, it was not until the advent of the Space Shuttle that the science of plant space biology enjoyed expansion that truly enabled controlled, fundamental experiments that removed gravity from the equation. The Space Shuttle presented a science platform that provided regular science flights with dedicated plant growth hardware and crew trained in inflight plant manipulations. Part of the impetus for plant biology experiments in space was the realization that plants could be important parts of bioregenerative life support on long missions, recycling water, air, and nutrients for the human crew. However, a large part of the impetus was that the Space Shuttle enabled fundamental plant science essentially in a microgravity environment. Experiments during the Space Shuttle era produced key science insights on biological adaptation to spaceflight and especially plant growth and tropisms. In this review, we present an overview of plant science in the Space Shuttle era with an emphasis on experiments dealing with fundamental plant growth in microgravity. This review discusses general conclusions from the study of plant spaceflight biology enabled by the Space Shuttle by providing historical context and reviews of select experiments that exemplify plant space biology science.
Quantum repeaters: fundamental and future
NASA Astrophysics Data System (ADS)
Li, Yue; Hua, Sha; Liu, Yu; Ye, Jun; Zhou, Quan
2007-04-01
An overview of the Quantum Repeater techniques based on Entanglement Distillation and Swapping is provided. Beginning with a brief history and the basic concepts of quantum repeaters, the article primarily focuses on the communication model based on quantum repeater techniques, which mainly consists of two fundamental modules: the Entanglement Distillation module and the Swapping module. The realizations of Entanglement Distillation are discussed, including Bernstein's Procrustean method, Entanglement Concentration, and the CNOT-purification method. The schemes for implementing Swapping, which include Swapping based on Bell-state measurement and Swapping in cavity QED, are also introduced. A comparison and evaluation of these realizations is then presented. Finally, the article discusses present experimental schemes for quantum repeaters and documents some remaining problems and emerging trends in this field.
Fundamentals of Acoustic Backscatter Imagery
2011-09-20
Fundamental Travel Demand Model Example
NASA Technical Reports Server (NTRS)
Hanssen, Joel
2010-01-01
Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.
Fundamental concepts of quantum chaos
NASA Astrophysics Data System (ADS)
Robnik, M.
2016-09-01
We review the fundamental concepts of quantum chaos in Hamiltonian systems. The quantum evolution of bound systems does not possess the sensitive dependence on initial conditions, and thus no chaotic behaviour occurs, whereas the study of the stationary solutions of the Schrödinger equation in the quantum phase space (Wigner functions) reveals precise analogy of the structure of the classical phase portrait. We analyze the regular eigenstates associated with invariant tori in the classical phase space, and the chaotic eigenstates associated with the classically chaotic regions, and the corresponding energy spectra. The effects of quantum localization of the chaotic eigenstates are treated phenomenologically, resulting in Brody-like level statistics, which can be found also at very high-lying levels, while the coupling between the regular and the irregular eigenstates due to tunneling, and of the corresponding levels, manifests itself only in low-lying levels.
Cognition is … Fundamentally Cultural
Bender, Andrea; Beller, Sieghard
2013-01-01
A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition—on all of Marr’s levels of description—we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding. PMID:25379225
Fundamental reaction pathways during coprocessing
Stock, L.M.; Gatsis, J.G. (Dept. of Chemistry)
1992-12-01
The objective of this research was to investigate the fundamental reaction pathways in coal petroleum residuum coprocessing. Once the reaction pathways are defined, further efforts can be directed at improving those aspects of the chemistry of coprocessing that are responsible for the desired results such as high oil yields, low dihydrogen consumption, and mild reaction conditions. We decided to carry out this investigation by looking at four basic aspects of coprocessing: (1) the effect of fossil fuel materials on promoting reactions essential to coprocessing such as hydrogen atom transfer, carbon-carbon bond scission, and hydrodemethylation; (2) the effect of varied mild conditions on the coprocessing reactions; (3) determination of dihydrogen uptake and utilization under severe conditions as a function of the coal or petroleum residuum employed; and (4) the effect of varied dihydrogen pressure, temperature, and residence time on the uptake and utilization of dihydrogen and on the distribution of the coprocessed products. Accomplishments are described.
Fundamental enabling issues in nanotechnology
Floro, Jerrold Anthony; Foiles, Stephen Martin; Hearne, Sean Joseph; Hoyt, Jeffrey John; Seel, Steven Craig; Webb III, Edmund Blackburn; Morales, Alfredo Martin; Zimmerman, Jonathan A.
2007-10-01
To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in so close a fashion. Experiments revealed an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also support the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in nanoscale structures.
Little, Max A.; Jones, Nick S.
2011-01-01
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play. PMID:22003312
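Of the solver families listed, iterated running medians are the simplest to sketch. A minimal implementation under stated assumptions (windows clipped at the signal edges, iteration to a fixed point), not the paper's generalized functional:

```python
import statistics

def iterated_running_median(signal, width=5, max_iter=50):
    """Piecewise-constant denoising by repeatedly applying a running
    median of window `width` until the signal stops changing (a 'root'
    signal). One of the PWC solver families named in the text; edge
    windows are clipped to the available samples."""
    x = list(signal)
    half = width // 2
    for _ in range(max_iter):
        y = [statistics.median(x[max(0, i - half):i + half + 1])
             for i in range(len(x))]
        if y == x:          # fixed point reached
            return y
        x = y
    return x

# Noisy two-level step signal, as in the molecular dwell-state example
noisy = [0.1, -0.2, 0.05, 0.0, -0.1, 1.1, 0.9, 1.0, 1.05, 0.95]
clean = iterated_running_median(noisy, width=3)
```

The median (unlike a linear moving average) preserves the sharp jump between the two levels, which is why the paper calls conventional linear methods fundamentally unsuited to PWC signals.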
Stability constant estimator user's guide
Hay, B.P.; Castleton, K.J.; Rustad, J.R.
1996-12-01
The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
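Trend-based estimation of a missing stability constant can be sketched as a linear free-energy fit against a reference ligand. The function and the (log K) pairs below are hypothetical illustrations, not SCE's actual method or data:

```python
def estimate_log_k(known_pairs, target_ref_logk):
    """Fill a gap in a stability-constant table with a linear
    free-energy trend: fit log K(ligand) = a * log K(reference) + b
    over metal ions where both values are known, then predict the
    missing entry. Plain least-squares, computed by hand."""
    n = len(known_pairs)
    sx = sum(x for x, _ in known_pairs)
    sy = sum(y for _, y in known_pairs)
    sxx = sum(x * x for x, _ in known_pairs)
    sxy = sum(x * y for x, y in known_pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a * target_ref_logk + b

# Hypothetical (log K_reference, log K_ligand) pairs for four metal ions
pairs = [(2.0, 3.1), (4.0, 6.9), (6.0, 11.1), (8.0, 15.2)]
logk = estimate_log_k(pairs, 5.0)
```

Estimates of this kind are exactly what the abstract describes: gap-filling for thermodynamic databases and a cross-check on reported values.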
Holographic dark energy with cosmological constant
Hu, Yazhou; Li, Nan; Zhang, Zhenhui; Li, Miao
2015-08-01
Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ω_hde are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find the model yields χ²_min = 426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordant ΛCDM model (431.35). At 68.3% CL, we obtain −0.07 < Ω_Λ0 < 0.68 and correspondingly 0.04 < Ω_hde0 < 0.79, implying at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.
BOOK REVIEWS: Quantum Mechanics: Fundamentals
NASA Astrophysics Data System (ADS)
Whitaker, A.
2004-02-01
This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a
Constant magnification optical tracking system
NASA Technical Reports Server (NTRS)
Frazer, R. E. (Inventor)
1982-01-01
A constant magnification optical tracking system for continuously tracking a moving object is described. In the tracking system, a traveling objective lens maintains a fixed relationship with the object to be optically tracked. The objective lens is chosen to provide a collimated light beam oriented in the direction of travel of the moving object. A reflective surface is attached to the traveling objective lens for reflecting an image of the moving object. The object to be tracked is a free-falling object located at the focal point of the objective lens for at least a portion of its free-fall path. A motor and control means maintains the traveling objective lens in a fixed relationship relative to the free-falling object, thereby keeping the free-falling object at the focal point and centered on the axis of the traveling objective lens throughout its entire free-fall path.
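The geometry described above can be sketched numerically: if the lens always sits one focal length from the falling object, their separation (and hence the magnification) never changes. The following is a minimal illustration of that idea, not code from the patent; the focal length and drop height are hypothetical values.

```python
# Toy sketch: keep a traveling lens one focal length from a free-falling object,
# so the object stays at the focal point and magnification stays constant.
# All numeric values (focal length, drop height) are illustrative only.

G = 9.81  # gravitational acceleration, m/s^2

def object_position(z0, t):
    """Height (m) of an object released from rest at height z0, after t seconds."""
    return z0 - 0.5 * G * t ** 2

def lens_position(z0, t, focal_length):
    """Lens position that keeps the falling object exactly at the focal point."""
    return object_position(z0, t) + focal_length

if __name__ == "__main__":
    z0, f = 10.0, 0.5  # hypothetical drop height and focal length (m)
    for t in (0.0, 0.5, 1.0):
        obj = object_position(z0, t)
        lens = lens_position(z0, t, f)
        # the gap is constant, so the image magnification is constant
        print(f"t={t:.1f}s  object={obj:.3f} m  lens={lens:.3f} m  gap={lens - obj:.3f} m")
```

In the patented system this tracking is done by the motor and control means; the sketch only shows the kinematic constraint that the controller must satisfy.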
Philicities, Fugalities, and Equilibrium Constants.
Mayr, Herbert; Ofial, Armin R
2016-05-17
The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known; the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is a prominent example. In a reaction series, the Leffler-Hammond parameter α = δΔG(⧧)/δΔG° describes the fraction of the change in the Gibbs energy of reaction that is reflected in the change of the Gibbs energy of activation. It has long been considered a measure of the position of the transition state: an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered indicative of a late transition state. Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (the nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because complementary kinetic and thermodynamic data have been available for few other reaction series. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics.
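The Leffler-Hammond parameter defined in the abstract, α = δΔG(⧧)/δΔG°, is in practice the slope of activation Gibbs energies plotted against reaction Gibbs energies across a series. A minimal sketch (with invented ΔG values, not data from the Account) estimates it by least squares:

```python
# Estimate the Leffler-Hammond parameter alpha = d(dG_act)/d(dG_rxn)
# as the least-squares slope over a reaction series.
# The Gibbs energy values below are invented for illustration (kJ/mol).

def leffler_hammond_alpha(dg_rxn, dg_act):
    """Least-squares slope of activation vs. reaction Gibbs energies."""
    n = len(dg_rxn)
    mx = sum(dg_rxn) / n
    my = sum(dg_act) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dg_rxn, dg_act))
    sxx = sum((x - mx) ** 2 for x in dg_rxn)
    return sxy / sxx

if __name__ == "__main__":
    dg_rxn = [-20.0, -10.0, 0.0, 10.0, 20.0]   # hypothetical reaction series
    dg_act = [58.0, 63.0, 68.0, 73.0, 78.0]    # perfectly linear, slope 0.5
    print(round(leffler_hammond_alpha(dg_rxn, dg_act), 3))  # -> 0.5
```

An α of 0.5 in this toy series would, under the traditional (and, per Bordwell, unreliable) interpretation, suggest a transition state midway between reactants and products.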
Omnidirectional antenna having constant phase
Sena, Matthew
2017-04-04
Various technologies presented herein relate to constructing and/or operating an antenna having an omnidirectional electrical field of constant phase. The antenna comprises an upper plate made up of multiple conductive rings, a lower ground-plane plate, a plurality of grounding posts, a conical feed, and a radio frequency (RF) feed connector. The upper plate has a multi-ring configuration comprising a large outer ring and several smaller rings of equal size located within the outer ring. The outer ring and the smaller rings have the same cross-section. The grounding posts ground the upper plate to the lower plate while maintaining the required spacing and parallelism between them.
How Do Fundamental Christians Deal with Depression?
ERIC Educational Resources Information Center
Spinney, Douglas Harvey
1991-01-01
Provides an explanation of the developmental dynamics in the experience of fundamentalist Christians that provoke reactive depression. Describes depression-retardant defenses observed in the fundamentalist Christian subculture. Suggests four counseling strategies for helping fundamentalists. (Author/ABL)
Simulating Supercapacitors: Can We Model Electrodes As Constant Charge Surfaces?
Merlet, Céline; Péan, Clarisse; Rotenberg, Benjamin; Madden, Paul A; Simon, Patrice; Salanne, Mathieu
2013-01-17
Supercapacitors based on an ionic liquid electrolyte and graphite or nanoporous carbon electrodes are simulated using molecular dynamics. We compare a simplified electrode model, in which a constant, uniform charge is assigned to each carbon atom, with a realistic model, in which a constant potential is applied between the electrodes (the carbon charges are allowed to fluctuate). We show that the simulations performed with the simplified model do not provide a correct description of the properties of the system. First, the structure of the adsorbed electrolyte is partly modified. Second, dramatic differences are observed in the dynamics of the system during transient regimes. In particular, upon application of a constant potential difference, the temperature increase due to Joule heating from the electric current across the cell follows Ohm's law, while unphysically high temperatures rapidly arise when constant charges are assigned to each carbon atom.
Future Fundamental Combustion Research for Aeropropulsion Systems.
1985-01-01
Future Fundamental Combustion Research for Aeropropulsion Systems. Edward J. Mularz, Propulsion Laboratory, AVSCOM Research and Technology Laboratories, National Aeronautics and Space Administration, Lewis Research Center, Cleveland, OH.
Fundamental Scaling Laws in Nanophotonics
Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J.
2016-01-01
The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding of the fundamental scaling behavior of this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro- and nanometer length scales. We show that optoelectronic device performance scales non-monotonically with device length due to various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches, based on three types of optical resonators: microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index modulation (modulator), absorption (detector), or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap akin to the International Technology Roadmap for Semiconductors. PMID:27869159
Fundamental studies of fusion plasmas
Aamodt, R.E.; Catto, P.J.; D'Ippolito, D.A.; Myra, J.R.; Russell, D.A.
1992-05-26
The major portion of this program is devoted to critical ICH phenomena. The topics include edge physics, fast wave propagation, ICH induced high frequency instabilities, and a preliminary antenna design for Ignitor. This research was strongly coordinated with the world's experimental and design teams at JET, Culham, ORNL, and Ignitor. The results have been widely publicized at both general scientific meetings and topical workshops, including the specialty workshop on ICRF design and physics sponsored by Lodestar in April 1992. The combination of theory, empirical modeling, and engineering design in this program makes this research particularly important for the design of future devices and for the understanding and performance projections of present tokamak devices. Additionally, the development of a diagnostic of runaway electrons on TEXT has proven particularly useful for the fundamental understanding of energetic electron confinement. This work has led to a better quantitative basis for quasilinear theory and the role of magnetic vs. electrostatic field fluctuations on electron transport. An APS invited talk was given on this subject and collaboration with PPPL personnel was also initiated. Ongoing research on these topics will continue for the remainder of the contract period, and the strong collaborations are expected to continue, enhancing both the relevance of the work and its immediate impact on areas needing critical understanding.
Information physics fundamentals of nanophotonics.
Naruse, Makoto; Tate, Naoya; Aono, Masashi; Ohtsu, Motoichi
2013-05-01
Nanophotonics has been extensively studied with the aim of unveiling and exploiting light-matter interactions that occur at a scale below the diffraction limit of light, and recent progress made in experimental technologies--both in nanomaterial fabrication and characterization--is driving further advancements in the field. From the viewpoint of information, on the other hand, novel architectures, design and analysis principles, and even novel computing paradigms should be considered so that we can fully benefit from the potential of nanophotonics. This paper examines the information physics aspects of nanophotonics. More specifically, we present some fundamental and emergent information properties that stem from optical excitation transfer mediated by optical near-field interactions and the hierarchical properties inherent in optical near-fields. We theoretically and experimentally investigate aspects such as unidirectional signal transfer, energy efficiency and networking effects, among others, and we present their basic theoretical formalisms and describe demonstrations of practical applications. A stochastic analysis of light-assisted material formation is also presented, where an information-based approach provides a deeper understanding of the phenomena involved, such as self-organization. Furthermore, the spatio-temporal dynamics of optical excitation transfer and its inherent stochastic attributes are utilized for solution searching, paving the way to a novel computing paradigm that exploits coherent and dissipative processes in nanophotonics.
Levitated Optomechanics for Fundamental Physics
NASA Astrophysics Data System (ADS)
Rashid, Muddassar; Bateman, James; Vovrosh, Jamie; Hempston, David; Ulbricht, Hendrik
2015-05-01
Optomechanics with levitated nano- and microparticles is believed to form a platform for testing fundamental principles of quantum physics, as well as find applications in sensing. We will report on a new scheme to trap nanoparticles, which is based on a parabolic mirror with a numerical aperture of 1. Combined with achromatic focussing, the setup is a cheap and straightforward solution for trapping nanoparticles for further study. Here, we report on the latest progress in experimentation with levitated nanoparticles; this includes the trapping of 100 nm nanodiamonds (with NV-centres) down to 1 mbar, as well as the trapping of 50 nm silica spheres down to 10^-4 mbar without any form of feedback cooling. We will also report on the progress to implement feedback stabilisation of the centre of mass motion of the trapped particle using digital electronics. Finally, we argue that such a stabilised particle trap can be the particle source for a nanoparticle matterwave interferometer. We will present our Talbot interferometer scheme, which holds promise to test the quantum superposition principle in the new mass range of 10^6 amu. EPSRC, John Templeton Foundation.
Gas cell neutralizers (Fundamental principles)
Fuehrer, B.
1985-06-01
Neutralizing an ion beam of the size and energy levels involved in the neutral-particle-beam program represents a considerable extension of the state of the art of neutralizer technology. Many different media (e.g., solid, liquid, gas, plasma, photons) can be used to strip the hydrogen ion of its extra electron. A large, multidisciplinary R&D effort will no doubt be required to sort out all of the "pros and cons" of these various techniques. The purpose of this particular presentation is to discuss some basic configurations and fundamental principles of the gas type of neutralizer cell. Particular emphasis is placed on the "gasdynamic free-jet" neutralizer, since this configuration has the potential of being much shorter than other types of gas cells (in the beam direction) and it could operate in a nearly continuous mode (CW) if necessary. These were important considerations in the ATSU design, which is discussed in some detail in the second presentation, entitled "ATSU Point Design".
Hyperbolic metamaterials: fundamentals and applications.
Shekhar, Prashant; Atkinson, Jonathan; Jacob, Zubin
2014-01-01
Metamaterials are nano-engineered media with designed properties beyond those available in nature with applications in all aspects of materials science. In particular, metamaterials have shown promise for next generation optical materials with electromagnetic responses that cannot be obtained from conventional media. We review the fundamental properties of metamaterials with hyperbolic dispersion and present the various applications where such media offer potential for transformative impact. These artificial materials support unique bulk electromagnetic states which can tailor light-matter interaction at the nanoscale. We present a unified view of practical approaches to achieve hyperbolic dispersion using thin film and nanowire structures. We also review current research in the field of hyperbolic metamaterials such as sub-wavelength imaging and broadband photonic density of states engineering. The review introduces the concepts central to the theory of hyperbolic media as well as nanofabrication and characterization details essential to experimentalists. Finally, we outline the challenges in the area and offer a set of directions for future work.
On the fundamental role of dynamics in quantum physics
NASA Astrophysics Data System (ADS)
Hofmann, Holger F.
2016-05-01
Quantum theory expresses the observable relations between physical properties in terms of probabilities that depend on the specific context described by the "state" of a system. However, the laws of physics that emerge at the macroscopic level are fully deterministic. Here, it is shown that the relation between quantum statistics and deterministic dynamics can be explained in terms of ergodic averages over complex-valued probabilities, where the fundamental causality of motion is expressed by an action that appears as the phase of the complex probability multiplied by the fundamental constant ħ. Importantly, classical physics emerges as an approximation of this more fundamental theory of motion, indicating that the assumption of a classical reality described by differential geometry is merely an artefact of an extrapolation from the observation of macroscopic dynamics to a fictitious level of precision that does not exist within our actual experience of the world around us. It is therefore possible to replace the classical concept of trajectories entirely with the more fundamental concept of action phase probabilities as a universally valid description of the deterministic causality of motion observed in the physical world.
Prospects for Fundamental Symmetry Tests with Polyatomic Molecules
NASA Astrophysics Data System (ADS)
Berger, Robert; Isaev, Timur
2013-06-01
Special features of polyatomic molecules make them attractive candidates for searches for violation of fundamental symmetries and variation of fundamental constants [1, 2]. We discuss the possibility of searching for nuclear spin-dependent space-parity violating (NSD-PV) interaction in closed-shell and open-shell polyatomic molecules. The parameter W_{a} of the effective molecular spin-rotational Hamiltonian, characterising the strength of the NSD-PV interaction in open-shell linear molecules, is discussed, and approaches for its calculation are outlined. In addition, possibilities for detecting NSD-PV in chiral molecules via NMR and MW spectroscopy are presented. REFERENCES: [1] C. Stoeffler et al., Phys. Chem. Chem. Phys. 13(3), 2011; [2] M. Quack, J. Stohner and M. Willeke, Annu. Rev. Phys. Chem. 59, 2008; [3] J. Bagdonaite et al., Science 339(6115), 2013.
Is There a Cosmological Constant?
NASA Astrophysics Data System (ADS)
Kochanek, Christopher
2002-07-01
The grant contributed to the publication of 18 refereed papers and 5 conference proceedings. The primary uses of the funding have been for page charges, travel for invited talks related to the grant research, and the support of a graduate student, Charles Keeton. The refereed papers address four of the primary goals of the proposal: (1) the statistics of radio lenses as a probe of the cosmological model (#1), (2) the role of spiral galaxies as lenses (#3), (3) the effects of dust on statistics of lenses (#7, #8), and (4) the role of groups and clusters as lenses (#2, #6, #10, #13, #15, #16). Four papers (#4, #5, #11, #12) address general issues of lens models, calibrations, and the relationship between lens galaxies and nearby galaxies. One considered cosmological effects in lensing X-ray sources (#9), and two addressed issues related to the overall power spectrum and theories of gravity (#17, #18). Our theoretical studies, combined with the explosion in the number of lenses and the quality of the data obtained for them, are greatly increasing our ability to characterize and understand the lens population. We can now firmly conclude both from our study of the statistics of radio lenses and our survey of extinctions in individual lenses that the statistics of optically selected quasars were significantly affected by extinction. However, the limits on the cosmological constant remain at lambda < 0.65 at a 2-sigma confidence level, which is in mild conflict with the results of the Type Ia supernova surveys. We continue to find that neither spiral galaxies nor groups and clusters contribute significantly to the production of gravitational lenses. The lack of group and cluster lenses is strong evidence for the role of baryonic cooling in increasing the efficiency of galaxies as lenses compared to groups and clusters of higher mass but lower central density. Unfortunately for the ultimate objective of the proposal, improved constraints on the cosmological constant, the next
Fundamental Aspects of Pressuremeter Testing.
1987-04-30
Fundamental Principles of Proper Space Kinematics
NASA Astrophysics Data System (ADS)
Wade, Sean
It is desirable to understand the movement of both matter and energy in the universe based upon fundamental principles of space and time. Time dilation and length contraction are features of Special Relativity derived from the observed constancy of the speed of light. Quantum Mechanics asserts that motion in the universe is probabilistic and not deterministic. While the practicality of these dissimilar theories is well established through widespread application, inconsistencies in their marriage persist, marring their utility and preventing their full expression. After identifying an error in perspective, the current theories are tested by modifying logical assumptions to eliminate paradoxical contradictions. Analysis of simultaneous frames of reference leads to a new formulation of space and time that predicts the motion of both kinds of particles. Proper Space is a real, three-dimensional space clocked by proper time that is undergoing a densification at the rate of c. Coordinate transformations to a familiar object space and a mathematical stationary space clarify the counterintuitive aspects of Special Relativity. These symmetries demonstrate that within the local universe stationary observers are a forbidden frame of reference; all is in motion. In lieu of Quantum Mechanics and Uncertainty, the use of the imaginary number i is restricted to the labeling of mass as either material or immaterial. This material phase difference accounts for both the perceived constant velocity of light and its apparent statistical nature. The application of Proper Space Kinematics will advance more accurate representations of microscopic, macroscopic, and cosmological processes and serve as a foundation for further study and reflection, thereafter leading to greater insight.
The dependency of timbre on fundamental frequency.
Marozeau, Jeremy; de Cheveigné, Alain; McAdams, Stephen; Winsberg, Suzanne
2003-11-01
The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In experiment I subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In experiment II the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0's, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, suggesting further that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently from differences of pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0's.
Fundamental structures of dynamic social networks.
Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune
2016-09-06
Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision.
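The notion of a stable core behind fluid gatherings can be illustrated with a toy computation: members present in most time slices of a recurring gathering form the core, while peripheral members come and go. The slices below are invented data, not from the study, and the 80% threshold is an arbitrary illustrative choice.

```python
# Toy illustration of a "core" in a dynamic social network: the set of
# members present in at least min_fraction of a gathering's time slices.
# The time-slice data below are invented for illustration.
from collections import Counter

def find_core(slices, min_fraction=0.8):
    """Return members present in at least min_fraction of the slices."""
    counts = Counter(member for s in slices for member in s)
    threshold = min_fraction * len(slices)
    return {m for m, c in counts.items() if c >= threshold}

if __name__ == "__main__":
    # Five 5-minute slices of one gathering: peripheral members drift in and out.
    slices = [
        {"ann", "bo", "cy", "dee"},
        {"ann", "bo", "cy"},
        {"ann", "bo", "cy", "ed"},
        {"ann", "bo", "dee"},
        {"ann", "bo", "cy", "fay"},
    ]
    print(sorted(find_core(slices)))  # -> ['ann', 'bo', 'cy']
```

In the paper's framework a core identified this way would represent one social context, and its recurrence across days and weeks is what makes social behavior predictable.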
Fundamental structures of dynamic social networks
Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune
2016-01-01
Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision. PMID:27555584
The dependency of timbre on fundamental frequency
NASA Astrophysics Data System (ADS)
Marozeau, Jeremy; de Cheveigné, Alain; McAdams, Stephen; Winsberg, Suzanne
2003-11-01
The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In experiment I subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In experiment II the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0's, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, suggesting further that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently from differences of pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0's.
Life, the Universe, and everything—42 fundamental questions
NASA Astrophysics Data System (ADS)
Allen, Roland E.; Lidström, Suzy
2017-01-01
In The Hitchhiker’s Guide to the Galaxy, by Douglas Adams, the Answer to the Ultimate Question of Life, the Universe, and Everything is found to be 42—but the meaning of this is left open to interpretation. We take it to mean that there are 42 fundamental questions which must be answered on the road to full enlightenment, and we attempt a first draft (or personal selection) of these ultimate questions, on topics ranging from the cosmological constant and origin of the Universe to the origin of life and consciousness.
Fundamental Frequency Tracking and Applications to Musical Signal Analysis
NASA Astrophysics Data System (ADS)
Brown, Judith C.
The constant-Q spectral transform (Brown, 1991) can be used to analyze musical signals and can be effectively employed as a front end for measurements of fundamental frequency. This transform also has advantages for the analysis of musical signals over the conventional discrete Fourier transform, or FFT in its fast-Fourier-transform implementation. Because the FFT computes frequency components on a linear scale with a particular fixed resolution or bandwidth (frequency spacing between components), it frequently yields too little resolution at low musical frequencies and better resolution than needed at high frequencies.
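The geometric bin spacing described above can be sketched in a few lines. This is a minimal illustration of the constant-Q idea (center frequencies spaced by a constant ratio, window length inversely proportional to frequency), not an implementation of the full transform; the starting frequency, bin count, and sample rate below are arbitrary illustrative choices.

```python
import math

def constant_q_bins(f_min, f_max, bins_per_octave, sample_rate):
    """Center frequencies and window lengths for a constant-Q analysis.

    Bins are spaced geometrically (f_k = f_min * 2**(k/b)), so the ratio
    of center frequency to bandwidth -- the quality factor Q -- is the
    same for every bin, unlike the FFT's fixed linear spacing.
    """
    q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)
    bins = []
    k = 0
    while True:
        f_k = f_min * 2.0 ** (k / bins_per_octave)
        if f_k > f_max:
            break
        n_k = math.ceil(q * sample_rate / f_k)  # window length in samples
        bins.append((f_k, n_k))
        k += 1
    return q, bins

# Semitone spacing (12 bins/octave) from A1 (55 Hz) to ~2 kHz at 44.1 kHz:
q, bins = constant_q_bins(55.0, 2000.0, 12, 44100)
```

Note how the low-frequency bins get long windows (fine resolution) while high-frequency bins get short ones, which is exactly the trade-off the abstract contrasts with the FFT.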
PMN-PT nanowires with a very high piezoelectric constant.
Xu, Shiyou; Poirier, Gerald; Yao, Nan
2012-05-09
A profound way to increase the output voltage (or power) of piezoelectric nanogenerators is to utilize a material with higher piezoelectric constants. Here we report the synthesis of novel piezoelectric 0.72Pb(Mg1/3Nb2/3)O3-0.28PbTiO3 (PMN-PT) nanowires using a hydrothermal process. The unpoled single-crystal PMN-PT nanowires show a piezoelectric constant (d33) of up to 381 pm/V, with an average value of 373 ± 5 pm/V. This is about 15 times higher than the maximum reported value for 1-D ZnO nanostructures and 3 times higher than the largest reported value for 1-D PZT nanostructures. These PMN-PT nanostructures have good potential as the fundamental building block for higher-power nanogenerators, high-sensitivity nanosensors, and large-strain nanoactuators.
Induced cosmological constant and other features of asymmetric brane embedding
Shtanov, Yuri; Sahni, Varun; Shafieloo, Arman; Toporensky, Alexey E-mail: varun@iucaa.ernet.in E-mail: lesha@xray.sai.msu.ru
2009-04-15
We investigate the cosmological properties of an 'induced gravity' brane scenario in the absence of mirror symmetry with respect to the brane. We find that brane evolution can proceed along one of four distinct branches. By contrast, when mirror symmetry is imposed, only two branches exist, one of which represents the self-accelerating brane, while the other is the so-called normal branch. This model incorporates many of the well-known possibilities of brane cosmology including phantom acceleration (w < -1), self-acceleration, transient acceleration, quiescent singularities, and cosmic mimicry. Significantly, the absence of mirror symmetry also provides an interesting way of inducing a sufficiently small cosmological constant on the brane. A small (positive) Λ-term in this case is induced by a small asymmetry in the values of bulk fundamental constants on the two sides of the brane.
NASA Astrophysics Data System (ADS)
Ozkanlar, Abdullah; Rodriguez, Jorge H.
2009-03-01
Some (bio)chemical reactions are non-adiabatic processes whereby the total spin angular momentum, before and after the reaction, is not conserved. These are named spin-forbidden reactions. The application of spin density functional theory (SDFT) to the prediction of rate constants is a challenging task of fundamental and practical importance. We apply non-adiabatic transition state theory in conjunction with SDFT to predict the rate constant of the spin-forbidden dihydrogen binding to iron tetracarbonyl. To model the surface-hopping probability between singlet and triplet states, the Landau-Zener formalism is used. The lowest-energy point for singlet-triplet crossing, known as the minimum energy crossing point (MECP), was located and used to compute, in a semi-quantum approach, reaction rate constants at 300 K. The predicted rates are in good agreement with experiment. In addition, we present results which are relevant to the ligand binding reactions of metalloproteins. This work is supported in part by NSF via CAREER award CHE-0349189 (JHR).
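The Landau-Zener hopping probability used at the MECP has a simple closed form, sketched below. This is the generic single-passage Landau-Zener expression, not the paper's full semi-quantum treatment, and the coupling, velocity, and slope-difference values are hypothetical placeholders chosen only for illustration.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def landau_zener_hop(h12, velocity, delta_f):
    """Single-passage surface-hopping probability, P = 1 - exp(-2*pi*H12^2 / (hbar*v*|dF|)).

    h12      : electronic coupling between the two spin states at the crossing (J)
    velocity : nuclear velocity through the crossing region (m/s)
    delta_f  : |difference of the two potential-surface slopes| at the crossing (J/m)
    """
    exponent = 2.0 * math.pi * h12 ** 2 / (HBAR * velocity * delta_f)
    return 1.0 - math.exp(-exponent)

# Hypothetical illustrative numbers (not the paper's values):
h12 = 50.0 * 1.9864e-23      # a ~50 cm^-1 spin-orbit coupling, in joules
p = landau_zener_hop(h12, 1000.0, 1.0e-9)
```

A stronger coupling or slower passage increases the hopping probability, which is the qualitative behavior the rate-constant calculation relies on.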
Capacitive Cells for Dielectric Constant Measurement
ERIC Educational Resources Information Center
Aguilar, Horacio Munguía; Maldonado, Rigoberto Franco
2015-01-01
A simple capacitive cell for dielectric constant measurement in liquids is presented. As an illustrative application, the cell is used for measuring the degradation of overheated edible oil through the evaluation of its dielectric constant.
Astronomia Motivadora no Ensino Fundamental
NASA Astrophysics Data System (ADS)
Melo, J.; Voelzke, M. R.
2008-09-01
The main objective of this work is to develop students' interest in the sciences through Astronomy. A survey with questions about Astronomy was carried out with 161 elementary school (Ensino Fundamental) students, in order to discover their prior knowledge of the subject. It was found, for example, that 29.3% of 6th-grade students correctly answered what an eclipse is, 30.0% of 8th-grade students knew what Astronomy studies, while 42.3% of 5th-grade students could define the Sun. The intention is to expand the participating classes and to work, mainly in a practical way, with: dimensions and scales in the Solar System, construction of a small telescope, and questions such as day and night, the seasons of the year, and eclipses. The aim is also to address other Physics content, such as optics in the construction of the telescope, and mechanics in the work with scales and measurements and in using a lamp to represent the Sun in the eclipse activity, as well as other disciplines: Mathematics in unit conversions and rules of three; Arts in modelling or drawing the planets; History in the search for the origin of the Universe; and Informatics, which enables faster searches for information as well as simulations and visualizations of important images. It is believed that Astronomy is important in the teaching-learning process, because it allows the discussion of intriguing topics such as the origin of the Universe, space travel, and the existence or not of life on other planets, as well as current topics such as new technologies.
Derivation of midinfrared (5-25 microns) optical constants of some silicates and palagonite
NASA Technical Reports Server (NTRS)
Roush, T.; Pollack, J.; Orenberg, J.
1991-01-01
The 5-25 micron real and imaginary refraction indices are presented for palagonite and the silicates pyrophyllite, kaolinite, serpentine, montmorillonite, saponite, and orthopyroxene. Optical constants in the region of the H2O-bending fundamental near 6 microns are obtained for saponite, montmorillonite, and palagonite. It is established that, if a pellet of pure material can be polished to a mirror finish, the optical constants of such noncohesive materials as clays are easily derivable.
Empirical Examination of Fundamental Indexation in the German Market
NASA Astrophysics Data System (ADS)
Mihm, Max; Locarek-Junge, Hermann
Index Funds, Exchange Traded Funds and Derivatives give investors easy access to well diversified index portfolios. These index-based investment products exhibit low fees, which make them an attractive alternative to actively managed funds. Against this background, a new class of stock indices has been established based on the concept of “Fundamental Indexation”. The selection and weighting of index constituents is conducted by means of fundamental criteria like total assets, book value or number of employees. This paper examines the performance of fundamental indices in the German equity market. For this purpose, a backtest of five fundamental indices is conducted over the last 20 years. Furthermore the index returns are analysed under the assumption of an efficient as well as an inefficient market. Index returns in efficient markets are explained by applying the three factor model for stock returns of Fama and French (J Financ Econ 33(1):3-56, 1993). The results show that the outperformance of fundamental indices is partly due to a higher risk exposure, particularly to companies with a low price to book ratio. By relaxing the assumption of market efficiency, a return drag of capitalisation weighted indices can be deduced. Given a mean-reverting movement of prices, a direct connection between market capitalisation and index weighting leads to inferior returns.
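The three-factor attribution mentioned above amounts to an ordinary least squares regression of index excess returns on the market, size (SMB), and value (HML) factors. The sketch below fits such a regression via the normal equations on a tiny synthetic return series with known loadings; the factor values and loadings are invented for illustration and are not the paper's German-market data.

```python
def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def three_factor_betas(returns, mkt, smb, hml):
    """OLS fit of r_t = alpha + b1*MKT_t + b2*SMB_t + b3*HML_t + e_t
    via the normal equations X'X b = X'y."""
    x_rows = [[1.0, mkt[t], smb[t], hml[t]] for t in range(len(returns))]
    xtx = [[sum(r[i] * r[j] for r in x_rows) for j in range(4)] for i in range(4)]
    xty = [sum(x_rows[t][i] * returns[t] for t in range(len(returns))) for i in range(4)]
    return solve(xtx, xty)

# Synthetic monthly excess returns built from known (hypothetical) loadings:
mkt = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, 0.04, -0.03]
smb = [0.01, 0.00, -0.01, 0.02, 0.01, -0.02, 0.00, 0.01]
hml = [0.00, 0.01, 0.02, -0.01, 0.00, 0.01, -0.02, 0.02]
rets = [0.001 + 1.0 * m + 0.2 * s + 0.5 * h for m, s, h in zip(mkt, smb, hml)]
alpha, b_mkt, b_smb, b_hml = three_factor_betas(rets, mkt, smb, hml)
```

A positive loading on HML (low price-to-book exposure) is exactly the kind of risk exposure the paper identifies behind the fundamental indices' outperformance.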
Measurements of the dielectric constants for planetary volatiles
NASA Technical Reports Server (NTRS)
Anicich, Vincent G.; Huntress, Wesley T., Jr.
1987-01-01
The model of Titan at present has the surface temperature, pressure, and composition such that there is a possibility of a binary ethane-methane ocean. Proposed experiments for future Titan flybys include microwave mappers. Very little has been measured of the dielectric properties of the small hydrocarbons at these radar frequencies. An experiment was conducted utilizing a slotted line to measure the dielectric properties of the hydrocarbons, methane to heptane, from room temperature to -180 C. Measurements of the real part of the dielectric constants are accurate to ±0.006, and the imaginary part (the loss tangent) of the liquids studied is less than or equal to 0.001. In order to verify this low loss tangent, the real part of the dielectric constant of hexane at 25 C was studied as a function of frequency over the range of the slotted-line system used. The dielectric constant of hexane at room temperature, between 500 MHz and 3 MHz, is constant within experimental error.
Ultralight porous metals: From fundamentals to applications
NASA Astrophysics Data System (ADS)
Tianjian, Lu
2002-10-01
Over the past few years a number of low-cost metallic foams have been produced and used as the core of sandwich panels and net-shaped parts. The main aim is to develop lightweight structures which are stiff, strong, able to absorb large amounts of energy, and cheap, for application in the transport and construction industries. For example, the firewall between the engine and passenger compartment of an automobile must have adequate mechanical strength, good energy- and sound-absorbing properties, and adequate fire retardance. Metal foams provide all of these features and are under serious consideration for this application by a number of automobile manufacturers (e.g., BMW and Audi). Additional specialized applications for foam-cored sandwich panels range from heat sinks for electronic devices to crash barriers for automobiles, from construction panels in lifts on aircraft carriers to the luggage containers of aircraft, and from sound-proofing walls along railway tracks and highways to acoustic absorbers in lean premixed combustion chambers. But there is a problem: before metallic foams can find widespread application, their basic properties must be measured, and ideally modeled as a function of microstructural details, in order to be included in a design. This work aims at reviewing recent progress and presenting some new results on fundamental research regarding the micromechanical origins of the mechanical, thermal, and acoustic properties of metallic foams.
Fundamentals of materials accounting for nuclear safeguards
Pillay, K.K.S.
1989-04-01
Materials accounting is essential to providing the necessary assurance for verifying the effectiveness of a safeguards system. The use of measurements, analyses, records, and reports to maintain knowledge of the quantities of nuclear material present in a defined area of a facility and the use of physical inventories and materials balances to verify the presence of special nuclear materials are collectively known as materials accounting for nuclear safeguards. This manual, prepared as part of the resource materials for the Safeguards Technology Training Program of the US Department of Energy, addresses fundamental aspects of materials accounting, enriching and complementing them with the first-hand experiences of authors from varied disciplines. The topics range from highly technical subjects to site-specific system designs and policy discussions. This collection of papers is prepared by more than 25 professionals from the nuclear safeguards field. Representing research institutions, industries, and regulatory agencies, the authors create a unique resource for the annual course titled "Materials Accounting for Nuclear Safeguards," which is offered at the Los Alamos National Laboratory.
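The materials balance at the heart of materials accounting reduces to a single bookkeeping identity, often called MUF (material unaccounted for) or inventory difference. A minimal sketch, with purely hypothetical quantities:

```python
def material_unaccounted_for(beginning_inventory, additions, removals, ending_inventory):
    """Materials balance for one balance period (all quantities in the same units):
    MUF = (beginning inventory + additions) - (removals + ending inventory).
    A MUF significantly different from zero flags measurement error,
    unmeasured holdup, or possible loss of material."""
    return (beginning_inventory + additions) - (removals + ending_inventory)

# Illustrative (hypothetical) balance for one material balance area, in kg:
muf = material_unaccounted_for(120.0, 35.5, 30.0, 125.2)
```

In practice each term carries a measurement uncertainty, and the statistical significance of MUF is judged against the propagated variance of the balance.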
Moving-Gradient Furnace With Constant-Temperature Cold Zone
NASA Technical Reports Server (NTRS)
Gernert, Nelson J.; Shaubach, Robert M.
1993-01-01
Outer heat pipe helps in controlling temperature of cold zone of furnace. Part of heat-pipe furnace that includes cold zone surrounded by another heat pipe equipped with heater at one end and water cooling coil at other end. Temperature of heat pipe maintained at desired constant value by controlling water cooling. Serves as constant-temperature heat source or heat sink, as needed, for gradient of temperature as gradient region moved along furnace. Proposed moving-gradient heat-pipe furnace used in terrestrial or spaceborne experiments on directional solidification in growth of crystals.
Solar Constant (SOLCON) Experiment: Ground Support Equipment (GSE) software development
NASA Technical Reports Server (NTRS)
Gibson, M. Alan; Thomas, Susan; Wilson, Robert
1991-01-01
The Solar Constant (SOLCON) Experiment, the objective of which is to determine the solar constant value and its variability, is scheduled for launch as part of the Space Shuttle/Atmospheric Laboratory for Application and Science (ATLAS) Spacelab mission. The Ground Support Equipment (GSE) software was developed to monitor and analyze the SOLCON telemetry data during flight and to test the instrument on the ground. The design and development of the GSE software are discussed. The SOLCON instrument was tested during the Davos International Solar Intercomparison in 1989, and the SOLCON data collected during the tests are analyzed to study the behavior of the instrument.
Remote Sensing of Salinity: The Dielectric Constant of Sea Water
NASA Technical Reports Server (NTRS)
LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.
2011-01-01
Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.
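The cavity perturbation method mentioned above infers the complex dielectric constant from the shift in resonant frequency and quality factor when the sample is inserted. The sketch below uses the textbook small-sample perturbation form with an explicit shape/mode factor; the factor (0.5 in the common small-sample case) and all numerical inputs are assumptions for illustration, since in practice the factor is calibrated against reference liquids.

```python
def dielectric_from_shift(f_empty, f_sample, q_empty, q_sample,
                          v_cavity, v_sample, shape=0.5):
    """Cavity perturbation estimate of complex relative permittivity.

    f_empty, f_sample : resonant frequency without / with the sample (Hz)
    q_empty, q_sample : quality factors without / with the sample
    v_cavity, v_sample: cavity and sample volumes (same units)
    shape             : mode/sample-shape factor; 0.5 is the common
                        small-sample textbook value, calibrated in practice
    """
    ratio = v_cavity / v_sample
    eps_real = 1.0 + shape * (f_empty - f_sample) / f_sample * ratio
    eps_imag = (shape / 2.0) * (1.0 / q_sample - 1.0 / q_empty) * ratio
    return eps_real, eps_imag

# Hypothetical L-band numbers: the lossy sample pulls the resonance down
# and loads the Q strongly.
eps_r, eps_i = dielectric_from_shift(1.4130e9, 1.4015e9, 5000.0, 800.0, 2.0e4, 1.0)
```

The 1% accuracy quoted in the abstract is then a statement about how well the frequency shift, Q factors, and volume ratio can each be measured.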
Implementing fundamental care in clinical practice.
Feo, Rebecca; Conroy, Tiffany; Alderman, Jan; Kitson, Alison
2017-04-05
Modern healthcare environments are becoming increasingly complex. Delivering high-quality fundamental care in these environments is challenging for nurses and has been the focus of recent media, policy, academic and public scrutiny. Much of this attention arises from evidence that fundamental care is being neglected or delivered inadequately. There are an increasing number of standards and approaches to the delivery of fundamental care, which may result in confusion and additional documentation for nurses to complete. This article provides nurses with an approach to reframe their thinking about fundamental care, to ensure they meet patients' care needs and deliver holistic, person-centred care.
Suess, D.; Abert, C.; Bruckner, F.; Windl, R.; Vogler, C.; Breth, L.; Fidler, J.
2015-04-28
The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the fundamental thermal jitter in conventional recording, which is about 0.5-0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10⁻⁴ if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As for single FePt grains, in composite structures an increase in switching probability comes at the cost of an increase in thermal jitter. Structures utilizing first-order phase transitions that break the thermal-jitter/writability dilemma are discussed.
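The thermal stability behind "written-in errors" is commonly estimated with a Néel-Arrhenius activation model: the chance that a just-written grain thermally reverses grows as its anisotropy energy barrier shrinks relative to kT. The sketch below is this generic model only, not the paper's micromagnetic simulation, and the anisotropy values, attempt frequency, and dwell time are hypothetical.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def reversal_probability(keff, volume, temperature, attempt_freq=1e11, dwell=1e-9):
    """Neel-Arrhenius probability that a written grain thermally reverses
    during a dwell time while still hot after writing.

    keff        : effective anisotropy energy density (J/m^3)
    volume      : grain volume (m^3)
    temperature : grain temperature during the dwell (K)
    """
    barrier = keff * volume / (KB * temperature)       # energy barrier in units of kT
    rate = attempt_freq * math.exp(-barrier)           # reversal rate, 1/s
    return 1.0 - math.exp(-rate * dwell)

# Cylinder of 5 nm diameter, 10 nm height, as in the abstract:
V_GRAIN = math.pi * (2.5e-9) ** 2 * 10e-9
p_soft = reversal_probability(2e5, V_GRAIN, 700.0)   # hypothetical reduced anisotropy
p_hard = reversal_probability(6e5, V_GRAIN, 700.0)
```

The strong (exponential) dependence on the barrier is why a modest increase in grain volume or anisotropy can cut written-in errors by orders of magnitude.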
Do the Constants of Nature Couple to Strong Gravitational Fields?
NASA Astrophysics Data System (ADS)
Preval, Simon P.; Barstow, Martin A.; Holberg, Jay B.; Barrow, John; Berengut, Julian; Webb, John; Dougan, Darren; Hu, Jiting
2015-06-01
Recently, white dwarf stars have found a new use in the fundamental physics community. Many prospective theories of the fundamental interactions of Nature allow traditional constants, like the fine structure constant α, to vary in some way. A study by Berengut et al. (2013) used the Fe/Ni V line measurements made by Preval et al. (2013) from the hot DA white dwarf G191-B2B in an attempt to detect any variation in α. It was found that the Fe V lines indicated an increasing α, whereas the Ni V lines indicated a decreasing α. Possible explanations for this could be misidentification of the lines, inaccurate atomic data, or wavelength-dependent distortion in the spectrum. We examine the first two cases by using a high-S/N reference spectrum of the hot sdO BD+28°4211 to calibrate the Fe/Ni V atomic data. With these new data, we re-evaluate the work of Berengut et al. (2013) to derive a new constraint on the variation of α in a gravitational field.
Fundamental Investigations of Airframe Noise
NASA Technical Reports Server (NTRS)
Macaraeg, M. G.
2004-01-01
An extensive numerical and experimental study of airframe noise mechanisms associated with a subsonic high-lift system has been performed at NASA Langley Research Center (LaRC). Investigations involving both steady and unsteady computations and experiments on a small-scale, part-span flap model are presented. Both surface measurements (steady and unsteady pressures, hot films, oil flows, pressure-sensitive paint) and off-surface measurements (5-hole probe, particle-image velocimetry, laser velocimetry, laser light sheet) were taken in the LaRC Quiet Flow Facility (QFF) and several hard-wall tunnels up to flight Reynolds number. Successful microphone array measurements were also taken, providing both acoustic source maps on the model and quantitative spectra. Critical directivity measurements were obtained in the QFF. NASA Langley unstructured and structured Reynolds-Averaged Navier-Stokes codes modeled the flap geometries; excellent comparisons with surface and off-surface experimental data were obtained. Subsequently, these mean-flow calculations were utilized in both linear stability analyses and direct numerical simulations of the flap-edge flow field to calculate unsteady surface pressures and far-field acoustic spectra. Accurate calculations were critical in obtaining not only noise source characteristics but shear layer correction data as well. Techniques utilized in these investigations as well as brief overviews of results are given.
Statistical Modelling of the Soil Dielectric Constant
NASA Astrophysics Data System (ADS)
Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy
2010-05-01
the soil type, and in that way it enables clear comparison with results from soil-type-dependent models. The paper focuses on properly representing the possible range of porosity in commonly occurring soils. This work is done with the aim of implementing the statistical-physical model of the dielectric constant for use in CMEM (Community Microwave Emission Model), applicable to SMOS (Soil Moisture and Ocean Salinity ESA Mission) data. The model's input accepts definitions of soil fractions in common physical measures and, unlike other empirical models, it does not need calibrating. It is not dependent on recognition of the soil by type; instead it offers control of accuracy through proper determination of the soil compound fractions. SMOS employs CMEM fed only by the sand-clay-silt composition. Soil data in common use are split into tens or even hundreds of soil types depending on the region. We hope that determining the three-element sand-clay-silt composition in a few fractions may help resolve the question of the relevance of soil data to the input of CMEM for SMOS. Traditionally employed soil types are now converted to sand-clay-silt compositions, but these hardly cover the effects of other specific properties like porosity. This should bring advantageous effects in validating SMOS observation data, and is taken as the aim in the Cal/Val project 3275, in the campaigns for SVRT (SMOS Validation and Retrieval Team). Acknowledgements: This work was funded in part by the PECS (Programme for European Cooperating States), No. 98084 "SWEX/R - Soil Water and Energy Exchange/Research".
Individual differences in fundamental social motives.
Neel, Rebecca; Kenrick, Douglas T; White, Andrew Edward; Neuberg, Steven L
2016-06-01
Motivation has long been recognized as an important component of how people both differ from, and are similar to, each other. The current research applies the biologically grounded fundamental social motives framework, which assumes that human motivational systems are functionally shaped to manage the major costs and benefits of social life, to understand individual differences in social motives. Using the Fundamental Social Motives Inventory, we explore the relations among the different fundamental social motives of Self-Protection, Disease Avoidance, Affiliation, Status, Mate Seeking, Mate Retention, and Kin Care; the relationships of the fundamental social motives to other individual difference and personality measures including the Big Five personality traits; the extent to which fundamental social motives are linked to recent life experiences; and the extent to which life history variables (e.g., age, sex, childhood environment) predict individual differences in the fundamental social motives. Results suggest that the fundamental social motives are a powerful lens through which to examine individual differences: They are grounded in theory, have explanatory value beyond that of the Big Five personality traits, and vary meaningfully with a number of life history variables. A fundamental social motives approach provides a generative framework for considering the meaning and implications of individual differences in social motivation.
Fundamentals of Physics, Problem Supplement No. 1
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2000-05-01
No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.
Fundamentals of Physics, Student's Solutions Manual
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2000-07-01
No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.
Fundamentals of Physics, 7th Edition
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-05-01
No other book on the market today can match the 30-year success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving. This book offers a unique combination of authoritative content and stimulating applications.
Bernabé-Pineda, Margarita; Ramírez-Silva, María Teresa; Romero-Romo, Mario; González-Vergara, Enrique; Rojas-Hernández, Alberto
2004-04-01
The stability of curcumin (H3Cur) in aqueous media is improved when the systems in which it is present are at high pH values (higher than 11.7), fitting a model describable by pseudo-zero-order kinetics with a rate constant k' for the disappearance of the Cur3- species of 1.39 × 10⁻⁹ M min⁻¹. Three acidity constants were measured for curcumin: pKa3 = 10.51 ± 0.01, corresponding to the equilibrium HCur2- = Cur3- + H+, and pKa2 = 9.88 ± 0.02, corresponding to the equilibrium H2Cur- = HCur2- + H+; these two pKa values were attributed to the phenol hydrogens of curcumin. The third, pKa1 = 8.38 ± 0.04, corresponds to the equilibrium H3Cur = H2Cur- + H+ and is attributed to the acetylacetone-type group. The formation of quinoid structures plays an important role in the tautomeric forms of curcumin in aqueous media, which makes the experimental values differ from the theoretically calculated ones, depending on the conditions adopted in the study.
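The three stepwise pKa values reported above fully determine the pH speciation of curcumin. A minimal sketch, using only the pKa values from the abstract (the species labels follow its H3Cur/H2Cur-/HCur2-/Cur3- notation):

```python
PKA1, PKA2, PKA3 = 8.38, 9.88, 10.51  # stepwise pKa values from the study

def curcumin_fractions(ph):
    """Mole fractions [H3Cur, H2Cur-, HCur2-, Cur3-] at a given pH,
    from the stepwise acidity constants (standard polyprotic speciation)."""
    h = 10.0 ** (-ph)
    k1, k2, k3 = 10.0 ** -PKA1, 10.0 ** -PKA2, 10.0 ** -PKA3
    terms = [h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3]
    total = sum(terms)
    return [t / total for t in terms]

# Below pH ~8 the neutral form dominates; above pH ~11.7, where the abstract
# reports improved stability, curcumin is essentially fully deprotonated:
fractions_neutral = curcumin_fractions(7.0)
fractions_basic = curcumin_fractions(13.0)
```

This makes the abstract's pH threshold concrete: the pseudo-zero-order disappearance of Cur3- is observed precisely in the regime where Cur3- is the dominant species.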
Emergent cosmological constant from colliding electromagnetic waves
Halilsoy, M.; Mazharimousavi, S. Habib; Gurtug, O. E-mail: habib.mazhari@emu.edu.tr
2014-11-01
In this study we advocate the view that the cosmological constant is of electromagnetic (em) origin and can be generated from the collision of em shock waves coupled with gravitational shock waves. The wave profiles that participate in the collision have different amplitudes. It is shown that circular polarization with equal-amplitude waves does not generate a cosmological constant. We also prove that the generation of the cosmological constant is related to linear polarization. The addition of cross polarization generates no cosmological constant. Depending on the values of the wave amplitudes, the generated cosmological constant can be positive or negative. We show additionally that the collision of nonlinear em waves in a particular class of Born-Infeld theory also yields a cosmological constant.
Constant voltage electro-slag remelting control
Schlienger, M.E.
1996-10-22
A system for controlling electrode gap in an electro-slag remelt furnace has a constant regulated voltage and an electrode which is fed into the slag pool at a constant rate. The impedance of the circuit through the slag pool is directly proportional to the gap distance. Because of the constant voltage, the system current changes are inversely proportional to changes in gap. This negative feedback causes the gap to remain stable.
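The negative feedback described in the abstract can be seen in a toy simulation: with impedance proportional to gap and voltage held constant, melt power rises as the gap closes and falls as it opens, pulling the gap toward a stable equilibrium. All numbers below (voltage, impedance per mm, melt efficiency, feed rate) are hypothetical, chosen only to make the mechanism visible.

```python
def simulate_gap(v_volts=24.0, ohms_per_mm=0.01, melt_per_watt=1e-5,
                 feed_mm_s=0.2, gap0_mm=10.0, dt=0.01, t_end=200.0):
    """Toy model of constant-voltage electro-slag remelt gap control.

    Slag impedance Z is proportional to gap, so at constant voltage the
    dissipated power V^2/Z rises when the gap closes -- the negative
    feedback that keeps the gap stable without active control.
    """
    gap = gap0_mm
    for _ in range(int(t_end / dt)):
        impedance = ohms_per_mm * gap          # Z proportional to gap
        power = v_volts ** 2 / impedance       # W dissipated in the slag
        melt_rate = melt_per_watt * power      # mm/s of electrode melted off
        gap += (melt_rate - feed_mm_s) * dt    # melting opens the gap, feeding closes it
    return gap

# Equilibrium where melt rate equals feed rate:
# g* = melt_per_watt * V^2 / (ohms_per_mm * feed_mm_s) = 2.88 mm here
final_gap = simulate_gap()
```

Starting from a 10 mm gap, the simulated gap settles at the equilibrium value without any controller action, which is the point of the constant-voltage scheme.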
Determination of the Avogadro Constant by Counting the Atoms in a Si28 Crystal
NASA Astrophysics Data System (ADS)
Andreas, B.; Azuma, Y.; Bartl, G.; Becker, P.; Bettin, H.; Borys, M.; Busch, I.; Gray, M.; Fuchs, P.; Fujii, K.; Fujimoto, H.; Kessler, E.; Krumrey, M.; Kuetgens, U.; Kuramoto, N.; Mana, G.; Manson, P.; Massa, E.; Mizushima, S.; Nicolaus, A.; Picard, A.; Pramann, A.; Rienitz, O.; Schiel, D.; Valkiers, S.; Waseda, A.
2011-01-01
The Avogadro constant links the atomic and the macroscopic properties of matter. Since the molar Planck constant is well known via the measurement of the Rydberg constant, it is also closely related to the Planck constant. In addition, its accurate determination is of paramount importance for a definition of the kilogram in terms of a fundamental constant. We describe a new approach for its determination by counting the atoms in 1 kg single-crystal spheres, which are highly enriched with the Si28 isotope. It enabled isotope dilution mass spectroscopy to determine the molar mass of the silicon crystal with unprecedented accuracy. The value obtained, NA = 6.02214078(18) × 10²³ mol⁻¹, is the most accurate input datum for a new definition of the kilogram.
Determination of the Avogadro constant by counting the atoms in a 28Si crystal.
Andreas, B; Azuma, Y; Bartl, G; Becker, P; Bettin, H; Borys, M; Busch, I; Gray, M; Fuchs, P; Fujii, K; Fujimoto, H; Kessler, E; Krumrey, M; Kuetgens, U; Kuramoto, N; Mana, G; Manson, P; Massa, E; Mizushima, S; Nicolaus, A; Picard, A; Pramann, A; Rienitz, O; Schiel, D; Valkiers, S; Waseda, A
2011-01-21
The Avogadro constant links the atomic and the macroscopic properties of matter. Since the molar Planck constant is well known via the measurement of the Rydberg constant, it is also closely related to the Planck constant. In addition, its accurate determination is of paramount importance for a definition of the kilogram in terms of a fundamental constant. We describe a new approach for its determination by counting the atoms in 1 kg single-crystal spheres, which are highly enriched with the 28Si isotope. It enabled isotope dilution mass spectroscopy to determine the molar mass of the silicon crystal with unprecedented accuracy. The value obtained, NA = 6.02214078(18) × 10²³ mol⁻¹, is the most accurate input datum for a new definition of the kilogram.
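The "counting the atoms" idea reduces to a one-line relation: a silicon unit cell of edge a contains 8 atoms, so NA = 8M/(ρa³). The sketch below uses textbook values for natural silicon as an illustration; they are not the paper's enriched-28Si input data, whose whole point is measuring M, ρ, and a far more accurately.

```python
def avogadro_from_lattice(molar_mass_g, density_g_cm3, lattice_a_cm):
    """Estimate the Avogadro constant by counting atoms:
    the diamond-cubic silicon unit cell of edge a holds 8 atoms, so
        N_A = 8 * M / (rho * a^3).
    """
    atoms_per_cm3 = 8.0 / lattice_a_cm ** 3
    molar_volume = molar_mass_g / density_g_cm3   # cm^3 per mole
    return atoms_per_cm3 * molar_volume

# Textbook values for natural silicon (illustrative only):
# M = 28.0855 g/mol, rho = 2.3290 g/cm^3, a = 5.43102 angstrom
n_a = avogadro_from_lattice(28.0855, 2.3290, 5.43102e-8)
```

Even with four-digit textbook inputs this reproduces NA to about a part in 10⁴; reaching the paper's 3 × 10⁻⁸ relative uncertainty is what required the enriched spheres and isotope dilution mass spectroscopy.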
Synchrotron infrared spectroscopy of the ν4, ν8, ν10, ν11 and ν14 fundamental bands of thiirane
NASA Astrophysics Data System (ADS)
Evans, Corey J.; Carter, Jason P.; Appadoo, Dominique R. T.; Wong, Andy; McNaughton, Don
2015-10-01
The high-resolution spectrum of thiirane has been recorded using the far-infrared beamline at the Australian Synchrotron facility. Spectra have been recorded between 700 cm⁻¹ and 1200 cm⁻¹, and ro-vibrational transitions associated with four fundamental bands of thiirane have been observed and assigned. The effects of Coriolis coupling were observed in the upper energy levels associated with the ν4 (1024 cm⁻¹) and the ν14 (1050 cm⁻¹) fundamental bands as well as in the ν11 (825 cm⁻¹) and the ν8 (895 cm⁻¹) fundamental bands. The ν10 (945 cm⁻¹) fundamental band was also observed and was found to have no significant perturbations associated with it. For each of the observed bands, rotational and centrifugal distortion constants have been evaluated, while for all but the ν10 fundamental band, Coriolis interaction parameters have been determined for the upper states. The ground-state constants have also been further refined.
Tuning the Spring Constant of Cantilever-free Probe Arrays
NASA Astrophysics Data System (ADS)
Eichelsdoerfer, Daniel J.; Brown, Keith A.; Boya, Radha; Shim, Wooyoung; Mirkin, Chad A.
2013-03-01
The versatility of atomic force microscope (AFM) based techniques such as scanning probe lithography is due in part to the utilization of a cantilever that can be fabricated to match a desired application. In contrast, cantilever-free scanning probe lithography utilizes a low cost array of probes on a compliant backing layer that allows for high throughput nanofabrication but lacks the tailorability afforded by the cantilever in traditional AFM. Here, we present a method to measure and tune the spring constant of probes in a cantilever-free array by adjusting the mechanical properties of the underlying elastomeric layer. Using this technique, we are able to fabricate large-area silicon probe arrays with spring constants that can be tuned in the range from 7 to 150 N/m. This technique offers an advantage in that the spring constant depends linearly on the geometry of the probe, which is in contrast to traditional cantilever-based lithography where the spring constant varies as the cube of the beam width and thickness. To illustrate the benefit of utilizing a probe array with a lower spring constant, we pattern a block copolymer on a delicate 50 nm thick silicon nitride window.
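The cubic scaling contrasted with in the abstract follows from Euler-Bernoulli beam theory for a conventional end-loaded rectangular cantilever, k = Ewt³/(4L³). A short sketch (the dimensions and modulus below are illustrative assumptions, not values from the paper):

```python
def cantilever_spring_constant(E, w, t, L):
    """End-loaded rectangular cantilever: k = E*w*t^3 / (4*L^3).
    Standard Euler-Bernoulli beam result; E = Young's modulus (Pa),
    w = width, t = thickness, L = length (m)."""
    return E * w * t**3 / (4 * L**3)

# Illustrative silicon cantilever (values are assumptions, not the paper's)
E_si = 169e9  # Pa, Young's modulus of Si along <110>
k = cantilever_spring_constant(E_si, w=30e-6, t=2e-6, L=200e-6)
print(f"k = {k:.4f} N/m")

# The cube law: doubling the thickness stiffens the beam 8x, whereas the
# cantilever-free probes above scale only linearly with geometry.
k_double_t = cantilever_spring_constant(E_si, 30e-6, 4e-6, 200e-6)
```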
Temporal variation of the fundamental constants
NASA Astrophysics Data System (ADS)
Landau, S. J.; Vucetich, H.
The temporal variation of the fundamental constants is a problem that has motivated numerous theoretical and experimental studies since Dirac's large numbers hypothesis of 1937. Among the experimental and observational methods for constraining the variation of the fundamental constants, it is important to mention: comparisons between atomic clocks[1], geophysical methods[2][3], analysis of absorption systems in quasars[4][5][6], and bounds from primordial nucleosynthesis[7]. In a recent work[5], a significant variation of the fine-structure constant was reported. Attempts to unify the four fundamental interactions have led to theories with multiple dimensions, such as Kaluza-Klein and superstring theories. These theories provide a natural theoretical framework for studying the temporal variation of the fundamental constants. In turn, a simple model for studying the variation of the fine-structure constant was proposed in [8], starting from very general premises such as covariance, gauge invariance, causality, and time-reversal invariance in electromagnetism. Different versions of the aforementioned theories agree in predicting temporal variations of the fundamental constants but differ in the form of this variation[9][10]. The constraints established experimentally on the variation of the fundamental constants can therefore be an important tool for testing these different theories. In this work, we use the bounds obtained from various experimental techniques to test whether they are consistent with any of the aforementioned theories. In particular, we set bounds on the variation of the free parameters of the different theories, such as the radius of the extra dimensions in Kaluza-Klein-type theories.
Fundamentals of preparative and nonlinear chromatography
Guiochon, Georges A; Felinger, Attila; Katti, Anita; Shirazi, Dean G
2006-02-01
The second edition of Fundamentals of Preparative and Nonlinear Chromatography is devoted to the fundamentals of a process of purification or extraction of chemicals or proteins widely used in the pharmaceutical industry and in preparative chromatography. This process permits the preparation of extremely pure compounds satisfying the requirements of the US Food and Drug Administration. The book describes the fundamentals of thermodynamics, mass transfer kinetics, and flow through porous media that are relevant to chromatography. It presents the models used in chromatography and their solutions, discusses their applications, describes the different processes used, and covers the methods for optimizing the experimental conditions of the process.
Regularizing cosmological singularities by varying physical constants
Dąbrowski, Mariusz P.; Marosek, Konrad E-mail: k.marosek@wmf.univ.szczecin.pl
2013-02-01
Cosmologies with varying physical constants have been claimed to solve standard cosmological problems such as the horizon, flatness, and Λ problems. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. Through specific examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.
Cosmological constant from the emergent gravity perspective
NASA Astrophysics Data System (ADS)
Padmanabhan, T.; Padmanabhan, Hamsa
2014-05-01
Observations indicate that our universe is characterized by a late-time accelerating phase, possibly driven by a cosmological constant Λ, with the dimensionless parameter Λ L_P² ≃ 10⁻¹²², where L_P = √(Għ/c³) is the Planck length. In this review, we describe how the emergent gravity paradigm provides a new insight and a possible solution to the cosmological constant problem. After reviewing the necessary background material, we identify the necessary and sufficient conditions for solving the cosmological constant problem. We show that these conditions are naturally satisfied in the emergent gravity paradigm in which (i) the field equations of gravity are invariant under the addition of a constant to the matter Lagrangian and (ii) the cosmological constant appears as an integration constant in the solution. The numerical value of this integration constant can be related to another dimensionless number (called CosMIn) that counts the number of modes inside a Hubble volume that cross the Hubble radius during the radiation and the matter-dominated epochs of the universe. The emergent gravity paradigm suggests that CosMIn has the numerical value 4π, which, in turn, leads to the correct, observed value of the cosmological constant. Further, the emergent gravity paradigm provides an alternative perspective on cosmology and interprets the expansion of the universe itself as a quest towards holographic equipartition. We discuss the implications of this novel and alternate description of cosmology.
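The quoted dimensionless number can be checked directly from the definitions: with L_P = √(Għ/c³) and an approximate observed Λ ≈ 1.1 × 10⁻⁵² m⁻² (an illustrative value, not one given in the abstract):

```python
import math

# Dimensionless cosmological constant Lambda * L_P^2, with the Planck
# length L_P = sqrt(G*hbar/c^3). Lambda is an approximate observed value.
G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.0546e-34  # J s
c    = 2.9979e8    # m/s
Lam  = 1.1e-52     # m^-2, illustrative observed value

L_P = math.sqrt(G * hbar / c**3)
print(f"L_P = {L_P:.3e} m")                    # ~1.6e-35 m
print(f"Lambda * L_P^2 = {Lam * L_P**2:.1e}")  # ~1e-122 in order of magnitude
```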
Strategic Information Resources Management: Fundamental Practices.
ERIC Educational Resources Information Center
Caudle, Sharon L.
1996-01-01
Discusses six fundamental information resources management (IRM) practices in successful organizations that can improve government service delivery performance. Highlights include directing changes, integrating IRM decision making into a strategic management process, performance management, maintaining an investment philosophy, using business…
Instructor Special Report: RIF (Reading Is FUNdamental)
ERIC Educational Resources Information Center
Instructor, 1976
1976-01-01
At a time when innovative programs of the sixties are quickly falling out of the picture, Reading Is FUNdamental, after ten years and five million free paperbacks, continues to expand and show results. (Editor)
Language Policy and Planning: Fundamental Issues.
ERIC Educational Resources Information Center
Kaplan, Robert B.
1994-01-01
Fundamental issues in language policy and planning are discussed: language death, language survival, language change, language revival, language shift and expansion, language contact and pidginization or creolization, and literacy development. (Contains 21 references.) (LB)
Accounting Fundamentals for Non-Accountants
The purpose of this module is to provide an introduction and overview of accounting fundamentals for non-accountants. The module also covers important topics such as communication, internal controls, documentation and recordkeeping.
Fundamentals of Indoor Air Quality in Buildings
This module provides the fundamentals to understanding indoor air quality. It provides a rudimentary framework for understanding how indoor and outdoor sources of pollution affect the indoor air quality of buildings.
Fundamental reflectivity and electronic structure of NiBr2 and NiCl2 insulators
NASA Astrophysics Data System (ADS)
Pollini, I.; Thomas, J.; Jezequel, G.; Lemonnier, J. C.; Mamy, R.
1983-01-01
The fundamental reflectivity of NiBr2 and NiCl2 has been measured over the energy range 2-11 eV from 300 to 30 K with the use of synchrotron radiation. The imaginary part of the dielectric constant ε₂ has been determined at 30 K by means of the Kramers-Kronig technique. The structure in the complex optical spectra of nickel halides is interpreted in terms of charge-transfer transitions, orbital promotions, excitons, and direct allowed interband transitions at the symmetry points Γ, Z, and F, and along symmetry lines Λ, B, and Γ-L of the Brillouin zone. The energy gap is assigned to Γ₃⁻ → Γ₁⁺ transitions at the zone center, both in NiBr2 (7.90 eV) and NiCl2 (8.70 eV). Finally, the interpretation of the satellite exciton at 6.5 eV in NiBr2 (30 K) is discussed.
Does logic moderate the fundamental attribution error?
Stalder, D R
2000-06-01
The fundamental attribution error was investigated from an individual difference perspective. Mathematicians were compared with nonmathematicians (Exp. 1; n: 84), and undergraduates who scored high on a test of logical reasoning ability were compared with those who scored low (Exp. 2; n: 62). The mathematicians and those participants scoring higher on logic appeared less prone to the fundamental attribution error, primarily using a measure of confidence in attributions.
ERIC Educational Resources Information Center
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although, this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
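For reference, the Gibbs-Helmholtz equation the abstract refers to, and the van 't Hoff form it yields for an equilibrium constant via ΔG° = -RT ln K (standard thermodynamics results, not specific to the article):

```latex
\left(\frac{\partial (\Delta G/T)}{\partial T}\right)_{P} = -\frac{\Delta H}{T^{2}}
\quad\Longleftrightarrow\quad
\left(\frac{\partial (\Delta G/T)}{\partial (1/T)}\right)_{P} = \Delta H ,
\qquad
\frac{d \ln K}{d(1/T)} = -\frac{\Delta H^{\circ}}{R}
```

The second relation is what is used in practice for Henry's law constants and chemical equilibrium constants: plotting ln K against 1/T gives a slope of -ΔH°/R.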
Fundamental Insights into Combustion Instability Predictions in Aerospace Propulsion
NASA Astrophysics Data System (ADS)
Huang, Cheng
in conjunction with a Galerkin procedure to reduce the governing partial differential equation to an ordinary differential equation, which constitutes the ROM. Once the ROM is established, it can be used as a lower-order test-bed to predict detailed results within certain parametric ranges at a fraction of the cost of solving the full governing equations. A detailed assessment of the method is performed in two parts. In part one, a one-dimensional scalar reaction-advection model equation is used for fundamental investigations, which include verification of the POD eigen-basis calculation and of the ROM development procedure. Moreover, certain criteria for ROM development are established: (1) the number of POD modes that must be included to guarantee a stable ROM; (2) the need for the numerical discretization scheme to be consistent between the original CFD and the developed ROM. Furthermore, the predictive capabilities of the resulting ROM are evaluated to test its limits and to validate the value of applying broadband forcing to improve ROM performance. In part two, the exploration is extended to a vector system of equations, using the one-dimensional Euler equation as the model equation. A numerical stability issue is identified during ROM development; its cause is further studied and attributed to the normalization methods implemented to generate coupled POD eigen-bases for vector variables. (Abstract shortened by UMI.)
2011-01-01
The value for the ratio of the mass of the Sun to the mass of Uranus, MS/MU, is taken from Jacobson et al. (1992): The Masses of Uranus and its Major Satellites from Voyager Tracking Data and Earth-based Uranian Satellite Data, Astron. J. 103(6), 2068-2078 (1992).
Fundamental experiments on hydride reorientation in zircaloy
NASA Astrophysics Data System (ADS)
Colas, Kimberly B.
In the current study, an in-situ X-ray diffraction technique using synchrotron radiation was used to follow directly the kinetics of hydride dissolution and precipitation during thermomechanical cycles. This technique was combined with conventional microscopy (optical, SEM and TEM) to gain an overall understanding of the process of hydride reorientation. This part of the study thus emphasized the time-dependent nature of the process, examining large volumes of hydrides in the material. In addition, a micro-diffraction technique was used to study the spatial distribution of hydrides near stress concentrations, emphasizing the spatial variation of hydride characteristics such as strain and morphology. Hydrided samples in the shape of tensile dog-bones were used in the time-dependent part of the study; compact tension specimens were used in the spatially dependent part. The hydride elastic strains, from peak shift, and size and strain broadening were studied as a function of time for precipitating hydrides. The hydrides precipitate in a very compressed state of stress, as measured by the shift in lattice spacing. As precipitation proceeds, the average shift decreases, indicating that the average stress is reduced, likely due to plastic deformation and morphology changes. When nucleation ends, the hydrides follow the thermal contraction of the zirconium matrix. When stress is applied below the threshold stress for reorientation, hydrides first nucleate in a very compressed state similar to that of unstressed hydrides. After the average strain is reduced, similarly to unstressed hydrides, the average hydride strain reaches a constant value during cool-down to room temperature. This could be due to a greater ease of deforming the matrix under the applied far-field strain, which would compensate for the strains due to thermal contraction. Finally, when hydrides reorient, the average hydride strains become tensile during the first precipitation regime and
The second acidic constant of salicylic acid.
Porto, Raffaella; De Tommaso, Gaetano; Furia, Emilia
2005-01-01
The second dissociation constant of salicylic acid (H2L) has been determined, at 25 °C, in NaCl ionic media by UV spectrophotometric measurements. The investigated ionic strength values were 0.16, 0.25, 0.50, 1.0, 2.0 and 3.0 M. The protolysis constants calculated at the different ionic strengths yielded, with the Specific Interaction Theory, the infinite dilution constant, log β₁⁰ = 13.62 ± 0.03, for the equilibrium L²⁻ + H⁺ ⇌ HL⁻. The interaction coefficient between Na⁺ and L²⁻, b(Na⁺, L²⁻) = 0.02 ± 0.07, has also been calculated.
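A sketch of how such a Specific Interaction Theory extrapolation works: subtract the Debye-Hückel term, then fit a straight line in ionic strength whose intercept is the infinite-dilution constant. The data below are synthetic, generated only to mimic the paper's setup (its ionic strengths and final log β₁⁰), and the sign convention for the interaction term is one common choice:

```python
import numpy as np

def debye_hueckel(I):
    """SIT Debye-Hueckel term D = 0.51*sqrt(I)/(1 + 1.5*sqrt(I))."""
    s = np.sqrt(I)
    return 0.51 * s / (1 + 1.5 * s)

# For L2- + H+ <=> HL-: Delta z^2 = z(HL-)^2 - z(L2-)^2 - z(H+)^2 = -4
dz2 = 1 - 4 - 1

# Synthetic log beta values at the paper's ionic strengths (mol/L);
# NOT the actual measurements, just data consistent with the SIT form.
I = np.array([0.16, 0.25, 0.50, 1.0, 2.0, 3.0])
log_beta0_true, d_eps = 13.62, 0.02
log_beta = log_beta0_true + dz2 * debye_hueckel(I) - d_eps * I

# SIT linearization: log_beta - dz2*D(I) = log_beta0 - d_eps*I,
# so a straight-line fit in I recovers the infinite-dilution constant.
slope, intercept = np.polyfit(I, log_beta - dz2 * debye_hueckel(I), 1)
print(f"log beta0 = {intercept:.2f}")   # recovers 13.62
```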
Laser Propulsion and the Constant Momentum Mission
Larson, C. William; Mead, Franklin B. Jr.; Knecht, Sean D.
2004-03-30
We show that perfect propulsion requires a constant momentum mission, as a consequence of Newton's second law. Perfect propulsion occurs when the velocity of the propelled mass in the inertial frame of reference matches the velocity of the propellant jet in the rocket frame of reference. We compare constant momentum to constant specific impulse propulsion, which, for a given mission ΔV, has an optimum specific impulse that maximizes the propelled mass per unit of jet kinetic energy invested. We also report efficiencies of more than 50% for conversion of laser energy into jet kinetic energy by ablation of solids.
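The "perfect propulsion" condition can be seen from the standard propulsive-efficiency expression, which peaks at unity exactly when the vehicle speed equals the exhaust speed (a textbook result sketched here, not the authors' derivation):

```python
def propulsive_efficiency(u, v_e):
    """Fraction of jet kinetic energy converted into vehicle kinetic
    energy for vehicle speed u (inertial frame) and exhaust speed v_e
    (rocket frame): eta = 2*(u/v_e) / (1 + (u/v_e)**2)."""
    r = u / v_e
    return 2 * r / (1 + r**2)

# Efficiency is 100% exactly when u == v_e ("perfect propulsion")
print(propulsive_efficiency(1000.0, 1000.0))  # 1.0
print(propulsive_efficiency(500.0, 1000.0))   # 0.8
```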
Improved Lebesgue constants on the triangle
NASA Astrophysics Data System (ADS)
Heinrichs, Wilhelm
2005-08-01
New sets of points with improved Lebesgue constants on the triangle are calculated. Starting from the Fekete points, a direct minimization of the Lebesgue constant leads to better results. The points and corresponding quadrature weights are given explicitly. It is quite surprising that the optimal points are not symmetric. The points along the boundary of the triangle are the 1D Gauss-Lobatto points. For all degrees, our points yield the smallest Lebesgue constants currently known. Numerical examples are presented which show the improved interpolation properties of our nodes.
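The quantity being minimized is easiest to see in 1D: the Lebesgue constant is the maximum over the domain of the sum of absolute Lagrange basis values. A sketch comparing equispaced and Chebyshev-Gauss-Lobatto nodes on [-1, 1] (the triangle case in the paper is the 2D analogue of this computation):

```python
import numpy as np

def lebesgue_constant(nodes, n_samples=5000):
    """Max over [-1, 1] of the Lebesgue function sum_j |l_j(x)|,
    where l_j are the Lagrange basis polynomials on the given nodes."""
    x = np.linspace(-1.0, 1.0, n_samples)
    leb = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        lj = np.ones_like(x)
        for k, xk in enumerate(nodes):
            if k != j:
                lj *= (x - xk) / (xj - xk)
        leb += np.abs(lj)
    return leb.max()

n = 10
equi = np.linspace(-1, 1, n + 1)              # equispaced nodes
cgl = -np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Gauss-Lobatto

# Equispaced nodes blow up (~30 for n=10); CGL stays small (~2.4)
print(lebesgue_constant(equi), lebesgue_constant(cgl))
```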
The cosmological constant and cold dark matter
NASA Astrophysics Data System (ADS)
Efstathiou, G.; Sutherland, W. J.; Maddox, S. J.
1990-12-01
It is argued here that the success of the cosmological cold dark matter (CDM) model can be retained and the new observations of very large scale cosmological structures can be accommodated in a spatially flat cosmology in which as much as 80 percent of the critical density is provided by a positive cosmological constant. In such a universe, expansion was dominated by CDM until a recent epoch, but is now governed by the cosmological constant. This constant can also account for the lack of fluctuations in the microwave background and the large number of certain kinds of objects found at high redshift.
Challenging fundamental limits in the fabrication of vector vortex waveplates
NASA Astrophysics Data System (ADS)
Hakobyan, R. S.; Tabiryan, N. V.; Serabyn, E.
Vector vortex waveplates (VVWs) are at the heart of vortex coronagraphs aimed at detecting exoplanets close to bright stars. VVWs made of liquid crystal polymers (LCPs) provide structural continuity, the opportunity for high-order singularities, large area, and an inexpensive manufacturing technology. To date, however, the performance of such devices is compromised by imperfections in the singularity area that allow some residual starlight leakage. Reducing the singularity to subwavelength sizes increases the energy of elastic deformations of the LC. As a result, the azimuthally symmetric orientation pattern gives way to 3D deformations that reduce the elastic energy of the LC. The stability of the radial orientation is determined by the elastic constants of the LC, the thickness of the layer, and the boundary conditions. In the current paper, we examine the role of those factors to determine the fundamental limits to which the singularity area can be reduced for LCP VVWs.
Constants and Pseudo-Constants of Coupled Beam Motion in the PEP-II Rings
Decker, F.J.; Colocho, W.S.; Wang, M.H.; Yan, Y.T.; Yocky, G.; /SLAC
2011-11-01
Constants of beam motion serve as cross-checks for analyzing beam diagnostics and the modeling procedure. Pseudo-constants, like the betatron mismatch parameter or the coupling parameter det C, remain constant until certain elements in the beam line change them. This can be used to visually locate undesired changes, pinpointing errors in comparison with the model.
Marshak waves: Constant flux vs constant T-a (slight) paradigm shift
Rosen, M.D.
1994-12-22
We review the basic scaling laws for Marshak waves and point out the differences in results for wall loss, albedo, and Marshak depth when a constant absorbed flux is considered as opposed to a constant absorbed temperature. Comparisons with LASNEX simulations and with data are presented that imply that a constant absorbed flux is a more appropriate boundary condition.
Faculty beliefs on fundamental dimensions of scholarship
NASA Astrophysics Data System (ADS)
Finnegan, Brian
scholarship, the policies, activities, and rewards of institutions must reflect a similar belief on the part of faculty. By understanding faculty beliefs on the fundamental dimensions of scholarship, an important step in building this new culture can be taken.
Measurements of the gravitational constant - why we need new ideas
NASA Astrophysics Data System (ADS)
Schlamminger, Stephan
2016-03-01
In this presentation, I will summarize measurements of the Newtonian constant of gravitation, big G, that have been carried out in the last 30 years. I will describe key techniques that were used by researchers around the world to determine G. Unfortunately, the data set is inconsistent with itself under the assumption that the gravitational constant does not vary in space or time, an assumption that has been tested by other experiments. Currently, several research groups have reported measurements with relative uncertainties below 2 × 10⁻⁵; however, the relative difference between the smallest and largest reported numbers exceeds 5 × 10⁻⁴. It is embarrassing that after over 200 years of measuring the gravitational constant, we do not have a better understanding of the numerical value of this constant. Clearly, we need new ideas to tackle this problem, and now is the time to come forward with them. The National Science Foundation is currently soliciting proposals for an Ideas Lab on measuring big G. In the second part of the presentation, I will introduce the Ideas Lab on big G, hoping to motivate the audience to think about new ways to measure G and to encourage them to apply to participate in the Ideas Lab.
How the cosmological constant affects gravastar formation
Chan, R.; Silva, M.F.A. da; Rocha, P. E-mail: mfasnic@gmail.com
2009-12-01
Here we generalize a previous gravastar model consisting of an internal de Sitter spacetime and a dynamical infinitely thin shell with an equation of state, but now with an external de Sitter-Schwarzschild spacetime. We show explicitly that the final output can be a black hole, a "bounded excursion" stable gravastar, a stable gravastar, or a de Sitter spacetime, depending on the total mass of the system, the cosmological constants, the equation of state of the thin shell, and the initial position of the dynamical shell. We find that the exterior cosmological constant imposes a limit on gravastar formation: the exterior cosmological constant must be smaller than the interior cosmological constant. We also show that, in the particular case where the Schwarzschild mass vanishes, no stable gravastar can be formed, but we still have formation of a black hole.
The Cosmological Constant in Quantum Cosmology
Wu Zhongchao
2008-10-10
Hawking proposed in 1984 that the cosmological constant is probably zero in quantum cosmology. By using the right configuration for the wave function of the universe, a complete proof was found very recently.
The Rate Constant for Fluorescence Quenching
ERIC Educational Resources Information Center
Legenza, Michael W.; Marzzacco, Charles J.
1977-01-01
Describes an experiment that utilizes fluorescence intensity measurements from a Spectronic 20 to determine the rate constant for the fluorescence quenching of various aromatic hydrocarbons by carbon tetrachloride in an ethanol solvent. (MLH)
The Solar Constant: A Take Home Lab
ERIC Educational Resources Information Center
Eaton, B. G.; And Others
1977-01-01
Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)
Dielectric constant of water in the interface
NASA Astrophysics Data System (ADS)
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V.
2016-07-01
We define the dielectric constant (susceptibility) that should enter the Maxwell boundary value problem when applied to microscopic dielectric interfaces polarized by external fields. The dielectric constant (susceptibility) of the interface is defined by exact linear-response equations involving correlations of statistically fluctuating interface polarization and the Coulomb interaction energy of external charges with the dielectric. The theory is applied to the interface between water and spherical solutes of varying size studied by molecular dynamics (MD) simulations. The effective dielectric constant of interfacial water is found to be significantly lower than its bulk value, and it also depends on the solute size. For TIP3P water used in MD simulations, the interface dielectric constant changes from 9 to 4 when the solute radius is increased from ~5 to 18 Å.
Low-Dielectric-Constant Polyimide Fibers
NASA Technical Reports Server (NTRS)
Dorogy, William E., Jr.; Proctor, K. Mason; St. Clair, Anne K.
1994-01-01
In experiments performed at NASA Langley Research Center, low-dielectric-constant polyimide fibers were produced by resin extrusion. These fibers also have high thermal stability and good tensile properties. They are useful in industrial and aerospace applications in which fibers are required to have dielectric constants less than 3, high thermal stability, and tensile properties in the range of those of standard textile fibers. Potential applications include use in printed circuit boards and in aircraft composites.
A Fundamental Equation of State for Ethanol
NASA Astrophysics Data System (ADS)
Schroeder, J. A.; Penoncello, S. G.; Schroeder, J. S.
2014-12-01
The existing fundamental equation for ethanol demonstrates undesirable behavior in several areas and especially in the critical region. In addition, new experimental data have become available in the open literature since the publication of the current correlation. The development of a new fundamental equation for ethanol, in the form of Helmholtz energy as a function of temperature and density, is presented. New, nonlinear fitting techniques, along with the new experimental data, are shown to improve the behavior of the fundamental equation. Ancillary equations are developed, including equations for vapor pressure, saturated liquid density, saturated vapor density, and ideal gas heat capacity. Both the fundamental and ancillary equations are compared to experimental data. The fundamental equation can compute densities to within ±0.2%, heat capacities to within ±1%-2%, and speed of sound to within ±1%. Values of the vapor pressure and saturated vapor densities are represented to within ±1% at temperatures of 300 K and above, while saturated liquid densities are represented to within ±0.3% at temperatures of 200 K and above. The uncertainty of all properties is higher in the critical region and near the triple point. The equation is valid for pressures up to 280 MPa and temperatures from 160 to 650 K.
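For reference, once the Helmholtz energy a(T, ρ) is available, the other thermodynamic properties follow by differentiation (standard identities, not the paper's specific correlation):

```latex
p = \rho^{2}\left(\frac{\partial a}{\partial \rho}\right)_{T},\qquad
s = -\left(\frac{\partial a}{\partial T}\right)_{\rho},\qquad
u = a + Ts,\qquad
h = u + \frac{p}{\rho}
```

This is why a single accurate Helmholtz-energy surface suffices to reproduce densities, heat capacities, speeds of sound, and saturation properties simultaneously.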
Fundamental frequency from classical molecular dynamics.
Yamada, Tomonori; Aida, Misako
2015-02-07
We give a theoretical validation for calculating fundamental frequencies of a molecule from classical molecular dynamics (MD) when its anharmonicity is small enough to be treated by perturbation theory. We specifically give concrete answers to the following questions: (1) What is the appropriate initial condition of classical MD to calculate the fundamental frequency? (2) From that condition, how accurately can we extract fundamental frequencies of a molecule? (3) What is the benefit of using ab initio MD for frequency calculations? Our analytical approaches to those questions are classical and quantum normal form theories. As numerical examples we perform two types of MD to calculate fundamental frequencies of H2O with MP2/aug-cc-pVTZ: one is based on the quartic force field and the other one is direct ab initio MD, where the potential energies and the gradients are calculated on the fly. From those calculations, we show comparisons of the frequencies from MD with the post vibrational self-consistent field calculations, second- and fourth-order perturbation theories, and experiments. We also apply direct ab initio MD to frequency calculations of C-H vibrational modes of tetracene and naphthalene. We conclude that MD can give the same accuracy in fundamental frequency calculation as second-order perturbation theory but the computational cost is lower for large molecules.
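The basic idea of reading a vibrational frequency off a classical trajectory can be illustrated on a 1D harmonic oscillator: integrate with velocity Verlet and locate the peak of the coordinate's power spectrum (a toy sketch in arbitrary units, far simpler than the ab initio MD in the paper):

```python
import numpy as np

# Harmonic oscillator m*x'' = -k*x, exact frequency f = sqrt(k/m)/(2*pi).
m, k = 1.0, (2 * np.pi * 5.0) ** 2   # chosen so f = 5.0 (arbitrary units)
dt, n_steps = 1e-3, 2 ** 14

x, v = 1.0, 0.0
traj = np.empty(n_steps)
for i in range(n_steps):             # velocity Verlet integration
    a = -k * x / m
    x += v * dt + 0.5 * a * dt**2
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt
    traj[i] = x

# The peak of the power spectrum gives the (fundamental) frequency
spec = np.abs(np.fft.rfft(traj))
freqs = np.fft.rfftfreq(n_steps, d=dt)
f_md = freqs[spec.argmax()]
print(f"frequency from MD: {f_md:.2f}")   # ~5.0
```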
Inflation with a constant rate of roll
Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi E-mail: alstar@landau.ac.ru
2015-09-01
We consider an inflationary scenario where the rate of inflaton roll defined by φ̈/(Hφ̇) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of "constant-roll" inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases the even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.
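The defining condition can be written as follows (a common convention; normalizations differ between papers):

```latex
\beta \equiv \frac{\ddot{\phi}}{H\dot{\phi}} = \text{const}
```

Here |β| ≪ 1 reproduces ordinary slow roll, while β = -3 is the ultra-slow-roll case of an exactly flat potential, for which the Klein-Gordon equation reduces to φ̈ + 3Hφ̇ = 0.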
RNA structure and scalar coupling constants
Tinoco, I. Jr.; Cai, Z.; Hines, J.V.; Landry, S.M.; SantaLucia, J. Jr.; Shen, L.X.; Varani, G.
1994-12-01
Signs and magnitudes of scalar coupling constants (spin-spin splittings) comprise a very large body of data that can be used to establish the conformations of RNA molecules. Proton-proton and proton-phosphorus splittings have been used the most, but the availability of ¹³C- and ¹⁵N-labeled molecules allows many more coupling constants to be used for determining conformation. We will systematically consider the torsion angles that characterize a nucleotide unit and the coupling constants that depend on the values of these torsion angles. Karplus-type equations have been established relating many three-bond coupling constants to torsion angles. However, one- and two-bond coupling constants can also depend on conformation. Serianni and coworkers measured carbon-proton coupling constants in ribonucleosides and have calculated their values as a function of conformation. The signs of two-bond couplings can be very useful because it is easier to measure a sign than an accurate magnitude.
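A Karplus-type equation has the generic form ³J(θ) = A cos²θ + B cosθ + C. A sketch with illustrative H-H-type coefficients (the A, B, C below are assumptions for demonstration; real parametrizations depend on the nuclei and substituents):

```python
import numpy as np

def karplus(theta_deg, A=7.0, B=-1.0, C=0.7):
    """Three-bond coupling 3J(theta) = A*cos^2(theta) + B*cos(theta) + C.
    A, B, C are generic illustrative values, not a published
    parametrization for any specific RNA torsion angle."""
    th = np.radians(theta_deg)
    return A * np.cos(th)**2 + B * np.cos(th) + C

# Large couplings for the anti arrangement (180 deg), small near 90 deg,
# which is what makes 3J diagnostic of the torsion angle:
print(karplus(180.0))   # ~8.7 Hz
print(karplus(90.0))    # ~0.7 Hz
```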
Effective Lagrangian Models for gauge theories of fundamental interactions
NASA Astrophysics Data System (ADS)
Sannino, Francesco
The non-abelian gauge theory which describes, in the perturbative regime, the strong interactions is Quantum Chromodynamics (QCD). Quarks and gluons are the fundamental degrees of freedom of the theory. A key feature of the theory (due to quantum corrections) is asymptotic freedom: the strong coupling constant decreases as the energy scale increases and, conversely, grows as the energy scale of interest decreases. The perturbative approach becomes unreliable below a characteristic scale of the theory (Λ). Quarks and gluons are confined into colorless particles called hadrons (pions, protons, ...). The latter are the true physical states of the theory. We need to investigate alternative ways to describe strong interactions, and in general any asymptotically free theory, in the non-perturbative regime. This is the fundamental motivation of the present thesis. Although the underlying gauge theory cannot be easily treated in the non-perturbative regime, we can still use its global symmetries as a guide to build Effective Lagrangian Models. These models are written directly in terms of the colorless physical states of the theory, i.e. hadrons.
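The asymptotic-freedom statement above can be made quantitative with the standard one-loop running coupling of SU(3) with n_f quark flavors, α_s(Q²) = 12π / ((33 − 2n_f) ln(Q²/Λ²)); the Λ value below is a typical ballpark used purely for illustration:

```python
import math

def alpha_s_one_loop(q_gev, n_f=5, lambda_qcd_gev=0.2):
    """One-loop QCD running coupling:

        alpha_s(Q^2) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))

    Lambda plays the role of the characteristic scale below which the
    perturbative expression breaks down (the Lambda of the abstract);
    0.2 GeV is an illustrative choice, not a fitted value.
    """
    return 12.0 * math.pi / (
        (33 - 2 * n_f) * math.log(q_gev ** 2 / lambda_qcd_gev ** 2)
    )

# Asymptotic freedom: the coupling shrinks at high energies and grows
# as Q approaches Lambda, where perturbation theory becomes unreliable.
```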
Effect of speed matching on fundamental diagram of pedestrian flow
NASA Astrophysics Data System (ADS)
Fu, Zhijian; Luo, Lin; Yang, Yue; Zhuang, Yifan; Zhang, Peitong; Yang, Lizhong; Yang, Hongtai; Ma, Jian; Zhu, Kongjin; Li, Yanlai
2016-09-01
Properties of pedestrians may change along their moving path, for example as a result of fatigue or injury, which has not been properly investigated in past research. This paper attempts to study the speed matching effect (a pedestrian constantly adjusts his velocity to the average velocity of his neighbors) and its influence on the density-velocity relationship (a pedestrian adjusts his velocity to the surrounding density), known as the fundamental diagram of pedestrian flow. By means of a cellular automaton, the simulation results fit well with the empirical data, demonstrating the strength of the discrete model for pedestrian dynamics. The results suggest that the system velocity and flow rate increase markedly under a big noise, i.e., a diverse composition of the pedestrian crowd, especially in the region of middle or high density. Because of its temporary effect, speed matching has little influence on the fundamental diagram. Over the entire density range, the relationship between the step length and the average pedestrian velocity is a piecewise function combining two linear functions. The number of conflicts reaches its maximum at a pedestrian density of 2.5 m⁻², and decreases by 5.1% with speed matching.
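The density-velocity relationship (fundamental diagram) discussed above is often illustrated with simple closed forms; the linear Greenshields relation below is a generic textbook sketch with assumed parameter values, not the cellular-automaton model of the paper:

```python
def greenshields_velocity(density, v_free=1.34, rho_max=5.4):
    """Linear Greenshields fundamental diagram: velocity falls linearly
    from the free speed v_free (m/s) to zero at the jam density rho_max
    (pedestrians/m^2). Both parameter values are illustrative."""
    return max(0.0, v_free * (1.0 - density / rho_max))

def flow(density):
    """Specific flow (1/(m*s)) = density * velocity."""
    return density * greenshields_velocity(density)

# Flow vanishes in free flow (density -> 0) and at jam density, and is
# maximal in between -- the characteristic hump of a fundamental diagram.
```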
Traffic dynamics: Its impact on the Macroscopic Fundamental Diagram
NASA Astrophysics Data System (ADS)
Knoop, Victor L.; van Lint, Hans; Hoogendoorn, Serge P.
2015-11-01
Literature shows that, under specific conditions, the Macroscopic Fundamental Diagram (MFD) describes a crisp relationship between the average flow (production) and the average density in an entire network. The limiting condition is that traffic conditions must be homogeneous over the whole network. Recent works describe hysteresis effects: systematic deviations from the MFD as a result of loading and unloading. This article proposes a two-dimensional generalization of the MFD, the so-called Generalized Macroscopic Fundamental Diagram (GMFD), which relates the average flow to both the average density and the (spatial) inhomogeneity of density. The most important contribution is that we show this is a continuous function, of which the MFD is a projection. Using the GMFD, we can describe the mentioned hysteresis patterns in the MFD. The underlying traffic phenomenon explaining the two-dimensional surface described by the GMFD is that congestion concentrates (and subsequently spreads out) around the bottlenecks that oversaturate first. We call this the nucleation effect. Due to this effect, the network flow is not constant for a fixed number of vehicles as predicted by the MFD, but decreases due to local queueing and spillback processes around the congestion "nuclei". During this build-up of congestion, the production hence decreases, which gives the hysteresis effects.
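The three coordinates of the GMFD described above (average flow, average density, spatial inhomogeneity of density) can be sketched as a simple reduction of per-link measurements. This is a schematic version under assumed conventions; the paper's exact averaging (e.g. length-weighting of links) may differ:

```python
import statistics

def gmfd_point(link_flows, link_densities):
    """Reduce per-link measurements to one GMFD point: average flow,
    average density, and spatial inhomogeneity of density (taken here
    as the population standard deviation across links)."""
    avg_flow = statistics.mean(link_flows)
    avg_density = statistics.mean(link_densities)
    inhomogeneity = statistics.pstdev(link_densities)
    return avg_flow, avg_density, inhomogeneity

# A homogeneously loaded network has zero inhomogeneity; the hysteresis
# loops seen in the plain MFD correspond to motion along this third axis.
```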
Redefining Planck Mass: Unlocking the Fundamental Quantum of the Universe
NASA Astrophysics Data System (ADS)
Laubenstein, John
2008-04-01
The large value of the Planck Mass relative to the quantum scale raises unanswered questions as to the source of mass itself. While we wait for experimental verification of the elusive Higgs boson, it may be worth recognizing that the Planck Mass is not the result of rigorous mathematics, but rather is derived from an intuitive manipulation of physical constants. Recent findings reported by IWPD suggest a quantum-scale Planck Mass as small as 10⁻⁷³ kg. At this scale, the Planck Mass joins Planck Length and Time as a truly fundamental quantum entity. This presentation will provide evidence supporting the fundamental quantum nature of a dramatically smaller Planck Mass while discussing the impact of this finding on both the quantum and cosmic scale. A quantum-scale Planck Mass will require an accelerating expansion of the universe at an age of 14.2 billion years. No initial conditions are imposed at the earliest Planck Time of 10⁻⁴⁴ s, allowing the universe to evolve as a background-free field propagating at the speed of light with a local degree of freedom. This model provides the basis for a quantum theory of gravity and provides a conceptual pathway for the unification of GR and QM.
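For contrast with the 10⁻⁷³ kg figure claimed above, the conventional Planck units are the "intuitive manipulation of physical constants" the abstract refers to, built from ħ, c and G:

```python
import math

# CODATA-style values (truncated for illustration)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

planck_mass = math.sqrt(HBAR * C / G)        # ~2.18e-8 kg
planck_length = math.sqrt(HBAR * G / C**3)   # ~1.62e-35 m
planck_time = math.sqrt(HBAR * G / C**5)     # ~5.39e-44 s

# The conventional Planck mass (~2e-8 kg) sits far above particle-physics
# mass scales, which is the puzzle the abstract alludes to.
```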
Human cortical dynamics determined by speech fundamental frequency.
Mäkelä, Anna Mari; Alku, Paavo; Mäkinen, Ville; Valtonen, Jussi; May, Patrick; Tiitinen, Hannu
2002-11-01
Evidence for speech-specific brain processes has been searched for through the manipulation of formant frequencies which mediate phonetic content and which are, in evolutionary terms, relatively "new" aspects of speech. Here we used whole-head magnetoencephalography and advanced stimulus reproduction methodology to examine the contribution of the fundamental frequency F0 and its harmonic integer multiples in cortical processing. The subjects were presented with a vowel, a frequency-matched counterpart of the vowel lacking phonetic content, and a pure tone. The F0 of the stimuli was set at that of a typical male (i.e., 100 Hz), female (200 Hz), or infant (270 Hz) speaker. We found that speech sounds, both with and without phonetic content, elicited the N1m response in human auditory cortex at a constant latency of 120 ms, whereas pure tones matching the speech sounds in frequency, intensity, and duration gave rise to N1m responses whose latency varied between 120 and 160 ms. Thus, it seems that the fundamental frequency F0 and its harmonics determine the temporal dynamics of speech processing in human auditory cortex and that speech specificity arises out of cortical sensitivity to the complex acoustic structure determined by the human sound production apparatus.
Intrinsic fundamental frequency of vowels is moderated by regional dialect
Jacewicz, Ewa; Fox, Robert Allen
2015-01-01
There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352
Sensors, Volume 1, Fundamentals and General Aspects
NASA Astrophysics Data System (ADS)
Grandke, Thomas; Ko, Wen H.
1996-12-01
'Sensors' is the first self-contained series to deal with the whole area of sensors. It describes general aspects, technical and physical fundamentals, construction, function, applications and developments of the various types of sensors. This volume deals with the fundamentals and common principles of sensors and covers the wide areas of principles, technologies, signal processing, and applications. Contents include: Sensor Fundamentals, e.g. Sensor Parameters, Modeling, Design and Packaging; Basic Sensor Technologies, e.g. Thin and Thick Films, Integrated Magnetic Sensors, Optical Fibres and Integrated Optics, Ceramics and Oxides; Sensor Interfaces, e.g. Signal Processing, Multisensor Signal Processing, Smart Sensors, Interface Systems; Sensor Applications, e.g. Automotive: On-board Sensors, Traffic Surveillance and Control, Home Appliances, Environmental Monitoring, etc. This volume is an indispensable reference work and textbook for both specialists and newcomers, researchers and developers.
The Fundamental Neutron Physics Facilities at NIST.
Nico, J S; Arif, M; Dewey, M S; Gentile, T R; Gilliam, D M; Huffman, P R; Jacobson, D L; Thompson, A K
2005-01-01
The program in fundamental neutron physics at the National Institute of Standards and Technology (NIST) began nearly two decades ago. The Neutron Interactions and Dosimetry Group currently maintains four neutron beam lines dedicated to studies of fundamental neutron interactions. The neutrons are provided by the NIST Center for Neutron Research, a national user facility for studies that include condensed matter physics, materials science, nuclear chemistry, and biological science. The beam lines for fundamental physics experiments include a high-intensity polychromatic beam, a 0.496 nm monochromatic beam, a 0.89 nm monochromatic beam, and a neutron interferometer and optics facility. This paper discusses some of the parameters of the beam lines along with brief presentations of some of the experiments performed at the facilities.
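The monochromatic beam wavelengths quoted above translate directly into neutron kinetic energies through the standard de Broglie relation E = h²/(2mλ²), which shows that both NIST beams are cold-neutron beams in the meV range:

```python
H = 6.62607015e-34      # Planck constant, J*s
M_N = 1.67492749e-27    # neutron mass, kg
EV = 1.602176634e-19    # J per eV

def neutron_energy_mev(wavelength_nm):
    """Kinetic energy of a neutron with the given de Broglie wavelength,
    E = h^2 / (2 m lambda^2), returned in meV."""
    lam = wavelength_nm * 1e-9
    return H ** 2 / (2.0 * M_N * lam ** 2) / EV * 1e3

# The two monochromatic beams of the abstract:
e_0496 = neutron_energy_mev(0.496)  # roughly 3.3 meV
e_089 = neutron_energy_mev(0.89)    # roughly 1 meV
```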
Fundamental understanding of matter: an engineering viewpoint
Cullingford, H.S.; Cort, G.E.
1980-01-01
Fundamental understanding of matter is a continuous process that should produce physical data for use by engineers and scientists in their work. Lack of fundamental property data in any engineering endeavor cannot be mitigated by theoretical work that is not confirmed by physical experiments. An engineering viewpoint will be presented to justify the need for understanding of matter. Examples will be given in the energy engineering field to outline the importance of further understanding of material and fluid properties and behavior. Cases will be cited to show the effects of various data bases in energy, mass, and momentum transfer. The status of fundamental data sources will be discussed in terms of data centers, new areas of engineering, and the progress in measurement techniques. Conclusions and recommendations will be outlined to improve the current situation faced by engineers in carrying out their work.
Fundamental Interventions: How Clinicians Can Address the Fundamental Causes of Disease.
Reich, Adam D; Hansen, Helena B; Link, Bruce G
2016-06-01
In order to enhance the "structural competency" of medicine (the capability of clinicians to address social and institutional determinants of their patients' health), physicians need a theoretical lens to see how social conditions influence health and how they might address them. We consider one such theoretical lens, fundamental cause theory, and propose how it might contribute to a more structurally competent medical profession. We first describe fundamental cause theory and how it makes the social causes of disease and health visible. We then outline the sorts of "fundamental interventions" that physicians might make in order to address the fundamental causes.
The efficiency of combustion turbines with constant-pressure combustion
NASA Technical Reports Server (NTRS)
Piening, Werner
1941-01-01
Of the two fundamental cycles employed in combustion turbines, namely, the explosion (or constant-volume) cycle and the constant-pressure cycle, the latter is considered more in detail and its efficiency is derived with the aid of the cycle diagrams for the several cases with adiabatic and isothermal compression and expansion strokes and with and without utilization of the exhaust heat. Account is also taken of the separate efficiencies of the turbine and compressor and of the pressure losses and heat transfer in the piping. The results show that, without utilization of the exhaust heat, the efficiencies for the two cases of adiabatic and isothermal compression differ little, since the reduction in compression work in the isothermal case is offset by the increase in the heat supplied. It may be seen from the curves that it is necessary to attain separate efficiencies of at least 80 percent in order for useful results to be obtained. The curves further show the considerable effect on the efficiency of pressure losses in piping or heat exchangers.
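The constant-pressure cycle with adiabatic compression and expansion analyzed above is known in modern textbooks as the ideal Brayton cycle; its thermal efficiency has a simple closed form in the pressure ratio. This is the standard textbook expression, not necessarily the exact form of Piening's derivation:

```python
def brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal constant-pressure (Brayton) cycle thermal efficiency with
    adiabatic compression and expansion:

        eta = 1 - r^(-(gamma - 1)/gamma)

    Component losses (turbine/compressor efficiencies, piping pressure
    drops) discussed in the report reduce this ideal figure, which is
    why separate component efficiencies of at least ~80% are needed.
    """
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Ideal efficiency rises monotonically with the pressure ratio.
```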
DOE Fundamentals Handbook: Mathematics, Volume 2
Not Available
1992-06-01
The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.
DOE Fundamentals Handbook: Mathematics, Volume 1
Not Available
1992-06-01
The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.
DOE Fundamentals Handbook: Electrical Science, Volume 1
Not Available
1992-06-01
The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.
DOE Fundamentals Handbook: Electrical Science, Volume 2
Not Available
1992-06-01
The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.
Dark Energy: A Crisis for Fundamental Physics
Stubbs, Christopher [Harvard University, Cambridge, Massachusetts, USA]
2016-07-12
Astrophysical observations provide robust evidence that our current picture of fundamental physics is incomplete. The discovery in 1998 that the expansion of the Universe is accelerating (apparently due to gravitational repulsion between regions of empty space!) presents us with a profound challenge, at the interface between gravity and quantum mechanics. This "Dark Energy" problem is arguably the most pressing open question in modern fundamental physics. The first talk will describe why the Dark Energy problem constitutes a crisis, with wide-reaching ramifications. One consequence is that we should probe our understanding of gravity at all accessible scales, and the second talk will present experiments and observations that are exploring this issue.
DOE Fundamentals Handbook: Electrical Science, Volume 4
Not Available
1992-06-01
The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.
Running coupling constant of ten-flavor QCD with the Schroedinger functional method
Hayakawa, M.; Uno, S.; Ishikawa, K.-I.; Osaki, Y.; Takeda, S.; Yamada, N.
2011-04-01
The walking technicolor theory attempts to realize electroweak symmetry breaking as the spontaneous chiral symmetry breakdown caused by gauge dynamics with a slowly varying gauge coupling constant and a large mass anomalous dimension. Many-flavor QCD theories are candidates possessing these features. We focus on the SU(3) gauge theory with ten flavors of massless fermions in the fundamental representation, and compute the gauge coupling constant in the Schroedinger functional scheme. Numerical simulation is performed with an O(a)-unimproved lattice action, and the continuum limit is taken linearly in the lattice spacing. We observe evidence that this theory possesses an infrared fixed point.
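The continuum extrapolation "linear in the lattice spacing" mentioned above can be sketched as a straight-line fit of a measured coupling against a, reading off the intercept at a = 0. The data values below are synthetic, for illustration only:

```python
import numpy as np

# Hypothetical couplings measured at a fixed physical scale for three
# lattice spacings a (numbers are made up for illustration).
a_vals = np.array([0.10, 0.075, 0.05])
g2_vals = np.array([2.45, 2.38, 2.31])

# O(a)-unimproved action => leading discretization error linear in a,
# so fit g^2(a) = g^2(0) + c*a and take the intercept as the continuum
# value.
slope, intercept = np.polyfit(a_vals, g2_vals, 1)
g2_continuum = intercept
```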
Radio and Television Repairer Fundamentals. Student's Manual.
ERIC Educational Resources Information Center
Maul, Chuck
This self-contained student manual on fundamentals of radio and television repair is designed to help trade and industrial students relate work experience on the job to information studied at school. Designed for individualized instruction under the supervision of a coordinator or instructor, the manual has 9 sections, each containing 2 to 10…
Fundamental Concepts Bridging Education and the Brain
ERIC Educational Resources Information Center
Masson, Steve; Foisy, Lorie-Marlène Brault
2014-01-01
Although a number of papers have already discussed the relevance of brain research for education, the fundamental concepts and discoveries connecting education and the brain have not been systematically reviewed yet. In this paper, four of these concepts are presented and evidence concerning each one is reviewed. First, the concept of…
A Fundamental Theorem on Particle Acceleration
Xie, Ming
2003-05-01
A fundamental theorem on particle acceleration is derived from the reciprocity principle of electromagnetism, and a rigorous proof of the theorem is presented. The theorem establishes a relation between acceleration and radiation, which is particularly useful for an insightful understanding of, and practical calculations about, first-order acceleration, in which the energy gain of the accelerated particle is linearly proportional to the accelerating field.
Uncovering Racial Bias in Nursing Fundamentals Textbooks.
ERIC Educational Resources Information Center
Byrne, Michelle M.
2001-01-01
The portrayal of African Americans in nursing fundamentals textbooks was analyzed, resulting in 11 themes in the areas of history, culture, and physical assessment. Few African American leaders were included, and racial bias and stereotyping were apparent. Differences were often discussed using Eurocentric norms, and language tended to minimize…
Biological and cognitive underpinnings of religious fundamentalism.
Zhong, Wanting; Cristofori, Irene; Bulbulia, Joseph; Krueger, Frank; Grafman, Jordan
2017-04-06
Beliefs profoundly affect people's lives, but their cognitive and neural pathways are poorly understood. Although previous research has identified the ventromedial prefrontal cortex (vmPFC) as critical to representing religious beliefs, the means by which vmPFC enables religious belief is uncertain. We hypothesized that the vmPFC represents diverse religious beliefs and that a vmPFC lesion would be associated with religious fundamentalism, or the narrowing of religious beliefs. To test this prediction, we assessed religious adherence with a widely-used religious fundamentalism scale in a large sample of 119 patients with penetrating traumatic brain injury (pTBI). If the vmPFC is crucial to modulating diverse personal religious beliefs, we predicted that pTBI patients with lesions to the vmPFC would exhibit greater fundamentalism, and that this would be modulated by cognitive flexibility and trait openness. Instead, we found that participants with dorsolateral prefrontal cortex (dlPFC) lesions have fundamentalist beliefs similar to patients with vmPFC lesions and that the effect of a dlPFC lesion on fundamentalism was significantly mediated by decreased cognitive flexibility and openness. These findings indicate that cognitive flexibility and openness are necessary for flexible and adaptive religious commitment, and that such diversity of religious thought is dependent on dlPFC functionality.
Fundamental Movement Skill Proficiency amongst Adolescent Youth
ERIC Educational Resources Information Center
O' Brien, Wesley; Belton, Sarahjane; Issartel, Johann
2016-01-01
Background: Literature suggests that physical education programmes ought to provide intense instruction towards basic movement skills needed to enjoy a variety of physical activities. Fundamental movement skills (FMS) are basic observable patterns of behaviour present from childhood to adulthood (e.g. run, skip and kick). Recent evidence indicates…
Mathematical Literacy--It's Become Fundamental
ERIC Educational Resources Information Center
McCrone, Sharon Soucy; Dossey, John A.
2007-01-01
The rising tide of numbers and statistics in daily life signals a need for a fundamental broadening of the concept of literacy: mathematical literacy assuming a coequal role in the curriculum alongside language-based literacy. Mathematical literacy is not about studying higher levels of formal mathematics, but about making math relevant and…
Fundamental Theorems of Algebra for the Perplexes
ERIC Educational Resources Information Center
Poodiak, Robert; LeClair, Kevin
2009-01-01
The fundamental theorem of algebra for the complex numbers states that a polynomial of degree n has n roots, counting multiplicity. This paper explores the "perplex number system" (also called the "hyperbolic number system" and the "spacetime number system"). In this system (which has extra roots of +1 besides the usual ±1 of the…
Course Objectives: Electronic Fundamentals, EL16.
ERIC Educational Resources Information Center
Wilson, David H.
The general objective, recommended text, and specific objectives of a course titled "Electronic Fundamentals," as offered at St. Lawrence College of Applied Arts and Technology, are provided. The general objective of the course is "to acquire an understanding of diodes, transistors, and tubes, and so be able to analyze the operation…
Fundamental problems in provable security and cryptography.
Dent, Alexander W
2006-12-15
This paper examines methods for formally proving the security of cryptographic schemes. We show that, despite many years of active research and dozens of significant results, there are fundamental problems which have yet to be solved. We also present a new approach to one of the more controversial aspects of provable security, the random oracle model.
[Reading Is Fundamental: Pamphlets and Newsletters].
ERIC Educational Resources Information Center
Smithsonian Institution, Washington, DC.
These pamphlets and newsletters are products of the Reading Is Fundamental (RIF) program, which provides free and inexpensive books to children through a variety of community organizations throughout the country. The newsletter appears monthly and contains reports on specific programs, trends in the national program, RIF involvement with other…
Fundamentals of Energy Technology. Energy Technology Series.
ERIC Educational Resources Information Center
Center for Occupational Research and Development, Inc., Waco, TX.
This course in fundamentals of energy technology is one of 16 courses in the Energy Technology Series developed for an Energy Conservation-and-Use Technology curriculum. Intended for use in two-year postsecondary technical institutions to prepare technicians for employment, the courses are also useful in industry for updating employees in…
Solar Energy: Solar System Design Fundamentals.
ERIC Educational Resources Information Center
Knapp, Henry H., III
This module on solar system design fundamentals is one of six in a series intended for use as supplements to currently available materials on solar energy and energy conservation. Together with the recommended texts and references (sources are identified), these modules provide an effective introduction to energy conservation and solar energy…
Fundamental Movement Skills and Autism Spectrum Disorders
ERIC Educational Resources Information Center
Staples, Kerri L.; Reid, Greg
2010-01-01
Delays and deficits may both contribute to atypical development of movement skills by children with ASD. Fundamental movement skills of 25 children with autism spectrum disorders (ASD) (ages 9-12 years) were compared to three typically developing groups using the "Test of Gross Motor Development" ("TGMD-2"). The group matched on chronological age…
Fundamental Movement Skills: An Important Focus
ERIC Educational Resources Information Center
Barnett, Lisa M.; Stodden, David; Cohen, Kristen E.; Smith, Jordan J.; Lubans, David Revalds; Lenoir, Matthieu; Iivonen, Susanna; Miller, Andrew D.; Laukkanen, Arto; Dudley, Dean; Lander, Natalie J.; Brown, Helen; Morgan, Philip J.
2016-01-01
Purpose: Recent international conference presentations have critiqued the promotion of fundamental movement skills (FMS) as a primary pedagogical focus. Presenters have called for a debate about the importance of, and rationale for teaching FMS, and this letter is a response to that call. The authors of this letter are academics who actively…
Fundamental Ideas: Rethinking Computer Science Education.
ERIC Educational Resources Information Center
Schwill, Andreas
1997-01-01
Describes a way to teach computer science based on J.S. Bruner's psychological framework. This educational philosophy has been integrated into two German federal state schools. One way to cope with the rapid developments in computer science is to teach the fundamental ideas, principles, methods, and ways of thinking to K-12 students. (PEN)
Fundamentals of Library Science. Library Science 424.
ERIC Educational Resources Information Center
Foster, Donald L.
An introductory letter, a list of general instructions on how to proceed with a correspondence course, a syllabus, and an examination request form are presented for a correspondence course in the fundamentals of library science offered by the University of New Mexico's Division of Continuing Education and Community Services. The course is a survey…
The Failed Feminist Challenge to "Fundamental Epistemology"
ERIC Educational Resources Information Center
Pinnick, Cassandra L.
2005-01-01
Despite volumes written in the name of the new and fundamental feminist project in philosophy of science, and conclusions drawn on the strength of the hypothesis that the feminist project will boost progress toward cognitive aims associated with science and rationality (and, one might add, policy decisions enacted in the name of these aims), the…
Fundamental and Gradient Differences in Language Development
ERIC Educational Resources Information Center
Herschensohn, Julia
2009-01-01
This article reexamines Bley-Vroman's original (1990) and evolved fundamental difference hypothesis that argues that differences in path and endstate of first language acquisition and adult foreign language learning result from differences in the acquisition procedure (i.e., language faculty and cognitive strategies, respectively). The evolved…
Workshop on Fundamental Science using Pulsed Power
Wootton, Alan
2016-02-20
The project objective was to fund travel to a workshop organized by the Institute for High Energy Density Science (IHEDS) at the University of Texas at Austin. In so doing, the intent was to (a) grow the national academic High Energy Density Science (HEDS) community, (b) expand high-impact, discovery-driven fundamental HEDS, and (c) facilitate user-oriented research.
Fundamentals of Electric Circuits. Laboratory Manual.
ERIC Educational Resources Information Center
Wentworth Inst., Boston, MA.
This laboratory manual consists of three major sections. The first section deals with Direct Current (DC) fundamentals, and is divided into 17 phases leading towards the design and analysis of a DC ammeter and a DC voltmeter. Each phase consists of facts and problems to be learned in the phase, preliminary discussion, laboratory operation…
The equivalent fundamental-mode source
Spriggs, G.D.; Busch, R.D.; Sakurai, Takeshi; Okajima, Shigeaki
1997-02-01
In 1960, Hansen analyzed the problem of assembling fissionable material in the presence of a weak neutron source. Using point kinetics, he defined the weak source condition and analyzed the consequences of delayed initiation during ramp reactivity additions. Although not clearly stated in Hansen's work, the neutron source strength that appears in the weak source condition corresponds to the equivalent fundamental-mode source. In this work, we describe the concept of an equivalent fundamental-mode source and derive a deterministic expression for a factor, g*, that converts any arbitrary source distribution to an equivalent fundamental-mode source. We also demonstrate a simplified method for calculating g* in subcritical systems. Finally, we present a new experimental method that can be employed to measure the equivalent fundamental-mode source strength in a multiplying assembly. We demonstrate the method on the zero-power XIX-1 assembly at the Fast Critical Assembly (FCA) Facility, Japan Atomic Energy Research Institute (JAERI).
Linear stability of the Linet–Tian solution with negative cosmological constant
NASA Astrophysics Data System (ADS)
Gleiser, Reinaldo J.
2017-03-01
In this paper we analyze the linear stability of the Linet–Tian solution with negative cosmological constant. In the limit of vanishing cosmological constant the Linet–Tian metric reduces to a form of the Levi–Civita metric, and it can therefore be considered a generalization of the former to include a cosmological constant. The gravitational instability of the Levi–Civita metric was recently established, and the purpose of this paper is to investigate what changes result from the introduction of a cosmological constant. A fundamental difference brought about by a (negative) cosmological constant is in the structure at infinity. This introduces an added difficulty in attempting to define an evolution for the perturbations, because the constant-time hypersurfaces are not Cauchy surfaces. In this paper we show that, under a large set of boundary conditions that lead to a unique evolution of the perturbations, we always find unstable modes that would generically be present in the evolution of arbitrary initial data, leading to the conclusion that the Linet–Tian spacetimes with negative cosmological constant are linearly unstable under gravitational perturbations.
Second Yamabe constant on Riemannian products
NASA Astrophysics Data System (ADS)
Henry, Guillermo
2017-04-01
Let (M^m, g) be a closed Riemannian manifold (m ≥ 2) of positive scalar curvature and (N^n, h) any closed manifold. We study the asymptotic behaviour of the second Yamabe constant and the second N-Yamabe constant of (M × N, g + th) as t goes to +∞. We obtain that lim_{t→+∞} Y_2(M × N, [g + th]) = 2^(2/(m+n)) Y(M × R^n, [g + g_e]). If n ≥ 2, we show the existence of nodal solutions of the Yamabe equation on (M × N, g + th) (provided t is large enough). When s_g is constant, we prove that lim_{t→+∞} Y_2^N(M × N, g + th) = 2^(2/(m+n)) Y_2^{R^n}(M × R^n, g + g_e). Also we study the second Yamabe invariant and the second N-Yamabe invariant.
Construction and experimental testing of the constant-bandwidth constant-temperature anemometer.
Ligeza, P
2008-09-01
A classical constant-temperature hot-wire anemometer enables the measurement of fast-changing flow velocity fluctuations, although its transmission bandwidth is a function of measured velocity. This may be a source of significant dynamic errors. Incorporation of an adaptive controller into the constant-temperature system results in hot-wire anemometer operating with a constant transmission bandwidth. The construction together with the results of experimental testing of a constant-bandwidth hot-wire anemometer prototype are presented in this article. During the testing, an approximately constant transmission bandwidth of the anemometer was achieved. The constant-bandwidth hot-wire anemometer can be used in measurements of high-frequency variable flows characterized by a wide range of velocity changes.
Atomic weights: no longer constants of nature
Coplen, Tyler B.; Holden, Norman E.
2011-01-01
Many of us were taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.
Cosmological constant in scale-invariant theories
Foot, Robert; Kobakhidze, Archil; Volkas, Raymond R.
2011-10-01
The incorporation of a small cosmological constant within radiatively broken scale-invariant models is discussed. We show that phenomenologically consistent scale-invariant models can be constructed which allow a small positive cosmological constant, provided a certain relation between the particle masses is satisfied. As a result, the mass of the dilaton is generated at two-loop level. Another interesting consequence is that the electroweak symmetry-breaking vacuum in such models is necessarily a metastable ''false'' vacuum which, fortunately, is not expected to decay on cosmological time scales.
TOPICAL REVIEW The cosmological constant puzzle
NASA Astrophysics Data System (ADS)
Bass, Steven D.
2011-04-01
The accelerating expansion of the Universe points to a small positive vacuum energy density and negative vacuum pressure. A strong candidate is the cosmological constant in Einstein's equations of general relativity. Possible contributions are zero-point energies and the condensates associated with spontaneous symmetry breaking. The vacuum energy density extracted from astrophysics is 10^56 times smaller than the value expected from quantum fields and standard model particle physics. Is the vacuum energy density time dependent? We give an introduction to the cosmological constant puzzle and ideas for how to solve it.
Dielectric constants of soils at microwave frequencies
NASA Technical Reports Server (NTRS)
Geiger, F. E.; Williams, D.
1972-01-01
A knowledge of the complex dielectric constant of soils is essential in the interpretation of microwave airborne radiometer data of the earth's surface. Measurements were made at 37 GHz on various soils from the Phoenix, Ariz., area. Extensive data have been obtained for dry soil and soil with water content in the range from 0.6 to 35 percent by dry weight. Measurements were made in a two arm microwave bridge and results were corrected for reflections at the sample interfaces by solution of the parallel dielectric plate problem. The maximum dielectric constants are about a factor of 3 lower than those reported for similar soils at X-band frequencies.
Environmental dependence of masses and coupling constants
Olive, Keith A.; Pospelov, Maxim
2008-02-15
We construct a class of scalar field models coupled to matter that lead to the dependence of masses and coupling constants on the ambient matter density. Such models predict a deviation of couplings measured on the Earth from values determined in low-density astrophysical environments, but do not necessarily require the evolution of coupling constants with the redshift in the recent cosmological past. Additional laboratory and astrophysical tests of δα and δ(m_p/m_e) as functions of the ambient matter density are warranted.
Microfabricated microengine with constant rotation rate
Romero, Louis A.; Dickey, Fred M.
1999-01-01
A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.
Microfabricated microengine with constant rotation rate
Romero, L.A.; Dickey, F.M.
1999-09-21
A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.
Our Universe from the cosmological constant
Barrau, Aurélien; Linsefors, Linda
2014-12-01
The issue of the origin of the Universe and of its contents is addressed in the framework of bouncing cosmologies, as described for example by loop quantum gravity. If the current acceleration is due to a true cosmological constant, this constant is naturally conserved through the bounce and the Universe should also be in a (contracting) de Sitter phase in the remote past. We investigate here the possibility that the de Sitter temperature in the contracting branch fills the Universe with radiation that causes the bounce and the subsequent inflation and reheating. We also consider the possibility that this gives rise to a cyclic model of the Universe and suggest some possible tests.
Degravitation of the cosmological constant in bigravity
NASA Astrophysics Data System (ADS)
Platscher, Moritz; Smirnov, Juri
2017-03-01
In this article the phenomenon of degravitation of the cosmological constant is studied in the framework of bigravity. It is demonstrated that, despite a sizable value of the cosmological constant, its gravitational effect can be only mild. The bigravity framework is chosen for this demonstration as it leads to a consistent, ghost-free theory of massive gravity. We show that degravitation takes place in the limit where the physical graviton is dominantly a gauge-invariant metric combination. We present and discuss several phenomenological consequences expected in this regime.
Porous low dielectric constant materials for microelectronics.
Baklanov, Mikhail R; Maex, Karen
2006-01-15
Materials with a low dielectric constant are required as interlayer dielectrics for the on-chip interconnection of ultra-large-scale integration devices to provide high speed, low dynamic power dissipation and low cross-talk noise. The selection of chemical compounds with low polarizability and the introduction of porosity result in a reduced dielectric constant. Integration of such materials into microelectronic circuits, however, poses a number of challenges, as the materials must meet strict requirements in terms of properties and reliability. These issues are the subject of the present paper.
Atomic Weights No Longer Constants of Nature
Coplen, T.B.; Holden, N.
2011-03-01
Many of us grew up being taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.
Relation of the diffuse reflectance remission function to the fundamental optical parameters.
NASA Technical Reports Server (NTRS)
Simmons, E. L.
1972-01-01
The Kubelka-Munk equations describing the diffuse reflectance of a powdered sample were compared to equations obtained using a uniformly-sized rough-surfaced spherical particle model. The comparison resulted in equations relating the remission function and the Kubelka-Munk constants to the index of refraction, the absorption coefficient, and the average particle diameter of a powdered sample. Published experimental results were used to test the equation relating the remission function to the fundamental optical parameters.
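The remission function itself has a simple closed form. A minimal sketch (the rough-surfaced particle-model corrections discussed in the abstract are not reproduced here) of the Kubelka-Munk relation F(R) = (1 - R)^2 / (2R) = k/s and its inverse:

```python
def remission(r_inf):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R),
    equal to k/s, the ratio of absorption to scattering coefficients."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def reflectance(k_over_s):
    """Invert F(R) = k/s for the diffuse reflectance R (0 < R <= 1)
    of an 'infinitely thick' powder layer."""
    f = k_over_s
    # (1 - R)^2 = 2 R F  =>  R^2 - 2(1 + F) R + 1 = 0; take the root < 1
    return 1.0 + f - (f * f + 2.0 * f) ** 0.5

r = reflectance(0.5)
print(r, remission(r))  # round trip recovers k/s = 0.5
```

The quadratic has two roots; only the one below unity is a physical reflectance.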
Spray Gun With Constant Mixing Ratio
NASA Technical Reports Server (NTRS)
Simpson, William G.
1987-01-01
Conceptual mechanism mounted in handle of spray gun maintains constant ratio between volumetric flow rates in two channels leading to spray head. With mechanism, possible to keep flow ratio near 1:1 (or another desired ratio) over range of temperatures, orifice or channel sizes, or clogging conditions.
Variations of the Solar Constant. [conference
NASA Technical Reports Server (NTRS)
Sofia, S. (Editor)
1981-01-01
The variations in data received from rocket-borne and balloon-borne instruments are discussed. Indirect techniques to measure and monitor the solar constant are presented. Emphasis is placed on the correlation of data from the Solar Maximum Mission and the Nimbus 7 satellites.
Teaching Nanochemistry: Madelung Constants of Nanocrystals
ERIC Educational Resources Information Center
Baker, Mark D.; Baker, A. David
2010-01-01
The Madelung constants for binary ionic nanoparticles are determined. The computational method described here sums the Coulombic interactions of each ion in the particle without the use of partial charges commonly used for bulk materials. The results show size-dependent lattice energies. This is a useful concept in teaching how properties such as…
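The direct Coulomb summation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a rock-salt cube on integer lattice points with alternating unit charges, distances in units of the nearest-neighbour spacing.

```python
import itertools
import math

def madelung_constants(n):
    """Per-ion Madelung constants for an n x n x n rock-salt nanocrystal.
    Ions sit on integer lattice points with charge (-1)^(x+y+z);
    distances are measured in nearest-neighbour spacings."""
    sites = list(itertools.product(range(n), repeat=3))
    charge = {s: (-1) ** sum(s) for s in sites}
    consts = []
    for i in sites:
        # direct Coulomb sum: opposite charges add +1/r, like charges -1/r
        total = -sum(charge[i] * charge[j] / math.dist(i, j)
                     for j in sites if j != i)
        consts.append(total)
    return consts

consts = madelung_constants(4)  # 64-ion cube
print(min(consts), sum(consts) / len(consts), max(consts))
```

For the 2 x 2 x 2 cube every ion sees the same environment and the sum evaluates to 3 - 3/sqrt(2) + 1/sqrt(3), about 1.456; as n grows the average climbs toward the bulk NaCl value of about 1.7476, illustrating the size-dependent lattice energies the article teaches.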
Sensing Position With Approximately Constant Contact Force
NASA Technical Reports Server (NTRS)
Sturdevant, Jay
1996-01-01
Computer-controlled electromechanical system uses number of linear variable-differential transformers (LVDTs) to measure axial positions of selected points on surface of lens, mirror, or other precise optical component with high finish. Pressures applied to pneumatically driven LVDTs adjusted to maintain small, approximately constant contact forces as positions of LVDT tips vary.
A tunable CMOS constant current source
NASA Technical Reports Server (NTRS)
Thelen, D.
1991-01-01
A constant current source has been designed which makes use of on chip electrically erasable memory to adjust the magnitude and temperature coefficient of the output current. The current source includes a voltage reference based on the difference between enhancement and depletion transistor threshold voltages. Accuracy is +/- 3% over the full range of power supply, process variations, and temperature using eight bits for tuning.
Spectral curve fitting of dielectric constants
NASA Astrophysics Data System (ADS)
Ruzi, M.; Ennis, C.; Robertson, E. G.
2017-01-01
Optical constants are important properties governing the response of a material to incident light. It follows that they are often extracted from spectra measured by absorbance, transmittance or reflectance. One convenient method to obtain optical constants is curve fitting. Here, model curves should satisfy the Kramers-Kronig relations, and preferably can be expressed in closed form or be easily calculable. In this study we use the dielectric constants of three different molecular ices in the infrared region to evaluate four different model curves that are generally used for fitting optical constants: (1) the classical damped harmonic oscillator, (2) the Voigt line shape, (3) Fourier series, and (4) the triangular basis. Among these, only the classical damped harmonic oscillator model strictly satisfies the Kramers-Kronig relation. Considering the trade-off between accuracy and speed, Fourier series fitting is the best option when spectral bands are broad, while for narrow peaks the classical damped harmonic oscillator and the triangular basis models are the best choice.
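A minimal sketch of the classical damped harmonic oscillator (Lorentz) model and a fit against a synthetic absorption spectrum. The model form is the standard one; the parameter values and the crude grid-search fit are illustrative assumptions, not the authors' procedure:

```python
def lorentz_eps(omega, eps_inf, strength, omega0, gamma):
    """Classical damped harmonic oscillator (Lorentz) dielectric function.
    This closed form satisfies the Kramers-Kronig relations by construction."""
    return eps_inf + strength * omega0 ** 2 / (omega0 ** 2 - omega ** 2 - 1j * gamma * omega)

# synthetic "measured" absorption spectrum, Im eps(omega), on a frequency grid
grid = [0.02 * i for i in range(1, 200)]
measured = [lorentz_eps(w, 2.0, 0.5, 1.5, 0.1).imag for w in grid]

# crude one-parameter fit: grid-search the damping constant gamma
candidates = []
for g in [0.02 * k for k in range(1, 26)]:
    resid = sum((lorentz_eps(w, 2.0, 0.5, 1.5, g).imag - m) ** 2
                for w, m in zip(grid, measured))
    candidates.append((resid, g))
best_resid, best_gamma = min(candidates)
print("fitted gamma:", best_gamma)  # recovers the seeded value 0.1
```

In practice all oscillator parameters would be fitted simultaneously with a least-squares routine; the grid search here only demonstrates the model evaluation.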
Unified Technical Concepts. Module 12: Time Constants.
ERIC Educational Resources Information Center
Technical Education Research Center, Waco, TX.
This concept module on time constants is one of thirteen modules that provide a flexible, laboratory-based physics instructional package designed to meet the specialized needs of students in two-year, postsecondary technical schools. Each of the thirteen concept modules discusses a single physics concept and how it is applied to each energy…
Damping constant estimation in magnetoresistive readers
Stankiewicz, Andrzej; Hernandez, Stephanie
2015-05-07
The damping constant is a key design parameter in magnetic reader design. Its value can be derived from the bulk or sheet-film ferromagnetic resonance (FMR) line width. However, the dynamics of nanodevices is usually defined by the presence of non-uniform modes. This triggers new damping mechanisms and produces stronger damping than expected from traditional FMR. This work proposes a device-level technique for damping evaluation, based on time-domain analysis of thermally excited stochastic oscillations. The signal is collected using a high-bandwidth oscilloscope, by direct probing of a biased reader. Recorded waveforms may contain different noise signals, but free-layer FMR is usually the dominating one. The autocorrelation function is a reflection of the damped oscillation curve, averaging out stochastic contributions. The damped-oscillator formula is fitted to the autocorrelation data, producing resonance frequency and damping constant values. Restricting the lag range mitigates the impact of other phenomena (e.g., reader instability) on the damping constant. For a micromagnetically modeled reader, the technique proves to be much more accurate than the stochastic FMR line-width approach. Application to actual reader waveforms yields a damping constant of ∼0.03.
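The autocorrelation-based extraction can be illustrated on synthetic data. In the sketch below, all parameters are hypothetical and the AR(2) surrogate for a thermally excited resonance is an assumption standing in for the authors' micromagnetic model; the point is that fitting the damped-oscillator formula to the autocorrelation recovers the seeded damping constant:

```python
import math
import random

random.seed(0)

# thermally excited damped resonance simulated as an AR(2) process whose
# autocorrelation is (approximately) a damped cosine
alpha_true = 0.03        # damping constant to recover
theta = 0.3              # phase advance per sample (omega * dt)
r = math.exp(-alpha_true * theta)
a1, a2 = 2.0 * r * math.cos(theta), -r * r

x = [0.0, 0.0]
for _ in range(60000):
    x.append(a1 * x[-1] + a2 * x[-2] + random.gauss(0.0, 1.0))

def autocorr(sig, lag):
    m = len(sig) - lag
    return sum(sig[i] * sig[i + lag] for i in range(m)) / m

lags = range(60)           # restricted lag range, as in the paper
rho = [autocorr(x, k) for k in lags]
c0 = rho[0]
rho = [v / c0 for v in rho]   # normalize; averaging damps the stochastic drive

# fit the damped-oscillator formula rho_k ~ exp(-a*theta*k) * cos(theta*k)
candidates = []
for a in [0.002 * j for j in range(1, 51)]:
    resid = sum((math.exp(-a * theta * k) * math.cos(theta * k) - rho[k]) ** 2
                for k in lags)
    candidates.append((resid, a))
best_alpha = min(candidates)[1]
print("estimated damping constant:", best_alpha)
```

With this seed the estimate lands near the seeded value of 0.03; residual statistical noise keeps it from being exact.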
The Elastic Constants for Wrought Aluminum Alloys
NASA Technical Reports Server (NTRS)
Templin, R L; Hartmann, E C
1945-01-01
There are several constants which have been devised as numerical representations of the behavior of metals under the action of loadings which stress the metal within the range of elastic action. Some of these constants, such as Young's modulus of elasticity in tension and compression, shearing modulus of elasticity, and Poisson's ratio, are regularly used in engineering calculations. Precise tests and experience indicate that these elastic constants are practically unaffected by many of the factors which influence the other mechanical properties of materials and that a few careful determinations under properly controlled conditions are more useful and reliable than many determinations made under less favorable conditions. It is the purpose of this paper to outline the methods employed by the Aluminum Research Laboratories for the determination of some of these elastic constants, to list the values that have been determined for some of the wrought aluminum alloys, and to indicate the variations in the values that may be expected for some of the commercial products of these alloys.
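For an isotropic material the constants named above are linked by E = 2G(1 + ν), so only two of the three are independent. A minimal check with representative handbook-style numbers (illustrative values, not taken from the paper):

```python
def shear_modulus(youngs, poisson):
    """Isotropic elasticity relation G = E / (2 (1 + nu)) linking Young's
    modulus, the shearing modulus of elasticity and Poisson's ratio."""
    return youngs / (2.0 * (1.0 + poisson))

# illustrative values for a wrought aluminium alloy (assumed, not from the paper)
E, nu = 10.3e6, 0.33     # psi, dimensionless
print(shear_modulus(E, nu))  # about 3.87e6 psi
```

This consistency relation is one reason a few careful determinations suffice: measuring any two constants fixes the third.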
Constant capacitance in nanopores of carbon monoliths.
García-Gómez, Alejandra; Moreno-Fernández, Gelines; Lobato, Belén; Centeno, Teresa A
2015-06-28
The results obtained for binder-free electrodes made of carbon monoliths with narrow micropore size distributions confirm that the specific capacitance in the electrolyte (C2H5)4NBF4/acetonitrile does not depend significantly on the micropore size and support the foregoing constant result of 0.094 ± 0.011 F m^-2.
Textbook Deficiencies: Ambiguities in Chemical Kinetics Rates and Rate Constants
NASA Astrophysics Data System (ADS)
Quisenberry, Keith T.; Tellinghuisen, Joel
2006-03-01
Balanced chemical reactions often have at least some stoichiometry coefficients that are not unity. To avoid ambiguity in defining the kinetics rate for a reaction, the IUPAC has established the convention rate = (1/ν_i)(d[A_i]/dt), relating the reaction rate to the rate of change of concentration of any reactant or product A_i and its stoichiometry number ν_i (negative for reactants, positive for products). The rate is a product of the rate constant k and some function of the concentrations of reactants and products that must be determined experimentally. While most general chemistry textbooks correctly state this convention, most also proceed to ignore it in subsequent development, particularly in the use of integrated rate laws and the definition of the reaction half-life. We recommend that in future editions, authors make it clear that (i) the reaction rate and rate constant cannot be defined unambiguously without explicitly stating the reaction for which they apply and therefore (ii) the relation between the half-life, which is a physical property of the reaction system, and the rate constant depends upon how the reaction is written. The errors have arisen in part because most texts simply state the integrated rate expressions for first- and second-order reactions without deriving them. It is both appropriate and easy to include such derivations in texts oriented toward students intending careers in science, engineering, and medicine.
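The half-life ambiguity can be made concrete numerically. In this sketch (hypothetical rate constant and initial concentration), the reaction 2A → P under the IUPAC convention rate = k[A]^2 gives d[A]/dt = -2k[A]^2, so the half-life is 1/(2k[A]_0) rather than 1/(k[A]_0):

```python
def half_life_numeric(k, a0, stoich=2, dt=1e-5):
    """Euler-integrate d[A]/dt = -stoich * k * [A]^2 until [A] = a0/2.
    With the IUPAC convention rate = (1/nu_i) d[A_i]/dt, writing the
    reaction as 2A -> P makes stoich = 2."""
    a, t = a0, 0.0
    while a > a0 / 2.0:
        a -= stoich * k * a * a * dt
        t += dt
    return t

k, a0 = 0.5, 1.0                 # hypothetical rate constant and [A]0
t_half = half_life_numeric(k, a0)
print(t_half, "vs analytic", 1.0 / (2.0 * k * a0))  # both close to 1.0
```

Rewriting the same chemistry as A → (1/2)P (stoich = 1) doubles the computed half-life for the same k, which is exactly the ambiguity the article warns about.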
When is the growth index constant?
NASA Astrophysics Data System (ADS)
Polarski, David; Starobinsky, Alexei A.; Giacomini, Hector
2016-12-01
The growth index γ is an interesting tool to assess the phenomenology of dark energy (DE) models, in particular of those beyond general relativity (GR). We investigate the possibility for DE models to allow for a constant γ during the entire matter and DE dominated stages. It is shown that if DE is described by quintessence (a scalar field minimally coupled to gravity), this behaviour of γ is excluded either because it would require a transition to a phantom behaviour at some finite moment of time, or, in the case of tracking DE at the matter dominated stage, because the relative matter density Ω_m appears to be too small. An infinite number of solutions, with Ω_m and γ both constant, are found with w_DE = 0, corresponding to Einstein-de Sitter universes. For all modified gravity DE models satisfying G_eff ≥ G, among them the f(R) DE models suggested in the literature, the condition to have a constant w_DE is strongly violated at the present epoch. In contrast, DE tracking dust-like matter deep in the matter era, but with Ω_m < 1, requires G_eff > G, and an example is given using scalar-tensor gravity for a range of admissible values of γ. For constant w_DE inside GR, departure from a quasi-constant value is limited until today. Even a large variation of w_DE may not result in a clear signature in the change of γ. The change however is substantial in the future, and the asymptotic value of γ is found, while its slope with respect to Ω_m (and with respect to z) diverges and tends to −∞.
Fundamental Physics for Probing and Imaging
NASA Astrophysics Data System (ADS)
Allison, Wade
2006-12-01
This book addresses the question 'What is physics for?' Physics has provided many answers for mankind by extending his ability to see. Modern technology has enabled the power of physics to see into objects to be used in archaeology, medicine including therapy, geophysics, forensics and other spheres important to the good of society. The book looks at the fundamental physics of the various methods and how they are used by technology. These methods are magnetic resonance, ionising radiation and sound. By taking a broad view over the whole field it encourages comparisons, but also addresses questions of risk and benefit to society from a fundamental viewpoint. This textbook has developed from a course given to third year students at Oxford and is written so that it can be used coherently as a basis for shortened courses by omitting a number of chapters.
Fundamental implications of intergalactic magnetic field observations
NASA Astrophysics Data System (ADS)
Vachaspati, Tanmay
2017-03-01
Helical intergalactic magnetic fields at the ∼10^-14 G level on ∼10 Mpc length scales are indicated by current gamma ray observations. The existence of magnetic fields in cosmic voids and their nontrivial helicity suggest that they must have originated in the early Universe and thus have implications for the fundamental interactions. We derive the spectrum of the cosmological magnetic field as implied by observations and MHD evolution, yielding order-nanogauss fields on kiloparsec scales and a "large helicity puzzle" that needs to be resolved by the fundamental interactions. The importance of CP violation and a possible crucial role for chiral effects or axions in the early Universe are pointed out.
40 year retrospective of fundamental mechanisms
NASA Astrophysics Data System (ADS)
Soileau, M. J.
2008-10-01
Fundamental mechanisms of laser induced damage (LID) have been one of the most controversial topics during the forty years of the Boulder Damage Symposium (Ref. 1). LID is fundamentally a very nonlinear process, sensitive to a variety of parameters including wavelength, pulse width, spot size, focal conditions, material band gap, thermal-mechanical properties, and component design considerations. The complex interplay of many of these parameters and sample-to-sample materials variations combine to make detailed, first-principles models very problematic at best. The phenomenon of self-focusing, the multiple spatial and temporal mode structure of most lasers, and the fact that samples are 'consumed' in testing complicate experimental results. This paper presents a retrospective of the work presented at this meeting.
DOE fundamentals handbook: Mechanical science. Volume 2
Not Available
1993-01-01
The Mechanical Science Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mechanical components and mechanical science. The handbook includes information on diesel engines, heat exchangers, pumps, valves, and miscellaneous mechanical components. This information will provide personnel with a foundation for understanding the construction and operation of mechanical components that are associated with various DOE nuclear facility operations and maintenance.
Baryogenesis and its implications to fundamental physics
Yoshimura, M.
2008-08-08
In this talk I shall explain some basic concepts of baryogenesis and leptogenesis theory, and a new idea for an experimental method of verifying fundamental ingredients of leptogenesis theory: the Majorana nature and the absolute magnitude of neutrino masses. Both of these are important to the quest for physics beyond the standard theory, and have far-reaching implications irrespective of any particular model of leptogenesis. If this new method works ideally, there is even a further possibility of detecting relic neutrinos.
GN and C Fault Protection Fundamentals
NASA Technical Reports Server (NTRS)
Rasmussen, Robert D.
2008-01-01
This is a companion presentation for a paper by the same name for the same conference. The objective of this paper is to shed some light on the fundamentals of fault tolerant design for GN&C. The common heritage of ideas behind both faulted and normal operation is explored, as is the increasingly indistinct line between these realms in complex missions. Techniques in common practice are then evaluated in this light to suggest a better direction for future efforts.
New Fundamental Station in Ny-Alesund
NASA Technical Reports Server (NTRS)
Langkaas, Line; Dahlen, Terje; Opseth, Per Erik
2010-01-01
The Norwegian Mapping Authority's (NMA) geodetic observatory has been operating in Ny-Alesund since 1994. To adapt to the VLBI2010 standard and extend our activity to also integrate SLR, NMA is in the process of funding a new fundamental station. Handling more intensive observations in real time requires a fiber-optic cable to Ny-Alesund. The Norwegian Mapping Authority is currently applying for project funding of 26 million euros.
Chiral phases of fundamental and adjoint quarks
Natale, A. A.
2016-01-22
We consider a QCD chiral symmetry breaking model where the gap equation contains an effective confining propagator and a dressed gluon propagator with a dynamically generated mass. This model is able to explain the ratios between the chiral transition and deconfinement temperatures in the case of fundamental and adjoint quarks. It also predicts the recovery of the chiral symmetry for a large number of quarks (n_f ≈ 11–13) in agreement with lattice data.
DOE fundamentals handbook: Material science. Volume 1
Not Available
1993-01-01
This handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of the structure and properties of metals. This volume contains two modules: structure of metals (bonding, common lattice types, grain structure/boundaries, polymorphism, alloys, imperfections in metals) and properties of metals (stress, strain, Young's modulus, stress-strain relations, physical properties, working of metals, corrosion, hydrogen embrittlement, tritium/material compatibility).
DOE fundamentals handbook: Material science. Volume 1
Not Available
1993-01-01
The Mechanical Science Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mechanical components and mechanical science. The handbook includes information on diesel engines, heat exchangers, pumps, valves, and miscellaneous mechanical components. This information will provide personnel with a foundation for understanding the construction and operation of mechanical components that are associated with various DOE nuclear facility operations and maintenance.
Chiral phases of fundamental and adjoint quarks
NASA Astrophysics Data System (ADS)
Natale, A. A.
2016-01-01
We consider a QCD chiral symmetry breaking model where the gap equation contains an effective confining propagator and a dressed gluon propagator with a dynamically generated mass. This model is able to explain the ratios between the chiral transition and deconfinement temperatures in the case of fundamental and adjoint quarks. It also predicts the recovery of the chiral symmetry for a large number of quarks (nf ≈ 11 - 13) in agreement with lattice data.
Fundamental plasma emission involving ion sound waves
NASA Technical Reports Server (NTRS)
Cairns, Iver H.
1987-01-01
The theory for fundamental plasma emission by the three-wave processes L + or - S to T (where L, S and T denote Langmuir, ion sound and transverse waves, respectively) is developed. Kinematic constraints on the characteristics and growth lengths of waves participating in the wave processes are identified. In addition the rates, path-integrated wave temperatures, and limits on the brightness temperature of the radiation are derived.
Fundamentals of Physics, Extended 7th Edition
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-05-01
No other book on the market today can match the 30-year success of Halliday, Resnick and Walker's Fundamentals of Physics! Fundamentals of Physics, 7th Edition and the Extended Version, 7th Edition offer a solid understanding of fundamental physics concepts, helping readers apply this conceptual understanding to quantitative problem solving, in a breezy, easy-to-understand style. A unique combination of authoritative content and stimulating applications.
* Numerous improvements in the text, based on feedback from the many users of the sixth edition (both instructors and students)
* Several thousand end-of-chapter problems have been rewritten to streamline both the presentations and answers
* 'Chapter Puzzlers' open each chapter with an intriguing application or question that is explained or answered in the chapter
* Problem-solving tactics are provided to help beginning Physics students solve problems and avoid common errors
* The first section in every chapter introduces the subject of the chapter by asking and answering, "What is Physics?" as the question pertains to the chapter
* Numerous supplements available to aid teachers and students
The extended edition provides coverage of developments in Physics in the last 100 years, including: Einstein and Relativity, Bohr and others and Quantum Theory, and the more recent theoretical developments like String Theory.