Fundamental Physical Constants
National Institute of Standards and Technology Data Gateway
SRD 121 CODATA Fundamental Physical Constants (Web, free access). This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants; the International System of Units (SI), which is the modern metric system; and expressing the uncertainty of measurement results.
Are Fundamental Constants Really Constant?
ERIC Educational Resources Information Center
Swetman, T. P.
1972-01-01
Discusses Dirac's classical conclusions that the values of e², M, and m are constants and that the quantity G decreases with time, which evoked considerable interest among researchers, and traces the historical development by which further experimental evidence indicates that both e and G are constant values. (PS)
Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe
2011-02-15
We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilaton-like scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not exhibit a variation in α relative to the terrestrial value, depending on our position relative to the wall. Such a wall could resolve the contradiction between claims of a variation of α based on Keck/HIRES data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants.
Quantum electrodynamics and fundamental constants
NASA Astrophysics Data System (ADS)
Wundt, Benedikt Johannes Wilhelm
The unprecedented precision achieved both in experimental measurements and in the theoretical description of atomic bound states makes them an ideal study object for fundamental physics and the determination of fundamental constants. This requires a careful study of the effects of quantum electrodynamics (QED) on the interaction between the electron and the nucleus. The two theoretical approaches for the evaluation of QED corrections are presented and discussed. Because of the presence of two energy scales, from the binding potential and from the radiation field, an overlapping parameter has to be used in both approaches in order to separate the energy scales. The different choices for the overlapping parameter in the two methods are further illustrated in a model example. With the nonrelativistic theory, relativistic corrections of order (Zα)² to the two-photon decay rate of ionic states are calculated, as well as the leading radiative corrections of order α(Zα)² ln[(Zα)⁻²]. It is shown that the correction is gauge-invariant under a "hybrid" gauge transformation between Coulomb and Yennie gauge. Furthermore, QED corrections for Rydberg states in one-electron ions are investigated. The smallness of the corrections and the absence of nuclear-size corrections enable very accurate theoretical predictions. By measuring transition frequencies and comparing them to the theoretical predictions, QED theory can be tested more precisely. In turn, this could yield a more accurate value for the Rydberg constant. Using a transition in a nucleus with a well-determined mass as a reference, comparison with transitions in other nuclei can even allow nuclear masses to be determined. Finally, in order to avoid an additional uncertainty in nuclei with nonzero nuclear spin, QED self-energy corrections to the hyperfine structure up to order α(Zα)² ΔE_HFS are determined for highly excited Rydberg states.
Sankaran, Ramanan; Mason, Scott D.; Chen, Jacqueline H.; Hawkes, Evatt R.; Im, Hong G.
2005-01-01
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.
New Quasar Studies Keep Fundamental Physical Constant Constant
NASA Astrophysics Data System (ADS)
2004-03-01
fundamental constant at play here, alpha. However, the observed distribution of the elements is consistent with calculations assuming that the value of alpha at that time was precisely the same as the value today. Over the 2 billion years, the change of alpha therefore has to be smaller than about 2 parts per 100 million. If present at all, this is a rather small change indeed. But what about changes much earlier in the history of the Universe? To measure this we must find means to probe still further into the past. And this is where astronomy can help. Because, even though astronomers can't generally do experiments, the Universe itself is a huge atomic physics laboratory. By studying very remote objects, astronomers can look back over a long time span. In this way it becomes possible to test the values of the physical constants when the Universe had only 25% of its present age, that is, about 10,000 million years ago. Very far beacons To do so, astronomers rely on spectroscopy - the measurement of the properties of light emitted or absorbed by matter. When the light from a flame is observed through a prism, a rainbow is visible. When salt is sprinkled on the flame, distinct yellow lines are superimposed on the usual colours of the rainbow, so-called emission lines. Putting a gas cell between the flame and the prism, one instead sees dark lines superimposed on the rainbow: these are absorption lines. The wavelengths of these emission and absorption lines are directly related to the energy levels of the atoms in the salt or in the gas. Spectroscopy thus allows us to study atomic structure. The fine structure of atoms can be observed spectroscopically as the splitting of certain energy levels in those atoms. So if alpha were to change over time, the emission and absorption spectra of these atoms would change as well.
One way to look for any changes in the value of alpha over the history of the Universe is therefore to measure the spectra of distant quasars, and compare the wavelengths of
Man's Size in Terms of Fundamental Constants.
ERIC Educational Resources Information Center
Press, William H.
1980-01-01
Reviews calculations that derive an order-of-magnitude expression for the size of man in terms of fundamental constants, assuming that man satisfies these three properties: he is made of complicated molecules; he requires an atmosphere which is not hydrogen and helium; and he is as large as possible. (CS)
PREFACE: Fundamental Constants in Physics and Metrology
NASA Astrophysics Data System (ADS)
Klose, Volkmar; Kramer, Bernhard
1986-01-01
This volume contains the papers presented at the 70th PTB Seminar, the second on the subject "Fundamental Constants in Physics and Metrology", which was held at the Physikalisch-Technische Bundesanstalt in Braunschweig from October 21 to 22, 1985. About 100 participants from the universities and various research institutes of the Federal Republic of Germany attended the meeting. Besides a number of review lectures on various broader subjects, there was a poster session with a variety of topical contributed papers, ranging from the theory of the quantum Hall effect to reports on the status of the metrological experiments at the PTB. In addition, the participants were offered the possibility to visit the PTB laboratories during the course of the seminar. During the preparation of the meeting we noticed that most of the general subjects to be discussed in the lectures are of great importance in connection with metrological experiments and should be made accessible to the scientific community. This eventually resulted in the idea of publishing the papers in a regular journal. We are grateful to the editor of Metrologia for providing this opportunity. We have included quite a number of papers from basic physical research. For example, certain aspects of high-energy physics and quantum optics, as well as the many-faceted role of Sommerfeld's fine-structure constant, are covered. We think that questions such as "What are the intrinsic fundamental parameters of nature?" or "What are we doing when we perform an experiment?" can shed new light on the art of metrology and potentially lead to new ideas. This appears to be especially necessary when we notice the increasing importance of the fundamental constants and macroscopic quantum effects for the definition and the realization of the physical units. In some cases we have reached a point where the limitations of our knowledge of a fundamental constant and
The fundamental and universal nature of Boltzmann's constant
Biedenharn, L.C.; Solem, J.C.
1996-07-01
The nature of Boltzmann's constant is very unclear in the physics literature. In the first part of this paper, on general considerations, the authors examine this situation in detail and demonstrate the conclusion that Boltzmann's constant is indeed both fundamental and universal. As a consequence of their development they find an important implication of this work for the problem of the entropy of information. In the second part they discuss Szilard's famous construction, showing in detail how his result is incompatible with the demonstrations in both parts 1 and 2.
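The link the abstract draws between Boltzmann's constant and the entropy of information can be made concrete with the bound at the heart of Szilard's engine: one bit of information corresponds to at most k_B·T·ln 2 of extractable work. A minimal sketch (the 300 K temperature is just an illustrative choice):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant (exact in the 2019 SI), J/K

def szilard_work_per_bit(temperature_k: float) -> float:
    """Maximum work extractable (equivalently, minimum heat dissipated)
    per bit of information: k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

w = szilard_work_per_bit(300.0)  # room temperature, illustrative
print(f"{w:.3e} J per bit")      # ~2.9e-21 J per bit
```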
Search for a Variation of Fundamental Constants
NASA Astrophysics Data System (ADS)
Ubachs, W.
2013-06-01
Since the days of Dirac, scientists have speculated about the possibility that the laws of nature, and the fundamental constants appearing in those laws, are not rock-solid and eternal but may be subject to change in time or space. Such a scenario of evolving constants might provide an answer to the deepest puzzle of contemporary science, namely why the conditions in our local Universe allow for extreme complexity: the fine-tuning problem. In the past decade it has been established that spectral lines of atoms and molecules, which can currently be measured at ever-higher accuracies, form an ideal test ground for probing drifting constants. This has brought the subject from the realm of metaphysics to that of experimental science. In particular, the spectra of molecules are sensitive probes of a variation of the proton-electron mass ratio μ, either on a cosmological time scale or on a laboratory time scale. A comparison can be made between spectra of molecular hydrogen observed in the laboratory and at high redshift (z=2-3), using the Very Large Telescope (Paranal, Chile) and the Keck telescope (Hawaii). This puts a constraint on a varying mass ratio Δμ/μ at the 10⁻⁵ level. The optical work can also be extended to include CO molecules. Furthermore, a novel direction will be discussed: it was discovered that molecules exhibiting hindered internal rotation have spectral lines in the radio spectrum that are extremely sensitive to a varying proton-electron mass ratio. Such lines in the spectrum of methanol were recently observed with the radio telescope in Effelsberg (Germany). F. van Weerdenburg, M.T. Murphy, A.L. Malec, L. Kaper, W. Ubachs, Phys. Rev. Lett. 106, 180802 (2011). A. Malec, R. Buning, M.T. Murphy, N. Milutinovic, S.L. Ellison, J.X. Prochaska, L. Kaper, J. Tumlinson, R.F. Carswell, W. Ubachs, Mon. Not. Roy. Astron. Soc. 403, 1541 (2010). E.J. Salumbides, M.L. Niu, J. Bagdonaite, N. de Oliveira, D. Joyeux, L. Nahon, W. Ubachs, Phys. Rev. A 86, 022510
Spatial and temporal variations of fundamental constants
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Agafonova, I. I.; Molaro, P.; Reimers, D.
2010-11-01
Spatial and temporal variations in the electron-to-proton mass ratio, μ, and in the fine-structure constant, α, are not present in the Standard Model of particle physics, but they arise quite naturally in grand unification theories, in multidimensional theories, and in general whenever a coupling of light scalar fields to baryonic matter is considered. The light scalar fields are usually attributed to a negative-pressure substance permeating the entire visible Universe and known as dark energy. This substance is thought to be responsible for the cosmic acceleration at low redshifts, z < 1. A strong dependence of μ and α on the ambient matter density is predicted by chameleon-like scalar field models. Calculations of atomic and molecular spectra show that different transitions have different sensitivities to changes in fundamental constants. Thus, by measuring the relative line positions, ΔV, between such transitions one can probe the hypothetical variability of physical constants. In particular, interstellar molecular clouds can be used to test the matter-density dependence of μ, since the gas density in these clouds is ~15 orders of magnitude lower than in the terrestrial environment. We use the best-quality radio spectra of the inversion transition of NH3 (J,K)=(1,1) and rotational transitions of other molecules to estimate the radial velocity offsets, ΔV ≡ Vrot - Vinv. The obtained value of ΔV shows a statistically significant positive shift of 23±4stat±3sys m s⁻¹ (1σ). Interpreted in terms of a variation of the electron-to-proton mass ratio, this gives Δμ/μ = (22±4stat±3sys)×10⁻⁹. A strong constraint on the variation of the quantity F = α²/μ in the Milky Way is found from a comparison of the fine-structure transition J=1-0 in atomic carbon C I with the low-J rotational lines of carbon monoxide ¹³CO arising in interstellar molecular clouds: |ΔF/F| < 3×10⁻⁷. This yields |Δα/α| < 1.5×10⁻⁷ at z = 0. Since extragalactic absorbers have gas densities
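The conversion behind a quoted Δμ/μ figure of this kind can be sketched under a standard assumption of the ammonia method: the velocity offset satisfies ΔV = ΔK·(Δμ/μ)·c, where ΔK is the difference in sensitivity coefficients between the NH3 inversion transition (K ≈ 4.46, an assumed literature value, not taken from this paper) and rotational transitions (K = 1):

```python
# Minimal sketch of the ammonia-method conversion; the sensitivity
# coefficients below are assumed values for illustration.
C = 299_792_458.0   # speed of light, m/s
K_INV = 4.46        # assumed sensitivity coefficient of the NH3 inversion line
K_ROT = 1.0         # sensitivity coefficient of rotational lines
DELTA_K = K_INV - K_ROT

def delta_mu_over_mu(delta_v_ms: float) -> float:
    """Invert DeltaV = DeltaK * (Dmu/mu) * c for Dmu/mu."""
    return delta_v_ms / (DELTA_K * C)

print(f"{delta_mu_over_mu(23.0):.1e}")  # ~2.2e-08, i.e. the quoted ~22e-9
```

With ΔV = 23 m/s this reproduces the order of the quoted Δμ/μ.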
Fundamental Approach to the Cosmological Constant Issue
NASA Astrophysics Data System (ADS)
Carmeli, Moshe
We use a Riemannian four-dimensional presentation for gravitation in which the coordinates are distances and velocity rather than the traditional space and time. We solve the field equations and show that there are three possibilities for the Universe to expand. The theory describes the Universe as having a three-phase evolution: a decelerating expansion, followed by a constant expansion, and then an accelerating expansion, and it predicts that the Universe is now in the latter phase. It is shown, assuming Ωm = 0.245, that the transition from a decelerating to an accelerating expansion occurred 8.5 Gyr ago, at which time the cosmic radiation temperature was 146 K. Recent observations show that the Universe's expansion is accelerating, and our theory confirms these experimental results. The theory also predicts that there is now a positive pressure in the Universe. Although the theory has no cosmological constant, we extract from it its equivalent and show that Λ = 1.934 × 10⁻³⁵ s⁻². This value of Λ is in excellent agreement with measurements. It is also shown that the three-dimensional space of the Universe is Euclidean, as the Boomerang experiment shows.
Fundamental constants: The teamwork of precision
NASA Astrophysics Data System (ADS)
Myers, Edmund G.
2014-02-01
A new value for the atomic mass of the electron is a link in a chain of measurements that will enable a test of the standard model of particle physics with better than part-per-trillion precision. See Letter p.467
The Cosmology of Extra Dimensions and Varying Fundamental Constants
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.
2003-03-01
The workshop on the cosmology of extra dimensions and varying fundamental constants was part of JENAM 2002, held in Porto in September 2002. It was the first major international workshop devoted to this topic. It brought together string theorists, particle physicists, theoretical and observational cosmologists, relativists and observational astrophysicists. The overall motivation for the workshop was to discuss the current theoretical motivations for the existence of additional space-time dimensions, and to confront these expectations with existing or upcoming observational and experimental tests. The interaction between specialists in different areas was quite fruitful, and a number of outstanding issues were identified which are likely to become the main paths of research to be explored in this area in the coming years. Link: http://www.wkap.nl/prod/b/1-4020-1138-5
Systematic harmonic power laws inter-relating multiple fundamental constants
NASA Astrophysics Data System (ADS)
Chakeres, Donald; Buckhanan, Wayne; Andrianarijaona, Vola
2017-01-01
Power laws and harmonic systems are ubiquitous in physics. We hypothesize that 2, π, the electron, the Bohr radius, the Rydberg constant, the neutron, the fine structure constant, the Higgs boson, the top quark, the kaons, the pions, the muon, the tau, the W, and the Z, when scaled in a common single unit, are all inter-related by systematic harmonic power laws. This implies that if the power law is known it is possible to derive a fundamental constant's scale in the absence of any direct experimental data for that constant. This is true for the case of the hydrogen constants. We created a power-law search engine, a computer program that randomly generated possible positive or negative powers, searching for cases in which the product of a logical group of constants equals 1, confirming that they are physically valid. For 2, π, and the hydrogen constants the search engine found Planck's constant, Coulomb's energy law, and the kinetic energy law. The product of ratios, each defined by two constants, was the standard general format. The search engine found systematic resonant power laws based on partial harmonic fraction powers of the neutron for all of the constants, with products near 1 within their known experimental precision, when utilized with appropriate hydrogen constants. We conclude that multiple fundamental constants are inter-related within a harmonic power law system.
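The search-engine idea can be illustrated with a toy brute-force version. Instead of random generation, this sketch scans small integer exponents for a product of constant ratios equal to 1; the particular constants and the known identity it rediscovers (hc·R∞ = α²·mₑc²/2) are illustrative choices, not the authors' actual code:

```python
import math
from itertools import product

# Toy brute-force version of the search-engine idea: scan small integer
# exponents (p, q, s) for products of constants equal to 1. The constants
# (CODATA-style values) and the identity found are illustrative choices.
ALPHA = 7.2973525693e-3    # fine-structure constant (dimensionless)
MEC2 = 8.1871057769e-14    # electron rest energy m_e c^2, J
RYD_E = 2.1798723611e-18   # Rydberg energy h*c*R_inf, J

def search(tol=1e-6):
    """Return exponent triples with ALPHA**p * (MEC2/RYD_E)**q * 2**s ~ 1."""
    ratio = MEC2 / RYD_E
    hits = []
    for p, q, s in product(range(-3, 4), repeat=3):
        if (p, q, s) == (0, 0, 0):
            continue  # skip the trivial empty product
        value = ALPHA ** p * ratio ** q * 2.0 ** s
        if abs(math.log(value)) < tol:  # |ln value| small <=> value ~ 1
            hits.append((p, q, s))
    return hits

print(search())  # finds (2, 1, -1), i.e. alpha^2 * (m_e c^2 / Ryd) / 2 = 1
```

The hit (2, 1, -1) and its inverse (-2, -1, 1) recover the textbook relation between the Rydberg energy, the fine-structure constant, and the electron rest energy.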
Quantum electrodynamics, high-resolution spectroscopy and fundamental constants
NASA Astrophysics Data System (ADS)
Karshenboim, Savely G.; Ivanov, Vladimir G.
2017-01-01
Recent progress in high-resolution spectroscopy has delivered a variety of accurate optical results, which can be used for the determination of the atomic fundamental constants and for constraining their possible time variation. We present a brief overview of the results, discussing in particular the determination of the Rydberg constant, the relative atomic weights of the electron and proton, their mass ratio, and the fine structure constant. Many individual results on those constants are obtained with the use of quantum electrodynamics, and we discuss which sectors of QED are involved. We derive constraints on a possible time variation of the fine structure constant and of me/mp.
Numerical values of fundamental constants and the anthropocentric principle
NASA Astrophysics Data System (ADS)
Novikov, I.; Polnarev, A.; Rozental, I.
The numerical values of fundamental physical constants are analyzed, and it is pointed out that the existence of complex structure and life would be impossible in universe models with constants significantly different in value from those observed. The problem of a search for universe models with values of the physical constants very different from those characteristic of our universe is formulated, and an example of such a search is given.
Revising your world-view of the fundamental constants
NASA Astrophysics Data System (ADS)
Ralston, John P.
2013-10-01
"Fundamental constants" are thought to be discoveries about Nature that are fixed and eternal, and not dependent on theory. Actually constants have no definition outside the theory that uses them. For a century units and constants have been based on the physics of the previous millennium. The constants of physics changed radically with quantum mechanics and modern theory, but their use and interpretation was unfortunately locked in early. By critically re-examining the actual structure of the present system in a new light, we find that obsolete concepts of Newtonian physics impede the understanding and use of quantum theory. Confronting the difference finds that Planck's constant cannot be observed in quantum theory, and is entirely a construct of human history and convention. A cascade of seeming paradoxes and contradictions occurs when Planck's constant is eliminated, yet the end result is a simpler and cleaner vision of what quantum mechanics and quantum field theory really involve. By eliminating redundant holdovers the number and nature of fundamental constants is revised. By avoiding the Newtonian conception of mass and associated experimental errors the electron mass is determined with a relative error 67 times smaller than before. The fundamental unit of electric charge is determined more than 100 times more accurately than in the current determination of international committees.
Differential Mobility Spectrometry: Preliminary Findings on Determination of Fundamental Constants
NASA Technical Reports Server (NTRS)
Limero, Thomas; Cheng, Patti; Boyd, John
2007-01-01
The electron capture detector (ECD) has been used for 40+ years (1) to derive fundamental constants such as a compound's electron affinity. Given this historical perspective, it is not surprising that differential mobility spectrometry (DMS) might be used in a like manner. This paper will present data from a gas chromatography (GC)-DMS instrument that illustrates the potential capability of this device to derive fundamental constants for electron-capturing compounds. Potential energy curves will be used to provide possible explanation of the data.
Redefinition of SI Units Based on Fundamental Physical Constants
NASA Astrophysics Data System (ADS)
Fujii, Kenichi
The definitions of some units of the International System are likely to be revised as early as 2011 by basing them on fixed values of fundamental constants of nature, provided experimental realizations are demonstrated with sufficiently small uncertainties. As regards the kilogram, experiments aiming at linking it to the Avogadro constant and the Planck constant are under way in several laboratories. Details are given on the experimental techniques developed to achieve the target. The other units likely to be redefined are the ampere, the kelvin and the mole. Advantages and disadvantages of different alternatives for revised definitions are discussed.
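The watt-balance route to a fixed-h kilogram mentioned above rests on equating electrical and mechanical power: in the weighing phase a current I balances the weight m·g, and in the moving phase the same coil sweeping at velocity v generates a voltage U, giving m = U·I/(g·v). The numbers below are made up purely to illustrate the relation; in a real Kibble balance U and I are measured against the Josephson and quantum-Hall effects and are thereby tied to h:

```python
# Sketch of the watt-balance (Kibble balance) relation behind a
# fixed-Planck-constant kilogram: electrical power U*I balances
# mechanical power m*g*v. All numerical inputs are illustrative.
def kibble_mass(u_volts: float, i_amps: float, g: float, v: float) -> float:
    """Mass in kg from the two balance phases: weighing (I) and moving (U, v)."""
    return (u_volts * i_amps) / (g * v)

# Illustrative: 1 V induced at a 2 mm/s sweep, 19.62 mA weighing current
m = kibble_mass(u_volts=1.0, i_amps=0.01962, g=9.81, v=0.002)
print(f"{m:.3f} kg")  # ~1.000 kg for these made-up inputs
```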
Numerical values of the fundamental constants and the anthropic principle
NASA Astrophysics Data System (ADS)
Novikov, I. D.; Polnarev, A. G.; Rozental, I. L.
It is noted that complex structures (especially living organisms) could not exist if the values of the fundamental constants were slightly different. This paper formulates the problem of searching for the set of parameters (islands of stability) describing the universe which admit the appearance of complex structural formations.
The determination of best values of the fundamental physical constants.
Taylor, Barry N
2005-09-15
The purpose of this paper is to provide an overview of how a self-consistent set of 'best values' of the fundamental physical constants, for use worldwide by all of science and technology, is obtained from all of the relevant data available at a given point in time. The basis of the discussion is the 2002 Committee on Data for Science and Technology (CODATA) least-squares adjustment of the values of the constants, the most recent such study available, which was carried out under the auspices of the CODATA Task Group on Fundamental Constants. A detailed description of the 2002 CODATA adjustment, which took into account all relevant data available by 31 December 2002, plus selected data that became available by the fall of 2003, may be found in the January 2005 issue of the Reviews of Modern Physics. Although the latter publication includes the full set of CODATA recommended values of the fundamental constants resulting from the 2002 adjustment, the set is also available electronically at http://physics.nist.gov/constants.
Planck intermediate results. XXIV. Constraints on variations in fundamental constants
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombo, L. P. L.; Couchot, F.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J. M.; Dole, H.; Doré, O.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G. W.; Prunet, S.; Rachen, J. 
P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Ristorcelli, I.; Rocha, G.; Roudier, G.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Uzan, J.-P.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Yvon, D.; Zacchei, A.; Zonca, A.
2015-08-01
Any variation in the fundamental physical constants, more particularly in the fine structure constant, α, or in the mass of the electron, me, affects the recombination history of the Universe and causes an imprint on the cosmic microwave background angular power spectra. We show that the Planck data allow one to improve the constraint on the time variation of the fine structure constant at redshift z ~ 10³ by about a factor of 5 compared to WMAP data, as well as to break the degeneracy with the Hubble constant, H0. In addition to α, we can set a constraint on the variation in the mass of the electron, me, and on the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other cosmological probes. We conclude that independent time variations of the fine structure constant and of the mass of the electron are constrained by Planck to Δα/α = (3.6 ± 3.7) × 10⁻³ and Δme/me = (4 ± 11) × 10⁻³ at the 68% confidence level. We also investigate the possibility of a spatial variation of the fine structure constant. The relative amplitude of a dipolar spatial variation in α (corresponding to a gradient across our Hubble volume) is constrained to be δα/α = (-2.4 ± 3.7) × 10⁻². Appendices are available in electronic form at http://www.aanda.org
The Relation between Fundamental Constants and Particle Physics Parameters
NASA Astrophysics Data System (ADS)
Thompson, Rodger
2017-01-01
The observed constraints on the variability of the proton to electron mass ratio μ and the fine structure constant α are used to establish constraints on the variability of the quantum chromodynamic scale and a combination of the Higgs vacuum expectation value and the Yukawa couplings. Further model-dependent assumptions provide constraints on the Higgs VEV and the Yukawa couplings separately. A primary conclusion is that limits on the variability of dimensionless fundamental constants such as μ and α provide important constraints on the parameter space of new physics and cosmologies.
Early universe constraints on time variation of fundamental constants
Landau, Susana J.; Mosquera, Mercedes E.; Scoccola, Claudia G.; Vucetich, Hector
2008-10-15
We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, cosmic microwave background, and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant α and the Higgs vacuum expectation value
Recommended Values of the Fundamental Physical Constants: A Status Report
Taylor, Barry N.; Cohen, E. Richard
1990-01-01
We summarize the principal advances made in the fundamental physical constants field since the completion of the 1986 CODATA least-squares adjustment of the constants and discuss their implications for both the 1986 set of recommended values and the next least-squares adjustment. In general, the new results lead to values of the constants with uncertainties 5 to 7 times smaller than the uncertainties assigned the 1986 values. However, the changes in the values themselves are less than twice the 1986 assigned one-standard-deviation uncertainties and thus are not highly significant. Although much new data has become available since 1986, three new results dominate the analysis: a value of the Planck constant obtained from a realization of the watt; a value of the fine-structure constant obtained from the magnetic moment anomaly of the electron; and a value of the molar gas constant obtained from the speed of sound in argon. Because of their dominant role in determining the values and uncertainties of many of the constants, it is highly desirable that additional results of comparable uncertainty that corroborate these three data items be obtained before the next adjustment is carried out. Until then, the 1986 CODATA set of recommended values will remain the set of choice. PMID:28179787
Machine Shop Fundamentals: Part I.
ERIC Educational Resources Information Center
Kelly, Michael G.; And Others
These instructional materials were developed and designed for secondary and adult limited English proficient students enrolled in machine tool technology courses. Part 1 includes 24 lessons covering introduction, safety and shop rules, basic machine tools, basic machine operations, measurement, basic blueprint reading, layout, and bench tools.…
Dynamical dark energy and variation of fundamental "constants"
NASA Astrophysics Data System (ADS)
Stern, Steffen
2008-12-01
In this thesis we study the influence of a possible variation of fundamental "constants" on the process of Big Bang Nucleosynthesis (BBN). Our findings are combined with further studies on variations of constants in other physical processes to constrain models of grand unification (GUT) and quintessence. We will find that the 7Li problem of BBN can be ameliorated if one allows for varying constants, where especially varying light quark masses show a strong influence. Furthermore, we show that recent studies of varying constants are in contradiction with each other and BBN in the framework of six exemplary GUT scenarios, if one assumes monotonic variation with time. We conclude that there is strong tension between recent claims of varying constants, hence either some claims have to be revised, or there are much more sophisticated GUT relations (and/or non-monotonic variations) realized in nature. The methods introduced in this thesis prove to be powerful tools to probe regimes well beyond the Standard Model of particle physics or the concordance model of cosmology, which are currently inaccessible by experiments. Once the first irrefutable proofs of varying constants are available, our method will allow for probing the consistency of models beyond the standard theories like GUT or quintessence and also the compatibility between these models.
ESO Future Facilities to Probe Fundamental Physical Constants
NASA Astrophysics Data System (ADS)
Molaro, Paolo; Liske, Jochen
Following HARPS, two ESO projects are aimed at the ambitious goal of reaching the highest possible precision in measuring the radial velocity of astronomical sources. The ESPRESSO spectrograph, located at the incoherent combined focus of the four VLT units but able to work with either one or all of them, and CODEX for the E-ELT will mark the ESO roadmap towards the cm s⁻¹ level of precision and possibly an unlimited temporal baseline. By providing photon-noise-limited measurements, they promise to improve the present limits on the variability of fundamental physical constants by one and two orders of magnitude, respectively, thus allowing, for instance, verification of the claim discussed at this conference by John Webb of a possible spatial dipole in the variation of the fine structure constant.
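As a rough guide to what these precision goals mean, a Doppler shift obeys Δλ/λ = v/c, so a radial-velocity precision translates directly into a smallest detectable fractional line shift. A minimal sketch (the velocity figures are illustrative assumptions, not instrument specifications):

```python
# Illustrative conversion from radial-velocity precision to fractional
# wavelength shift via the Doppler relation Δλ/λ = v/c.
C = 299_792_458.0  # speed of light, m/s

def fractional_shift(v_m_per_s: float) -> float:
    """Fractional wavelength shift corresponding to a velocity precision v."""
    return v_m_per_s / C

# HARPS-class precision (~1 m/s) versus the cm/s goal cited for ESPRESSO/CODEX:
print(f"{fractional_shift(1.0):.2e}")   # 3.34e-09
print(f"{fractional_shift(0.01):.2e}")  # 3.34e-11
```

The two orders of magnitude between the lines correspond to the two orders of magnitude in velocity precision.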
Is there further evidence for spatial variation of fundamental constants?
NASA Astrophysics Data System (ADS)
Berengut, J. C.; Flambaum, V. V.; King, J. A.; Curran, S. J.; Webb, J. K.
2011-06-01
Indications of spatial variation of the fine-structure constant, α, based on study of quasar absorption systems have recently been reported [J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, and M. B. Bainbridge, arXiv:1008.3907]. The physics that causes this α-variation should have other observable manifestations, and this motivates us to look for complementary astrophysical effects. In this paper we propose a method to test whether spatial variation of fundamental constants existed during the epoch of big bang nucleosynthesis and study existing measurements of deuterium abundance for a signal. We also examine existing quasar absorption spectra data that are sensitive to variation of the electron-to-proton mass ratio μ and of x = α²μg_p for spatial variation.
CONSTRAINING FUNDAMENTAL CONSTANT EVOLUTION WITH H I AND OH LINES
Kanekar, N.; Langston, G. I.; Stocke, J. T.; Carilli, C. L.; Menten, K. M.
2012-02-20
We report deep Green Bank Telescope spectroscopy in the redshifted H I 21 cm and OH 18 cm lines from the z = 0.765 absorption system toward PMN J0134-0931. A comparison between the 'satellite' OH 18 cm line redshifts, or between the redshifts of the H I 21 cm and 'main' OH 18 cm lines, is sensitive to changes in different combinations of three fundamental constants: the fine structure constant α, the proton-electron mass ratio μ ≡ m_p/m_e, and the proton g-factor g_p. We find that the satellite OH 18 cm lines are not perfectly conjugate, with both different line shapes and stronger 1612 MHz absorption than 1720 MHz emission. This implies that the satellite lines of this absorber are not suitable to probe fundamental constant evolution. A comparison between the redshifts of the H I 21 cm and OH 18 cm lines, via a multi-Gaussian fit, yields the strong constraint [ΔF/F] = (−5.2 ± 4.3) × 10⁻⁶, where F ≡ g_p[μα²]^1.57 and the error budget includes contributions from both statistical and systematic errors. We thus find no evidence for a change in the constants between z = 0.765 and the present epoch. Incorporating the constraint [Δμ/μ] < 3.6 × 10⁻⁷ from another absorber at a similar redshift and assuming that fractional changes in g_p are much smaller than those in α, we obtain [Δα/α] = (−1.7 ± 1.4) × 10⁻⁶ over a look-back time of 6.7 Gyr.
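The final α constraint follows from simple logarithmic error propagation: since F ≡ g_p[μα²]^1.57, ΔF/F ≈ Δg_p/g_p + 1.57 Δμ/μ + 3.14 Δα/α, and with the g_p and μ terms taken as negligible (as the abstract does), Δα/α is ΔF/F divided by 3.14. A sketch of that arithmetic:

```python
# Logarithmic error propagation for F = g_p (μ α²)^1.57:
# Δln F = Δln g_p + 1.57 Δln μ + 3.14 Δln α.
# Assuming the g_p and μ terms are negligible, Δα/α = (ΔF/F) / 3.14.
dF_over_F = -5.2e-6   # quoted central value of ΔF/F
sigma_F   = 4.3e-6    # quoted 1-sigma uncertainty
exponent  = 2 * 1.57  # α enters squared inside the 1.57 power

dalpha = dF_over_F / exponent
sigma_alpha = sigma_F / exponent
print(f"Δα/α = ({dalpha*1e6:.1f} ± {sigma_alpha*1e6:.1f}) × 10⁻⁶")
# Δα/α = (-1.7 ± 1.4) × 10⁻⁶, matching the quoted result
```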
Is a Fundamental ``Constant'' Changing in Space/Time?
NASA Astrophysics Data System (ADS)
Riofrio, Louise
2009-05-01
Exploration of the Moon and Mars may yield benefits for physics. Geology and paleontology show that early Earth and Mars had conditions for liquid water, and possibly life, nearly 3.5 Gyr ago. According to standard models, solar luminosity was then only 75% of today's value, and Earth and Mars would have been frozen solid. Models must therefore infer an extremely high concentration of gases such as CH4 or CO2 simultaneously heating both planets. Research on the variability of fundamental constants is highly recommended in the Science Vision Document and in the ESA-ESO Working Group (WG) report on Fundamental Cosmology, and is one of the science cases considered by the ESO WG on ELT. Since the Sun turns fuel to energy according to E=mc^2, an expanding cosmology in which c is related to time would provide nearly constant solar luminosity. The Lunar Laser Ranging Experiment, operating since 1969, measures the Moon's recession at 3.82 cm/yr, which is anomalously high: geological evidence indicates that the average recession is only 2.9 ± 0.6 cm/yr. If c slows according to GM=tc^3, that would precisely account for the discrepancy. The ``most profound mystery'' of Type Ia supernovae may also be explained: supernova redshifts appear to accelerate, leading to speculation about dark energies. The theory's prediction provides a precise fit to observations. Corroborating data from the Moon and Mars may indicate a ``c change'' in physics.
Evaluation of uncertainty in the adjustment of fundamental constants
NASA Astrophysics Data System (ADS)
Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza
2016-02-01
Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, and the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied, such as weighted least-squares in conjunction with the Birge ratio, or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model, with particular focus on the resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
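For a single quantity, the weighted least-squares/Birge-ratio procedure reduces to an inverse-variance weighted mean whose observed scatter is checked against the quoted uncertainties. A minimal sketch with made-up toy data (not real constant determinations):

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean, its standard error, and the Birge ratio."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    chi2 = sum(wi * (x - mean)**2 for wi, x in zip(w, values))
    birge = math.sqrt(chi2 / (len(values) - 1))  # > 1 signals excess scatter
    return mean, se, birge

# Toy data: three measurements with equal quoted uncertainties.
m, se, rb = weighted_mean([9.8, 10.2, 10.0], [0.1, 0.1, 0.1])
print(m, se, rb)  # Birge ratio of 2 means the data scatter twice as much
                  # as the quoted uncertainties would suggest
```

In the Birge-ratio approach the quoted uncertainties are inflated by this factor when it exceeds one; the random effects model instead adds an extra between-measurement variance term.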
Laboratory Limits for Temporal Variations of Fundamental Constants:. AN Update
NASA Astrophysics Data System (ADS)
Peik, E.; Lipphardt, B.; Schnatz, H.; Tamm, C.; Weyers, S.; Wynands, R.
2008-09-01
Precision comparisons of different atomic frequency standards over a period of a few years can be used for a sensitive search for temporal variations of fundamental constants. We present recent frequency measurements of the 688 THz transition in the 171Yb+ ion. For this transition frequency a record over six years is now available, showing that a possible frequency drift relative to a cesium clock can be constrained to (−0.54 ± 0.97) Hz/yr, i.e. at the level of 2 × 10⁻¹⁵ per year. Combined with precision frequency measurements of an optical frequency in 199Hg+ and of the hyperfine ground state splitting in 87Rb, a stringent limit on temporal variations of the fine structure constant α can be derived, d ln α/dt = (−0.26 ± 0.39) × 10⁻¹⁵ yr⁻¹, along with a model-dependent limit for variations of the proton-to-electron mass ratio μ in the present epoch, d ln μ/dt = (−1.2 ± 2.2) × 10⁻¹⁵ yr⁻¹. We discuss these results in the context of astrophysical observations that apparently indicate changes in both of these constants over the last 5-10 billion years.
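The conversion from the quoted drift in Hz/yr to the fractional rate "at the level of 2 × 10⁻¹⁵ per year" is just division by the 688 THz transition frequency:

```python
# Convert the quoted Yb+ drift limit (Hz/yr) into a fractional rate (1/yr).
nu = 688e12    # Hz, the 171Yb+ transition frequency
drift = -0.54  # Hz/yr, measured drift relative to Cs
sigma = 0.97   # Hz/yr, 1-sigma uncertainty

frac = drift / nu
frac_sigma = sigma / nu
print(f"{frac:.1e} ± {frac_sigma:.1e} per year")  # -7.8e-16 ± 1.4e-15 per year
```

The 2σ bound on the fractional drift is about 2 × 10⁻¹⁵/yr, which is the level quoted in the abstract.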
Base units of the SI, fundamental constants and modern quantum physics.
Bordé, Christian J
2005-09-15
Over the past 40 years, a number of discoveries in quantum physics have completely transformed our vision of fundamental metrology. This revolution starts with the frequency stabilization of lasers using saturation spectroscopy and the redefinition of the metre by fixing the velocity of light c. Today, the trend is to redefine all SI base units from fundamental constants, and we discuss strategies to achieve this goal. We first consider a kinematical frame, in which fundamental constants with a dimension, such as the speed of light c, the Planck constant h, the Boltzmann constant k_B or the electron mass m_e, can be used to connect and redefine base units. The various interaction forces of nature are then introduced in a dynamical frame, where they are completely characterized by dimensionless coupling constants such as the fine structure constant α or its gravitational analogue α_G. This point is discussed by rewriting the Maxwell and Dirac equations with new force fields and these coupling constants. We describe and stress the importance of various quantum effects leading to the advent of this new quantum metrology. In the second part of the paper, we present the status of the seven base units and the prospects of their possible redefinitions from fundamental constants in an experimental perspective. The two parts can be read independently and they point to the same conclusions concerning the redefinitions of base units. The concept of rest mass is directly related to the Compton frequency of a body, which is precisely what is measured by the watt balance. The conversion factor between mass and frequency is the Planck constant, which could therefore be fixed in a realistic and consistent new definition of the kilogram based on its Compton frequency. We also discuss how the Boltzmann constant could be better determined and fixed to replace the present definition of the kelvin.
Producing the deuteron in stars: anthropic limits on fundamental constants
NASA Astrophysics Data System (ADS)
Barnes, Luke A.; Lewis, Geraint F.
2017-07-01
Stellar nucleosynthesis proceeds via the deuteron (D), but only a small change in the fundamental constants of nature is required to unbind it. Here, we investigate the effect of altering the binding energy of the deuteron on proton burning in stars. We find that the most definitive boundary in parameter space that divides probably life-permitting universes from probably life-prohibiting ones is between a bound and unbound deuteron. Due to neutrino losses, a ball of gas will undergo rapid cooling or stabilization by electron degeneracy pressure before it can form a stable, nuclear reaction-sustaining star. We also consider a less-bound deuteron, which changes the energetics of the pp and pep reactions. The transition to endothermic pp and pep reactions, and the resulting beta-decay instability of the deuteron, do not seem to present catastrophic problems for life.
NASA Astrophysics Data System (ADS)
Berengut, J. C.; Flambaum, V. V.; Kava, E. M.
2011-10-01
Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper, we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark-mass variation, in a number of isotopes of experimental interest including 201,199Hg and 87,85Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions to variation in nuclear radius, which in turn is proportional to any change in quark mass. Our calculations of expectation values of proton and neutron spin in nuclei are also needed to interpret measurements of violations of fundamental symmetries.
The fundamental constants of nature from lattice gauge theory simulations
Mackenzie, Paul B.; /Fermilab
2005-01-01
The fundamental laws of nature as we now know them are governed by the fundamental parameters of the Standard Model. Some of these, such as the masses of the quarks, have been hidden from direct observation by the confinement of quarks. They are now being revealed through large-scale numerical simulation of lattice gauge theory.
Fundamental Insight on Developing Low Dielectric Constant Polyimides
NASA Technical Reports Server (NTRS)
Simpson, J. O.; SaintClair, A. K.
1997-01-01
Thermally stable, durable, insulative polyimides are in great demand for the fabrication of microelectronic devices. In this investigation dielectric and optical properties have been studied for several series of aromatic polyimides. The effect of polarizability, fluorine content, and free volume on dielectric constant was examined. In general, minimizing polarizability, maximizing free volume and fluorination all lowered dielectric constants in the polyimides studied.
The fundamental constants of orthotropic affine plate/slab equations
NASA Technical Reports Server (NTRS)
Brunelle, E. J.
1984-01-01
The global constants associated with orthotropic slab/plate equations are discussed, and the rotational behavior of the modulus/compliance components associated with orthotropic slabs/plates are addressed. It is concluded that one cluster constant is less than or equal to unity for all physically possible materials. Rotationally anomalous behavior is found in two materials, and a simple inequality which can be used to identify regular or anomalous behavior is presented and discussed in detail.
CODATA recommended values of the fundamental physical constants: 2014*
NASA Astrophysics Data System (ADS)
Mohr, Peter J.; Newell, David B.; Taylor, Barry N.
2016-07-01
This paper gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. Details of the data selection and methodology of the adjustment are described. The recommended values may also be found at physics.nist.gov/constants.
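CODATA tables quote each recommended value in concise notation, where the parenthesized digits give the one-standard-deviation uncertainty in the last digits of the value. A small stdlib-only helper (hypothetical, not part of any CODATA tooling) to expand that notation, using the 2014 recommended value of the Newtonian constant of gravitation as the example:

```python
def parse_concise(mantissa: str, bracket: str, exp10: int = 0):
    """Expand concise notation such as 6.67408(31) × 10⁻¹¹:
    the bracketed digits are the 1-sigma uncertainty in the last digits."""
    decimals = len(mantissa.split(".")[1]) if "." in mantissa else 0
    value = float(mantissa) * 10.0**exp10
    sigma = int(bracket) * 10.0**(exp10 - decimals)
    return value, sigma

# G = 6.67408(31) × 10⁻¹¹ m³ kg⁻¹ s⁻² (CODATA 2014)
v, s = parse_concise("6.67408", "31", -11)
print(v, s)  # 6.67408e-11 with sigma 3.1e-15
```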
CODATA Recommended Values of the Fundamental Physical Constants: 2014*
NASA Astrophysics Data System (ADS)
Mohr, Peter J.; Newell, David B.; Taylor, Barry N.
2016-12-01
This paper gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. Details of the data selection and methodology of the adjustment are described. The recommended values may also be found at http://physics.nist.gov/constants.
Search for variations of fundamental constants using atomic fountain clocks.
Marion, H; Pereira Dos Santos, F; Abgrall, M; Zhang, S; Sortais, Y; Bize, S; Maksimovic, I; Calonico, D; Grünert, J; Mandache, C; Lemonde, P; Santarelli, G; Laurent, Ph; Clairon, A; Salomon, C
2003-04-18
Over five years, we have compared the hyperfine frequencies of 133Cs and 87Rb atoms in their electronic ground state using several laser-cooled 133Cs and 87Rb atomic fountains with an accuracy of approximately 10⁻¹⁵. These measurements set a stringent upper bound on a possible fractional time variation of the ratio between the two frequencies: d/dt ln(ν_Rb/ν_Cs) = (0.2 ± 7.0) × 10⁻¹⁶ yr⁻¹ (1σ uncertainty). The same limit applies to a possible variation of the quantity (μ_Rb/μ_Cs)α^(−0.44), which involves the ratio of nuclear magnetic moments and the fine structure constant.
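Because the Rb/Cs hyperfine frequency ratio scales as (μ_Rb/μ_Cs)α^(−0.44), its logarithmic drift combines the two underlying drifts linearly, which is why one numerical limit constrains both quantities. A sketch (the −0.44 sensitivity coefficient is taken from the abstract):

```python
# Logarithmic sensitivity of the Rb/Cs frequency ratio:
# d/dt ln(ν_Rb/ν_Cs) = d/dt ln(μ_Rb/μ_Cs) - 0.44 * d/dt ln α.
def ratio_drift(d_mu_ratio: float, d_alpha: float) -> float:
    """Fractional drift of ν_Rb/ν_Cs per year, given the two input drifts."""
    return d_mu_ratio - 0.44 * d_alpha

# If α alone drifted by 1e-15 per year, the frequency ratio would move by:
print(ratio_drift(0.0, 1e-15))  # -4.4e-16 per year
```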
Trapped Hydrogen Spectroscopy: Fundamental Constants and Atomic Clocks
NASA Astrophysics Data System (ADS)
Willmann, Lorenz
2002-05-01
Ultra high resolution spectroscopy was an essential ingredient in the realisation and observation of Bose-Einstein condensation of atomic hydrogen (D. G. Fried, T. Killian, L. Willmann, D. Landhuis, S. Moss, D. Kleppner, and T. Greytak, Phys. Rev. Lett. 81, 3807 (1998)). That experiment is a good starting point to explore the possibilities for future spectroscopy of trapped ultracold hydrogen. Of particular interest are two aspects: firstly, the exploitation of the intrinsically small linewidth of the 1S-2S transition, only 1.3 Hz, as an optical frequency standard; secondly, the precision determination of the 2S-nS energy splittings in hydrogen, which can be used to determine the Rydberg constant, the Lamb shift or the proton charge radius. We will combine these two aspects in the experiment. The absolute value of the hydrogen 1S-2S transition frequency (M. Niering, R. Holzwarth, J. Reichert, P. Pokasov, Th. Udem, M. Weitz, T. W. Hänsch, P. Lemonde, G. Santarelli, M. Abgrall, P. Laurent, C. Salomon, and A. Clairon, Phys. Rev. Lett. 84, 5496 (2000)) serves as an optical frequency standard for the measurements of the 2S-nS transition frequencies. The frequencies will be linked by a frequency comb generated by a mode-locked laser. Currently, a femtosecond laser is being set up in collaboration with the group of F. Kärtner at MIT. The source of trapped atoms in the metastable 2S state is laser excitation of the 1S-2S transition, thus the 2S-nS spectroscopy can be done at the same time and in the same trapping field to reduce systematic effects.
Mineral scale management. Part II, Fundamental chemistry
Alan W. Rudie; Peter W. Hart
2006-01-01
The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although the solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...
Constraints on alternate universes: stars and habitable planets with different fundamental constants
NASA Astrophysics Data System (ADS)
Adams, Fred C.
2016-02-01
This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. We focus on the fine structure constant α and the gravitational structure constant α_G, and find the region in the α-α_G plane that supports working stars and habitable planets. This work is motivated, in part, by the possibility that different versions of the laws of physics could be realized within other universes. The following constraints are enforced: [A] long-lived stable nuclear burning stars exist, [B] planetary surface temperatures are hot enough to support chemical reactions, [C] stellar lifetimes are long enough to allow biological evolution, [D] planets are massive enough to maintain atmospheres, [E] planets are small enough in mass to remain non-degenerate, [F] planets are massive enough to support sufficiently complex biospheres, [G] planets are smaller in mass than their host stars, and [H] stars are smaller in mass than their host galaxies. This paper delineates the portion of the α-α_G plane that satisfies all of these constraints. The results indicate that viable universes, with working stars and habitable planets, can exist within a parameter space where the structure constants α and α_G vary by several orders of magnitude. These constraints also provide upper bounds on the structure constants (α, α_G) and their ratio. We find the limit α_G/α ≲ 10⁻³⁴, which shows that habitable universes must have a large hierarchy between the strengths of the gravitational force and the electromagnetic force.
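For orientation, the two structure constants can be evaluated for our own universe from their definitions α = e²/(4πε₀ħc) and α_G = G m_p²/(ħc); the resulting ratio sits comfortably below the quoted bound. A sketch with CODATA-style values hard-coded (treat the digits as approximate):

```python
import math

# SI values (approximate, CODATA-style):
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J·s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.674e-11         # gravitational constant, m³ kg⁻¹ s⁻²
m_p  = 1.67262192e-27    # proton mass, kg

alpha   = e**2 / (4 * math.pi * eps0 * hbar * c)  # fine structure constant
alpha_G = G * m_p**2 / (hbar * c)                 # gravitational analogue

print(alpha)            # ~7.297e-3, i.e. ~1/137
print(alpha_G)          # ~5.9e-39
print(alpha_G / alpha)  # ~8e-37, well below the 1e-34 habitability bound
```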
NASA Astrophysics Data System (ADS)
Bize, Sebastien
2008-05-01
We will report on recent work performed with the LNE-SYRTE fountain ensemble. This fountain ensemble includes a Cs fountain FO1, a transportable Cs fountain FOM and a dual fountain FO2, operating with both Rb and Cs. These three fountains use the same ultra-low-phase-noise interrogation oscillator based on a continuously operated cryogenic sapphire resonator oscillator (CSO), leading to best short-term fractional frequency instabilities ranging from 1.6 to 6 parts in 10¹⁴. Recent work with FO2 focused on improving the rubidium part to reach an accuracy similar to that achieved with Cs fountains (4 to 12 parts in 10¹⁶). Recent comparisons in November 2007 with FOM show fractional frequency instability down to 3 parts in 10¹⁶ at 2 days. These comparisons provide new measurements of the Rb hyperfine frequency and improve the test of the variation of fundamental constants based on comparing the Rb and Cs hyperfine frequencies over time. This work was performed in collaboration with Jocelyne Guena, Frederic Chapelet, Peter Rosenbusch, Philippe Laurent, Michel Abgrall, Daniele Rovera, Giorgio Santarelli, LNE-SYRTE; Michael Tobar, University of Western Australia; and Andre Clairon, LNE-SYRTE-Observatoire de Paris.
Fundamental molecular physics and chemistry, part 1
NASA Astrophysics Data System (ADS)
Stehney, A. F.; Inokuti, M.
1983-12-01
Scientifically, the work of the program deals with aspects of the physics and chemistry of molecules related to their interactions with photons, electrons, and other external agents. These areas of study were chosen in view of our goals; that is to say, they were chosen so that the eventual outcome of the work meets some of the needs of the US Department of Energy (DOE) and of other government agencies that support the research. First, cross sections for electron and photon interactions with molecules were determined theoretically and experimently, because those cross sections are indispensable for detailed microscopic analyses of the earliest processes of radiation action on any molecular substance, including biological materials. Those analyses in turn provide a sound basis for radiology and radiation dosimetry. Second, the spectroscopy of certain molecules and of small clusters of molecules were studied because this topic is fundamental to the full understanding of atmospheric-pollutant chemistry.
[Aerosinusitis: part 1: Fundamentals, pathophysiology and prophylaxis].
Weber, R; Kühnel, T; Graf, J; Hosemann, W
2014-01-01
The relevance of aerosinusitis stems from the high number of flight passengers and the impaired fitness for work of the flight personnel. The frontal sinus is more frequently affected than the maxillary sinus and the condition generally occurs during descent. Sinonasal diseases and anatomic variations leading to obstruction of paranasal sinus ventilation favor the development of aerosinusitis. This Continuing Medical Education (CME) article is based on selective literature searches of the PubMed database (search terms: "aerosinusitis", "barosinusitis", "barotrauma" AND "sinus", "barotrauma" AND "sinusitis", "sinusitis" AND "flying" OR "aviator"). Additionally, currently available monographs and further articles that could be identified based on the publication reviews were also included. Part 1 presents the pathophysiology, symptoms, risk factors, epidemiology and prophylaxis of aerosinusitis. In part 2, diagnosis, conservative and surgical treatment will be discussed.
Quasar searches for variations in fundamental constants: the need for laboratory spectroscopy
NASA Astrophysics Data System (ADS)
Murphy, Michael Thomas
2015-08-01
I will briefly review the main advances in the search for cosmological variations in the fundamental constants of Nature using quasars, advances that rely on, and have sometimes driven, improvements in laboratory spectroscopy. These searches focus on just two main fundamental parameters, the fine-structure constant and the proton-electron mass ratio, but require laboratory measurements, from the radio through to the ultraviolet, of molecules, atoms and their ions. Although many limitations have been removed by concerted laboratory efforts, some still remain. Still greater precision may be required by frequency-comb calibration of future astronomical spectrographs (astrocombs) and the Atacama Large Millimeter/submillimeter Array (ALMA).
NASA Technical Reports Server (NTRS)
Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.
2013-01-01
The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower-energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally averaged B0 and C0 rotational constants are within 6 MHz of their experimental values, and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH+ is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH+ and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.
Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+
Biesheuvel, J.; Karr, J.-Ph.; Hilico, L.; Eikema, K. S. E.; Ubachs, W.; Koelemeij, J. C. J.
2016-01-01
The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886
Running vacuum in the Universe and the time variation of the fundamental constants of Nature
NASA Astrophysics Data System (ADS)
Fritzsch, Harald; Solà, Joan; Nunes, Rafael C.
2017-03-01
We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance Λ CDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ . Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effects could be interpreted as strong support to the "micro-macro connection" (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS).
Gyromagnetic factors and atomic clock constraints on the variation of fundamental constants
Luo Feng; Olive, Keith A.; Uzan, Jean-Philippe
2011-11-01
We consider the effect of the coupled variations of fundamental constants on the nucleon magnetic moment. The nucleon g-factor enters into the interpretation of the measurements of variations in the fine-structure constant, α, in both the laboratory (through atomic clock measurements) and in astrophysical systems (e.g. through measurements of the 21 cm transitions). A null result can be translated into a limit on the variation of a set of fundamental constants, that is usually reduced to α. However, in specific models, particularly unification models, changes in α are always accompanied by corresponding changes in other fundamental quantities such as the QCD scale, ΛQCD. This work tracks the changes in the nucleon g-factors induced from changes in ΛQCD and the light quark masses. In principle, these coupled variations can improve the bounds on the variation of α by an order of magnitude from existing atomic clock and astrophysical measurements. Unfortunately, the calculation of the dependence of g-factors on fundamental parameters is notoriously model-dependent.
NASA Astrophysics Data System (ADS)
Fortenberry, Ryan C.; Huang, Xinchuan; Francisco, Joseph S.; Crawford, T. Daniel; Lee, Timothy J.
2012-06-01
Only one fundamental vibrational frequency of protonated carbon dioxide (HOCO+) has been experimentally observed in the gas phase: the ν1 O-H stretch. Utilizing quartic force fields defined from CCSD(T)/aug-cc-pVXZ (X = T,Q,5) complete basis set limit extrapolated energies, modified to include corrections for core correlation and scalar relativistic effects, coupled to vibrational perturbation theory and vibrational configuration interaction computations, we predict the full set of gas phase fundamental vibrational frequencies of HOCO+. Our prediction of ν1 is within 1 cm-1 of the experimental value. Our computations also include predictions of the gas phase fundamental vibrational frequencies of the deuterated form of the cation, DOCO+. Additionally, other spectroscopic constants for both systems are reported as part of this study, and a search for a cis-HOCO+ minimum found no such stationary point on the potential surface, indicating that only the trans isomer is stable.
Measuring Variations in the Fundamental Constants with the Square Kilometre Array
NASA Astrophysics Data System (ADS)
Curran, S.
Recent theories of the fundamental interactions naturally predict space-time variations of the fundamental constants. In these theories (e.g. superstring and M-theory), the constants naturally emerge as functions of the scale-lengths of the extra dimensions (e.g., [1,2]). At present, no mechanism has been found for keeping the compactified scale-lengths fixed and so, if extra dimensions exist and their sizes undergo any cosmological evolution, our 3-D coupling constants may vary in time. Several other modern theories also provide strong motivation for an experimental search for variation in the fine structure constant, α ≡ e^2/ħc. Interestingly, varying constants can provide alternative solutions to the "cosmological problems", e.g. flatness, horizon, etc. The most effective and well understood method of measuring variations in α is by observing absorption lines due to gas clouds along the line-of-sight to distant quasars. Recent detailed studies of the relative positions of heavy element optical transitions, and comparison with present day (laboratory) wavelengths, may indeed suggest that α has evolved with time [3,4], although this consensus is by no means universal [5]. It is therefore clear that an independent check is required, which can refute or confirm the optical results, thus providing a sound experimental test of possible unified theories. The study of redshifted radio absorption lines offers the best test of cosmological changes in the fundamental constants, although presently, the paucity of systems exhibiting HI 21-cm and molecular absorption severely limits our ability to carry out statistically sound comparisons.
Stadnik, Y V; Flambaum, V V
2015-04-24
Any slight variations in the fundamental constants of nature, which may be induced by dark matter or some yet-to-be-discovered cosmic field, would characteristically alter the phase of a light beam inside an interferometer, which can be measured extremely precisely. Laser and maser interferometry may be applied to searches for the linear-in-time drift of the fundamental constants, detection of topological defect dark matter through transient-in-time effects, and for a relic, coherently oscillating condensate, which consists of scalar dark matter fields, through oscillating effects. Our proposed experiments require either minor or no modifications of existing apparatus, and offer extensive reach into important and unconstrained spaces of physical parameters.
Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?
Stadnik, Y V; Flambaum, V V
2015-11-13
We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ, which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ, produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial (4)He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial (4)He abundance.
Dependence of macrophysical phenomena on the values of the fundamental constants
NASA Astrophysics Data System (ADS)
Press, W. H.; Lightman, A. P.
1983-12-01
Using simple arguments, it is considered how the fundamental constants determine the scales of various macroscopic phenomena, including the properties of solid matter; the distinction between rocks, asteroids, planets, and stars; the conditions on habitable planets; the length of the day and year; and the size and athletic ability of human beings. Most of the results, where testable, are accurate to within a couple of orders of magnitude.
Truppe, S.; Hendricks, R.J.; Tokunaga, S.K.; Lewandowski, H.J.; Kozlov, M.G.; Henkel, Christian; Hinds, E.A.; Tarbutt, M.R.
2013-01-01
Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α=(0.3±1.1) × 10−7, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ=(−0.7±2.2) × 10−7. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439
Competing bounds on the present-day time variation of fundamental constants
Dent, Thomas; Stern, Steffen; Wetterich, Christof
2009-04-15
We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The 'winner in sensitivity' depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.
A Different Look at Dark Energy and the Time Variation of Fundamental Constants
Weinstein, Marvin; /SLAC
2011-02-07
This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lemaitre-Robertson-Walker (FRW) cosmology implies very different things than for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to a Planck volume) and the relationship of the vacuum-energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints, knowing the difference between w and minus one allows one to predict ẇ, the time derivative of w. It is argued that this is a robust result and if this prediction fails the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-moving cutoff implies a predictable time variation of fundamental constants. This is accompanied by a general discussion of why this is so, what are the strongest phenomenological limits upon this predicted variation, and which limits are in tension with the idea of a co-moving cutoff. It is pointed out, however, that a careful comparison of the predicted time variation of fundamental constants is not possible without restricting to a particular model field-theory, and that is not done in this paper.
High Resolution Microwave Spectroscopy of CH as a Search for Variation of Fundamental Constants
NASA Astrophysics Data System (ADS)
Truppe, S.; Hendricks, R. J.; Tokunaga, S. K.; Hinds, E. A.; Tarbutt, M. R.
2013-06-01
The Standard Model of particle physics assumes that fundamental, dimensionless constants like the fine-structure constant, α, or the ratio of the proton to electron mass, μ, remain constant through time and space. Laboratory experiments have set tight bounds on variations of such constants on a short time scale. Astronomical observations, however, provide vital information about possible changes on long time scales. Recent measurements using quasar absorption spectra provide some evidence for a space-time variation of the fine-structure constant α. It is thus important to verify this discovery by using an entirely different method. Recently the prospect of using rotational microwave spectra of molecules as a probe of variation of the fundamental constants has attracted much attention. Generally these spectra depend on μ, but if fine and hyperfine structure is involved they also become sensitive to variations of α and the nuclear g-factor. Recent calculations show that the Λ-doublet and rotational spectra of CH are particularly sensitive to possible variations of μ and α. We present recent laboratory-based high-resolution spectra of the Λ-doublet transition frequencies of the F_2, J=1/2 and F_1, J=3/2 states of CH, X ²Π (v=0), at 3.3 GHz and 0.7 GHz respectively, with F labelling the different spin-orbit manifolds of CH. We also present a measurement of the transition frequency between the two spin-orbit manifolds F_2, J=1/2 and F_1, J=3/2 at 530 GHz. By using a molecular beam of CH in combination with a laser-microwave double-resonance technique and Ramsey's method of separated oscillatory fields, we have measured these transition frequencies to unprecedented accuracy. Hence CH can now be used as a sensitive probe to detect changes in fundamental constants by comparing lab-based frequencies to radio-astronomical observations from distant gas clouds. T. Rosenband et al., Science 319(5871), 1808, 2008; J. K. Webb et al., Physical Review Letters 107
Spectroscopy of antiprotonic helium atoms and its contribution to the fundamental physical constants
Hayano, Ryugo S.
2010-01-01
The antiprotonic helium atom, a metastable neutral system consisting of an antiproton, an electron and a helium nucleus, was serendipitously discovered, and has been studied at CERN's antiproton decelerator facility. Its transition frequencies have recently been measured to nine digits of precision by laser spectroscopy. By comparing these experimental results with three-body QED calculations, the antiproton-to-electron mass ratio was determined as 1836.152674(5). This result contributed to the CODATA recommended values of the fundamental physical constants. PMID:20075605
Fundamental constant observational bounds on the variability of the QCD scale
NASA Astrophysics Data System (ADS)
Thompson, Rodger I.
2017-06-01
Many physical theories beyond the Standard Model predict time variations of basic physics parameters. Direct measurement of the time variations of these parameters is very difficult or impossible to achieve. By contrast, measurements of fundamental constants are relatively easy to achieve, both in the laboratory and by astronomical spectra of atoms and molecules in the early universe. In this work, measurements of the proton to electron mass ratio μ and the fine structure constant α are combined to place mildly model-dependent limits on the fractional variation of the quantum chromodynamic scale and the sum of the fractional variations of the Higgs vacuum expectation value (VEV) and the Yukawa couplings on time-scales of more than half the age of the universe. The addition of another model parameter allows the fractional variation of the Higgs VEV and the Yukawa couplings to be computed separately. Limits on their variation are found at the level of less than 5 × 10-5 over the past 7 Gyr. A model-dependent relation between the expected fractional variation of α relative to μ tightens the limits to 10-7 over the same time span. Limits on the present day rate of change of the constants and parameters are then calculated using slow roll quintessence. A primary result of this work is that studies of the dimensionless fundamental constants such as α and μ, whose values depend on the values of the physics parameters, are excellent monitors of the limits on the time variation of these parameters.
Fundamentals of Physics, Part 5 (Chapters 38-44)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-05-01
Chapter 38. Photons and Matter Waves. Chapter 39. More About Matter Waves. Chapter 40. All About Atoms. Chapter 41. Conduction of Electricity in Solids. Chapter 42. Nuclear Physics. Chapter 43. Energy from the Nucleus. Chapter 44. Quarks, Leptons, and the Big Bang. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Tables of the Elements. Answers to Checkpoints and Odd-Numbered Questions, Exercises, and Problems. Index.
NASA Astrophysics Data System (ADS)
Flambaum, Victor
2016-05-01
Low-mass boson dark matter particles produced after the Big Bang form a classical field and/or topological defects. In contrast to traditional dark matter searches, effects produced by the interaction of ordinary matter with this field and defects may be first power in the underlying interaction strength, rather than the second or fourth power (which appears in a traditional search for dark matter). This may give a huge advantage, since the dark matter interaction constant is extremely small. Interaction between the density of the dark matter particles and ordinary matter produces both 'slow' cosmological evolution and oscillating variations of the fundamental constants, including the fine structure constant α and particle masses. Recent atomic dysprosium spectroscopy measurements and the primordial helium abundance data allowed us to improve on existing constraints on the quadratic interactions of the scalar dark matter with the photon, electron and light quarks by up to 15 orders of magnitude. Limits on the linear and quadratic interactions of the dark matter with W and Z bosons have been obtained for the first time. In addition to traditional methods to search for the variation of the fundamental constants (atomic clocks, quasar spectra, Big Bang Nucleosynthesis, etc.) we discuss variations in phase shifts produced in laser/maser interferometers (such as giant LIGO, Virgo, GEO600 and TAMA300, and the table-top silicon cavity and sapphire interferometers), changes in pulsar rotational frequencies (which may have been observed already in pulsar glitches), non-gravitational lensing of cosmic radiation and the time-delay of pulsar signals. Other effects of dark matter and dark energy include apparent violation of the fundamental symmetries: oscillating or transient atomic electric dipole moments, precession of electron and nuclear spins about the direction of Earth's motion through an axion condensate, and axion-mediated spin-gravity couplings, violation of Lorentz
Placing constraints on the time-variation of fundamental constants using atomic clocks
NASA Astrophysics Data System (ADS)
Nisbet-Jones, Peter
2015-05-01
Optical atomic frequency standards, such as those based on a single trapped ion of 171Yb+, now demonstrate systematic frequency uncertainties in the 10^-17 to 10^-18 range. These standards rely on the principle that the unperturbed energy levels in atoms are fixed and can thus provide absolute frequency references. A frequency standard's uncertainty is therefore limited by the uncertainty in realising the idealized unperturbed environment. There exists the possibility, however, that the unperturbed level spacing is not fixed. Some theories that go beyond the Standard Model involve a time-variation of the fundamental "constants" - such as the fine structure constant - which determine these energy levels. Measurements of spectral lines in radiation emitted from distant galaxies around 10^10 years ago are inconclusive, with some results suggesting the existence of a time-variation, and others observing nothing. By virtue of their very small measurement uncertainty, atomic-clock experiments can, in timescales of only a few years, perform tests of present-day variation that are complementary to astrophysical data. Comparisons of frequency measurements between two or more atomic "clock" transitions that have different sensitivities to these constants enable us to directly measure any present-day time-variation. Combining recent results from the NPL 171Yb+ clock with measurements from other experiments worldwide places upper limits on the present-day time-variation of the proton-to-electron mass ratio μ and the fine-structure constant α of μ̇/μ = 0.2(1.1) × 10^-16 yr^-1 and α̇/α = -0.7(2.1) × 10^-17 yr^-1.
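A minimal sketch of the comparison principle described above (our illustration, not from the abstract): the logarithmic drift of the frequency ratio of two clock transitions with different α-sensitivity coefficients K1 and K2 is proportional to α̇/α. The coefficient values below are hypothetical placeholders, not published numbers for any specific clock pair.

```python
# Two-clock comparison sketch: d ln(f1/f2)/dt = (K1 - K2) * (alpha_dot/alpha).
# K1 and K2 are hypothetical sensitivity coefficients, chosen for illustration.

def alpha_drift(dlnR_dt, K1, K2):
    """Solve the ratio-drift relation for the fractional drift of alpha."""
    return dlnR_dt / (K1 - K2)

# A hypothetical measured ratio drift of 1e-17 per year with K1 = 1.0, K2 = 0.01
# would imply a fractional alpha drift of roughly the same order:
print(f"alpha_dot/alpha ~ {alpha_drift(1e-17, 1.0, 0.01):.2e} per year")
```

The design point is that only the difference K1 - K2 matters: two clocks with identical sensitivities give no constraint, however precise each is.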
An upper limit to the variation in the fundamental constants at redshift z = 5.2
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Combes, F.; Boone, F.; Agafonova, I. I.; Reimers, D.; Kozlov, M. G.
2012-04-01
Aims: We constrain a hypothetical variation in the fundamental physical constants over the course of cosmic time. Methods: We use unique observations of the CO(7-6) rotational line and the [C I] 3P2-3P1 fine-structure line towards a lensed galaxy at redshift z = 5.2 to constrain temporal variations in the constant F = α^2/μ, where μ is the electron-to-proton mass ratio and α is the fine-structure constant. The relative change in F between z = 0 and z = 5.2, ΔF/F = (F_obs - F_lab)/F_lab, is estimated from the radial velocity offset, ΔV = V_rot - V_fs, between the rotational transitions in carbon monoxide and the fine-structure transition in atomic carbon. Results: We find a conservative value ΔV = (1 ± 5) km s^-1 (1σ C.L.), which when interpreted in terms of ΔF/F gives ΔF/F < 2 × 10^-5. Independent methods restrict the μ-variations at the level of Δμ/μ < 1 × 10^-7 at z = 0.7 (look-back time t(z=0.7) = 6.4 Gyr). Assuming that temporal variations in μ, if any, are linear, this leads to an upper limit on Δμ/μ < 2 × 10^-7 at z = 5.2 (t(z=5.2) = 12.9 Gyr). From both constraints on ΔF/F and Δμ/μ, one obtains for the relative change in α the estimate Δα/α < 8 × 10^-6, which is at present the tightest limit on Δα/α at early cosmological epochs.
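As a back-of-the-envelope check (ours, not the authors'), the quoted bound follows from mapping the velocity offset between the two lines onto a fractional shift, ΔF/F ≈ ΔV/c:

```python
# Back-of-the-envelope check: a radial velocity offset dV between the CO
# rotational and [C I] fine-structure lines maps onto dF/F via dF/F ~ dV/c.
C_KM_S = 2.998e5  # speed of light in km/s

def dF_over_F(delta_v_km_s):
    """Fractional change in F = alpha^2/mu implied by a velocity offset."""
    return delta_v_km_s / C_KM_S

# Conservative 1-sigma offset of 5 km/s quoted in the abstract:
print(f"|dF/F| < {dF_over_F(5.0):.1e}")  # ~1.7e-5, consistent with the quoted 2e-5
```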
Kanekar, N; Carilli, C L; Langston, G I; Rocha, G; Combes, F; Subrahmanyan, R; Stocke, J T; Menten, K M; Briggs, F H; Wiklind, T
2005-12-31
We have detected the four 18 cm OH lines from the z ≈ 0.765 gravitational lens toward PMN J0134-0931. The 1612 and 1720 MHz lines are in conjugate absorption and emission, providing a laboratory to test the evolution of fundamental constants over a large lookback time. We compare the HI and OH main line absorption redshifts of the different components in the z ≈ 0.765 absorber and the z ≈ 0.685 lens toward B0218+357 to place stringent constraints on changes in F ≡ g_p[α^2/μ]^1.57. We obtain [ΔF/F] = (0.44 ± 0.36_stat ± 1.0_sys) × 10^-5, consistent with no evolution over the redshift range 0 < z ≤ 0.7. The measurements have a 2σ sensitivity of [Δα/α] < 6.7 × 10^-6 or [Δμ/μ] < 1.4 × 10^-5 to fractional changes in α and μ over a period of approximately 6.5 Gyr, half the age of the Universe. These are among the most sensitive constraints on changes in μ.
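The single-constant sensitivities quoted above can be reproduced from the definition F ≡ g_p[α^2/μ]^1.57: to first order, ΔF/F = Δg_p/g_p + 1.57(2Δα/α - Δμ/μ), so holding all but one constant fixed divides the ΔF/F limit by the corresponding coefficient. The arithmetic below is our illustration, with statistical and systematic errors combined in quadrature:

```python
import math

# Illustrative propagation of the quoted [dF/F] measurement into single-constant
# limits, assuming F = g_p * (alpha**2 / mu)**1.57 and that only one constant
# varies at a time (errors combined in quadrature; our simplification).
stat, sys = 0.36e-5, 1.0e-5
two_sigma = 2.0 * math.hypot(stat, sys)  # 2-sigma combined uncertainty on dF/F

# dF/F = dg/g + 1.57 * (2*da/a - dm/m)
dalpha = two_sigma / (2 * 1.57)          # alpha varying alone
dmu = two_sigma / 1.57                   # mu varying alone
print(f"|da/a| < {dalpha:.1e}, |dm/m| < {dmu:.1e}")  # ~6.8e-6 and ~1.4e-5
```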
The fundamentals of fetal magnetic resonance imaging: Part 2.
Plunk, Matthew R; Chapman, Teresa
2014-01-01
Careful assessment of fetal anatomy by a combination of ultrasound and fetal magnetic resonance imaging offers the clinical teams and counselors caring for the patient information that can be critical for the management of both the mother and the fetus. In the second half of this 2-part review, we focus on space-occupying lesions in the fetal body. Because developing fetal tissues are programmed to grow rapidly, mass lesions can have a substantial effect on the formation of normal adjacent organs. Congenital diaphragmatic hernia and lung masses, fetal teratoma, and intra-abdominal masses are discussed, with an emphasis on differential etiologies and on fundamental management considerations.
Data stewardship - a fundamental part of the scientific method (Invited)
NASA Astrophysics Data System (ADS)
Foster, C.; Ross, J.; Wyborn, L. A.
2013-12-01
This paper emphasises the importance of data stewardship as a fundamental part of the scientific method, and the need to effect cultural change to ensure engagement by earth scientists. It is differentiated from the science of data stewardship per se. Earth System science generates vast quantities of data, and in the past, data analysis has been constrained by compute power, such that sub-sampling of data often provided the only way to reach an outcome. This is analogous to Kahneman's System 1 heuristic, with its simplistic and often erroneous outcomes. The development of HPC has liberated earth sciences such that the complexity and heterogeneity of natural systems can be utilised in modelling at any scale, global, or regional, or local; for example, movement of crustal fluids. Paradoxically, now that compute power is available, it is the stewardship of the data that is presenting the main challenges. There is a wide spectrum of issues: from effectively handling and accessing acquired data volumes [e.g. satellite feeds per day/hour]; through agreed taxonomy to effect machine to machine analyses; to idiosyncratic approaches by individual scientists. Except for the latter, most agree that data stewardship is essential. Indeed it is an essential part of the science workflow. As science struggles to engage and inform on issues of community importance, such as shale gas and fraccing, all parties must have equal access to data used for decision making; without that, there will be no social licence to operate or indeed access to additional science funding (Heidorn, 2008). The stewardship of scientific data is an essential part of the science process; but often it is regarded, wrongly, as entirely in the domain of data custodians or stewards. Geoscience Australia has developed a set of six principles that apply to all science activities within the agency: relevance to Government; collaborative science; quality science; transparent science; communicated science; sustained
Fundamentals of Physics, Part 1 (Chapters 1-11)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-12-01
. 10-8 Torque. 10-9 Newton's Second Law for Rotation. 10-10 Work and Rotational Kinetic Energy. Review & Summary. Questions. Problems. Chapter 11.Rolling, Torque, and Angular Momentum. When a jet-powered car became supersonic in setting the land-speed record, what was the danger to the wheels? 11-1 What Is Physics? 11-2 Rolling as Translation and Rotation Combined. 11-3 The Kinetic Energy of Rolling. 11-4 The Forces of Rolling. 11-5 The Yo-Yo. 11-6 Torque Revisited. 11-7 Angular Momentum. 11-8 Newton's Second Law in Angular Form. 11-9 The Angular Momentum of a System of Particles. 11-10 The Angular Momentum of a Rigid Body Rotating About a Fixed Axis. 11-11 Conservation of Angular Momentum. 11-12 Precession of a Gyroscope. Review & Summary. Questions. Problems. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
Fundamentals of Trapped Ion Mobility Spectrometry Part II: Fluid Dynamics
NASA Astrophysics Data System (ADS)
Silveira, Joshua A.; Michelmann, Karsten; Ridgeway, Mark E.; Park, Melvin A.
2016-04-01
Trapped ion mobility spectrometry (TIMS) is a new high resolution (R up to ~300) separation technique that utilizes an electric field to hold ions stationary against a moving gas. Recently, an analytical model for TIMS was derived and, in part, experimentally verified. A central, but not yet fully explored, component of the model involves the fluid dynamics at work. The present study characterizes the fluid dynamics in TIMS using simulations and ion mobility experiments. Results indicate that subsonic laminar flow develops in the analyzer, with pressure-dependent gas velocities between ~120 and 170 m/s measured at the position of ion elution. One of the key philosophical questions addressed is: how can mobility be measured in a dynamic system wherein the gas is expanding and its velocity is changing? We noted previously that the analytically useful work is primarily done on ions as they traverse the electric field gradient plateau in the analyzer. In the present work, we show that the position-dependent change in gas velocity on the plateau is balanced by a change in pressure and temperature, ultimately resulting in near position-independent drag force. That the drag force, and related variables, are nearly constant allows for the use of relatively simple equations to describe TIMS behavior. Nonetheless, we derive a more comprehensive model, which accounts for the spatial dependence of the flow variables. Experimental resolving power trends were found to be in close agreement with the theoretical dependence of the drag force, thus validating another principal component of TIMS theory.
Fundamentals of Physics, Part 4 (Chapters 34-38)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-04-01
of Time. 37-6 The Relativity of Length. 37-7 The Lorentz Transformation. 37-8 Some Consequences of the Lorentz Equations. 37-9 The Relativity of Velocities. 37-10 Doppler Effect for Light. 37-11 A New Look at Momentum. 37-12 A New Look at Energy. Review & Summary. Questions. Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
Broeckhoven, K; Verstraeten, M; Choikhet, K; Dittmann, M; Witt, K; Desmet, G
2011-02-25
We report on a general theoretical assessment of the potential kinetic advantages of running LC gradient elution separations in the constant-pressure mode instead of in the customarily used constant-flow-rate mode. Analytical calculations as well as numerical simulation results are presented. It is shown that, provided both modes are run with the same volume-based gradient program, the constant-pressure mode can potentially offer an identical separation selectivity (apart from some small differences induced by the difference in pressure and viscous-heating trajectory), but in a significantly shorter time. For a gradient running between 5 and 95% of organic modifier, the decrease in analysis time can be expected to be of the order of some 20% for both water-methanol and water-acetonitrile gradients, depending only weakly on the value of V_G/V₀ (or, equivalently, t_G/t₀). Obviously, the gain will be smaller when the start and end compositions lie closer to the viscosity maximum of the considered water-organic modifier system. The assumptions underlying the obtained results (no effects of pressure and temperature on the viscosity or retention coefficient) are critically reviewed, and can be inferred to have only a small effect on the general conclusions. It is also shown that, under the adopted assumptions, the kinetic plot theory also holds for operations where the flow rate varies with time, as is the case for constant-pressure operation. Comparing both operation modes in a kinetic plot representing the maximal peak capacity versus time, it is theoretically predicted here that both modes can be expected to perform equally well in the fully C-term dominated regime (where H varies linearly with the flow rate), while the constant-pressure mode is advantageous at all lower flow rates. Near the optimal flow rate, and for linear gradients running from 5 to 95% organic modifier, time gains of the order of some 20% can be expected (or 25-30% when accounting for
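The time gain claimed for constant-pressure operation can be sketched numerically: for the same volume-based program, constant-flow operation is capped by the viscosity maximum for the whole run, while constant-pressure operation effectively averages the viscosity over the delivered gradient volume. A toy calculation with an invented viscosity curve, not real mixture data:

```python
# Sketch of why constant-pressure (cP) gradients finish sooner than
# constant-flow (cF) ones running the same volume-based program: at constant
# pressure the flow rate rises whenever the mobile-phase viscosity drops
# below its maximum. The viscosity curve is a crude parabolic stand-in.

def viscosity(phi):
    """Toy water/organic mixture viscosity (mPa s), maximum near phi = 0.5."""
    return 1.0 + 2.4 * phi * (1.0 - phi)

phi0, phi1, n = 0.05, 0.95, 100_000           # 5-95% organic modifier gradient
phis = [phi0 + (phi1 - phi0) * (i + 0.5) / n for i in range(n)]

eta_max = max(viscosity(p) for p in phis)      # caps the cF-mode flow rate
eta_mean = sum(viscosity(p) for p in phis) / n # governs the cP-mode run time

# cP-mode time ~ integral of eta over delivered volume => mean viscosity,
# so the ratio of run times for the same gradient volume is:
time_ratio = eta_mean / eta_max   # t_cP / t_cF

assert 0.85 < time_ratio < 1.0    # cP-mode is faster; the size depends on the curve
```

With a realistic water-methanol or water-acetonitrile viscosity profile the same averaging argument yields the ~20% figure quoted above.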
Inostroza, Natalia; Fortenberry, Ryan C.; Lee, Timothy J.; Huang, Xinchuan
2013-12-01
Through established, highly accurate ab initio quartic force fields, a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A' and bent 2 ¹A' DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as a reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A'' HCCN.
NASA Technical Reports Server (NTRS)
Inostroza, Natalia; Fortenberry, Ryan C.; Huang, Xinchuan; Lee, Timothy J.
2013-01-01
Through established, highly accurate ab initio quartic force fields (QFFs), a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A' and bent 2 ¹A' DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1 to 3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as a reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A'' HCCN.
Verstraeten, M; Broeckhoven, K; Dittmann, M; Choikhet, K; Witt, K; Desmet, G
2011-02-25
We report on a first series of experiments comparing the selectivity and the kinetic performance of constant-flow-rate and constant-pressure mode gradient elution separations. Both water-methanol and water-acetonitrile mobile-phase mixtures have been considered, as well as different samples and gradient programs. Instrument pressures up to 1200 bar have been used. Neglecting some small possible deviations caused by viscous heating effects, the experiments confirm the theoretical expectation that both operation modes should lead to identical separation selectivities provided the same mobile-phase gradient program is run in reduced volumetric coordinates. Also in agreement with the theoretical expectations, the cP-mode led to a gain in analysis time of up to some 17% for linear gradients running from 5 to 95% of organic modifier at ultra-high pressures. Gains of over 25% were obtained for segmented gradients, at least when the flat portions of the gradient program were situated in regions where the gradient composition was the least viscous. Detailed plate-height measurements showed that the single difference between the constant-flow-rate and the constant-pressure mode is a (small) difference in efficiency caused by the difference in average flow rate, in turn leading to a different intrinsic band broadening. Separating a phenone sample with a 20-95% water-acetonitrile gradient, the cP-mode leads to gradient plate heights that are some 20-40% smaller than in the cF-mode in the B-term dominated regime, while they are some 5-10% larger in the C-term dominated regime. Considering a separation with sub-2-μm particles on a 350 mm long coupled column, switching to the constant-pressure mode allowed the run to finish in 29 instead of 35 min, while a larger peak capacity was also obtained (going from 334 in the cF-mode to 339 in the cP-mode) and the mutual selectivity between the different peaks was fully retained.
Fundamentals of Physics, Part 2 (Chapters 12-20)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-12-01
Engines. 20-8 A Statistical View of Entropy. Review & Summary. Questions. Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
NASA Astrophysics Data System (ADS)
Molaro, Paolo
An ideal instrument to probe fundamental constants such as the fine-structure constant and the electron-to-proton mass ratio by means of absorption lines in QSO spectra is a spectrograph that combines high throughput, high resolution, and high stability, attached to a telescope with a large photon-collecting area. Both the ESPRESSO proposal for the incoherent combined VLT focus and CODEX for the E-ELT follow these recipes and, although they are not optimized for this purpose, they hold the promise of improving the present limits by about two orders of magnitude. Thus, either these physical constants are varying within this range or they would likely escape astronomical detection.
Fundamentals of Physics, Part 3 (Chapters 22-33)
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2004-03-01
magnetic field used in an MRI scan cause a patient to be burned? 30-1 What Is Physics? 30-2 Two Experiments. 30-3 Faraday's Law of Induction. 30-4 Lenz's Law. 30-5 Induction and Energy Transfers. 30-6 Induced Electric Fields. 30-7 Inductors and Inductance. 30-8 Self-Induction. 30-9 RL Circuits. 30-10 Energy Stored in a Magnetic Field. 30-11 Energy Density of a Magnetic Field. 30-12 Mutual Induction. Review & Summary. Questions. Problems. Chapter 31. Electromagnetic Oscillations and Alternating Current. How did a solar eruption knock out the power-grid system of Quebec? 31-1 What Is Physics? 31-2 LC Oscillations, Qualitatively. 31-3 The Electrical-Mechanical Analogy. 31-4 LC Oscillations, Quantitatively. 31-5 Damped Oscillations in an RLC Circuit. 31-6 Alternating Current. 31-7 Forced Oscillations. 31-8 Three Simple Circuits. 31-9 The Series RLC Circuit. 31-10 Power in Alternating-Current Circuits. 31-11 Transformers. Review & Summary. Questions. Problems. Chapter 32. Maxwell's Equations; Magnetism of Matter. How can a mural painting record the direction of Earth's magnetic field? 32-1 What Is Physics? 32-2 Gauss' Law for Magnetic Fields. 32-3 Induced Magnetic Fields. 32-4 Displacement Current. 32-5 Maxwell's Equations. 32-6 Magnets. 32-7 Magnetism and Electrons. 32-8 Magnetic Materials. 32-9 Diamagnetism. 32-10 Paramagnetism. 32-11 Ferromagnetism. Review & Summary. Questions. Problems. Appendices. A. The International System of Units (SI). B. Some Fundamental Constants of Physics. C. Some Astronomical Data. D. Conversion Factors. E. Mathematical Formulas. F. Properties of the Elements. G. Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.
Quinn, Terry; Burnett, Keith
2005-09-15
This is a short introductory note to the texts of lectures presented at a Royal Society Discussion meeting held on 14-15 February 2005 and now published in this issue of Philosophical Transactions A. It contains a brief resumé of the papers in the order they were presented at the meeting. This issue contains the texts of all of the presentations except those of Christophe Salomon, on cold atom clocks and tests of fundamental theory, and Francis Everitt, on Gravity Probe B, which were, unfortunately, not available.
New limits on coupling of fundamental constants to gravity using 87Sr optical lattice clocks.
Blatt, S; Ludlow, A D; Campbell, G K; Thomsen, J W; Zelevinsky, T; Boyd, M M; Ye, J; Baillard, X; Fouché, M; Le Targat, R; Brusch, A; Lemonde, P; Takamoto, M; Hong, F-L; Katori, H; Flambaum, V V
2008-04-11
The ¹S₀-³P₀ clock transition frequency ν_Sr in neutral 87Sr has been measured relative to the Cs standard by three independent laboratories in Boulder, Paris, and Tokyo over the last three years. The agreement at the 1×10⁻¹⁵ level makes ν_Sr the best agreed-upon optical atomic frequency. We combine periodic variations in the 87Sr clock frequency with 199Hg+ and H-maser data to test local position invariance, obtaining the strongest limits to date on gravitational-coupling coefficients for the fine-structure constant α, the electron-proton mass ratio μ, and the light-quark mass. Furthermore, after 199Hg+, 171Yb+, and H, we add 87Sr as the fourth optical atomic clock species to enhance constraints on yearly drifts of α and μ.
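The local-position-invariance test described above amounts to fitting an annual modulation, synchronized with the solar gravitational potential at the Earth, to a long clock-comparison record. A sketch with synthetic data; the modulation amplitude and β value are illustrative assumptions, not the paper's numbers:

```python
# Sketch of a local-position-invariance search: a clock-comparison record is
# fit for an annual signal tracking the solar potential at Earth,
# y(t) = beta * dU(t)/c^2.  Synthetic data, illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
year = 365.25                                # days
t = np.arange(0.0, 8 * year, 10.0)           # ~8 yr of 10-day averages

dU_amp = 1.65e-10        # approx. annual modulation of U_sun/c^2 from eccentricity
beta_true = 4.0e-6       # assumed LPI-violation coefficient for the demo
y = beta_true * dU_amp * np.cos(2 * np.pi * t / year) \
    + rng.normal(0.0, 1e-16, t.size)         # white frequency noise

# Linear least squares for the in-phase and quadrature annual components.
A = np.column_stack([dU_amp * np.cos(2 * np.pi * t / year),
                     dU_amp * np.sin(2 * np.pi * t / year),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
beta_fit = coef[0]

assert abs(beta_fit - beta_true) < 1e-6      # annual coefficient recovered
```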
Anesthesia systems. Part 1: Operating principles of fundamental components.
Cicman, J H; Jacoby, M I; Skibo, V F; Yoder, J M
1992-10-01
This article is the first in a two-part series on the operation of principal components within Narkomed anesthesia systems. Part 1 illustrates the structure and function of various sections of the machine's internal piping, including components of the pneumatic circuit and the oxygen flush valve, and several safety features, such as the oxygen supply pressure alarm, oxygen failure protection device, and oxygen ratio monitor controller. The article progresses to other basic components of the anesthesia system. Topics include the function of the absorber unit and the flow of gas through it, the principle of operation of the positive end-expiratory pressure valve, the function and mechanics of the adjustable pressure limiter valve, and the open reservoir scavenger system. Part 1 is a valuable tool in understanding the function and pneumatics of the primary components of the anesthesia system.
[Mild dementia and driving ability. Part 1: Fundamentals].
Wolter, D K
2014-04-01
Physiological changes, but most of all diseases, affect driving ability in old age, whereby cognitive and mental performance plays an important part. Impaired health and feeling of unease while driving are the main reasons for driving cessation in the elderly. The causes of crashes and crash development show typical features compared to younger drivers. In the assessment of accident frequency and crash risk, sophisticated analyses are necessary. A person with moderate to severe dementia is certainly no longer fit to drive, whereas driving ability may be maintained in mild dementia for some time. In part 2, comprehensive information on the practice of assessment and judgement of driving ability is provided.
Limits on variations in fundamental constants from 21-cm and ultraviolet Quasar absorption lines.
Tzanavaris, P; Webb, J K; Murphy, M T; Flambaum, V V; Curran, S J
2005-07-22
Quasar absorption spectra at 21-cm and UV rest wavelengths are used to estimate the time variation of x ≡ α²g_pμ, where α is the fine-structure constant, g_p the proton g factor, and m_e/m_p ≡ μ the electron/proton mass ratio. Over a redshift range 0.24 ≤ z_abs ≤ 2.04, the weighted total is Δx/x = (1.17 ± 1.01) × 10⁻⁵. A linear fit gives ẋ/x = (-1.43 ± 1.27) × 10⁻¹⁵ yr⁻¹. Two previous results on varying α yield the strong limits Δμ/μ = (2.31 ± 1.03) × 10⁻⁵ and Δμ/μ = (1.29 ± 1.01) × 10⁻⁵. Our sample, 8× larger than any previous one, provides the first direct estimate of the intrinsic 21-cm and UV velocity differences, ~6 km s⁻¹.
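Because x ≡ α²g_pμ, a fractional variation decomposes as Δx/x = 2Δα/α + Δg_p/g_p + Δμ/μ; holding g_p fixed, the quoted Δμ/μ limits follow from the measured Δx/x combined with two previously claimed Δα/α results. A sketch in which the Δα/α inputs are assumptions taken to be the Keck/HIRES and VLT central values:

```python
# The 21-cm/UV comparison constrains x = alpha^2 * g_p * mu, so
# d(ln x) = 2 d(ln alpha) + d(ln g_p) + d(ln mu).  Holding g_p fixed:
#   Delta mu/mu = Delta x/x - 2 * Delta alpha/alpha.
dx_over_x = 1.17e-5        # weighted total from the abstract

dalpha_keck = -0.57e-5     # assumed Keck/HIRES many-multiplet central value
dalpha_vlt = -0.06e-5      # assumed VLT central value

dmu_keck = dx_over_x - 2.0 * dalpha_keck
dmu_vlt = dx_over_x - 2.0 * dalpha_vlt

assert abs(dmu_keck - 2.31e-5) < 1e-8   # matches the first quoted limit
assert abs(dmu_vlt - 1.29e-5) < 1e-8    # matches the second quoted limit
```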
Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G
2013-07-07
We have measured gas-phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm⁻¹. In addition to the strong fundamental OH-stretching transition we have also detected the weak second-overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen-bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be -35.8 ± 3.9 and -38.2 ± 3.3 kJ mol⁻¹ for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (Kp) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second-overtone transition. The transition intensities were calculated using an anharmonic-oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm⁻¹, corresponding to a ΔG value of about 4.0 kJ mol⁻¹.
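The quoted ΔG follows from the measured equilibrium constant via ΔG = -RT ln Kp (standard state 1 atm). A quick consistency check:

```python
# Consistency check of the quoted thermodynamics: Delta G = -R T ln(Kp).
# With Kp = 0.2 atm^-1 at 298 K this reproduces the ~4.0 kJ/mol figure.
import math

R = 8.314462618   # gas constant, J/(mol K)
T = 298.0         # K
Kp = 0.2          # atm^-1 (standard state 1 atm)

dG = -R * T * math.log(Kp) / 1000.0   # kJ/mol
assert abs(dG - 4.0) < 0.1            # ~3.99 kJ/mol, as quoted
```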
Identification of Parts Failures. FOS: Fundamentals of Service.
ERIC Educational Resources Information Center
John Deere Co., Moline, IL.
This parts failures identification manual is one of a series of power mechanics texts and visual aids covering theory of operation, diagnosis of trouble problems, and repair of automotive and off-the-road construction and agricultural equipment. Materials provide basic information with many illustrations for use by vocational students and teachers…
NASA Astrophysics Data System (ADS)
Pašteka, L. F.; Borschevsky, A.; Flambaum, V. V.; Schwerdtfeger, P.
2015-07-01
We investigate a number of diatomic molecular ions to search for strongly enhanced effects of variation of fundamental constants important for physics beyond the standard model. The relative enhancements due to fine-structure and electron-to-proton mass-ratio variation occur in transitions between nearly degenerate levels of different nature. Since the trapping techniques for molecular ions have already been developed, the proposed molecules HBr⁺, HI⁺, Br₂⁺, I₂⁺, IBr⁺, ICl⁺, and IF⁺ are very promising candidates for future high-resolution experiments.
Writing biomedical manuscripts part I: fundamentals and general rules.
Ohwovoriole, A E
2011-01-01
It is a professional obligation for health researchers to investigate and communicate their findings to the medical community. The writing of a publishable scientific manuscript can be a daunting task for the beginner and even for some established researchers. Many manuscripts fail to get off the ground and/or are rejected. The writing task can be made easier and the quality improved by using and following simple rules and leads that apply to general scientific writing. The manuscript should follow a standard structure: (Abstract) plus Introduction, Methods, Results, and Discussion/Conclusion, the IMRAD model. The authors must also follow well-established fundamentals of good communication in science and be systematic in approach. The manuscript must move from what is currently known to what was unknown, investigated using a hypothesis, research question, or problem statement. Each section has its own style of structure and language of presentation. The beginning of writing a good manuscript is to do a good study design and to pay attention to details at every stage. Many manuscripts are rejected because of errors that can be avoided if the authors follow simple guidelines and rules. One good way to avoid potential disappointment in manuscript writing is to follow the established general rules along with those of the journal in which the paper is to be published. An important injunction is to make the writing precise, clear, parsimonious, and comprehensible to the intended audience. The purpose of this article is to arm and encourage potential biomedical authors with tools and rules that will enable them to write contemporary manuscripts which can stand the rigorous peer-review process. The expectations of standard journals, common pitfalls, and the major elements of a manuscript are covered.
Higgs potential from extended Brans–Dicke theory and the time-evolution of the fundamental constants
NASA Astrophysics Data System (ADS)
Solà, Joan; Karimkhani, Elahe; Khodam-Mohammadi, A.
2017-01-01
Despite the enormous significance of the Higgs potential in the context of the standard model of electroweak interactions and in grand unified theories, its ultimate origin is fundamentally unknown and must be introduced by hand in accordance with the underlying gauge symmetry and the requirement of renormalizability. Here we propose a more physical motivation for the structure of the Higgs potential, which we derive from a generalized Brans–Dicke (BD) theory containing two interacting scalar fields. One of these fields is coupled to curvature as in the BD formulation, whereas the other is coupled to gravity both derivatively and non-derivatively through the curvature scalar and the Ricci tensor. By requiring that the cosmological solutions of the model be consistent with observations, we show that the effective scalar-field potential adopts the Higgs potential form with a mildly time-evolving vacuum expectation value. This residual vacuum dynamics could be responsible for the possible time variation of the fundamental constants, and is reminiscent of Bjorken's earlier ideas on the cosmological constant problem.
Menestrina, Fiorella; Ronco, Nicolás R; Castells, Cecilia B
2016-10-07
Chiral capillary GC columns containing different amounts of octakis(6-O-tert-butyldimethylsilyl-2,3-di-O-acetyl)-γ-cyclodextrin as chiral selector dissolved in a polymeric matrix were constructed with the aim of determining enantiomeric association constants between a group of well-resolved chiral N-trifluoroacetyl amino acid methyl esters and this specific selector at different temperatures. The most relevant sources of uncertainty in the experimental data (hold-up and retention times, and column phase ratios at each temperature) were assessed. These cyclodextrin-based columns are known to enantioseparate a wide variety of chemical compounds; thus, the measurement of the absolute enantioselective association constants of a group of solutes with this selector can be useful for systematic studies aimed at a general understanding of how these selectors work. These absolute association constants were estimated from data collected from very simplified experimental systems, using the fundamental gas-liquid chromatography equations. Copyright © 2016 Elsevier B.V. All rights reserved.
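One common route to such association constants, sketched here under the simplifying assumption that retention grows linearly with the concentration of selector dissolved in the stationary phase, is k = k0·(1 + K·[S]), where k0 is the retention factor on the selector-free matrix. The numbers below are invented for illustration:

```python
# Sketch: association constant from retention factors measured with and
# without selector, assuming k = k0 * (1 + K * [S]).  All values invented.
t_M = 60.0      # hold-up time, s
t_R0 = 180.0    # retention time on the selector-free column, s
t_R = 420.0     # retention time with selector present, s
conc_S = 0.05   # selector concentration in the stationary phase, mol/L

k0 = (t_R0 - t_M) / t_M          # retention factor without selector
k = (t_R - t_M) / t_M            # retention factor with selector

K_assoc = (k / k0 - 1.0) / conc_S   # association constant, L/mol
assert abs(K_assoc - 40.0) < 1e-9   # (6/2 - 1) / 0.05 = 40 L/mol
```

Repeating this for each enantiomer at several temperatures gives the enantioselectivity (the ratio of the two K values) and, via a van 't Hoff plot, the associated enthalpy and entropy.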
Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Petrov, V. Yu.; Sakhnovsky, E. G.
2006-12-15
Using modern methods of reactor physics, we performed full-scale calculations of the Oklo natural reactor. For reliability, we used recent versions of two Monte Carlo codes: the Russian code MCU-REA and the well-known international code MCNP. Both codes produced similar results. We constructed a computer model of the Oklo reactor zone RZ2 which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities, and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a significant difference between reactor and Maxwell spectra, which had been used before for averaging cross sections in the Oklo reactor. The averaged cross section of ¹⁴⁹Sm and its dependence on the shift of the resonance position E_r (due to variation of fundamental constants) are significantly different from previous results. Contrary to the results of previous papers, we found no evidence of a change in the samarium cross section: a possible shift of the resonance energy is given by the limits -73 ≤ δE_r ≤ 62 meV. Following tradition, we have used the formulas of Damour and Dyson to estimate the rate of change of the fine-structure constant α. We obtain new, more accurate limits of -4×10⁻¹⁷ ≤ α̇/α ≤ 3×10⁻¹⁷ yr⁻¹. Further improvement of the accuracy of the limits can be achieved by taking account of the core burn-up. These calculations are in progress.
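The Damour-Dyson conversion used above can be reproduced at the order-of-magnitude level: a bound on the ¹⁴⁹Sm resonance shift maps to a bound on α̇/α through a sensitivity |α·dE_r/dα| of roughly 1 MeV and the reactor's age of roughly 1.8 Gyr. Both numbers are assumptions in this sketch:

```python
# Order-of-magnitude version of the Damour-Dyson conversion: a bound
# |delta E_r| on the 149Sm resonance shift maps to a bound on alpha-dot/alpha
# via an assumed sensitivity and reactor age (both assumed values here).
sensitivity_eV = 1.1e6    # |alpha * dE_r/dalpha|, eV (assumed)
age_yr = 1.8e9            # Oklo reactor age, yr (assumed)

for dEr_meV in (-73.0, 62.0):                      # quoted resonance-shift limits
    dalpha_over_alpha = (dEr_meV * 1e-3) / sensitivity_eV
    rate = dalpha_over_alpha / age_yr              # averaged over the reactor age
    # Magnitudes come out at a few times 10^-17 per year, as quoted.
    assert abs(rate) < 5e-17
```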
Cook, Gray; Burton, Lee; Hoogenboom, Barbara J; Voight, Michael
2014-05-01
To prepare an athlete for the wide variety of activities needed to participate in or return to their sport, the analysis of fundamental movements should be incorporated into screening in order to determine who possesses, or lacks, the ability to perform certain essential movements. In a series of two articles, the background and rationale for the analysis of fundamental movement are provided. The Functional Movement Screen (FMS™) is described, and any evidence related to its use is presented. Three of the seven fundamental movement patterns that comprise the FMS™ are described in detail in Part I: the Deep Squat, Hurdle Step, and In-Line Lunge. Part II of this series, to be provided in the August issue of IJSPT, will provide a detailed description of the four additional patterns that complement those presented in Part I (to complete the seven total fundamental movements): Shoulder Mobility, the Active Straight Leg Raise, the Trunk Stability Push-up, and Rotary Stability, as well as a discussion about the utility of functional movement screening and the future of functional movement. The intent of this two-part series is to present the concepts associated with screening of fundamental movements, whether it is the FMS™ system or a different system devised by another clinician. Such a functional assessment should be incorporated into pre-participation screening and return-to-sport testing in order to determine whether the athlete has the essential movements needed to participate in sports activities at a level of minimum competency.
Functional movement screening: the use of fundamental movements as an assessment of function-part 2.
Cook, Gray; Burton, Lee; Hoogenboom, Barbara J; Voight, Michael
2014-08-01
Part 1 of this two-part series (presented in the June issue of IJSPT) provided an introduction to functional movement screening, as well as the history, background, and a summary of the evidence regarding the reliability of the Functional Movement Screen (FMS™). Part 1 presented three of the seven fundamental movement patterns that comprise the FMS™, and the specific ordinal grading system from 0-3 used in their scoring. Part 2 of this series provides a review of the concepts associated with the analysis of fundamental movement as a screening system for functional movement competency. In addition, the four remaining movements of the FMS™, which complement those described in Part 1, are presented (to complete the total of seven fundamental movements): Shoulder Mobility, the Active Straight Leg Raise, the Trunk Stability Push-up, and Rotary Stability. The final four patterns are described in detail, and the specifics for scoring each test are presented, as well as the proposed clinical implications for receiving a grade less than a perfect "3". The intent of this two-part series is to present the concepts associated with screening of fundamental movements, whether it is the FMS™ system or a different system devised by another clinician. Such a fundamental screen of the movement system should be incorporated into pre-participation screening and return-to-sport testing in order to determine whether an athlete has the essential movements needed to participate in sports activities at a level of minimum competency. Part 2 concludes with a discussion of the evidence related to functional movement screening, myths related to the FMS™, the future of functional movement screening, and the concept of movement as a system.
NASA Astrophysics Data System (ADS)
Tobar, M. E.; Stanwix, P. L.; McFerran, J. J.; Guéna, J.; Abgrall, M.; Bize, S.; Clairon, A.; Laurent, Ph.; Rosenbusch, P.; Rovera, D.; Santarelli, G.
2013-06-01
The frequencies of three separate Cs fountain clocks and one Rb fountain clock have been compared to various hydrogen masers to search for periodic changes correlated with the changing solar gravitational potential at the Earth and the boost with respect to the cosmic microwave background rest frame. The data sets span more than 8 yr. The main sources of long-term noise in such experiments are the offsets and linear drifts associated with the various H-masers. The drift can vary from nearly immeasurable to as high as 1.3×10⁻¹⁵ per day. To circumvent these effects, we apply a numerical derivative to the data, which significantly reduces the standard error when searching for periodic signals. We determine a standard error for the putative local position invariance coefficient with respect to gravity of |β_H-β_Cs| ≤ 4.8×10⁻⁶ for a Cs-fountain-H-maser comparison and |β_H-β_Rb| ≤ 10⁻⁵ for a Rb-fountain-H-maser comparison. From the same data, the putative boost local position invariance coefficients were measured to a precision of up to parts in 10¹¹ with respect to the cosmic microwave background rest frame. By combining these boost-invariance experiments with a cryogenic sapphire oscillator vs H-maser comparison, independent limits on all nine coefficients of the boost-violation vector with respect to fundamental-constant invariance, B_α, B_e, and B_q (fine-structure constant, electron mass, and quark mass, respectively), were determined to a precision of up to parts in 10¹⁰.
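The benefit of the numerical derivative mentioned above is easy to demonstrate: differentiating removes a maser offset entirely, turns a linear drift into a constant, and leaves a periodic signal at the same frequency, so a simple linear fit to the derivative recovers its amplitude. A sketch with synthetic, illustrative numbers:

```python
# Why differentiating first helps: an H-maser offset vanishes and a linear
# drift becomes a constant, while a periodic signal keeps its frequency.
# Synthetic data with illustrative numbers, not the experiment's.
import numpy as np

year = 365.25
t = np.arange(0.0, 8 * year, 5.0)                   # days
offset, drift, amp = 3e-13, 1.0e-15, 5e-16          # offset, drift/day, amplitude
y = offset + drift * t + amp * np.sin(2 * np.pi * t / year)

dy = np.diff(y) / np.diff(t)     # derivative: offset gone, drift -> constant
w = 2 * np.pi / year
tm = 0.5 * (t[:-1] + t[1:])      # midpoints where the derivative applies

# Fit the derivative as: constant (residual drift) + amp * w * cos(w t)
A = np.column_stack([np.ones_like(tm), w * np.cos(w * tm)])
coef, *_ = np.linalg.lstsq(A, dy, rcond=None)

assert abs(coef[1] - amp) / amp < 0.01   # periodic amplitude recovered
assert abs(coef[0] - drift) / drift < 0.01   # drift isolated as a constant
```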
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, R.
2014-04-01
It is shown that for any α, β ∈ ℝ and k ∈ ℤ, the Hamiltonian H_k = p_1 p_2 - α q_2^(2k+1) q_1^(-2k-3) - (β/2) q_2^k q_1^(-k-2) is super-integrable, possessing fundamental constants of motion of degrees 2 and 2k + 2 in the momenta.
Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts.
Tamosiunaite, Minija; Sutterlütti, Rahel M; Stein, Simon C; Wörgötter, Florentin
2015-01-01
Objects usually consist of parts and the question arises whether there are perceptual features which allow breaking down an object into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex to concave surface transitions. Here we are using machine vision to produce convex segments from 3D-scenes. We assume that a fundamental part is one, which we can easily name while at the same time there is no natural subdivision possible into smaller parts. Hence in this experiment we presented the computer vision generated segments to our participants and asked whether they can identify and name them. Additionally we control against segmentation reliability and we find a clear trend that reliable convex segments have a high degree of name-ability. In addition, we observed that using other image-segmentation methods will not yield nameable entities. This indicates that convex-concave surface transition may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantical link to our everyday language as there are no names for them.
NASA Astrophysics Data System (ADS)
Atanasov, Atanas Todorov
2016-12-01
Here is developed the hypothesis that the cell parameters of unicellular organisms (prokaryotes and eukaryotes) are determined by the gravitational constant (G, N·m²/kg²), the Planck constant (h, J·s) and the growth rate of cells. By scaling analysis it was shown that the growth rate vgr (m/s) of unicellular bacteria and protozoa is a relatively constant parameter, ranging in a narrow window of 10^-12-10^-10 m/s, in comparison to the range of cell mass, spanning 10 orders of magnitude from 10^-17 kg in bacteria to 10^-7 kg in amoebas. By dimensional analysis it was shown that the combination of the growth rate of cells, the gravitational constant and the Planck constant gives equations with the dimension of mass, M(vgr) = (h·vgr/G)^(1/2) in kg; length, L(vgr) = (h·G/vgr^3)^(1/2) in m; time, T(vgr) = (h·G/vgr^5)^(1/2) in s; and density, ρ(vgr) = vgr^5/(h·G^2) in kg/m^3. For growth rates vgr in the range 1×10^-11-1×10^-9.5 m/s, the calculated numerical values for mass (3×10^-18-1×10^-16 kg), length (5×10^-8-1×10^-5 m), time (1×10^2-1×10^6 s) and density (1×10^-1-1×10^4 kg/m^3) overlap with the range of experimentally measured values for cell mass (3×10^-18-1×10^-15 kg), volume-to-surface ratio (1×10^-7-1×10^-4 m), doubling time (1×10^3-1×10^7 s) and density (1050-1300 kg/m^3) in bacteria and protozoa. These equations show that the appearance of the first living cells could be mutually connected to the physical constants.
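A quick numerical check of these dimensional-analysis formulas (constants rounded; the growth rate is an assumed value from the quoted window; the density expression must equal M/L³ by construction):

```python
import math

# Dimensional-analysis combinations of G, h and an assumed cell growth rate.
G = 6.674e-11    # gravitational constant, N·m²/kg²
h = 6.626e-34    # Planck constant, J·s
vgr = 1e-11      # growth rate, m/s (middle of the quoted 1e-12 - 1e-10 window)

M = math.sqrt(h * vgr / G)       # characteristic mass, kg
L = math.sqrt(h * G / vgr**3)    # characteristic length, m
T = math.sqrt(h * G / vgr**5)    # characteristic time, s
rho = vgr**5 / (h * G**2)        # characteristic density, kg/m³

# Internal consistency: the density formula equals M / L³
print(M, L, T, rho, M / L**3)
```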
Lasers in prosthodontics - An overview part 1: Fundamentals of dental lasers.
Bhat, Aruna M
2010-03-01
The introduction of lasers in the field of prosthodontics has replaced many conventional surgical and technical procedures and is beginning to replace the dental handpiece. Although lasers were introduced in dentistry as early as the 1960s, they gained widespread popularity, mainly in developed countries, only from the early 1990s. Today, prosthodontists can select from a variety of laser wavelengths available in dentistry. This has led to great confusion regarding laser operation and selection of the most appropriate laser wavelength for a given procedure. This article reviews the literature on lasers with the aim of providing a complete understanding of the fundamentals of lasers and their applications in the various disciplines of prosthodontics. Peer-reviewed literature published in English between 1991 and 2007, obtained using Medline and hand searches, is reviewed in a series of three articles: Part 1 describes the fundamentals of laser science, laser-tissue interaction, laser wavelengths available in dentistry, laser parameters and safety measures in brief, to enable the clinician to select the best laser for a given procedure and to understand the biologic rationale for its use. Part 2 deals in brief with the applications of lasers in the various branches of prosthodontics and their advantages over conventional techniques. Part 3 deals with lasers in prosthodontics from an Indian perspective.
NASA Astrophysics Data System (ADS)
Stadnik, Y. V.; Flambaum, V. V.
2016-06-01
We outline laser interferometer measurements to search for variation of the electromagnetic fine-structure constant α and particle masses (including a nonzero photon mass). We propose a strontium optical lattice clock—silicon single-crystal cavity interferometer as a small-scale platform for these measurements. Our proposed laser interferometer measurements, which may also be performed with large-scale gravitational-wave detectors, such as LIGO, Virgo, GEO600, or TAMA300, may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.
NASA Astrophysics Data System (ADS)
Kegerise, M. A.; Spina, E. F.
2000-12-01
The dynamic response of the constant-voltage anemometer (CVA) system was investigated both analytically and experimentally and compared to that of the CTA. The frequency response functions of the CVA system for a number of different circuit parameters and flow conditions were determined via laser-based radiative heating of the hot-wire sensor. A 2nd-order linear systems model of the CVA was developed to provide insight to the dynamic response and to interpret the experimental results. The qualitative variations in the frequency response function with changes in circuit parameters are in agreement with the model. The experimentally determined frequency-response functions of the CVA systems used in this study were found to have little dependence on the wire overheat ratio and Reynolds number.
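The 2nd-order linear systems model mentioned above can be illustrated with a generic transfer function (the abstract does not give the CVA circuit parameters; the natural frequency and damping ratio below are invented for illustration):

```python
import numpy as np

# Generic 2nd-order frequency response: H(jw) = wn^2 / (wn^2 - w^2 + 2j*zeta*wn*w)
wn = 2 * np.pi * 50e3        # assumed natural frequency, rad/s (50 kHz)
zeta = 0.3                   # assumed damping ratio

w = np.linspace(1.0, 4 * wn, 20000)
H = wn**2 / (wn**2 - w**2 + 2j * zeta * wn * w)
mag = np.abs(H)

# An underdamped (zeta < 1/sqrt(2)) system peaks near wn*sqrt(1 - 2*zeta^2),
# so changes in circuit parameters shift both the peak location and its height.
w_peak = w[np.argmax(mag)]
print(w_peak / wn, mag.max())
```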
NASA Astrophysics Data System (ADS)
Kegerise, M. A.; Spina, E. F.
2000-12-01
The static response of the constant-voltage anemometer (CVA) was investigated analytically for both subsonic and supersonic flow, and a corroborative experiment was performed at Mach 3.5 using both CVA and CTA systems. This experiment allowed a direct comparison of the static sensitivities of the two systems by utilizing the identical flow conditions and the same wire sensors. The subsonic analysis of the CVA indicates that the anemometer has primary sensitivity to velocity fluctuations at high overheat ratios and to temperature fluctuations at low overheat ratios. The theoretical and empirical relative static sensitivity of the CVA system to mass-flux and total-temperature variations appears very similar to that of the CTA and CCA systems over a wide range of overheat ratio.
The fundamental nature of life as a chemical system: the part played by inorganic elements.
Williams, Robert J P
2002-02-01
In this article we show why inorganic metal elements from the environment were an essential part of the origin of living aqueous systems of chemicals in flow. Unavoidably such systems have many closely fixed parameters, related to thermodynamic binding constants, for the interaction of the essential exchangeable inorganic metal elements with both inorganic and organic non-metal materials. The binding constants give rise to fixed free metal ion concentration profiles for different metal ions and ligands in the cytoplasm of all cells closely related to the Irving-Williams series. The amounts of bound elements depend on the organic molecules present as well as these free ion concentrations. This system must have predated coding which is probably only essential for reproductive life. Later evolution in changing chemical environments became based on the development of extra cytoplasmic compartments containing quite different energised free (and bound) element contents but in feed-back communication with the central primitive cytoplasm which changed little. Hence species multiplied late in evolution in large part due to the coupling with the altered inorganic environment.
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Dateo, Christopher E.
2005-01-01
The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, denoted CCSD(T), has been used, in conjunction with approximate integral techniques, to compute highly accurate rovibrational spectroscopic constants of cyclopropenylidene, C3H2. The approximate integral technique was proposed in 1994 by Rendell and Lee in order to avoid disk storage and input/output bottlenecks, and today it will also significantly aid in the development of algorithms for distributed memory, massively parallel computer architectures. It is shown in this study that use of approximate integrals does not impact the accuracy of CCSD(T) calculations. In addition, the most accurate spectroscopic data yet for C3H2 are presented, based on a CCSD(T)/cc-pVQZ quartic force field that is modified to include the effects of core-valence electron correlation. Cyclopropenylidene is of great astronomical and astrobiological interest because it is the smallest aromatic ringed compound to be positively identified in the interstellar medium, and is thus involved in the prebiotic processing of carbon and hydrogen.
Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.
2006-04-15
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed. (author)
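The front-mode diagnostic described above can be sketched as follows (a toy illustration with invented numbers; the classification threshold is an assumption, not the authors' criterion):

```python
import numpy as np

# Classify points on an ignition front as deflagrative or spontaneous from the
# ratio of the local displacement speed to the laminar deflagration speed.
s_L = 1.0                                        # laminar deflagration speed (assumed units)
front_speeds = np.array([0.8, 1.2, 5.0, 40.0, 120.0])  # invented front speeds
ratio = front_speeds / s_L

# Assumed convention: ratios of order one indicate a deflagration; ratios much
# larger than one indicate spontaneous (ignition-front) propagation.
threshold = 10.0
mode = np.where(ratio < threshold, "deflagration", "spontaneous")
print(list(zip(ratio, mode)))
```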
Reduction of iron-oxide-carbon composites: part I. Estimation of the rate constants
Halder, S.; Fruehan, R.J.
2008-12-15
A new ironmaking concept using iron-oxide-carbon composite pellets has been proposed, which involves the combination of a rotary hearth furnace (RHF) and an iron bath smelter. This part of the research focuses on studying the two primary chemical kinetic steps. Efforts have been made to experimentally measure the kinetics of the carbon gasification by CO2 and wüstite reduction by CO by isolating them from the influence of heat- and mass-transport steps. A combined reaction model was used to interpret the experimental data and determine the rate constants. Results showed that the reduction is likely to be influenced by the chemical kinetics of both carbon oxidation and wüstite reduction at the temperatures of interest. Devolatilized wood-charcoal was observed to be a far more reactive form of carbon in comparison to coal-char. Sintering of the iron-oxide at the high temperatures of interest was found to exert a considerable influence on the reactivity of wüstite by virtue of altering the internal pore surface area available for the reaction. Sintering was found to be predominant for highly porous oxides and less of an influence on the denser ores. It was found using an indirect measurement technique that the rate constants for wüstite reduction were higher for the porous iron-oxide than dense hematite ore at higher temperatures (>1423 K). Such an indirect mode of measurement was used to minimize the influence of sintering of the porous oxide at these temperatures.
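Although the abstract does not give the fitted values, rate constants of this kind are conventionally summarized in Arrhenius form; a minimal sketch with invented parameters:

```python
import math

# Arrhenius law, k = A * exp(-Ea / (R * T)). The pre-exponential factor and
# activation energy below are invented for illustration only.
R = 8.314            # gas constant, J/(mol·K)
A = 1.0e7            # assumed pre-exponential factor, 1/s
Ea = 200e3           # assumed activation energy, J/mol

def k(T):
    """Arrhenius rate constant at absolute temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

# The abstract notes sintering effects become prominent above ~1423 K; an
# Arrhenius law by itself predicts a strong monotonic increase of k with T:
print(k(1273.0), k(1423.0), k(1573.0))
```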
Rellergert, Wade G.; Hudson, Eric R.; DeMille, D.; Greco, R. R.; Hehlen, M. P.; Torgerson, J. R.
2010-05-21
We describe a novel approach to directly measure the energy of the narrow, low-lying isomeric state in 229Th. Since nuclear transitions are far less sensitive to environmental conditions than atomic transitions, we argue that the 229Th optical nuclear transition may be driven inside a host crystal with a high transition Q. This technique might also allow for the construction of a solid-state optical frequency reference that surpasses the short-term stability of current optical clocks, as well as improved limits on the variability of fundamental constants. Based on analysis of the crystal lattice environment, we argue that a precision (short-term stability) of 3×10^-17 < Δf/f < 1×10^-15 after 1 s of photon collection may be achieved, with a systematic-limited accuracy (long-term stability) of Δf/f ≈ 2×10^-16. Improvement by a factor of 10^2-10^3 in the constraints on the variability of several important fundamental constants also appears possible.
NASA Astrophysics Data System (ADS)
Baldacci, A.; Stoppa, P.; Visinoni, R.; Wugt Larsen, R.
2012-09-01
The high resolution infrared absorption spectrum of CH2D81Br has been recorded by Fourier transform spectroscopy in the range 550-1075 cm^-1, with an unapodized resolution of 0.0025 cm^-1, employing a synchrotron radiation source. This spectral region is characterized by the ν6 (593.872 cm^-1), ν5 (768.710 cm^-1) and ν9 (930.295 cm^-1) fundamental bands. The ground state constants up to sextic centrifugal distortion terms have been obtained for the first time by ground-state combination differences from the three bands and subsequently employed for the evaluation of the excited state parameters. Watson's A-reduced Hamiltonian in the I^r representation has been used in the calculations. The ν6 = 1 level is essentially free from perturbation, whereas the ν5 = 1 and ν9 = 1 states are mutually interacting through a-type Coriolis coupling. Accurate spectroscopic parameters of the three excited vibrational states, and a high-order coupling constant which takes into account the interaction between ν5 and ν9, have been determined.
Belli, Renan; Wendler, Michael; de Ligny, Dominique; Cicconi, Maria Rita; Petschelt, Anselm; Peterlik, Herwig; Lohbauer, Ulrich
2017-01-01
A deeper understanding of the mechanical behavior of dental restorative materials requires insight into the materials' elastic constants and microstructure. Here we aim to use complementary methodologies to thoroughly characterize chairside CAD/CAM materials and discuss the benefits and limitations of different analytical strategies. Eight commercial CAD/CAM materials, ranging from polycrystalline zirconia (e.max ZirCAD, Ivoclar-Vivadent), reinforced glasses (Vitablocs Mark II, VITA; Empress CAD, Ivoclar-Vivadent) and glass-ceramics (e.max CAD, Ivoclar-Vivadent; Suprinity, VITA; Celtra Duo, Dentsply) to hybrid materials (Enamic, VITA; Lava Ultimate, 3M ESPE), were selected. Elastic constants were evaluated using three methods: resonant ultrasound spectroscopy (RUS), the resonant beam technique (RBT) and the ultrasonic pulse-echo (PE) method. The microstructures were characterized using scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), Raman spectroscopy and X-ray diffraction (XRD). Young's modulus (E), shear modulus (G), bulk modulus (B) and Poisson's ratio (ν) were obtained for each material. E and ν reached values ranging from 10.9 (Lava Ultimate) to 201.4 (e.max ZirCAD) and from 0.173 (Empress CAD) to 0.47 (Lava Ultimate), respectively. RUS proved to be the most complex and reliable method, while the PE method was the easiest to perform but the most unreliable. All dynamic methods showed limitations in measuring the elastic constants of materials with high damping behavior (hybrid materials). SEM images, Raman spectra and XRD patterns were made available for each material, proving to be complementary tools in the characterization of their crystal phases. Here different methodologies are compared for the measurement of elastic constants and microstructural characterization of CAD/CAM restorative materials. The elastic properties and crystal phases of eight materials are herein fully characterized. Copyright © 2016 The Academy of Dental Materials
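For an isotropic material, any two of the four constants (E, G, B, ν) determine the other two through standard relations; a minimal sketch (the Young's modulus echoes the zirconia value above, while the Poisson's ratio is assumed purely for illustration):

```python
# Standard isotropic elasticity relations: G = E / (2(1 + nu)), B = E / (3(1 - 2 nu)).
def shear_modulus(E, nu):
    """Shear modulus from Young's modulus and Poisson's ratio."""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E, nu):
    """Bulk modulus from Young's modulus and Poisson's ratio."""
    return E / (3.0 * (1.0 - 2.0 * nu))

E, nu = 201.4, 0.31   # E as reported for the zirconia; nu assumed (units as reported)
G = shear_modulus(E, nu)
B = bulk_modulus(E, nu)
print(G, B)
```

Cross-checking measured E, G, B, ν against these identities is a quick consistency test of the three dynamic methods compared in the study.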
ERIC Educational Resources Information Center
Environmental Protection Agency, Research Triangle Park, NC. Air Pollution Training Inst.
This workbook is part five of a self-instructional course prepared for the United States Environmental Protection Agency. The student proceeds at his own pace and when questions are asked, after answering, he either turns to the next page to check his response or refers to the previously covered material. The purpose of this course is to prepare…
Fundamentals of Physics, Part 2, Chapters 13 - 21 , Enhanced Problems Version
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2002-10-01
PART II. Equilibrium and Elasticity. Gravitation. Fluids. Oscillations. Waves--I. Waves--II. Temperature, Heat, and the First Law of Thermodynamics. The Kinetic Theory of Gases. Entropy and the Second Law of Thermodynamics.
Fundamentals of Physics, Part 3, Chapters 22 - 33, Enhanced Problems Version
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert
2002-04-01
PART III. Electric Charge. Electric Fields. Gauss' Law. Electric Potential. Capacitance. Current and Resistance. Circuits. Magnetic Fields. Magnetic Fields Due to Currents. Induction and Inductance. Magnetism of Matter; Maxwell's Equation. Electromagnetic Oscillations and Alternating Current.
40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X
Code of Federal Regulations, 2014 CFR
2014-07-01
Appendix VI to 40 CFR Part 265 (Protection of Environment, Environmental Protection Agency) tabulates compounds with a Henry's law constant less than 0.1 Y/X by compound name and CAS number; entries include Acetaldol (107-89-1), Acetamide (60-35-5), Dimethyldisulfide (624-92-0), Dimethylformamide (68-12-2) and 1,1-Dimethylhydrazine (57-14-7), among others.
Fundamentals in Biostatistics for Research in Pediatric Dentistry: Part I - Basic Concepts.
Garrocho-Rangel, J A; Ruiz-Rodríguez, M S; Pozos-Guillén, A J
The purpose of this report is to provide the reader with some basic concepts in order to better understand the significance and reliability of the results of any article on Pediatric Dentistry. Currently, Pediatric Dentists need the best evidence available in the literature on which to base their diagnoses and treatment decisions for children's oral care. A basic understanding of Biostatistics plays an important role throughout the Evidence-Based Dentistry (EBD) process. This report describes the fundamentals of Biostatistics and introduces the basic concepts used in statistics, such as summary measures, estimation, hypothesis testing, effect size, level of significance, p value and confidence intervals, which are available to Pediatric Dentists interested in reading or designing original clinical or epidemiological studies.
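The concepts listed above (summary measures, effect size, confidence intervals) can be illustrated on invented data; a minimal sketch using a pooled-variance effect size and a normal-approximation confidence interval:

```python
import math
import statistics

# Two invented samples (e.g., a clinical outcome under two treatments).
a = [14.2, 15.1, 13.8, 16.0, 15.5, 14.9]
b = [12.9, 13.4, 12.1, 13.8, 12.6, 13.2]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
sd_a, sd_b = statistics.stdev(a), statistics.stdev(b)
n_a, n_b = len(a), len(b)

# Cohen's d with pooled standard deviation (an effect-size measure)
sp = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
d = (mean_a - mean_b) / sp

# Approximate 95% confidence interval for the difference in means (z ~ 1.96);
# an interval excluding zero suggests significance at the 5% level.
se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
ci = (mean_a - mean_b - 1.96 * se, mean_a - mean_b + 1.96 * se)
print(d, ci)
```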
Fundamentals of Physics, Part 1, Chapters 1 - 12, Enhanced Problems Version
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert
2002-04-01
PART I. Measurement. Motion Along a Straight Line. Vectors. Motion in Two and Three Dimensions. Force and Motion--I. Force and Motion--II. Kinetic Energy and Work. Potential Energy and Conservation of Energy. Systems of Particles. Collisions. Rotation. Rolling, Torque, and Angular Momentum.
NASA Technical Reports Server (NTRS)
Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.
1987-01-01
An overview of parabolic and PNS (parabolized Navier-Stokes) methodology developed to treat highly curved subsonic and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, which extends parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.
On decay constants and orbital distance to the Sun—part II: beta minus decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
Claims that proximity to the Sun causes variations of decay constants at the permille level have been investigated for beta-minus decaying nuclides. Repeated activity measurements of 3H, 14C, 60Co, 85Kr, 90Sr, 124Sb, 134Cs, 137Cs, and 154Eu sources were performed over periods of 259 d up to 5 decades at various nuclear metrology institutes. Residuals from the exponential decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ in amplitude and phase from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within 10^-4-10^-5 range precision. The most stable activity measurements of β- decaying sources set an upper limit of 0.003%-0.007% on the amplitude of annual oscillations in the decay rate. There are no apparent indications for systematic oscillations at a level of weeks or months.
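The residual inspection described above can be sketched as a linear least-squares fit of an annual sinusoid to synthetic decay-curve residuals (the injected amplitude, noise level, and sampling below are all invented):

```python
import numpy as np

# Fit sin/cos terms at the annual frequency, plus an offset and slope, to
# fractional residuals from an exponential decay curve.
rng = np.random.default_rng(0)
t = np.arange(0.0, 5 * 365.25)               # daily sampling over ~5 yr (assumed)
amp_true = 5e-5                              # injected annual amplitude (invented)
res = amp_true * np.sin(2 * np.pi * t / 365.25) + 1e-5 * rng.standard_normal(t.size)

w = 2 * np.pi / 365.25
X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(X, res, rcond=None)
amp_fit = np.hypot(coef[0], coef[1])         # recovered oscillation amplitude
print(amp_fit)
```

Applied to residuals that contain no genuine oscillation, the same fit yields an upper limit on the annual amplitude, which is how bounds like those quoted above are stated.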
On decay constants and orbital distance to the Sun—part I: alpha decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
Claims that proximity to the Sun causes variation of decay constants at the permille level have been investigated for alpha decaying nuclides. Repeated decay rate measurements of 209Po, 226Ra, 228Th, 230U, and 241Am sources were performed over periods of 200 d up to two decades at various nuclear metrology institutes around the globe. Residuals from the exponential decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ in amplitude and phase from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. The most stable activity measurements of α decaying sources set an upper limit between 0.0006% and 0.006% on the amplitude of annual oscillations in the decay rate. There are no apparent indications for systematic oscillations at a level of weeks or months. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within 10^-5-10^-6 range precision.
Fundamentals of chronic pain in children and young people. Part 2.
Forgeron, Paula A; Stinson, Jennifer
2014-11-01
Chronic pain is common in childhood and can have severe physical and psychological consequences but, unlike acute pain, it is not always recognised by nurses and other health professionals. A holistic and multidisciplinary approach to treatment is required and nurses can play a significant role in helping children and families to cope with the negative effects of the condition. The first part of this article, published in October, looked at the prevalence, anatomy and physiology of pain, and factors associated with chronic pain and its consequences. In part 2, assessment strategies as well as pharmacological and psychological interventions, are discussed, along with self-help programmes and strategies that can be used to aid sleep and help the child at school manage their pain.
NASA Astrophysics Data System (ADS)
Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho; Song, Wonkyoung; Kim, Changryol; Ki, Jungseok
2014-05-01
One of the most important physical properties in the evaluation of shale gas is the elastic constants of the formation. Normally, elastic constants from geophysical well logging and laboratory tests are used in the design of hydraulic fracturing. A three-inch-diameter borehole was drilled to a depth of 505 m for the evaluation of shale gas and was fully cored at the Haenan Basin, in the southwestern part of the Korean Peninsula. We performed various laboratory tests and geophysical well logging using a slimhole logging system. The geophysical well logs include radioactive logs, such as natural gamma, density and neutron logs, monopole and dipole sonic logs, and image logs. The laboratory tests comprise axial compression tests, elastic wave velocity and density measurements, and static elastic constant measurements for 21 shale and sandstone cores. We analyzed the relationships between the physical properties from well logs and laboratory tests, as well as the static elastic constants from laboratory tests. In the case of the sonic log using a monopole source with a main frequency of 23 kHz, the P-wave velocity was measured reliably. When using dipole excitation at low frequency, the signal-to-noise ratio of the measured shear wave was very low, but when measuring in time mode at a fixed depth, the signal-to-noise ratio improved enough to discriminate the shear wave. P-wave velocities from the laboratory tests and sonic logging agreed well overall, but S-wave velocities did not. The reason for the discrepancy between the laboratory tests and the sonic log is mainly the low signal-to-noise ratio of the sonic log data from the low-frequency dipole source; measuring S-waves in a small-diameter borehole remains a challenge. The relationship between the P-wave velocity and the two dynamic elastic constants, Young's modulus and Poisson's ratio, shows a good correlation. And the relationship between the static elastic constants and dynamic elastic constants also
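The dynamic elastic constants discussed above follow from the logged velocities and density via the standard isotropic relations; a sketch with assumed values (the velocities and density below are illustrative, not from the study):

```python
# Dynamic elastic constants from sonic-log velocities and bulk density.
rho = 2650.0      # bulk density, kg/m^3 (assumed)
vp = 4200.0       # P-wave velocity, m/s (assumed)
vs = 2400.0       # S-wave velocity, m/s (assumed)

# Standard isotropic relations
nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))             # dynamic Poisson's ratio
E = rho * vs**2 * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)  # dynamic Young's modulus, Pa
G = rho * vs**2                                              # dynamic shear modulus, Pa
print(nu, E / 1e9, G / 1e9)
```

Both formulas involve vs, which is why an unreliable S-wave pick from the low-frequency dipole source propagates directly into the dynamic Young's modulus and Poisson's ratio.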
Tam, K Y; Takács-Novák, K
1999-03-01
Acid dissociation constants (pKa values) denote the extent of ionization of drug molecules at different pH values, which is important in understanding their penetration through biological membranes and their interaction with receptors. However, many drug molecules are sparingly soluble in water or contain ionization centres with overlapping pKa values, making precise pKa determination difficult using conventional spectrophotometric titration. In this work, we investigate a multiwavelength spectrophotometric titration (WApH) method for the determination of pKa values. Spectral changes arising during pH-metric titrations of substances at concentrations of about 10^-5 M were captured by means of an optical system developed in this study. All experiments were carried out in 0.15 M KCl solution at 25 +/- 0.5 degrees C. Mathematical treatments based on the first-derivative spectrophotometry procedure and the target factor analysis method were applied to calculate the pKa values from the multiwavelength absorption titration data. pKa values were determined by the WApH technique for six ionizable substances, namely benzoic acid, phenol, phthalic acid, nicotinic acid, p-aminosalicylic acid and phenolphthalein. The pKa values measured using the WApH technique are in excellent agreement with those measured pH-metrically. We have demonstrated that the first-derivative spectrophotometry procedure provides a relatively simple way to visualize pKa values, and the values so obtained are consistent with those determined using the target factor analysis method. However, for ionization systems with insufficient spectral data around the sought pKa values, or with closely overlapping pKa values, the target factor analysis method outperforms the first-derivative procedure. Using the target factor analysis method, it has been shown that the two-step ionization of phenolphthalein involves a colorless anion intermediate and a red-colored di-anion.
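As a greatly simplified, hypothetical illustration of spectrophotometric pKa determination (a single wavelength and a grid search, rather than the multiwavelength target-factor-analysis machinery described above), one can fit the sigmoidal absorbance-versus-pH curve of a monoprotic acid:

```python
import math

def absorbance(ph, pka, a_ha, a_a):
    """Single-wavelength absorbance of a monoprotic acid: a Beer-Lambert
    mixture of the protonated (a_ha) and deprotonated (a_a) species."""
    f = 10 ** (ph - pka)              # ratio [A-]/[HA] from the pKa definition
    return (a_ha + a_a * f) / (1 + f)

def fit_pka(ph_vals, a_vals, a_ha, a_a):
    """Least-squares grid search for pKa over 2.0-8.0 in 0.001 steps."""
    best = min((sum((absorbance(ph, pka, a_ha, a_a) - a) ** 2
                    for ph, a in zip(ph_vals, a_vals)), pka)
               for pka in (i * 0.001 for i in range(2000, 8000)))
    return best[1]

# Synthetic titration of a hypothetical acid with pKa = 4.20
phs = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
obs = [absorbance(ph, 4.20, 0.10, 0.80) for ph in phs]
pka_est = fit_pka(phs, obs, 0.10, 0.80)
```

On noise-free synthetic data the grid search recovers the input pKa; the real difficulty the paper addresses (overlapping pKa values and weak spectral changes) is what motivates the factor-analysis treatment.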
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Because of the inherent randomness of measurement and the variability of environmental conditions, uncertainty affects their application. This study focuses on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables, and the goal is pursued in a pair of companion papers. This paper (Part I) is dedicated to the formal mathematical proofs. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. Closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are derived analytically. Several properties are then deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork for probabilistic models of raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
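One property of such complex ratio random variables is easy to verify by the kind of MCS the paper uses: the ratio of two independent circularly-symmetric complex normal variables is rotation-invariant, so its phase is uniform on (-pi, pi]. A small illustrative sketch (not the paper's derivation):

```python
import cmath
import math
import random

random.seed(1)

def ccn(sigma=1.0):
    """One sample of a circularly-symmetric complex normal variable:
    independent real and imaginary parts, each N(0, sigma^2/2)."""
    s = sigma / math.sqrt(2.0)
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

n = 200_000
# Raw "transmissibility-like" ratios of two independent CCN variables
ratios = [ccn() / ccn() for _ in range(n)]

# Circular mean of the phase; it should vanish if the phase is uniform.
resultant = sum(cmath.exp(1j * cmath.phase(r)) for r in ratios) / n
```

The magnitude of the circular resultant scales like 1/sqrt(n) for a uniform phase, so with 200,000 samples it should be of order 0.002.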
NASA Astrophysics Data System (ADS)
Singh, Surjit; Luck, Werner A. P.
1981-05-01
The various expressions considered in Part I for the transition moment matrix elements of the fundamental and first two overtones are applied to carbon monoxide. The coefficients a_ij in the expressions R_i0 = Σ_j a_ij p_j (where R_i0 is the transition moment integral for the 0 → i vibrational transition and p_j is the dipole moment derivative ∂^j P/∂ξ^j, with ξ = (r - re)/re and re the equilibrium bond distance) are reported for i, j = 1, 2, 3. It is found that these coefficients do not vary by more than 5% when compared for the same i, j values across the various expressions, irrespective of how exhaustive the treatments used in deriving the original expressions were. On the basis of the values of the coefficients obtained for CO, generalisations are suggested on the effects of including mechanical and electrical anharmonicity on the intensities of the fundamental and first two overtones. It is generally observed that the contribution of p'1 is about 100-fold larger than that of p'2 for R10. On the other hand, the contributions of p'1 and p'2 to R20 and R30 are of nearly equal magnitude but opposite in sign. The contribution of p'1 to R10 is much higher than its contribution to R20 and R30. These observations lead us to conclude that, whereas the effect of including mechanical anharmonicity on the intensity of the fundamental band is negligible, this effect is almost comparable to the effect of including electrical anharmonicity for the first two overtones. Simple forms of the a_ij expressions are applied to HCl and OH to demonstrate the effect of variation of molecular constants on the a_ij values. On the basis of the observed trend in the values of these coefficients for CO, HCl and OH, general remarks on the effects of hydrogen bonding on IR band intensities are given.
Singh, Bhupinder; Kumar, Rajiv; Ahuja, Naveen
2005-01-01
, postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology. The broad topic of DoE optimization methodology is covered in two parts. Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and judicious selection of designs and models.
Pozos-Guillén, Amaury; Ruiz-Rodríguez, Socorro; Garrocho-Rangel, Arturo
The main purpose of this second part of the series is to provide the reader with some basic aspects of the most common biostatistical methods employed in the health sciences, in order to better understand the validity, significance and reliability of the results of any article on Pediatric Dentistry. As mentioned in the first paper, Pediatric Dentists currently need basic biostatistical knowledge so they can apply it when critically appraising a dental article during the Evidence-Based Dentistry (EBD) process, or when participating in the development of a clinical study with pediatric dental patients. The EBD process provides a systematic approach to collecting, reviewing and analyzing current and relevant published evidence about oral health care in order to answer a particular clinical question; this evidence should then be applied in everyday practice. This second report describes the most commonly used statistical methods for analyzing and interpreting collected data, and the methodological criteria to be considered when choosing the most appropriate tests for a specific study. These are available to Pediatric Dentistry practitioners interested in reading or designing original clinical or epidemiological studies.
Morgera, S D
1987-01-01
Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to O(N^3) operations for a conventional, or general, rotation-based eigensystem solver and even the O(2N^2) operations using an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure, since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
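The operator-based parallel architecture itself is beyond a short sketch, but the payoff of centrosymmetry is easy to illustrate: the eigenproblem of a symmetric centrosymmetric matrix splits exactly into two half-size symmetric problems. A pure-Python 4 x 4 example with illustrative values (not the classifier's covariance data):

```python
import math

# A symmetric centrosymmetric matrix satisfies A[i][j] == A[n-1-i][n-1-j].
# For n = 2m it has the block form A = [[B, C], [JCJ, JBJ]], where J is the
# exchange (flip) matrix, and its eigenpairs come from the two half-size
# symmetric matrices B + CJ (symmetric modes) and B - CJ (skew modes).
B = [[4.0, 1.0], [1.0, 3.0]]
C = [[2.0, 0.5], [1.0, 2.0]]          # chosen so that C^T == J C J

def jmj(m):                            # J @ M @ J: reverse rows and columns
    return [row[::-1] for row in m[::-1]]

def mj(m):                             # M @ J: reverse columns
    return [row[::-1] for row in m]

A = [B[i] + C[i] for i in range(2)] + [jmj(C)[i] + jmj(B)[i] for i in range(2)]

def eig2_sym(p, q, r):
    """Eigenpairs of the symmetric 2x2 matrix [[p, q], [q, r]] (q != 0)."""
    mean, rad = (p + r) / 2.0, math.hypot((p - r) / 2.0, q)
    pairs = []
    for lam in (mean + rad, mean - rad):
        norm = math.hypot(q, lam - p)
        pairs.append((lam, (q / norm, (lam - p) / norm)))
    return pairs

cj = mj(C)
eigs = []
for sign in (1.0, -1.0):
    m_half = [[B[i][j] + sign * cj[i][j] for j in range(2)] for i in range(2)]
    for lam, (u0, u1) in eig2_sym(m_half[0][0], m_half[0][1], m_half[1][1]):
        # Reassemble the full eigenvector [u; sign * J u]
        eigs.append((lam, [u0, u1, sign * u1, sign * u0]))

# Residual max |A v - lam v| over all four reconstructed eigenpairs
res = max(abs(sum(A[i][j] * v[j] for j in range(4)) - lam * v[i])
          for lam, v in eigs for i in range(4))
```

Each half-size problem can be handled independently, which is the essence of the parallel channels mentioned in the abstract.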
Tiller, William A
2010-04-01
In Part I of this pair of articles, the fundamental experimental observations and theoretical perspectives were provided for one to understand the key differences between our normal, uncoupled state of physical reality and the human consciousness-induced coupled state of physical reality. Here in Part II, the thermodynamics of complementary and alternative medicine, which deals with the partially coupled state of physical reality, is explored via the use of five different foci of relevance to today's science and medicine: (1) homeopathy; (2) the placebo effect; (3) long-range, room temperature, macroscopic size-scale, information entanglement; (4) an explanation for dark matter/energy plus human levitation possibility; and (5) electrodermal diagnostic devices. The purpose of this pair of articles is to clearly differentiate the use and limitations of uncoupled state physics in both nature and today's orthodox medicine from coupled state physics in tomorrow's complementary and alternative medicine.
Windberger, A; Crespo López-Urrutia, J R; Bekker, H; Oreshkina, N S; Berengut, J C; Bock, V; Borschevsky, A; Dzuba, V A; Eliav, E; Harman, Z; Kaldor, U; Kaul, S; Safronova, U I; Flambaum, V V; Keitel, C H; Schmidt, P O; Ullrich, J; Versolato, O O
2015-04-17
We measure optical spectra of Nd-like W, Re, Os, Ir, and Pt ions of particular interest for studies of a possibly varying fine-structure constant. Exploiting characteristic energy scalings, we identify the strongest lines, confirm the predicted 5s-4f level crossing, and benchmark advanced calculations. We infer two possible values for the energies of the optical M2/E3 and E1 transitions in Ir^{17+} that have the highest predicted sensitivity to a variation of the fine-structure constant among stable atomic systems. Furthermore, we determine the energies of proposed frequency standards in Hf^{12+} and W^{14+}.
Greenbury, S. F.; Ahnert, S. E.
2015-01-01
Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype–phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into ‘constrained' and ‘unconstrained' sequences, in the broadest possible sense. As ‘constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. ‘Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with ‘coding' and ‘non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps. PMID:26609063
NASA Astrophysics Data System (ADS)
Jeong, Hyunjo; Hsu, David K.; Shannon, Robert E.; Liaw, Peter K.
1994-04-01
The anisotropic elastic properties of silicon-carbide particulate (SiC p ) reinforced Al metal matrix composites were characterized using ultrasonic techniques and microstructural analysis. The composite materials, fabricated by a powder metallurgy extrusion process, included 2124, 6061, and 7091 Al alloys reinforced by 10 to 30 pct of α-SiC p by volume. Results were presented for the assumed orthotropic elastic constants obtained from ultrasonic velocities and for the microstructural data on particulate shape, aspect ratio, and orientation distribution. All of the composite samples exhibited a systematic anisotropy: the stiffness in the extrusion direction was the highest, and the stiffness in the out-of-plane direction was the lowest. Microstructural analysis suggested that the observed anisotropy could be attributed to the preferred orientation of SiC p . The ultrasonic velocity was found to be sensitive to internal defects such as porosity and intermetallic compounds. It has been observed that ultrasonics may be a useful, nondestructive technique for detecting small directional differences in the overall elastic constants of the composites since a good correlation has been noted between the velocity and microstructure and the mechanical test. By incorporating the observed microstructural characteristics, a theoretical model for predicting the anisotropic stiffnesses of the composites has been developed and is presented in a companion article (Part II).
Sankaran, Ramanan; Chen, Jacqueline H.; Hawkes, Evatt R.; Pebay, Philippe Pierre
2005-01-01
The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.
1990-01-01
The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.
Nifant'eva, T I; Shkinev, V M; Spivakov, B Y; Burba, P
1999-02-01
The assessment of conditional stability constants of aquatic humic substance (HS) metal complexes is reviewed, with special emphasis on the application of ultrafiltration methods. Fundamentals and limitations of stability functions in the case of macromolecular and polydisperse metal-HS species in aquatic environments are critically discussed. The review summarizes the advantages and applications of ultrafiltration for metal-HS complexation studies and discusses the comparability and reliability of the resulting stability constants. The potential of ultrafiltration procedures for characterizing the lability of metal-HS species is also stressed.
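A conditional stability constant of the kind discussed here can be estimated from an ultrafiltration mass balance. A schematic sketch with invented numbers; a real treatment must account for binding-site heterogeneity, polydispersity and the conditional nature of the constant:

```python
import math

def log_k_conditional(m_total, m_free, hs_total):
    """Conditional stability constant K' = [M-HS] / ([M] [HS]) from an
    ultrafiltration separation: m_free is the metal passing the membrane,
    the retained fraction is taken as HS-bound.  With HS in g/L, K' has
    units of L/g (a common operational convention)."""
    m_bound = m_total - m_free
    hs_free = hs_total          # HS assumed in large excess over bound metal
    return math.log10(m_bound / (m_free * hs_free))

# Hypothetical result: 80% of 1 uM metal retained with 0.02 g/L HS
logk = log_k_conditional(m_total=1e-6, m_free=2e-7, hs_total=0.02)
```

The value returned is conditional on pH, ionic strength and the HS concentration scale, which is exactly why the review stresses comparability between methods.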
Zhou, Chong-Wen; Simmie, John M.; Pitz, William J.; Curran, Henry J.
2016-08-25
Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. We present calculated thermodynamic and kinetic data for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. Furthermore, these radicals can be formed via H atom abstraction reactions by Ḣ and Ȯ atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. We also determined the subsequent β-scission of the radicals formed, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.
Zhou, Chong-Wen; Simmie, John M; Pitz, William J; Curran, Henry J
2016-09-15
Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. Calculated thermodynamic and kinetic data are presented for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. These radicals can be formed via H atom abstraction reactions by Ḣ and Ȯ atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. The subsequent β-scission of the radicals formed is also determined, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit.
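Rate constants from calculations of this kind are commonly reported in modified Arrhenius form, k(T) = A T^n exp(-Ea/RT). A sketch evaluating two hypothetical abstraction channels; the parameters below are illustrative placeholders, not the values computed in this work:

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol K)

def k_mod_arrhenius(temp, a, n, ea):
    """Modified Arrhenius rate constant k = A * T**n * exp(-Ea / (R T)),
    with Ea in kcal/mol and A in whatever units the channel requires."""
    return a * temp ** n * math.exp(-ea / (R_KCAL * temp))

temps = (800.0, 1200.0, 1600.0)
# Hypothetical parameters for two competing abstraction channels
k_alpha = [k_mod_arrhenius(t, 1.0e6, 2.0, 5.0) for t in temps]
k_beta = [k_mod_arrhenius(t, 5.0e5, 2.2, 3.5) for t in temps]
```

Branching ratios between such channels, k_alpha / (k_alpha + k_beta) at each temperature, are what determine which abstraction site dominates.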
Fundamental ecology is fundamental.
Courchamp, Franck; Dunne, Jennifer A; Le Maho, Yvon; May, Robert M; Thébaud, Christophe; Hochberg, Michael E
2015-01-01
The primary reasons for conducting fundamental research are satisfying curiosity, acquiring knowledge, and achieving understanding. Here we develop why we believe it is essential to promote basic ecological research, despite increased impetus for ecologists to conduct and present their research in the light of potential applications. This includes the understanding of our environment, for intellectual, economical, social, and political reasons, and as a major source of innovation. We contend that we should focus less on short-term, objective-driven research and more on creativity and exploratory analyses, quantitatively estimate the benefits of fundamental research for society, and better explain the nature and importance of fundamental ecology to students, politicians, decision makers, and the general public. Our perspective and underlying arguments should also apply to evolutionary biology and to many of the other biological and physical sciences.
Weiss, A.; Henkel, C.; Menten, K. M.; Walter, F.; Downes, D.; Cox, P.; Carilli, C. L.
2012-07-10
We report on sensitive observations of the CO(J = 7→6) and C I(³P₂ → ³P₁) transitions in the z = 2.79 QSO host galaxy RXJ0911.4+0551 using the IRAM Plateau de Bure interferometer. Our extremely high signal-to-noise spectra, combined with the narrow CO line width of this source (FWHM = 120 km s⁻¹), allow us to estimate sensitive limits on spacetime variations of the fundamental constants using the two emission lines. Our observations show that the C I and CO line shapes are in good agreement with each other, but that the C I line profile is of the order of 10% narrower, presumably due to the lower opacity in the latter line. Both lines show faint wings with velocities up to ±250 km s⁻¹, indicative of a molecular outflow. As such, the data provide direct evidence for negative feedback in the molecular gas phase at high redshift. Our observations allow us to determine the observed frequencies of both transitions with so far unmatched accuracy at high redshift. The redshift difference between the CO and C I lines is sensitive to variations of ΔF/F with F = α²/μ, where α is the fine-structure constant and μ is the electron-to-proton mass ratio. We find ΔF/F = (6.9 ± 3.7) × 10⁻⁶ at a look-back time of 11.3 Gyr, which, within the uncertainties, is consistent with no variation of the fundamental constants.
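To first order, the fractional shift of a combination of constants follows from the apparent redshift difference of two co-spatial lines, ΔF/F ≈ (z₁ − z₂)/(1 + z). A sketch with illustrative redshifts near z = 2.79 (not the measured values from this paper):

```python
def delta_f_over_f(z_line1, z_line2):
    """First-order estimate of a fractional constant shift from the
    apparent redshift difference of two co-spatial emission lines:
    dF/F ~ (z1 - z2) / (1 + z_mean)."""
    z_mean = 0.5 * (z_line1 + z_line2)
    return (z_line1 - z_line2) / (1.0 + z_mean)

# Hypothetical line redshifts differing in the sixth decimal place
dff = delta_f_over_f(2.796500, 2.796474)
```

A redshift difference in the sixth decimal place at z ≈ 2.8 already corresponds to dF/F of a few parts per million, which is why line frequencies must be measured with extreme accuracy.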
Methanol as A Tracer of Fundamental Constants
NASA Astrophysics Data System (ADS)
Levshakov, S. A.; Kozlov, M. G.; Reimers, D.
2011-09-01
The methanol molecule CH3OH has a complex microwave spectrum with a large number of very strong lines. This spectrum includes purely rotational transitions as well as transitions with contributions from the internal degree of freedom associated with the hindered rotation of the OH group. The latter arises from tunneling of the hydrogen through the potential barriers between three equivalent potential minima. Such transitions are highly sensitive to changes in the electron-to-proton mass ratio, μ = me/mp, and have different responses to μ-variations. The highest sensitivity is found for the mixed rotation-tunneling transitions at low frequencies. Observing methanol lines provides more stringent limits on a hypothetical variation of μ than ammonia observations with the same velocity resolution. We show that the best-quality radio astronomical data on methanol maser lines constrain the variability of μ in the Milky Way at the level of |Δμ/μ| < 28 × 10⁻⁹ (1σ), which is in line with the previously obtained ammonia result, |Δμ/μ| < 29 × 10⁻⁹ (1σ). This estimate can be further improved if the rest frequencies of the CH3OH microwave lines are measured more accurately.
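The standard estimator converts an apparent velocity offset between two lines with different sensitivity coefficients K into a constraint on Δμ/μ via ΔV = c (K₂ − K₁) Δμ/μ. A sketch with invented numbers; the K values below are illustrative placeholders, not the actual methanol coefficients:

```python
C_KMS = 299792.458  # speed of light, km/s

def dmu_over_mu(v1, v2, k1, k2):
    """Estimate dmu/mu from the apparent radial-velocity offset (km/s)
    between two lines whose frequencies respond to mu with sensitivity
    coefficients k1 and k2: (v2 - v1) = c * (k2 - k1) * dmu/mu."""
    return (v2 - v1) / (C_KMS * (k2 - k1))

# Hypothetical 5 m/s offset between lines with K = -1 and K = -7
est = dmu_over_mu(0.0, 0.005, -1.0, -7.0)
```

A few m/s of velocity precision between lines with well-separated K values is enough to probe Δμ/μ at the 10⁻⁹ level, which is the regime quoted in the abstract.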
Fundamentally updating fundamentals.
Armstrong, Gail; Barton, Amy
2013-01-01
Recent educational research indicates that the six competencies of the Quality and Safety Education for Nurses initiative are best introduced in early prelicensure clinical courses. Content specific to quality and safety has traditionally been covered in senior level courses. This article illustrates an effective approach to using quality and safety as an organizing framework for any prelicensure fundamentals of nursing course. Providing prelicensure students a strong foundation in quality and safety in an introductory clinical course facilitates early adoption of quality and safety competencies as core practice values.
NASA Astrophysics Data System (ADS)
Perezhogin, I. A.; Grigoriev, K. S.; Potravkin, N. N.; Cherepetskaya, E. B.; Makarov, V. A.
2017-08-01
Considering sum-frequency generation in an isotropic chiral nonlinear medium, we analyze the transfer of the spin angular momentum of fundamental elliptically polarized Gaussian light beams to the signal beam, which appears as the superposition of two Laguerre-Gaussian modes with both spin and orbital angular momentum. Only for the circular polarization of the fundamental radiation is its angular momentum fully transferred to the sum-frequency beam; otherwise, part of it can be transferred to the medium. Its value, as well as the ratio of spin and orbital contributions in the signal beam, depends on the fundamental frequency ratio and the polarization of the incident beams. Higher energy conversion efficiency in sum-frequency generation does not always correspond to higher angular momentum conversion efficiency.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
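For contrast with the exponential formulation discussed above, the conventional power-law crack-velocity formulation gives t_f ∝ σ⁻ⁿ, so the SCG parameter n is simply the negative slope of log t_f versus log σ. A sketch on synthetic data (illustrative, not this study's measurements):

```python
import math

def fit_scg_n(stresses, times):
    """Least-squares slope of log(t_f) versus log(sigma); returns the SCG
    parameter n in the power-law relation t_f ~ sigma**(-n)."""
    xs = [math.log(s) for s in stresses]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic failure data obeying t_f = 1e30 * sigma**(-12) (invented)
sig = [200.0, 300.0, 400.0, 500.0]
tf = [1e30 * s ** -12 for s in sig]
n_est = fit_scg_n(sig, tf)
```

Unlike the exponential formulation, this fit needs no prior knowledge of the inert strength, which is the practical advantage the abstract highlights.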
NASA Technical Reports Server (NTRS)
Nelson, C. C.; Childs, D. W.; Nicks, C.; Elrod, D.
1985-01-01
The leakage and rotordynamic coefficients of constant-clearance and convergent-tapered annular gas seals were measured in an experimental test facility. The results are presented along with the theoretically predicted values. Of particular interest is the prediction that optimally tapered seals have significantly larger direct stiffness than straight seals. The experimental results verify this prediction. Generally the theory does quite well, but it fails to predict the large increase in direct stiffness when the fluid is pre-rotated.
Seethapathy, Suresh; Górecki, Tadeusz
2010-12-10
Polydimethylsiloxane (PDMS) has low permeability towards water vapour and a low activation energy of permeation towards volatile organic compounds (VOCs) compared to many other polymers. The suitability of the material for use in permeation-type passive air samplers was tested, as it should in theory reduce uptake-rate variations due to temperature changes and eliminate or reduce complications arising from sorbent saturation by water vapour. The calibration constants of a simple autosampler vial-based permeation passive sampler equipped with a PDMS membrane (Waterloo Membrane Sampler®) were determined for various analytes at different temperatures, and from these data the activation energy of permeation of PDMS towards each analyte was determined. The analytes studied belonged to various classes of compounds with wide-ranging polarities, including n-alkanes, aromatic hydrocarbons, esters and alcohols. The results confirmed an Arrhenius-type relationship between temperature and calibration constant; the activation energy of permeation for PDMS ranged from −5 kJ/mol for butylbenzene to −17 kJ/mol for sec-butyl acetate. Calibration constants of the samplers towards n-alkanes and aromatic hydrocarbons determined at humidities between 30% and 91% indicated no statistically significant variation in uptake rate with humidity for 9 of the 11 analytes studied. The results confirmed the suitability of the sampler for deployment in high-humidity areas and under varying temperature conditions.
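The Arrhenius-type relationship reported here can be sketched numerically: given a calibration constant measured at two temperatures, the activation energy of permeation follows from the slope of ln k against 1/T. The temperatures and calibration constants below are hypothetical placeholders, chosen only so the result lands in the reported −5 to −17 kJ/mol range:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

# Hypothetical calibration constants for one analyte at two temperatures.
# A negative activation energy of permeation means the calibration constant
# falls as temperature rises, as the abstract reports for PDMS.
T1, k1 = 278.15, 1.00   # K, arbitrary calibration-constant units
T2, k2 = 308.15, 0.70

# Arrhenius form: ln k = ln A - Ea/(R*T)
# =>  Ea = -R * ln(k2/k1) / (1/T2 - 1/T1)
Ea = -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
print(f"activation energy of permeation: {Ea / 1000:.1f} kJ/mol")
```

With more than two temperatures, the same quantity would come from a least-squares fit of ln k versus 1/T.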
Geraedts, K; Maes, A
2008-09-01
The interaction between colloidal Tc(IV) species and colloidal Gorleben humic substances (HS) was quantified by applying the La-precipitation method to supernatant solutions obtained under various experimental conditions at the constant ionic strength of Gorleben groundwater (0.04 M). The determined interaction constant log K(HS) (2.3 ± 0.3) remained unchanged over a large range of Tc(IV) and HS concentrations and was independent of the pH of the original supernatant solution (pH 6-10), the Tc(IV)-HS loading (10⁻³-10⁻⁶ mol Tc g⁻¹ HS) and the nature of the reducing surface (magnetite, pyrite and Gorleben sand) used for the pertechnetate reduction. The log K(HS) value determined by the La-precipitation method is lower than the log K value obtained in a previous study in which the interaction between colloidal Tc(IV) species and Gorleben humic substances was quantified using a modified Schubert approach (2.6 ± 0.3). The La-precipitation method allows accurate determination of the amount of Tc(IV) associated with HS but leads to a small overestimation of the free inorganic Tc(IV) species.
Jakovljević, Miro
2013-09-01
Psychopharmacotherapy is a fascinating field that can be understood in many different ways. It is both a science and an art of communication, with a heavily subjective dimension. The advent of a significant number of effective and well-tolerated mental health medicines during and after the 1990s, the "decade of the brain", has increased our ability to treat major mental disorders more successfully, with much better treatment outcomes including full recovery. However, there is a huge gap between this potential for high treatment effectiveness and the unsatisfying results of day-to-day clinical practice. A creative approach to psychopharmacotherapy could advance everyday clinical practice and bridge that gap. Creative psychopharmacotherapy is a concept that incorporates creativity as its fundamental tool. Creativity involves the intention and ability to transcend limiting traditional ideas, rules, patterns and relationships, and to create meaningful new ideas, interpretations, contexts and methods in clinical psychopharmacology.
NASA Technical Reports Server (NTRS)
Collins, D. J.; Coles, D. E.; Hicks, J. W.
1978-01-01
Experiments were carried out to test the accuracy of laser Doppler instrumentation for measurement of Reynolds stresses in turbulent boundary layers in supersonic flow. Two facilities were used to study flow at constant pressure. In one facility, data were obtained on a flat plate at M_e = 0.1, with Re_θ up to 8,000. In the other, data were obtained on an adiabatic nozzle wall at M_e = 0.6, 0.8, 1.0, 1.3, and 2.2, with Re_θ = 23,000 and 40,000. The mean flow as observed using Pitot tube, Preston tube, and floating-element instrumentation is described. Emphasis is on the use of similarity laws with Van Driest scaling and on the inference of the shearing stress profile and the normal velocity component from the equations of mean motion. The experimental data are tabulated.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously developed life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The fit of the data to the relation between strength and the log of the stress rate was very reasonable for most of the materials. The preloading technique was also found to be equally applicable for slow-crack-growth (SCG) parameters n greater than 30 in both the power-law and exponential formulations. The major limitation of the exponential crack-velocity formulation, however, is that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack-velocity formulation.
Murrelle, Lenn; Coggins, Christopher R E; Gennings, Chris; Carchman, Richard A; Carter, Walter H; Davies, Bruce D; Krauss, Marc R; Lee, Peter N; Schleef, Raymond R; Zedler, Barbara K; Heidbreder, Christian
2010-06-01
The risk-reducing effect of a potential reduced-risk tobacco product (PRRP) can be investigated conceptually in a long-term, prospective study of disease risks among cigarette smokers who switch to a PRRP and in appropriate comparison groups. Our objective was to provide guidance for establishing the fundamental design characteristics of a study intended to (1) determine if switching to a PRRP reduces the risk of lung cancer (LC) compared with continued cigarette smoking, and (2) compare, using a non-inferiority approach, the reduction in LC risk among smokers who switched to a PRRP to the reduction in risk among smokers who quit smoking entirely. Using standard statistical methods applied to published data on LC incidence after smoking cessation, we show that the sample size and duration required for a study designed to evaluate the potential for LC risk reduction for an already marketed PRRP, compared with continued smoking, varies depending on the LC risk-reducing effectiveness of the PRRP, from a 5-year study with 8000-30,000 subjects to a 15-year study with <5000 to 10,000 subjects. To assess non-inferiority to quitting, the required sample size tends to be about 10 times greater, again depending on the effectiveness of the PRRP. (c) 2009 Elsevier Inc. All rights reserved.
On decay constants and orbital distance to the Sun—part III: beta plus and electron capture decay
NASA Astrophysics Data System (ADS)
Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.
2017-02-01
The hypothesis that seasonal changes in proximity to the Sun cause permille-level variations of decay constants has been tested for radionuclides disintegrating through electron capture and beta plus decay. Activity measurements of 22Na, 54Mn, 55Fe, 57Co, 65Zn, 82+85Sr, 90Sr, 109Cd, 124Sb, 133Ba, 152Eu, and 207Bi sources were repeated over periods from 200 d up to more than four decades at 14 laboratories across the globe. Residuals from the exponential nuclear decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within the 10⁻⁴-10⁻⁵ precision range. The most stable activity measurements of β+ and EC decaying sources set an upper limit of 0.006% or less on the amplitude of annual oscillations in the decay rate. There are no apparent indications of systematic oscillations on timescales of weeks or months.
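The residual-inspection step described above can be illustrated with a small simulation: generate an exponential decay curve carrying a tiny annual ripple, form residuals against the pure exponential, and recover the annual cosine and sine components by linear least squares. The nuclide choice, sampling schedule, and injected 0.01% amplitude are assumptions for illustration, not the paper's data:

```python
import math

half_life = 312.2                 # days; roughly the 54Mn half-life
lam = math.log(2) / half_life     # decay constant, 1/day
A_mod = 1.0e-4                    # injected 0.01% annual modulation (hypothetical)
omega = 2.0 * math.pi / 365.25    # annual angular frequency, rad/day
days = list(range(0, 1000, 5))    # hypothetical measurement schedule

# Synthetic activity readings: exponential decay with a tiny annual ripple.
rate = [math.exp(-lam * t) * (1.0 + A_mod * math.cos(omega * t)) for t in days]

# Residuals relative to the pure exponential decay curve.
resid = [r / math.exp(-lam * t) - 1.0 for t, r in zip(days, rate)]

# Linear least squares for resid ~ a*cos(wt) + b*sin(wt), via the 2x2
# normal equations solved by hand.
Scc = sum(math.cos(omega * t) ** 2 for t in days)
Sss = sum(math.sin(omega * t) ** 2 for t in days)
Scs = sum(math.sin(omega * t) * math.cos(omega * t) for t in days)
Sc = sum(res * math.cos(omega * t) for t, res in zip(days, resid))
Ss = sum(res * math.sin(omega * t) for t, res in zip(days, resid))
det = Scc * Sss - Scs ** 2
a = (Sc * Sss - Ss * Scs) / det
b = (Ss * Scc - Sc * Scs) / det
amp = math.hypot(a, b)
print(f"recovered annual amplitude: {amp:.2e}")
```

With noise-free synthetic data the fit returns the injected amplitude exactly; on real data the same fit bounds the amplitude of any annual term in the residuals.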
Webber, D M; Tishchenko, V; Peng, Q; Battu, S; Carey, R M; Chitwood, D B; Crnkovic, J; Debevec, P T; Dhamija, S; Earle, W; Gafarov, A; Giovanetti, K; Gorringe, T P; Gray, F E; Hartwig, Z; Hertzog, D W; Johnson, B; Kammel, P; Kiburg, B; Kizilgul, S; Kunkle, J; Lauss, B; Logashenko, I; Lynch, K R; McNabb, R; Miller, J P; Mulhauser, F; Onderwater, C J G; Phillips, J; Rath, S; Roberts, B L; Winter, P; Wolfe, B
2011-01-28
We report a measurement of the positive muon lifetime to a precision of 1.0 ppm; it is the most precise particle lifetime ever measured. The experiment used a time-structured, low-energy muon beam and a segmented plastic scintillator array to record more than 2×10¹² decays. Two different stopping-target configurations were employed in independent data-taking periods. The combined results give τ(μ⁺)(MuLan) = 2 196 980.3(2.2) ps, more than 15 times as precise as any previous experiment. The muon lifetime gives the most precise value for the Fermi constant: G_F(MuLan) = 1.166 378 8(7)×10⁻⁵ GeV⁻² (0.6 ppm). It is also used to extract the μ⁻p singlet capture rate, which determines the proton's weak induced pseudoscalar coupling g_P.
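At tree level the muon lifetime fixes the Fermi constant through 1/τ = G_F² m_μ⁵ / (192 π³) in natural units. The sketch below neglects the few-per-mille phase-space and radiative corrections used in the real extraction, so it reproduces the quoted MuLan value only to about 0.2%:

```python
import math

# Physical inputs (natural units, energies in GeV).
hbar = 6.582119569e-25    # GeV·s, reduced Planck constant
tau = 2.1969803e-6        # s, MuLan muon lifetime
m_mu = 0.1056583745       # GeV, muon mass

# Decay width in GeV, then the tree-level Fermi constant:
#   Gamma = 1/tau = G_F^2 * m_mu^5 / (192 * pi^3)
gamma = hbar / tau
GF = math.sqrt(192.0 * math.pi ** 3 * gamma / m_mu ** 5)
print(f"tree-level G_F = {GF:.5e} GeV^-2")  # quoted value: 1.1663788e-5
```

The ~0.2% gap to the published 1.166 378 8(7)×10⁻⁵ GeV⁻² is absorbed by the electron-mass phase-space factor and QED corrections that the full analysis includes.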
Takács-Novák, K; Tam, K Y
2000-01-01
The acid-base equilibria of several diprotic amphoteric drugs, namely niflumic acid, norfloxacin, piroxicam, pyridoxine and 2-methyl-4-oxo-3H-quinazoline-3-acetic acid, have been characterized in terms of microconstants and tautomeric ratios. A multiwavelength spectrophotometric (WApH) titration method developed previously for the determination of acid dissociation constants (pKa values) of ionizable compounds was applied for this purpose. Microspeciation was investigated by three approaches: (1) selective monitoring of an ionizable group by spectrophotometry, (2) a deductive method and (3) the k(z) method for determination of the tautomeric ratio from co-solvent mixtures. The formulation for (3) has been derived and found to invoke fewer assumptions than a reported procedure (K. Takács-Novák, A. Avdeef, K.J. Box, B. Podányi, G. Szász, J. Pharm. Biomed. Anal., 12 (1994) 1369-1377). It has been shown that the WApH technique, for ampholytes of this type, is able to deduce microconstants and tautomeric ratios that are in good agreement with literature data.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran
1996-01-01
This final report summarizes the major findings on the subject of 'Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors', performed from 1 April 1994 to 30 June 1996. Both experimental results from Task 1 and theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work performed and describes the test facility setup, data reduction techniques employed, and results of the test firings, including effects of operating conditions and fuel additives on solid fuel regression rate and thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes including gas/surface coupling, and radiation effect on regression rate. The numerical solution of the flowfield structure and condensed phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.
Sierra-Ramírez, Rocío; Garcia, Laura A; Holtzapple, Mark Thomas
2011-07-01
Kinetic models applied to oxygen bleaching of paper pulp focus on the degradation of polymers, either lignin or carbohydrates. Traditionally, they separately model different moieties that degrade at three different rates: rapid, medium, and slow. These models were successfully applied to lignin and carbohydrate degradation of poplar wood submitted to oxidative pretreatment with lime under the following conditions: temperature 110-180 °C, total pressure 7.9-21.7 bar, and excess lime loading of 0.5 g Ca(OH)₂ per gram dry biomass. These conditions were held constant for 1-6 h. The models properly fit the experimental data and were used to determine pretreatment selectivity in two fashions: differential and integral. By assessing selectivity, the detrimental effect of pretreatment on carbohydrates at high temperatures and at low lignin content was determined. The models can be used to identify pretreatment conditions that selectively remove lignin while preserving carbohydrates. Lignin removal ≥ 50% with glucan preservation ≥ 90% was observed for differential glucan selectivities between ∼10 and ∼30 g lignin degraded per gram glucan degraded. Pretreatment conditions complying with these reference values were observed mostly at 140 °C, total pressure ≥ 14.7 bar, and pretreatment times between 2 and 6 h depending on the total pressure (the higher the pressure, the less time). They were also observed at 160 °C, total pressures of 14.7 and 21.7 bar, and a pretreatment time of 2 h. Generally, at 110 °C lignin removal is insufficient, and at 180 °C carbohydrates are not well preserved.
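The three-rate model structure described above can be sketched as a sum of first-order decays, one per moiety pool. The pool fractions and rate constants below are hypothetical placeholders, not fitted values from the study:

```python
import math

# Hypothetical three-pool parameters: (initial fraction, rate constant in 1/h)
# for the rapid, medium, and slow lignin moieties.
pools = [(0.35, 2.0), (0.40, 0.30), (0.25, 0.02)]

def fraction_remaining(t_h: float) -> float:
    """Total fraction of lignin remaining after t_h hours of pretreatment."""
    return sum(f * math.exp(-k * t_h) for f, k in pools)

# Lignin removal over the 1-6 h pretreatment window used in the study:
for t in (1, 2, 4, 6):
    removed = 1.0 - fraction_remaining(t)
    print(f"t = {t} h: lignin removal = {100 * removed:.1f}%")
```

Fitting the six parameters to measured lignin (and, separately, carbohydrate) time series is what lets the differential selectivity dL/dG be evaluated at any point along the pretreatment.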
Tiller, William A
2010-03-01
In previous articles by this author and his colleagues in the Journal of Alternative and Complementary Medicine, it has been shown that physical reality consists of two uniquely different categories of substance, one being electric charge-based while the other appears to be magnetic charge-based. Normally, only the electric atom/molecule type of substance is accessible by our traditional measurement instruments. We label this condition as the uncoupled state of physical reality that is our long-studied, electric atom/molecule level of nature. The second level of physical reality is invisible to traditional measurement instruments when the system is in the uncoupled state but is accessible to these same instruments when the system is in the coupled state of physical reality. The coupling of these two unique levels has been shown to occur via the application of a sufficient intensity of human consciousness in the form of specific intentions. Part II of this article (in a forthcoming issue) explores the thermodynamics of complementary and alternative medicine (CAM) through five different space-time applications involving coupled state physics to show their relevance to today's medicine: (1) homeopathy; (2) the placebo effect; (3) long-range, room temperature, macroscopic-size-scale, information entanglement; (4) explanation for dark matter/energy plus possible human levitation; and (5) electrodermal diagnostic devices. The purpose is to clearly differentiate the use and limitations of uncoupled state physics in nature and today's traditional medicine from coupled state physics in tomorrow's CAM. Existing orthodox science provides the technical underpinnings and mindset for today's orthodox medicine. Psycho-energetic science will provide the technical underpinnings and mindset for CAM.
NASA Technical Reports Server (NTRS)
Carroll, J. A.
1986-01-01
Some fundamental aspects of tethers are presented and briefly discussed. The effects of gravity gradients, dumbbell libration in circular orbits, tether control strategies and impact hazards for tethers are among those fundamentals. Also considered are aerodynamic drag, constraints in momentum transfer applications and constraints with permanently deployed tethers. The theoretical feasibility of these concepts is reviewed.
NASA Astrophysics Data System (ADS)
Razoumny, Yury N.
2016-11-01
This paper opens a series of articles expounding the fundamentals of route theory for satellite constellation design for discontinuous Earth coverage. In Part 1 of the series, an analytical model for Earth coverage by satellite swaths, reflecting the essentials of discontinuous coverage as opposed to continuous coverage, is presented. Analytic relations are derived for calculating single- and multi-satellite latitude coverage of the Earth's surface, and for generating the full set of typical satellite visibility-zone time streams realized in the repeating latitude coverage pattern of a given arbitrary satellite constellation. These relations are used to develop a method for analysing discontinuous coverage of a fixed arbitrary Earth region by a given constellation, using both deterministic and stochastic approaches. The method yields the revisit time for a given constellation from high-speed computer calculations (fractions of a second or seconds) across a wide range of possible revisit-time variations, with accuracy at least on par with, and in a number of cases superior to, known numerical simulation methods based on direct modeling of the satellite observation mission.
Bonhomme, Christian; Gervais, Christel; Coelho, Cristina; Pourpoint, Frédérique; Azaïs, Thierry; Bonhomme-Coury, Laure; Babonneau, Florence; Jacob, Guy; Ferrari, Maude; Canet, Daniel; Yates, Jonathan R; Pickard, Chris J; Joyce, Siân A; Mauri, Francesco; Massiot, Dominique
2010-12-01
In 2001, Pickard and Mauri implemented the gauge-including projector augmented wave (GIPAW) protocol for first-principles calculations of NMR parameters (chemical shift anisotropy and electric field gradient tensors) using periodic boundary conditions. In this paper, three potentially interesting perspectives in connection with PAW/GIPAW in solid-state NMR and pure nuclear quadrupole resonance (NQR) are presented: (i) the calculation of J coupling tensors in inorganic solids; (ii) the calculation of the antisymmetric part of chemical shift tensors; and (iii) the prediction of ¹⁴N and ³⁵Cl pure NQR resonances, including dynamics. We believe that these topics should open new insights into the combination of GIPAW, NMR/NQR crystallography, temperature effects and dynamics. Points (i), (ii) and (iii) are illustrated by selected examples: (i) chemical shift tensors and heteronuclear ²J(P-O-Si) coupling constants in silicophosphates and calcium phosphates [Si₅O(PO₄)₆, SiP₂O₇ polymorphs and α-Ca(PO₃)₂]; (ii) antisymmetric chemical shift tensors in cyclopropene derivatives, C₃X₄ (X = H, Cl, F); and (iii) ¹⁴N and ³⁵Cl NQR predictions for RDX (C₃H₆N₆O₆), β-HMX (C₄H₈N₈O₈), α-NTO (C₂H₂N₄O₃) and AlOPCl₆. RDX, β-HMX and α-NTO are explosive compounds. Copyright © 2010 John Wiley & Sons, Ltd.
Redmond, W H
2001-01-01
This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined.
Field Theory of Fundamental Interactions
NASA Astrophysics Data System (ADS)
Wang, Shouhong; Ma, Tian
2017-01-01
First, we present two basic principles: the principle of interaction dynamics (PID) and the principle of representation invariance (PRI). Intuitively, PID takes the variation of the action under an energy-momentum conservation constraint. We show that PID is required by the presence of dark matter and dark energy, the Higgs field, and quark confinement. PRI requires that an SU(N) gauge theory be independent of the representations of SU(N); PRI is clearly a logical requirement of any gauge theory. With PRI, we demonstrate that the coupling constants for the strong and weak interactions are the main sources of these two interactions, reminiscent of the electric charge. Second, we emphasize that symmetry principles (the principle of general relativity and the principles of Lorentz invariance and gauge invariance), together with the simplicity of the laws of nature, dictate the actions for the four fundamental interactions. Finally, we show that PID and PRI, together with the symmetry principles, give rise to a unified field model for the fundamental interactions that is consistent with current experimental observations and offers some new physical predictions. The research is supported in part by the National Science Foundation (NSF) grant DMS-1515024 and by the Office of Naval Research (ONR) grant N00014-15-1-2662.
Fundamentals of Library Instruction
ERIC Educational Resources Information Center
McAdoo, Monty L.
2012-01-01
Being a great teacher is part and parcel of being a great librarian. In this book, veteran instruction services librarian McAdoo lays out the fundamentals of the discipline in easily accessible language. Succinctly covering the topic from top to bottom, he: (1) Offers an overview of the historical context of library instruction, drawing on recent…
ERIC Educational Resources Information Center
Marine Corps Inst., Washington, DC.
Developed as part of the Marine Corps Institute (MCI) correspondence training program, this course on food service fundamentals is designed to provide a general background in the basic aspects of the food service program in the Marine Corps; it is adaptable for nonmilitary instruction. Introductory materials include specific information for MCI…
Balfour, Susan
2012-05-01
This article, Part 2 of a 2-part series, continues the examination of the Medicare compliance climate and its impact on hospice providers. This 2nd part focuses on hospice-specific compliance risk areas and specific risk-reduction strategies. The case example from Part 1 is continued.
NASA Technical Reports Server (NTRS)
Gupta, P. K.; Tessarzik, J. M.; Cziglenyi, L.
1974-01-01
Dynamic properties of a commercial polybutadiene compound were determined at a constant temperature of 32 °C using a forced-vibration, resonant-mass type of apparatus. A constant thermal state of the elastomer was ensured by keeping the ambient temperature constant and by limiting the power dissipation in the specimen. Experiments were performed with both compression and shear specimens at several preloads (nominal strain varying from 0 to 5 percent), and the results are reported in terms of a complex stiffness as a function of frequency. Very weak frequency dependence is observed, and a simple power-law type of correlation is shown to represent the data well. Variations in the complex stiffness as a function of preload are also found to be small for both compression and shear specimens.
2003-01-22
Still photographs taken over 16 hours on Nov. 13, 2001, on the International Space Station have been condensed into a few seconds to show the de-mixing -- or phase separation -- process studied by the Experiment on Physics of Colloids in Space. Commanded from the ground, dozens of similar tests have been conducted since the experiment arrived on ISS in 2000. The sample is a mix of polymethylmethacrylate (PMMA or acrylic) colloids, polystyrene polymers and solvents. The circular area is 2 cm (0.8 in.) in diameter. The phase separation process occurs spontaneously after the sample is mechanically mixed. The evolving lighter regions are rich in colloid and have the structure of a liquid. The dark regions are poor in colloids and have the structure of a gas. This behavior cannot be observed on Earth because gravity causes the particles to fall out of solution faster than the phase separation can occur. While similar to a gas-liquid phase transition, the growth rate observed in this test is different from any atomic gas-liquid or liquid-liquid phase transition ever measured experimentally. Ultimately, the sample separates into colloid-poor and colloid-rich areas, just as oil and vinegar separate. The fundamental science of de-mixing in this colloid-polymer sample is the same found in the annealing of metal alloys and plastic polymer blends. Improving the understanding of this process may lead to improving processing of these materials on Earth.
Kauk, Justin; Hill, Austin D; Althausen, Peter L
2014-07-01
In order for a trauma surgeon to have an intelligent discussion with hospital administrators, healthcare plans, policymakers, or other physicians, a basic understanding of the fundamentals of healthcare is paramount. It is truly shocking how many surgeons are unable to describe the difference between Medicare and Medicaid or to explain how hospitals and physicians get paid. These topics may seem burdensome, but they are vital to all business decision making in the healthcare field. The following chapter provides further insight into what we call "the basics" of providing medical care today; most of the topics presented can be applied to all specialties of medicine. It is broken down into 5 sections. The first section is a brief overview of government programs, their influence on care delivery and reimbursement, and past and future legislation. Section 2 focuses on the compliance, care provision, and privacy statutes that regulate physicians who care for Medicare/Medicaid patient populations. With a better understanding of these obligations, section 3 discusses avenues by which physicians can stay informed of current and pending health policy and provides ways that they can become involved in shaping future legislation. The fourth section changes gears slightly by explaining how the concepts of trade restraint, libel, antitrust legislation, and indemnity relate to physician practice. The fifth, and final, section ties all of the components together by describing how physician-hospital alignment can be mutually beneficial in providing patient care under current healthcare policy legislation.
Cosmology with varying constants.
Martins, Carlos J A P
2002-12-15
The idea of possible time or space variations of the 'fundamental' constants of nature, although not new, is only now beginning to be actively considered by large numbers of researchers in the particle physics, cosmology and astrophysics communities. This revival is mostly due to the claims of possible detection of such variations, in various different contexts and by several groups. I present the current theoretical motivations and expectations for such variations, review the current observational status and discuss the impact of a possible confirmation of these results in our views of cosmology and physics as a whole.
NASA Technical Reports Server (NTRS)
Wang, Jai-Ching; Watring, Dale A.; Lehoczky, Sandor L.; Su, Ching-Hua; Gillies, Don; Szofran, Frank
1999-01-01
Infrared detector materials such as Hg(1-x)Cd(x)Te and Hg(1-x)Zn(x)Te have energy gaps almost linearly proportional to their composition. Due to the wide separation of the liquidus and solidus curves of their phase diagrams, crystals grown unidirectionally at constant growth rate in a Bridgman system show compositional segregation in both the axial and radial directions. It is important to understand the mechanisms that affect lateral segregation so that crystals of large, uniform radial composition become possible. Following the treatment of Coriell et al., we have developed a theory to study the effect of a curved melt-solid interface shape on the lateral composition distribution. The system is treated as a cylindrical, azimuthally symmetric system with a curved melt-solid interface whose shape can be expressed as a linear combination of a series of Bessel functions. The results show that the melt-solid interface shape has a dominant effect on the lateral composition distribution in these systems. For small values of b, the solute concentration at the melt-solid interface scales linearly with the interface shape, with a proportionality constant equal to the product of b and (1 - k), where b = VR/D, with V the growth velocity, R the sample radius, D the diffusion constant and k the distribution constant. A detailed theory will be presented. A computer code has been developed, and simulations have been performed and compared with experimental results; these will be published in another paper.
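The scaling result quoted above (interfacial solute variation proportional to b(1 - k), with b = VR/D) can be evaluated directly. All numbers below are hypothetical, chosen only to land in the small-b regime the abstract discusses:

```python
# Sketch of the dimensionless growth parameter b = V*R/D and the predicted
# lateral solute variation for a slightly curved interface. The growth
# conditions, distribution coefficient, and interface deflection are all
# hypothetical placeholders, not values from the study.
V = 1.0e-7   # growth velocity, m/s
R = 5.0e-3   # sample radius, m
D = 5.0e-9   # solute diffusion coefficient in the melt, m^2/s
k = 0.30     # equilibrium distribution coefficient

b = V * R / D               # dimensionless growth parameter
coupling = b * (1.0 - k)    # proportionality between normalized interface
                            # shape and interfacial solute concentration

# For a normalized interface deflection of 2% (hypothetical), the predicted
# relative lateral composition variation is:
h_rel = 0.02
delta_c_rel = coupling * h_rel
print(f"b = {b:.2f}, relative lateral variation = {delta_c_rel:.2e}")
```

The linear scaling makes the practical lever clear: flattening the interface, or reducing b through slower growth or a smaller radius, suppresses the radial segregation proportionally.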
Webb, R.A.
1995-12-01
The need for accurate petroleum measurement is obvious. Petroleum measurement is the basis of commerce between oil producers, royalty owners, oil transporters, refiners, marketers, the Department of Revenue, and the motoring public. Furthermore, petroleum measurements are often used to detect operational problems or unwanted releases in pipelines, tanks, marine vessels, underground storage tanks, etc. Therefore, consistent, accurate petroleum measurement is an essential part of any operation. While there are several methods and different types of equipment used to perform petroleum measurement, the basic process stays the same: comparing an unknown quantity to a known quantity in order to establish its magnitude. The process can be seen in a variety of forms, such as measuring for a first down in a football game, weighing meat and produce at the grocery, or the use of an automobile odometer.
NASA Astrophysics Data System (ADS)
Krykunov, Mykhaylo; Seth, Michael; Ziegler, Tom; Autschbach, Jochen
2007-12-01
A time-dependent density functional theory (TDDFT) formalism with damping for the calculation of the magnetic optical rotatory dispersion and magnetic circular dichroism (MCD) from the complex Verdet constant is presented. As a justification of this approach, we have derived the TDDFT analog of the sum-over-states formula for the Verdet constant. The results of the MCD calculations by this method for ethylene, furan, thiophene, selenophene, tellurophene, and pyrrole are in good agreement with our previous theoretical sum-over-states MCD spectra. For the π → π* transition of propene, we have obtained a positive Faraday B term. It is located between the two negative B terms. This finding is in agreement with experiment in the range of 6-8 eV.
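A minimal sketch of a damped sum-over-states response of the kind described above (hypothetical excitation energies and strengths; this is a schematic Lorentzian model, not the paper's TDDFT formalism):

```python
# Schematic damped sum-over-states for a complex Verdet-type response:
# each excited state j contributes a Lorentzian resonance. The real part
# models magnetic optical rotatory dispersion; the imaginary part models
# the MCD-type absorptive response. All numbers are made up.

def complex_verdet(omega, states, gamma):
    """states: list of (omega_j, strength_j) pairs; gamma: damping,
    all in the same (arbitrary) energy units."""
    V = 0.0 + 0.0j
    for omega_j, A_j in states:
        V += A_j * omega**2 / (omega_j**2 - omega**2 - 1j * gamma * omega)
    return V

states = [(6.5, 1.0), (7.2, 0.5)]   # hypothetical excitation energies (eV)
V = complex_verdet(5.0, states, 0.1)
print(V.real, V.imag)
```

With the damping set to zero the imaginary part vanishes off resonance, which is the usual check that the dispersive and absorptive parts are consistently defined.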
Bou Malham, I; Letellier, P; Turmine, M
2007-04-15
The autoprotolysis constants (Ks) of water/1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF4) mixtures were determined at 298 K over the composition range 0 to 77.43 vol.% bmimBF4, using a potentiometric method with a glass electrode. A slight increase in the autoprotolysis constant was observed when the salt was added to the water. The ionic product of the medium then decreases as the bmimBF4 content increases beyond about 20 vol.%. The acid-base properties of these media were perfectly described by Bahe's approach, as completed by Varela et al., concerning structured electrolyte solutions with large short-range interactions.
2003-02-09
This image depicts the formation of multiple whirlpools in a sodium gas cloud. Scientists who cooled the cloud and made it spin created the whirlpools in a Massachusetts Institute of Technology laboratory, as part of NASA-funded research. This process is similar to a phenomenon called starquakes, which appear as glitches in the rotation of pulsars in space. MIT's Wolfgang Ketterle and his colleagues, who conducted the research under a grant from the Biological and Physical Research Program through NASA's Jet Propulsion Laboratory, Pasadena, Calif., cooled the sodium gas to less than one millionth of a degree above absolute zero (-273 Celsius or -460 Fahrenheit). At such extreme cold, the gas cloud converts to a peculiar form of matter called a Bose-Einstein condensate, as predicted in the 1920s by Albert Einstein and Satyendra Nath Bose of India. No physical container can hold such ultra-cold matter, so Ketterle's team used magnets to keep the cloud in place. They then used a laser beam to make the gas cloud spin, a process Ketterle compares to stroking a ping-pong ball with a feather until it starts spinning. The spinning sodium gas cloud, whose volume was one-millionth of a cubic centimeter, much smaller than a raindrop, developed a regular pattern of more than 100 whirlpools.
Dielectric Constant of Suspensions
NASA Astrophysics Data System (ADS)
Mendelson, Kenneth S.; Ackmann, James J.
1997-03-01
We have used a finite element method to calculate the dielectric constant of a cubic array of spheres. Extensive calculations support preliminary conclusions reported previously (K. Mendelson and J. Ackmann, Bull. Am. Phys. Soc. 41, 657 (1996)). At frequencies below 100 kHz the real part of the dielectric constant (ε') shows oscillations as a function of the volume fraction of the suspension. These oscillations disappear at low conductivities of the suspending fluid. Measurements of the dielectric constant (J. Ackmann et al., Ann. Biomed. Eng. 24, 58 (1996); H. Fricke and H. Curtis, J. Phys. Chem. 41, 729 (1937)) are not sufficiently sensitive to show oscillations but appear to be consistent with the theoretical results.
Energy conservation and constants variation.
NASA Astrophysics Data System (ADS)
Kraiselburd, L.; Miller Bertolami, M. M.; Sisterna, P.; Vucetich, H.
If fundamental constants vary, the internal energy of macroscopic bodies should change. This should produce observable effects. It is shown that those effects can yield upper bounds on the variation that are much lower than those coming from Eötvös experiments.
Balfour, Susan
2012-02-01
This article, Part 1 of a 2-part series, provides an overview of the current Medicare compliance climate and its implications for hospice providers. Content focuses on the 7 elements of a comprehensive compliance framework as defined by the Health and Human Services Office of the Inspector General in its 1999 Compliance Guidance for Hospices. A brief case example is provided and serves to set the stage for Part 2, which will explore hospice-specific risk areas and specific risk-reduction strategies.
Peng, Ya; Jiang, Zhong'an; Chen, Jushi
2017-03-23
The mechanism and kinetics of gas-phase hydrogen abstraction by O(³P) from methane are investigated using ab initio calculations and dynamical methods. The electronic structure properties, including the optimized geometries, relative energies, and vibrational frequencies of all the stationary points, are obtained from state-averaged complete active space self-consistent field calculations, and the single-point energies for all points on the intrinsic reaction coordinate are evaluated using the internally contracted multireference configuration interaction approach with modified optimized cc-pCVDZ basis sets. Our calculations give a fairly accurate description of the regions around the ³A″ transition state, in which O(³P) attacks along a near-collinear H-CH3 direction, with a barrier height of 12.53 kcal/mol, lower than those reported before. Subsequently, thermal rate constants for this hydrogen abstraction are calculated using the canonical unified statistical theory method over the temperature range 298 K to 1000 K. The calculated rate constants are in agreement with experiments. The present work reveals the reaction mechanism of hydrogen abstraction by O(³P) from methane, and it is helpful for understanding methane combustion.
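For a rough feel of the temperature dependence implied by the quoted 12.53 kcal/mol barrier, a bare transition-state-theory estimate can be sketched as follows (partition-function ratios are crudely set to one, so this is only an Arrhenius-like sketch, not the paper's canonical unified statistical calculation):

```python
import math

# Bare transition-state-theory sketch of k(T) for O(3P) + CH4,
# using only the 12.53 kcal/mol barrier quoted in the abstract.
# Partition-function ratios are set to 1, so absolute values are
# not meaningful; only the temperature dependence is illustrated.

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R_KCAL = 1.987204e-3  # gas constant, kcal/(mol*K)

def tst_rate(T, barrier_kcal=12.53):
    """k(T) ~ (kB*T/h) * exp(-Ea/(R*T)); units of 1/s for this toy."""
    return (KB * T / H) * math.exp(-barrier_kcal / (R_KCAL * T))

print(tst_rate(298.0), tst_rate(1000.0))
```

The seven-order-of-magnitude increase between 298 K and 1000 K is the expected behavior for a barrier of this height.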
Lawler, J.S.
2001-10-29
An inverter topology and control scheme have been developed that can drive low-inductance, surface-mounted permanent magnet motors over the wide constant power speed range required in electric vehicle applications. This new controller is called the dual-mode inverter control (DMIC) [1]. The DMIC can drive either the Permanent Magnet Synchronous Machine (PMSM) with sinusoidal back emf, or the brushless dc machine (BDCM) with trapezoidal back emf, in the motoring and regenerative braking modes. In this paper we concentrate on the BDCM under high-speed motoring conditions. Simulation results show that if all motor and inverter loss mechanisms are neglected, the constant power speed range of the DMIC is infinite. The simulation results are supported by closed-form expressions for peak and rms motor current and average power, derived from the analytical solution of the differential equations governing the DMIC/BDCM drive for the lossless case. The analytical solution shows that the range of motor inductance that can be accommodated by the DMIC spans more than an order of magnitude, so that the DMIC is compatible with both low- and high-inductance BDCMs. Finally, a method is given for integrating the classical hysteresis-band current control, used for motor control below base speed, with the phase advance of the DMIC that is applied above base speed. The power versus speed performance of the DMIC is then simulated across the entire speed range.
Fundamental Physics and Precision Measurements
NASA Astrophysics Data System (ADS)
Hänsch, T. W.
2006-11-01
"Very high precision physics has always appealed to me. The steady improvement in technologies that afford higher and higher precision has been a regular source of excitement and challenge during my career. In science, as in most things, whenever one looks at something more closely, new aspects almost always come into play …" With these words from the book "How the Laser Happened", Charles H. Townes expresses a passion for precision that is now shared by many scientists. Masers and lasers have become indispensable tools for precision measurements. During the past few years, the advent of femtosecond laser frequency comb synthesizers has revolutionized the art of directly comparing optical and microwave frequencies. Inspired by the needs of precision laser spectroscopy of the simple hydrogen atom, such frequency combs are now enabling ultra-precise spectroscopy over wide spectral ranges. Recent laboratory experiments are already setting stringent limits for possible slow variations of fundamental constants. Laser frequency combs also provide the long missing clockwork for optical atomic clocks that may ultimately reach a precision of parts in 10¹⁸ and beyond. Such tools will open intriguing new opportunities for fundamental experiments including new tests of special and general relativity. In the future, frequency comb techniques may be extended into the extreme ultraviolet and soft x-ray regime, opening a vast new spectral territory to precision measurements. Frequency combs have also become a key tool for the emerging new field of attosecond science, since they can control the electric field of ultrashort laser pulses on an unprecedented time scale. The biggest surprise in these endeavours would be if we found no surprise.
Li, Aihua; Meyre, David
2014-01-01
With the decrease in sequencing costs, personalized genome sequencing will eventually become common in medical practice. We therefore write this series of three reviews to help non-geneticist clinicians to jump into the fast-moving field of personalized medicine. In the first article of this series, we reviewed the fundamental concepts in molecular genetics. In this second article, we cover the key concepts and methods in genetic epidemiology including the classification of genetic disorders, study designs and their implementation, genetic marker selection, genotyping and sequencing technologies, gene identification strategies, data analyses and data interpretation. This review will help the reader critically appraise a genetic association study. In the next article, we will discuss the clinical applications of genetic epidemiology in the personalized medicine area. PMID:25598767
Integrable Cosmological Models in DD and Variations of Fundamental Constants
NASA Astrophysics Data System (ADS)
Melnikov, V. N.
Discovery of the present acceleration of the Universe and the dark matter and dark energy problems are great challenges to modern physics, which may bring about a new revolution. Integrable multidimensional models of gravitation and cosmology make up one of the proper approaches to study basic issues and, in particular, strong field objects, the early and present Universe, and black hole physics [1,2]. Problems of absolute G measurements and its possible time and range variations, which are reflections of the unification problem, are discussed. A need for further measurements of G and its possible variations (also in space) is pointed out.
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.
2015-01-01
Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.
NASA Astrophysics Data System (ADS)
Razoumny, Yury N.
2016-12-01
Continuing the series of papers describing the fundamentals of the Route Theory for satellite constellation design, we present a general method for minimizing the satellite swath width required under a given constraint on the maximum revisit time (MRT), the main quality characteristic of discontinuous satellite coverage. The interrelation between MRT and the multiplicity of periodic coverage, i.e., the minimum number of observation sessions realized for the points of the observation region during the satellite tracks' repetition period, is revealed and described. In particular, it is shown that a change of MRT can occur only at points where the coverage multiplicity changes. Basic elements of multifold Earth coverage theory are presented and used to obtain analytical relations for the minimum swath width providing a given multifold coverage. A procedure for calculating the satellite swath width for multifold coverage of the rotating Earth, using iterations on the sphere of stationary coverage, is developed. Numerical results for discontinuous coverage with minimal satellite swath, including comparison with some known particular cases and implementations of the method, are presented.
Garrett, H J; Bleay, S M
2013-06-01
The fundamental interactions between sebaceous constituents of fingermarks and three lipid specific fingermark enhancement reagents (solvent black 3, basic violet 3 and basic violet 2) are reported. The staining of fingermarks is investigated using optical microscopy, and the interaction of the reagents with individual constituents is explored using spot tests. It is demonstrated that solvent black 3, basic violet 3 and basic violet 2 all interact with different constituents of sebaceous sweat, and this may offer potential for using the reagents in sequence for fingermark enhancement. Further tests to explore the effect of dye concentration on reagent effectiveness indicate that dye concentration can be reduced by up to 25% without significant detriment to effectiveness. It is shown that there is little practical difference between solvent black 3 formulations with the solvents (ethanol and 1-methoxy-2-propanol) used in this study. The study also indicates that basic violet 2 may have some operational advantages over basic violet 3 and may be worthy of further investigation. Copyright © 2012 Forensic Science Society. All rights reserved.
Li, Aihua; Meyre, David
2014-01-01
With the decrease in sequencing cost and the rise of companies providing sequencing services, it is likely that personalized whole-genome sequencing will eventually become an instrument of common medical practice. We write this series of three reviews to help non-geneticist clinicians get ready for the major breakthroughs that are likely to occur in the coming years in the fast-moving field of personalized medicine. This first paper focuses on the fundamental concepts of molecular genetics. We review how recombination occurs during meiosis, how de novo genetic variations including single nucleotide polymorphisms (SNPs), insertions and deletions are generated and how they are inherited from one generation to the next. We detail how genetic variants can impact protein expression and function, and summarize the main characteristics of the human genome. We also explain how the achievements of the Human Genome Project, the HapMap Project, and more recently, the 1000 Genomes Project, have boosted the identification of genetic variants contributing to common diseases in human populations. The second and third papers will focus on genetic epidemiology and clinical applications in personalized medicine. PMID:25132812
Lawler, J.S.
2001-10-29
Previous theoretical work has shown that when all loss mechanisms are neglected the constant power speed range (CPSR) of a brushless dc motor (BDCM) is infinite when the motor is driven by the dual-mode inverter control (DMIC) [1,2]. In a physical drive, losses, particularly speed-sensitive losses, will limit the CPSR to a finite value. In this paper we report the results of laboratory testing of a low-inductance, 7.5-hp BDCM driven by the DMIC. The speed rating of the test motor rotor limited the upper speed of the testing, and the results show that the CPSR of the test machine is greater than 6:1 when driven by the DMIC. Current wave shape, peak, and rms values remained controlled and within rating over the entire speed range. The laboratory measurements allowed the speed-sensitive losses to be quantified and incorporated into computer simulation models, which then accurately reproduce the results of lab testing. The simulator shows that the limiting CPSR of the test motor is 8:1. These results confirm that the DMIC is capable of driving low-inductance BDCMs over the wide CPSR that would be required in electric vehicle applications.
NASA Technical Reports Server (NTRS)
Dimotakis, P. E.; Collins, D. J.; Lang, D. B.
1979-01-01
A description of both the mean and the fluctuating components of the flow, and of the Reynolds stress, as observed using a dual forward scattering laser-Doppler velocimeter, is presented. A detailed description of the instrument and of the data analysis techniques is included in order to fully document the data. A detailed comparison was made between the laser-Doppler results and those presented in Part 1, and an assessment was made of the ability of the laser-Doppler velocimeter to measure the details of the flows involved.
NASA Astrophysics Data System (ADS)
Freedman, Wendy; Madore, Barry; Mager, Violet; Persson, Eric; Rigby, Jane; Sturch, Laura
2008-12-01
We present a plan to measure a value of the Hubble constant having a final systematic uncertainty of only 3% by taking advantage of Spitzer's unique mid-infrared capabilities. This involves using IRAC to undertake a fundamental recalibration of the Cepheid distance scale and progressively moving it out to pure Hubble flow by an application of a revised mid-IR Tully-Fisher relation. The calibration and application, in one coherent and self-consistent program, will go continuously from distances of parsecs to several hundred megaparsecs. It will provide a first-ever mid-IR calibration of Cepheids in the Milky Way, LMC and Key Project spiral galaxies, a first-ever measurement and calibration of the TF relation at mid-infrared wavelengths, and finally a calibration of Type Ia SNe. Most importantly, this program will be undertaken with a single instrument, on a single telescope, working exclusively at mid-infrared wavelengths that are far removed from the obscuring effects of dust extinction. Using Spitzer in this focused way will effectively eliminate all of the major systematics in the Cepheid and TF distance scales that have been the limiting factors in all previous applications, including the HST Key Project. By executing this program, based exclusively on Spitzer data, we will deliver a value of the Hubble constant having a statistical precision better than 11%, with all currently known systematics quantified and constrained to a level of less than 3%. A value of Ho determined to this level of systematic accuracy is required for upcoming cosmology experiments, including Planck. A more accurate value of the Hubble constant will directly result in other contingently measured cosmological parameters (e.g., Omega_m, Omega_L, & w) having their covariant uncertainties reduced significantly. Any further improvements using this route will have to await JWST, for which this study is designed to provide a lasting and solid foundation, and ultimately a value of Ho
Water dimer equilibrium constant of saturated vapor
NASA Astrophysics Data System (ADS)
Malomuzh, N. P.; Mahlaichuk, V. N.; Khrapatyi, S. V.
2014-08-01
The value and temperature dependence of the dimerization constant for saturated water vapor are determined. A general expression that links the second virial coefficient and the dimerization constant is obtained. It is shown that the attraction between water monomers and dimers is fundamental, especially at T > 350 K. The range of application for the obtained results is determined.
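One common textbook approximation links the second virial coefficient to a dimerization constant via B(T) ≈ b0 - RT·Kp, attributing the excess attraction beyond the hard-core covolume b0 to bound dimers. This is not necessarily the general expression derived in the paper; the numbers below are illustrative order-of-magnitude values for steam:

```python
# Textbook link between the second virial coefficient B(T) and a
# dimerization constant Kp: B(T) ~ b0 - R*T*Kp, so
# Kp = (b0 - B) / (R*T), in 1/Pa when B, b0 are in m^3/mol.
# This is an approximation, not the paper's general expression.

R = 8.314462618  # gas constant, J/(mol*K)

def dimerization_constant(B, b0, T):
    """Dimerization constant Kp (1/Pa) from the second virial
    coefficient B and hard-core covolume b0 (both m^3/mol) at T (K)."""
    return (b0 - B) / (R * T)

# Illustrative values for water vapor near 373 K (order of magnitude):
B = -451e-6   # m^3/mol, second virial coefficient of steam
b0 = 30e-6    # m^3/mol, assumed hard-core covolume
Kp = dimerization_constant(B, b0, 373.0)
print(Kp)
```

A positive Kp of order 10⁻⁷ Pa⁻¹ is consistent with a small but non-negligible dimer fraction in saturated steam at this temperature.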
Harmonic undulator radiations with constant magnetic field
NASA Astrophysics Data System (ADS)
Jeevakhan, Hussain; Mishra, G.
2015-01-01
Harmonic undulators have been analysed in the presence of a constant magnetic field along the direction of the main undulator field. The spectrum modifications in harmonic undulator radiation, and the intensity degradation at the fundamental and third harmonics as a function of the constant magnetic field magnitude, have been evaluated with a numerical integration method and generalised Bessel functions. The role of the harmonic field in overcoming the intensity reduction due to the constant magnetic field and to energy spread in the electron beam has also been demonstrated.
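The standard on-axis resonance condition for undulator harmonics, λn = (λu / 2nγ²)(1 + K²/2), fixes where the fundamental and third harmonic discussed above appear. A minimal sketch (illustrative parameters; the constant-field intensity degradation itself is not modeled here):

```python
# On-axis resonance condition for undulator harmonics:
#   lambda_n = (lambda_u / (2 * n * gamma^2)) * (1 + K^2 / 2)
# lambda_u: undulator period [m], K: deflection parameter,
# gamma: electron Lorentz factor, n: harmonic number (odd on axis).
# Parameter values are illustrative only.

def harmonic_wavelength(lambda_u, K, gamma, n=1):
    """On-axis wavelength [m] of harmonic n."""
    return lambda_u / (2.0 * n * gamma**2) * (1.0 + K**2 / 2.0)

lam1 = harmonic_wavelength(0.05, 1.0, 100.0, n=1)  # fundamental
lam3 = harmonic_wavelength(0.05, 1.0, 100.0, n=3)  # third harmonic
print(lam1, lam3)
```

By construction the third harmonic sits at exactly one third of the fundamental wavelength, which is why a constant transverse field that detunes the trajectory degrades the harmonics more severely than the fundamental.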
Varying Constants, Gravitation and Cosmology.
Uzan, Jean-Philippe
2011-01-01
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
NASA Astrophysics Data System (ADS)
Mangano, Gianpiero; Lizzi, Fedele; Porzio, Alberto
2015-12-01
Motivated by the Dirac idea that fundamental constants are dynamical variables and by conjectures on the quantum structure of space-time at small distances, we consider the possibility that the Planck constant ℏ is a time-dependent quantity, undergoing random Gaussian fluctuations around its measured constant mean value, with variance σ² and a typical correlation timescale Δt. We consider the case of propagation of a free particle and a one-dimensional harmonic oscillator coherent state, and show that the time evolution in both cases differs from the standard behavior. Finally, we discuss how interferometric experiments or experiments exploiting coherent electromagnetic fields in a cavity may put effective bounds on the value of τ = σ²Δt.
Dielectric Constant and Loss Data. Part 4
1980-12-01
Dielectric Constant and Loss Data Part 2
1975-12-01
fluoride , hot-pressed 7 IRTRAN 1, Eastman Kodak 7,8 Magnesium oxide crystals, MIT, Ceramics Laboratory 8 Cr-doped magnesium oxide, Univ. of Colors o 11...3.8E-9 • - =- • ., , -- . . . . . .. .. . . .. . . .. ..... . . ... Magnesium fluoride , hot-pressed IUTRTN I (cont.) Comparison of 4 samples at 8.5 GHz...128,129 AVCO Research, P.R,-161 Beryllia, IV-6; V-21,24,52,53; VI-22-27; AVCO Research, polyvinylidne P.y,.--3q-43 fluoride , P.R.-161 American Lmwa
Dielectric Constant and Loss Data, Part 3
1977-05-01
tan 6 .00012 .00096 .00126 .00129 .00203 0 >1. E18 200 K’ 2.89 2.86 2.84 2.82 2.78 tan 6 .0147 .00729 .00382 .00210 .00135 0 4.3E11 4.21E11 235 p 5.E9...Paraffin, natural, IV-58 135,1.42,143,183,209 Paraffin wax 1320 ASTM and 1350 AMP, IV-58 Nickel-lithium ferrite (MIT samples), "Paraplex" P13, IV-48,122 V
Nuclei and Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Haxton, Wick
2016-09-01
Nuclei provide marvelous laboratories for testing fundamental interactions, often enhancing weak processes through accidental degeneracies among states, and providing selection rules that can be exploited to isolate selected interactions. I will give an overview of current work, including the use of parity violation to probe unknown aspects of the hadronic weak interaction; nuclear electric dipole moment searches that may shed light on new sources of CP violation; and tests of lepton number violation made possible by the fact that many nuclei can only decay by rare second-order weak interactions. I will point to opportunities in both theory and experiment to advance the field. Based upon work supported in part by the US Department of Energy, Office of Science, Office of Nuclear Physics and SciDAC under Awards DE-SC00046548 (Berkeley), DE-AC02-05CH11231 (LBNL), and KB0301052 (LBNL).
A Fundamental Breakdown. Part I: Locomotion
ERIC Educational Resources Information Center
Townsend, J. Scott; Mohr, Derek J.
2005-01-01
In an earlier issue of "TEPE" (January, 2005) the "Research to Practice" column examined the effects of a developmental curriculum on elementary-aged children's performance. Pappa, Evanggelinou, and Karabourniotis (2005) found support for a line of research suggesting that curricular programming should place a specific focus on development of…
Fundamentals of Physics, Part 1 Revised Printing
NASA Astrophysics Data System (ADS)
Cummings, Karen; Halliday, David; Resnick, Robert; Walker, Jearl
2001-08-01
Measurement. Motion Along a Straight Line. Forces and Motion Along a Line. Vectors. Forces and Motion in Two Dimensions. Combined Forces. Translational Momentum. Extended Systems. Kinetic Energy and Work. Potential Energy and the Conservation of Energy. Rotation. Complex Rotations. Exercises and Problems. Special Problems. Appendices. Answers to Reading Exercises. Photo Credits. Index.
The Reciprocal of the Fundamental Theorem of Riemannian Geometry
NASA Astrophysics Data System (ADS)
Calderon, Hector
2008-05-01
The fundamental theorem of Riemannian geometry is inverted for analytic Christoffel symbols. The inversion formula, henceforth dubbed Ricardo's formula, is obtained without ancillary assumptions and it is well suited to compute the uncertainty in the metric that arises from the uncertainty in the measurement of positions. The solution is given up to a constant conformal factor, in part, because there are no experiments that can fix such factor without probing the whole universe. Ricardo's formula excludes some pathological examples and works for manifolds of any dimension and metrics of any signature.
Why the measured cosmological constant is small
NASA Astrophysics Data System (ADS)
Rostami, T.; Jalalzadeh, S.
2015-09-01
In a quest to explain the small value of today's cosmological constant, following the approach introduced in Jalalzadeh and Rostami (2015), we show that the theoretical value of the cosmological constant is consistent with its observational value. In more detail, we study a Friedmann-Lemaître-Robertson-Walker cosmology embedded isometrically in an 11-dimensional ambient space. The field equations determine Λ in terms of other measurable fundamental constants. Specifically, they predict that the cosmological constant measured today is ΛL_Pl² = 2.56 × 10⁻¹²², as observed.
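The quoted dimensionless value can be converted to SI units with the CODATA Planck length; the result lands close to the observed Λ ≈ 1.1 × 10⁻⁵² m⁻²:

```python
# Convert Lambda * l_Pl^2 = 2.56e-122 (Planck units) to SI (m^-2)
# using the CODATA Planck length.

L_PLANCK = 1.616255e-35  # Planck length, m (CODATA)

def lambda_si(lambda_planck_units):
    """Cosmological constant in m^-2 from its value in Planck units."""
    return lambda_planck_units / L_PLANCK**2

lam = lambda_si(2.56e-122)
print(lam)  # ~9.8e-53 m^-2
```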
The cosmic distance scale and the Hubble constant.
NASA Astrophysics Data System (ADS)
de Vaucouleurs, G.
Contents: Part I. Extragalactic distances, Hubble constant and the age of the Universe. Part II. The long and short distance scales - comparison of two approaches to the Hubble constant. Part III. Five crucial tests of the extragalactic distance scale using the Galaxy as fundamental calibrator: 1. Introduction. 2. The Galaxy as fundamental calibrator. 3. The basic metric and photometric scale factors of the Galaxy. 4. Galactic zero points of the B- and H-band Tully-Fisher relations and a first test of the long and short distance scales. 5. Galactic zero points of the B- and V-band Faber-Jackson relations and a second test of the long and short distance scales. 6. Third test: metric and photometric scale lengths in the Galaxy and galactic zero points for the luminosity index scale. 7. Fourth test: implications of the long and short distance scales for some metric and photometric properties of the Galaxy and its "sosies". 8. Fifth test: implications of the long and short distance scales for the mean absolute magnitudes of globular clusters, RR Lyr and Mira variables. 9. Summary and conclusions. Appendix: Is the Galaxy a good standard?
Combustion Fundamentals Research
NASA Technical Reports Server (NTRS)
1983-01-01
Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with fewer systems development efforts. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long-term technical needs of the aerospace industry. The main thrusts for this combustion fundamentals program area are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.
Jewish Fundamentalism in Israel.
1986-11-01
Contents: Jewish Fundamentalism in Israeli Society; Definitions, Terminology, and Historical Background. The study of the fundamentalist movement requires the application of a broader, but carefully construed, definition of "fundamentalist."
Exchange Rates and Fundamentals.
ERIC Educational Resources Information Center
Engel, Charles; West, Kenneth D.
2005-01-01
We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…
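The near-random-walk result can be illustrated with a small simulation (an illustration of the mechanism, not the authors' calibration; all parameter values below are assumptions). Fundamentals are the sum of a random walk and an AR(1) component, and the present-value price downweights the stationary part as the discount factor b approaches one, so returns become nearly unpredictable:

```python
import random

def simulate(b, rho=0.5, T=20000, seed=1):
    """Present-value price p_t = (1-b) * sum_j b^j E_t f_{t+j} for
    fundamentals f_t = w_t + z_t, with w_t a random walk and z_t an AR(1).
    Closed form: p_t = w_t + (1-b)/(1-b*rho) * z_t."""
    rng = random.Random(seed)
    w, z, p = 0.0, 0.0, []
    for _ in range(T):
        w += rng.gauss(0, 1)
        z = rho * z + rng.gauss(0, 1)
        p.append(w + (1 - b) / (1 - b * rho) * z)
    return p

def autocorr1(x):
    """First-order autocorrelation of the first differences of x."""
    d = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    m = sum(d) / len(d)
    num = sum((d[i] - m) * (d[i + 1] - m) for i in range(len(d) - 1))
    den = sum((v - m) ** 2 for v in d)
    return num / den

ac_low = autocorr1(simulate(b=0.5))    # stationary part matters: returns predictable
ac_high = autocorr1(simulate(b=0.99))  # discount factor near one: near-random walk
print(ac_low, ac_high)
```

With b = 0.99 the autocorrelation of price changes is statistically indistinguishable from zero, while with b = 0.5 it is clearly negative.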
Millikan's measurement of Planck's constant
NASA Astrophysics Data System (ADS)
Franklin, Allan
2013-12-01
Robert Millikan is famous for measuring the charge of the electron. His result was better than any previous measurement and his method established that there was a fundamental unit of charge, or charge quantization. He is less well-known for his measurement of Planck's constant, although, as discussed below, he is often mistakenly given credit for providing significant evidence in support of Einstein's photon theory of light. His Nobel Prize citation was "for his work on the elementary charge of electricity and on the photoelectric effect," an indication of the significance of his work on the photoelectric effect.
Assessing uncertainty in physical constants
NASA Astrophysics Data System (ADS)
Henrion, Max; Fischhoff, Baruch
1986-09-01
Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.
Mass spectrometry at and below 0.1 parts per billion
Bradley, M.; Palmer, F.; Pritchard, D.E.
1994-12-31
The single ion Penning trap mass spectrometer at M.I.T. can compare masses to within 0.1 parts per billion. We have created a short table of fundamental atomic masses and made measurements useful for calibrating the X-ray standard, and determining Avogadro's number, the molar Planck constant, and the fine structure constant.
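One standard route from mass measurements to the fine structure constant runs through the relation α² = 2R∞h/(mₑc). A minimal sketch, using CODATA-style values assumed here for illustration:

```python
import math

# CODATA-style values (assumed here for illustration)
R_inf = 1.0973731568e7    # Rydberg constant, 1/m
h     = 6.62607015e-34    # Planck constant, J s
m_e   = 9.1093837015e-31  # electron mass, kg
c     = 2.99792458e8      # speed of light, m/s

# alpha^2 = 2 * R_inf * h / (m_e * c): the route from h (or N_A*h plus
# relative atomic masses) to the fine structure constant
alpha = math.sqrt(2 * R_inf * h / (m_e * c))
print(alpha, 1 / alpha)  # ~0.0072974, ~137.036
```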
Cosmological constant, fine structure constant and beyond
NASA Astrophysics Data System (ADS)
Wei, Hao; Zou, Xiao-Bo; Li, Hong-Yu; Xue, Dong-Ze
2017-01-01
In the present work, we consider the cosmological constant model Λ ∝ α^{-6}, which is well motivated by three independent approaches. As is well known, hints of a varying fine structure constant α were found in 1998. If Λ ∝ α^{-6} is right, the cosmological constant Λ should also vary. Here we develop a suitable framework to model this varying cosmological constant, viewing it from an interacting vacuum energy perspective. We then consider the observational constraints on these models using the 293 Δα/α data points from absorption systems in the spectra of distant quasars. We find that the model parameters can be tightly constrained, typically to very narrow ranges of O(10^{-5}). On the other hand, the varying cosmological constant model Λ ∝ α^{-6} can also be viewed as equivalent to a model containing "dark energy" and "warm dark matter" with no interaction between them. We find that this is also fully consistent with the observational constraints on warm dark matter.
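The first-order consequence of the power law Λ ∝ α^{-6} can be sketched in a couple of lines (illustrative magnitude only):

```python
# If Lambda ∝ alpha**power, then to first order
# dLambda/Lambda = power * dalpha/alpha.
def lambda_variation(dalpha_over_alpha, power=-6):
    """First-order fractional variation of Lambda for Lambda ∝ alpha**power."""
    return power * dalpha_over_alpha

# Quasar-absorption hints are of order 1e-5 (illustrative magnitude)
print(lambda_variation(1e-5))  # -6e-5
```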
Fundamental properties of PTCDI-C8 semiconductor for optoelectronic and photonic applications
NASA Astrophysics Data System (ADS)
Erdoǧan, Erman; Gündüz, Bayram
2017-02-01
In this study, we investigated fundamental properties such as electrical and optical properties of the N,N'-Dioctyl-3,4,9,10 perylenedicarboximide (PTCDI-C8) Organic Semiconductor (OSC) material for optoelectronic and photonic applications. The important spectral parameters such as mass extinction coefficient and transmittance spectrum of the PTCDI-C8 molecule were calculated. Optical properties such as refractive index, optical band gap, real and imaginary parts of dielectric constants of the PTCDI-C8 were obtained. The electrical and optical conductance properties were also investigated. The advantages and disadvantages of obtained fundamental parameters were determined for optoelectronic and photonic applications.
NASA Astrophysics Data System (ADS)
Gitlin, M. S.
2017-02-01
The first part of a review is presented, dedicated to the time-resolved method of imaging and measuring the spatial distribution of the intensity of millimeter waves using the visible continuum (VC) emitted by the positive column (PC) of a dc discharge in a mixture of cesium vapor with xenon. The review focuses on the operating principles, fundamentals, and applications of this new technique. The design of the discharge tube and experimental setup used to create a wide homogeneous plasma slab with the help of the Cs-Xe discharge at a gas pressure of 45 Torr are described. The millimeter-wave effects on the plasma slab are studied experimentally. The mechanism of microwave-induced variations in the VC brightness and the causes of violation of the local relation between the VC brightness and the intensity of millimeter waves are discussed. Experiments on the imaging of the field patterns of horn antennas and quasi-optical beams demonstrate that this technique can be used for good-quality imaging of millimeter-wave beams in the entire millimeter-wavelength band. The method has a microsecond temporal resolution and a spatial resolution of about 2 mm. Energy sensitivities of about 10 μJ/cm2 in the Ka-band and about 200 μJ/cm2 in the D-band have been demonstrated.
(In)validity of the constant field and constant currents assumptions in theories of ion transport.
Syganow, A; von Kitzing, E
1999-01-01
Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480
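The best-known consequence of the constant-field assumption is the Goldman-Hodgkin-Katz voltage equation. A minimal sketch with textbook-style squid-axon values (permeabilities and concentrations are assumed here for illustration):

```python
import math

def ghk_voltage(T, P, conc_out, conc_in):
    """Goldman-Hodgkin-Katz voltage equation, a standard result of the
    constant-field assumption, for K+, Na+ (outside concentrations in the
    numerator) and Cl- (inside in the numerator, since its charge is negative).
    P: relative permeabilities; conc_*: concentrations in mM."""
    R, F = 8.314, 96485.0
    num = P["K"] * conc_out["K"] + P["Na"] * conc_out["Na"] + P["Cl"] * conc_in["Cl"]
    den = P["K"] * conc_in["K"] + P["Na"] * conc_in["Na"] + P["Cl"] * conc_out["Cl"]
    return R * T / F * math.log(num / den)

# Textbook-style squid-axon values (assumed for illustration)
P = {"K": 1.0, "Na": 0.04, "Cl": 0.45}
out = {"K": 20.0, "Na": 440.0, "Cl": 560.0}
inside = {"K": 400.0, "Na": 50.0, "Cl": 52.0}
v = ghk_voltage(293.0, P, out, inside)
print(v)  # about -0.06 V, a typical resting potential
```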
NASA Astrophysics Data System (ADS)
Brooks, Juliana
2010-02-01
Planck's proportionality constant ``h'' is not an action constant. Re-examination of Planck's work has revealed that the numerical value of his famous constant ``h'' is actually an energy constant.* Planck's energy constant is the mean energy of a single oscillation of electromagnetic energy, namely 6.626 × 10^-34 J/osc. The misinterpretation of Planck's constant resulted from an inadvertent mathematical procedure in his 1901 black-body derivation. Planck's energy constant is found in his original (1897) quantum relationship E ∝ a ν tm, where energy (``E'') is proportional to the product of a constant (``a'', energy per oscillation), the frequency (``ν''), and the measurement time (``tm''). Planck's inadvertence fixed the measurement-time variable ``tm'' at a value of one second and multiplied it by his constant ``a'', resulting in the product ``h'', which Planck proposed as the ``quantum of action''. Planck's black-body derivation and condensed quantum formula E = hν were never knowingly premised on one-second time intervals, however. Subsequent development of quantum mechanics thus took place against the backdrop of a hidden assumption. Numerous paradoxes, problems and a lack of reality resulted. Recognition of Planck's energy constant provides a richer and more realistic interpretation of quantum mechanics. *Brooks, JHJ, ``Hidden Variables: The Elementary Quantum of Light'', The Nature of Light: What are Photons? III, Proc. of SPIE Vol. 7421, 74210T-3, 2009.
Fundamentals of phosphate transfer.
Kirby, Anthony J; Nome, Faruk
2015-07-21
Historically, the chemistry of phosphate transfer (a class of reactions fundamental to the chemistry of Life) has been discussed almost exclusively in terms of the nucleophile and the leaving group. Reactivity always depends significantly on both factors; but recent results for reactions of phosphate triesters have shown that it can also depend strongly on the nature of the nonleaving or "spectator" groups. The extreme stabilities of fully ionised mono- and dialkyl phosphate esters can be seen as extensions of the same effect, with one or two triester OR groups replaced by O(-). Our chosen lead reaction is hydrolysis, phosphate transfer to water: because water is the medium in which biological chemistry takes place; because the half-life of a system in water is an accepted basic index of stability; and because the typical mechanisms of hydrolysis, with solvent H2O providing specific molecules to act as nucleophiles and as general acids or bases, are models for reactions involving better nucleophiles and stronger general species catalysts, not least those available in enzyme active sites. Alkyl monoester dianions compete with alkyl diester monoanions for the slowest estimated rates of spontaneous hydrolysis. High stability at physiological pH is a vital factor in the biological roles of organic phosphates, but a significant limitation for experimental investigations. Almost all kinetic measurements of phosphate transfer reactions involving mono- and diesters have been followed by UV-visible spectroscopy using activated systems, conveniently compounds with good leaving groups. (A "good leaving group" OR* is electron-withdrawing, and can be displaced to generate an anion R*O(-) in water near pH 7.) Reactivities at normal temperatures of P-O-alkyl derivatives (better models for typical biological substrates) have typically had to be estimated: by extended extrapolation from linear free energy relationships, or from rate measurements at high temperatures. Calculation is free
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor)
1995-01-01
We prove known identities for the Khinchin constant and develop new identities for the more general Hölder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results.
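The zeta-series structure described above can be made concrete. The sketch below (an illustration, not the authors' code) evaluates one known rapidly converging series for the Khinchin constant, ln K0 = (1/ln 2) Σ_{n≥1} (ζ(2n) − 1)/n · Σ_{k=1}^{2n-1} (−1)^{k+1}/k, with a simple Euler-Maclaurin zeta evaluator:

```python
import math

def zeta(s, K=100):
    """Riemann zeta via Euler-Maclaurin; accurate to ~1e-12 for s >= 2."""
    partial = sum(k ** -s for k in range(1, K + 1))
    tail = K ** (1 - s) / (s - 1) - K ** -s / 2 + s * K ** (-s - 1) / 12
    return partial + tail

# ln K0 = (1/ln 2) * sum_{n>=1} (zeta(2n)-1)/n * sum_{k=1}^{2n-1} (-1)^(k+1)/k
acc = 0.0
for n in range(1, 40):
    inner = sum((-1) ** (k + 1) / k for k in range(1, 2 * n))
    acc += (zeta(2 * n) - 1) / n * inner
K0 = math.exp(acc / math.log(2))
print(K0)  # Khinchin's constant, ~2.685452001
```

Because ζ(2n) − 1 decays like 4^(-n), a few dozen terms already give near machine precision, which is the efficiency the abstract refers to.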
NASA Astrophysics Data System (ADS)
Halliday, David; Resnick, Robert; Walker, Jearl
2003-01-01
No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.
Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1989-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micron region. k is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4.
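The Lorentz-Lorenz step mentioned at the end of the abstract can be sketched as follows; the methane values below are rough illustrative assumptions, not the paper's data:

```python
import math

def lorentz_lorenz_specific_refraction(n, rho):
    """Specific refraction r = (n^2 - 1) / ((n^2 + 2) * rho),
    approximately constant across phases of the same substance."""
    return (n ** 2 - 1) / ((n ** 2 + 2) * rho)

def index_from_refraction(r, rho):
    """Invert the Lorentz-Lorenz relation for the refractive index."""
    q = r * rho
    return math.sqrt((1 + 2 * q) / (1 - q))

# Illustrative (assumed) values: n of liquid CH4 near 110 K and rough densities
n_liquid, rho_liquid, rho_solid = 1.27, 0.424, 0.52  # densities in g/cm^3
r = lorentz_lorenz_specific_refraction(n_liquid, rho_liquid)
n_solid = index_from_refraction(r, rho_solid)
print(n_solid)  # denser solid -> larger refractive index
```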
Fundamentals of Condensed Matter Physics
NASA Astrophysics Data System (ADS)
Cohen, Marvin L.; Louie, Steven G.
2016-05-01
Part I. Basic Concepts: Electrons and Phonons: 1. Concept of a solid: qualitative introduction and overview; 2. Electrons in crystals; 3. Electronic energy bands; 4. Lattice vibrations and phonons; Part II. Electron Interactions, Dynamics and Responses: 5. Electron dynamics in crystals; 6. Many-electron interactions: the interacting electron gas and beyond; 7. Density functional theory; 8. The dielectric function for solids; Part III. Optical and Transport Phenomena: 9. Electronic transitions and optical properties of solids; 10. Electron-phonon interactions; 11. Dynamics of crystal electrons in a magnetic field; 12. Fundamentals of transport phenomena in solids; Part IV. Superconductivity, Magnetism, and Lower Dimensional Systems: 13. Using many-body techniques; 14. Superconductivity; 15. Magnetism; 16. Reduced-dimensional systems and nanostructures; Index.
New Series Representation for Madelung Constant
NASA Astrophysics Data System (ADS)
Tyagi, S.
2005-09-01
A new series for the Madelung constant M is derived on the basis of a representation given by Crandall [Exp. Math. 8 (1999), 367]. We are able to write it in the form M = C + S, where S is a rapidly convergent series, and the constant C is fundamental: C = -1/8 - ln 2/(4π) - 4π/3 + 1/(2√2) + Γ(1/8)Γ(3/8)/(π^{3/2}√2) ≈ -1.747564594... The remarkable result is that even if S is discarded, the constant C alone gives ten significant figures of M. This result advances the state of the art in the discovery of what Crandall has termed ``close calls'' to an exact Madelung evaluation. We present related identities and discuss how this fundamental ten-digit accuracy might be improved further.
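The closed form for C can be checked directly in floating point (a verification sketch, not part of the paper):

```python
import math

# C = -1/8 - ln2/(4*pi) - 4*pi/3 + 1/(2*sqrt(2)) + Gamma(1/8)*Gamma(3/8)/(pi^(3/2)*sqrt(2))
C = (-1 / 8
     - math.log(2) / (4 * math.pi)
     - 4 * math.pi / 3
     + 1 / (2 * math.sqrt(2))
     + math.gamma(1 / 8) * math.gamma(3 / 8) / (math.pi ** 1.5 * math.sqrt(2)))
print(C)  # ~ -1.747564594, ten significant figures of the NaCl Madelung constant
```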
Universal constants and equations of turbulent motion
NASA Astrophysics Data System (ADS)
Baumert, Helmut
2011-11-01
For turbulence at high Reynolds number we present an analogy with the kinetic theory of gases, with dipoles made of vortex tubes as frictionless, incompressible but deformable quasi-particles. Their movements are governed by Helmholtz' elementary vortex rules applied locally. A contact interaction or ``collision'' leads either to random scatter of a trajectory or to the formation of two likewise rotating, fundamentally unstable whirls forming a dissipative patch slowly rotating around its center of mass, the latter almost at rest. This approach predicts von Karman's constant as 1/sqrt(2 pi) = 0.399 and the spatio-temporal dynamics of energy-containing time and length scales controlling turbulent mixing [Baumert 2005, 2009]. A link to turbulence spectra was missing so far. In the present contribution it is shown that the above image of dipole movements is compatible with Kolmogorov's spectra if dissipative patches, beginning as two likewise rotating eddies, evolve locally into a space-filling bearing in the sense of Herrmann [1990], i.e. into an ``Apollonian gear.'' Its parts and pieces are frictionless, excepting the dissipative scale of size zero. Our approach predicts the dimensionless pre-factor in the 3D Eulerian wavenumber spectrum (in terms of pi) as 1.8, and in the Lagrangian frequency spectrum as the integer number 2. Our derivations are free of empirical relations and rest on geometry, methods from many-particle physics, and on elementary conservation laws only. Department of the Navy Grant, ONR Global
Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1990-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micrometer region. Deposition onto a substrate at 10 K produces glassy (semi-amorphous) material. Annealing this material at approximately 33 K for approximately 1 hour results in a crystalline material as seen by sharper, more structured bands and negligible background extinction due to scattering. The constant k is reported for both the amorphous and the crystalline (annealed) states. Typical values (at absorption maxima) are in the .001 to .0001 range. Below lambda = 1.1 micrometers the bands are too weak to be detected by transmission through the films less than or equal to 215 micrometers in thickness, employed in the studies to date. Using previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for
"Recognizing Numerical Constants"
NASA Technical Reports Server (NTRS)
Bailey, David H.; Craw, James M. (Technical Monitor)
1995-01-01
The advent of inexpensive, high-performance computers and new, efficient algorithms has made possible the automatic recognition of numerically computed constants. In other words, techniques now exist for determining, within certain limits, whether a computed real or complex number can be written as a simple expression involving the classical constants of mathematics. In this presentation, some of the recently discovered techniques for constant recognition, notably integer relation detection algorithms, will be presented. As an application of these methods, the author's recent work in recognizing "Euler sums" will be described in some detail.
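Integer relation detection of the kind described can be tried directly with mpmath's `pslq` routine (assuming mpmath is available). Here it recovers the minimal polynomial of √2 + √3, i.e. the relation x⁴ − 10x² + 1 = 0:

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30  # working precision; PSLQ needs more digits than the relation size
x = sqrt(2) + sqrt(3)

# Look for integers (a, b, c, d, e) with a*x^4 + b*x^3 + c*x^2 + d*x + e ~ 0
rel = pslq([x ** 4, x ** 3, x ** 2, x, mpf(1)])
print(rel)  # the minimal polynomial coefficients, up to overall sign
```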
NASA Astrophysics Data System (ADS)
Yamamoto, Satoshi; Nakata, Munetaka; Kuchitsu, Kozo
1985-07-01
The third-order anharmonic constants of phosgene are determined from the rotational constants of the six fundamental vibrational states, those of eight isotopic species, and the rz structure obtained from the electron diffraction intensity by analyzing the changes in the average structures. The equilibrium structure is obtained as re(C-Cl) = 1.7365(12) Å, re(C=O) = 1.1766(22) Å, and ∠e(ClCCl) = 111.91(12)°.
Astronomical reach of fundamental physics.
Burrows, Adam S; Ostriker, Jeremiah P
2014-02-18
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
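The zeroth-order scalings mentioned in the abstract are easy to reproduce numerically. The sketch below (illustrative constants, no O(1) structure factors) builds the Planck mass and a Chandrasekhar-scale mass from ħ, c, G and the hydrogen mass:

```python
import math

# SI values (assumed here for illustration)
hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m/s
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H  = 1.6735575e-27    # hydrogen atom mass, kg

m_planck = math.sqrt(hbar * c / G)            # ~2.18e-8 kg
m_chandra = (hbar * c / G) ** 1.5 / m_H ** 2  # solar-mass scale, up to O(1) factors
print(m_planck, m_chandra)
```

The Chandrasekhar-scale combination lands near 10^30 kg, i.e. the mass of the Sun, which is the "profound connection" the abstract highlights.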
Fundamental studies of polymer filtration
Smith, B.F.; Lu, M.T.; Robison, T.W.; Rogers, Y.C.; Wilson, K.V.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The objectives of this project were (1) to develop an enhanced fundamental understanding of the coordination chemistry of hazardous-metal-ion complexation with water-soluble metal-binding polymers, and (2) to exploit this knowledge to develop improved separations for analytical methods, metals processing, and waste treatment. We investigated features of water-soluble metal-binding polymers that affect their binding constants and selectivity for selected transition metal ions. We evaluated backbone polymers using light scattering and ultrafiltration techniques to determine the effect of pH and ionic strength on the molecular volume of the polymers. The backbone polymers were incrementally functionalized with a metal-binding ligand. A procedure and analytical method to determine the absolute level of functionalization was developed and the results correlated with the elemental analysis, viscosity, and molecular size.
The cosmological constant problem
Dolgov, A.D.
1989-05-01
A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.
ERIC Educational Resources Information Center
Eichinger, John
1996-01-01
Presents an activity in which students attempt to keep water at a constant temperature. Helps students in grades three to six hone their skills in prediction, observation, measurement, data collection, graphing, data analysis, and communication. (JRH)
History and progress on accurate measurements of the Planck constant
NASA Astrophysics Data System (ADS)
Steiner, Richard
2013-01-01
The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 108 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved
The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate Henry’s Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aq...
Fundamentals of the dwarf fundamental plane
NASA Astrophysics Data System (ADS)
McCall, M. L.; Vaduvescu, O.; Pozo Nunez, F.; Barr Dominguez, A.; Fingerhut, R.; Unda-Sanzana, E.; Li, B.; Albrecht, M.
2012-04-01
Aims: Star-forming dwarfs are studied to elucidate the physical underpinnings of their fundamental plane. Processes controlling dynamics are evaluated, connections between quiescent and bursting dwarfs are examined, and the viability of using structural properties of dwarfs to determine distances is assessed. Methods: Deep surface photometry in Ks is presented for 19 star-forming dwarfs. The data are amalgamated with previously published observations to create a sample of 66 galaxies suitable for exploring how global properties and kinematics are connected. Results: It is confirmed that residuals in the Tully-Fisher relation are correlated with surface brightness, but that even after accommodating the surface brightness dependence through the dwarf fundamental plane, residuals in absolute magnitude are far larger than expected from observational errors. Rather, a more fundamental plane is identified which connects the potential to HI line width and surface brightness. Residuals correlate with the axis ratio in a way which can be accommodated by recognizing the galaxies to be oblate spheroids viewed at varying angles. Correction of surface brightnesses to face-on leads to a correlation among the potential, line width, and surface brightness for which residuals are entirely attributable to observational uncertainties. The mean mass-to-light ratio of the diffuse component of the galaxies is constrained to be 0.88 ± 0.20 in Ks. Blue compact dwarfs lie in the same plane as dwarf irregulars. The dependence of the potential on line width is less strong than expected for virialized systems, but this may be because surface brightness is acting as a proxy for variations in the mass-to-light ratio from galaxy to galaxy. Altogether, the observations suggest that gas motions are predominantly disordered and isotropic, that they are a consequence of gravity, not turbulence, and that the mass and scale of dark matter haloes scale with the amount and distribution of luminous matter.
Arguing against fundamentality
NASA Astrophysics Data System (ADS)
McKenzie, Kerry
This paper aims to open up discussion on the relationship between fundamentality and naturalism, and in particular on the question of whether fundamentality may be denied on naturalistic grounds. A historico-inductive argument for an anti-fundamentalist conclusion, prominent within the contemporary metaphysical literature, is examined; finding it wanting, an alternative 'internal' strategy is proposed. By means of an example from the history of modern physics - namely S-matrix theory - it is demonstrated that (1) this strategy can generate similar (though not identical) anti-fundamentalist conclusions on more defensible naturalistic grounds, and (2) that fundamentality questions can be empirical questions. Some implications and limitations of the proposed approach are discussed.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
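The random-sampling material described in such notes is commonly illustrated with inverse-transform sampling of an exponential distribution, which in transport problems yields a particle's free-flight distance. A minimal sketch, not taken from the RACER notes themselves, with the rate parameter chosen arbitrarily:

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: solve F(x) = 1 - exp(-rate*x) = u
    for x, giving x = -ln(1 - u) / rate.  In particle transport this
    draws a free-flight distance in a medium whose macroscopic cross
    section is `rate` (an illustrative parameter here)."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/rate = 0.5
```

The same inversion idea generalizes to any distribution whose cumulative distribution function can be inverted in closed form; otherwise rejection sampling, also listed in the course outline, is the usual fallback.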
Fundamentals of fluid lubrication
NASA Technical Reports Server (NTRS)
Hamrock, Bernard J.
1991-01-01
The aim is to coordinate the topics of design, engineering dynamics, and fluid dynamics in order to aid researchers in the area of fluid film lubrication. The lubrication principles that are covered can serve as a basis for the engineering design of machine elements. The fundamentals of fluid film lubrication are presented clearly so that students who use the book will have confidence in their ability to apply these principles to a wide range of lubrication situations. Some guidance on applying these fundamentals to the solution of engineering problems is also provided.
NASA Technical Reports Server (NTRS)
Zuk, J.
1976-01-01
The fundamentals of fluid sealing, including seal operating regimes, are discussed and the general fluid-flow equations for fluid sealing are developed. Seal performance parameters such as leakage and power loss are presented. Included in the discussion are the effects of geometry, surface deformations, rotation, and both laminar and turbulent flows. The concept of pressure balancing is presented, as are differences between liquid and gas sealing. Mechanisms of seal surface separation, fundamental friction and wear concepts applicable to seals, seal materials, and pressure-velocity (PV) criteria are discussed.
ERIC Educational Resources Information Center
Taylor, Kelley R.
2009-01-01
The 21st century has brought many technological, social, and economic changes--nearly all of which have affected schools and the students, administrators, and faculty members who are in them. Luckily, as some things change, other things remain the same. Such is true with the fundamental legal principles that guide school administrators' actions…
Peselnick, L.; Robie, R.A.
1962-01-01
The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc and 25 °C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11 = 13.7, C33 = 8.11, C44 = 3.50, C12 = 4.82, C13 = 5.68, and C14 = -2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
Homeschooling and Religious Fundamentalism
ERIC Educational Resources Information Center
Kunzman, Robert
2010-01-01
This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…
Basic Publication Fundamentals.
ERIC Educational Resources Information Center
Savedge, Charles E., Ed.
Designed for students who produce newspapers and newsmagazines in junior high, middle, and elementary schools, this booklet is both a scorebook and a fundamentals text. The scorebook provides realistic criteria for judging publication excellence at these educational levels. All the basics for good publications are included in the text of the…
The Fundamental Property Relation.
ERIC Educational Resources Information Center
Martin, Joseph J.
1983-01-01
Discusses a basic equation in thermodynamics (the fundamental property relation), focusing on a logical approach to the development of the relation where effects other than thermal, compression, and exchange of matter with the surroundings are considered. Also demonstrates erroneous treatments of the relation in three well-known textbooks. (JN)
Unification of Fundamental Forces
NASA Astrophysics Data System (ADS)
Salam, Abdus; Taylor, Foreword by John C.
2005-10-01
Foreword John C. Taylor; 1. Unification of fundamental forces Abdus Salam; 2. History unfolding: an introduction to the two 1968 lectures by W. Heisenberg and P. A. M. Dirac Abdus Salam; 3. Theory, criticism, and a philosophy Werner Heisenberg; 4. Methods in theoretical physics Paul Adrien Maurice Dirac.
USDA-ARS?s Scientific Manuscript database
This study guide provides comments and references for professional soil scientists who are studying for the soil science fundamentals exam needed as the first step for certification. The performance objectives were determined by the Soil Science Society of America's Council of Soil Science Examiners...
Fundamentals of Chemical Processes.
ERIC Educational Resources Information Center
Moser, William R.
1985-01-01
Describes a course that provides students with a fundamental understanding of the chemical, catalytic, and engineering sciences related to the chemical reactions taking place in a variety of reactors of different configurations. Also describes the eight major lecture topics, course examinations, and term papers. The course schedule is included.…
ERIC Educational Resources Information Center
North Carolina State Dept. of Public Instruction, Raleigh. Div. of Vocational Education.
This curriculum guide is designed as a resource for marketing education teachers in planning and teaching a course on sales fundamentals for students in grades 10-12 who are interested in a sales career. Internships, simulations, and co-op experiences may be used to expand practical application of the course. The student course objectives are to…
Fundamental research data base
NASA Technical Reports Server (NTRS)
1983-01-01
A fundamental research data base containing ground truth, image, and Badhwar profile feature data for 17 North Dakota, South Dakota, and Minnesota agricultural sites is described. Image data was provided for a minimum of four acquisition dates for each site and all four images were registered to one another.
Laser Fundamentals and Experiments.
ERIC Educational Resources Information Center
Van Pelt, W. F.; And Others
As a result of work performed at the Southwestern Radiological Health Laboratory with respect to lasers, this manual was prepared in response to the increasing use of lasers in high schools and colleges. It is directed primarily toward the high school instructor who may use the text for a short course in laser fundamentals. The definition of the…
ERIC Educational Resources Information Center
Smithsonian Institution, Washington, DC. National Reading is Fun-damental Program.
Reading Is Fundamental (RIF) is a national, nonprofit organization designed to motivate children to read by making a wide variety of inexpensive books available to them and allowing the children to choose and keep books that interest them. This annual report for 1977 contains the following information on the RIF project: an account of the…
Fundamentals of Diesel Engines.
ERIC Educational Resources Information Center
Marine Corps Inst., Washington, DC.
This student guide, one of a series of correspondence training courses designed to improve the job performance of members of the Marine Corps, deals with the fundamentals of diesel engine mechanics. Addressed in the three individual units of the course are the following topics: basic principles of diesel mechanics; principles, mechanics, and…
Fundamentals of Solid Lubrication
2012-03-01
During this program, we have worked to develop a fundamental understanding of the chemical and tribological issues related to...approach, tribological measurements performed over a range of length scales, and the correlation of the two classes of information. Research activities...correlated measurements of surface composition and environmentally specific tribological performance of thin film solid lubricants. • Correlate shear
Fundamentals of Electromagnetic Phenomena
NASA Astrophysics Data System (ADS)
Lorrain, Paul; Corson, Dale R.; Lorrain, Francois
Based on the classic Electromagnetic Fields and Waves by the same authors, Fundamentals of Electromagnetic Phenomena capitalizes on the older text's traditional strengths--solid physics, inventive problems, and an experimental approach--while offering a briefer, more accessible introduction to the basic principles of electromagnetism.
On the role of the Avogadro constant in redefining SI units for mass and amount of substance
NASA Astrophysics Data System (ADS)
Leonard, B. P.
2007-02-01
There is a common misconception that the Avogadro constant is one of the fundamental constants of nature, in the same category as the speed of light, the Planck constant and the invariant masses of atomic-scale particles. Although the absolute mass of any specified atomic-scale entity is an invariant universal constant of nature, the Avogadro constant relating this to a macroscopic quantity is not. Rather, it is a man-made construct, designed by convention to define a convenient unit relating the atomic and macroscopic scales. The misportrayal seems to stem from the widespread use of the term 'fixed-Avogadro-constant' for describing a redefinition of the kilogram that is, in fact, based on a fixed atomic-scale particle mass. This paper endeavours to clarify the role of the Avogadro constant in current definitions of SI units for mass and amount of substance as well as recently proposed redefinitions of these units—in particular, those based on fixing the numerical values of the Planck and Avogadro constants, respectively. Precise definitions lead naturally to a rational, straightforward and intuitively obvious construction of appropriate (exactly defined) atomic-scale units for these quantities. And this, in turn, suggests a direct and easily comprehended two-part statement of the fixed-Planck-constant kilogram definition involving a well-understood and physically meaningful de Broglie-Compton frequency.
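The de Broglie-Compton frequency invoked at the end of this abstract is simply ν = mc²/h. A quick check for m = 1 kg, using the exact defining values of c and h in the revised SI (the numerical result follows directly from those constants):

```python
# Exact defining constants of the revised SI
c = 299_792_458        # speed of light in vacuum, m/s
h = 6.626_070_15e-34   # Planck constant, J*s

m = 1.0                # one kilogram
nu = m * c**2 / h      # de Broglie-Compton frequency of 1 kg, in Hz
```

The result, about 1.356 × 10^50 Hz, is the frequency that a fixed-Planck-constant kilogram definition associates with one kilogram of mass-energy.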
Measuring Boltzmann's Constant with Carbon Dioxide
ERIC Educational Resources Information Center
Ivanov, Dragia; Nikolov, Stefan
2013-01-01
In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…
Rotor-Liquid-Fundament System's Oscillation
NASA Astrophysics Data System (ADS)
Kydyrbekuly, A.
The work investigates the oscillation and stability of the stationary rotation of a vertical, flexible, statically and dynamically unbalanced rotor with a cavity partly filled with liquid, mounted on an elastic frame foundation. Accounting for factors such as oscillation of the foundation, oscillation of the liquid, asymmetry of the rotor's mounting on the shaft, anisotropy of the shaft supports and foundation, static and dynamic unbalance of the rotor, external friction, and internal friction of the shaft allows the kinematic and dynamic characteristics of the system to be determined more precisely.
System Engineering Fundamentals
2001-01-01
This January 2001 report is divided into four parts: Introduction; Systems Engineering Process; Systems Analysis and Control (covering, among other topics, the Work Breakdown Structure); and Planning, Organizing, and Managing.
Fundamentals of Polarized Light
NASA Technical Reports Server (NTRS)
Mishchenko, Michael
2003-01-01
The analytical and numerical basis for describing scattering properties of media composed of small discrete particles is formed by the classical electromagnetic theory. Although there are several excellent textbooks outlining the fundamentals of this theory, it is convenient for our purposes to begin with a summary of those concepts and equations that are central to the subject of this book and will be used extensively in the following chapters. We start by formulating Maxwell's equations and constitutive relations for time- harmonic macroscopic electromagnetic fields and derive the simplest plane-wave solution that underlies the basic optical idea of a monochromatic parallel beam of light. This solution naturally leads to the introduction of such fundamental quantities as the refractive index and the Stokes parameters. Finally, we define the concept of a quasi-monochromatic beam of light and discuss its implications.
Fundamental properties of resonances
Ceci, S.; Hadžimehmedović, M.; Osmanović, H.; Percan, A.; Zauner, B.
2017-01-01
All resonances, from hydrogen nuclei excited by the high-energy gamma rays in deep space to newly discovered particles produced in Large Hadron Collider, should be described by the same fundamental physical quantities. However, two distinct sets of properties are used to describe resonances: the pole parameters (complex pole position and residue) and the Breit-Wigner parameters (mass, width, and branching fractions). There is an ongoing decades-old debate on which one of them should be abandoned. In this study of nucleon resonances appearing in the elastic pion-nucleon scattering we discover an intricate interplay of the parameters from both sets, and realize that neither set is completely independent or fundamental on its own. PMID:28345595
Fundamental studies in geodynamics
NASA Technical Reports Server (NTRS)
Anderson, D. L.; Hager, B. H.; Kanamori, H.
1981-01-01
Research in fundamental studies in geodynamics continued in a number of fields including seismic observations and analysis, synthesis of geochemical data, theoretical investigation of geoid anomalies, extensive numerical experiments in a number of geodynamical contexts, and a new field seismic volcanology. Summaries of work in progress or completed during this report period are given. Abstracts of publications submitted from work in progress during this report period are attached as an appendix.
Fundamentals of petroleum maps
Mc Elroy, D.P.
1986-01-01
It's a complete guide to the fundamentals of reading, using, and making petroleum maps. The topics covered are well spotting, lease posting, contouring, hanging cross sections, and ink drafting. This book not only tells the how of petroleum mapping, but it also tells the why, to better convey the principles and techniques. The book does not teach "drafting," but it does describe the proper care and use of drafting equipment for those who are totally new to the task.
Phacoemulsification. Technology and fundamentals.
Gilger, B C
1997-09-01
The number one rule of phacoemulsification and aspiration cataract surgery is to know your machine. This chapter is designed to help the surgeon who is currently using phacoemulsification, or those who wish to understand more about technique, learn the basics and technology of the various types of phacoemulsification machines. Fluidics, pump design, handpiece mechanics, phacoemulsification needles, and fundamentals of phacoemulsification of cataracts will be reviewed.
Redefining the Fundamental Questions
ERIC Educational Resources Information Center
Crain, Margaret Ann
2006-01-01
Every researcher must ask some fundamental questions. A researcher's questions should include the following: (1) What is the nature of the reality that I wish to study? (2) How will I know it? (3) What must I do to know it? (4) Who am I? (5) Where is God in this? and (6) For religious educators--How does my research lead to a world of peace and…
Wollaber, Allan Benton
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
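The "simple example: estimating π" listed in the outline is conventionally done by sampling points uniformly in the unit square and counting the fraction inside the quarter circle. A minimal sketch, with the function name and sample count chosen for illustration rather than taken from the slides:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform points in
    the unit square that land inside the quarter circle of radius 1
    approaches pi/4 by the Law of Large Numbers."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

estimate = estimate_pi(100_000)
```

Per the Central Limit Theorem also mentioned in the outline, the statistical error of such an estimate shrinks like 1/√N, so each additional decimal digit of accuracy costs roughly a hundredfold more samples.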
Greg Hall, D
2011-01-01
Session 1 of the 2010 STP/IFSTP Joint Symposium on Toxicologic Neuropathology, titled "Fundamentals of Neurobiology," was organized to provide a foundation for subsequent sessions by presenting essential elements of neuroanatomy and nervous system function. A brief introduction to the session titled "Introduction to Correlative Neurobiology" was provided by Dr. Greg Hall (Eli Lilly and Company, Indianapolis, IN). Correlative neurobiology refers to considerations of the relationships between the highly organized and compartmentalized structure of nervous tissues and the functioning within this system.
Fundamentals of Structural Geology
NASA Astrophysics Data System (ADS)
Pollard, David D.; Fletcher, Raymond C.
2005-09-01
Fundamentals of Structural Geology provides a new framework for the investigation of geological structures by integrating field mapping and mechanical analysis. Assuming a basic knowledge of physical geology, introductory calculus and physics, it emphasizes the observational data, modern mapping technology, principles of continuum mechanics, and the mathematical and computational skills necessary to quantitatively map, describe, model, and explain deformation in Earth's lithosphere. By starting from the fundamental conservation laws of mass and momentum, the constitutive laws of material behavior, and the kinematic relationships for strain and rate of deformation, the authors demonstrate the relevance of solid and fluid mechanics to structural geology. This book offers a modern quantitative approach to structural geology for advanced students and researchers in structural geology and tectonics. It is supported by a website hosting images from the book, additional colour images, student exercises and MATLAB scripts; solutions to the exercises are available to instructors. The book integrates field mapping using modern technology with the analysis of structures based on a complete mechanics. MATLAB is used to visualize physical fields and analytical results, and MATLAB scripts can be downloaded from the website to recreate textbook graphics and enable students to explore their choice of parameters and boundary conditions. The supplementary website hosts color images of outcrop photographs used in the text, supplementary color images, and images of textbook figures for classroom presentations. The website also includes student exercises designed to instill the fundamental relationships and to encourage visualization of the evolution of geological structures; solutions are available to instructors.
NASA Astrophysics Data System (ADS)
Burov, Alexey
Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of the fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by the technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the "mystery of the universe". Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?
Neutrons and Fundamental Symmetries
Plaster, Bradley
2016-01-11
The research supported by this project addressed fundamental open physics questions via experiments with subatomic particles. In particular, neutrons constitute an especially ideal “laboratory” for fundamental physics tests, as their sensitivities to the four known forces of nature permit a broad range of tests of the so-called “Standard Model”, our current best physics model for the interactions of subatomic particles. Although the Standard Model has been a triumphant success for physics, it does not provide satisfactory answers to some of the most fundamental open questions in physics, such as: are there additional forces of nature beyond the gravitational, electromagnetic, weak nuclear, and strong nuclear forces?, or why does our universe consist of more matter than anti-matter? This project also contributed significantly to the training of the next generation of scientists, of considerable value to the public. Young scientists, ranging from undergraduate students to graduate students to post-doctoral researchers, made significant contributions to the work carried out under this project.
Can compactifications solve the cosmological constant problem?
Hertzberg, Mark P.; Masoumi, Ali
2016-06-30
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
Redshift in Hubble's constant.
NASA Astrophysics Data System (ADS)
Temple-Raston, M.
1997-01-01
A topological field theory with Bogomol'nyi solitons is examined. The Bogomol'nyi solitons have much in common with the instanton in Yang-Mills theory; consequently the author called them 'topological instantons'. When periodic boundary conditions are imposed, the field theory comments indirectly on the speed of light within the theory. In this particular model the speed of light is not a universal constant. This may or may not be relevant to the current debate in astronomy and cosmology over the large values of the Hubble constant obtained by the latest generation of ground- and space-based telescopes. An experiment is proposed to detect spatial variation in the speed of light.
Percolation with Constant Freezing
NASA Astrophysics Data System (ADS)
Mottram, Edward
2014-06-01
We introduce and study a model of percolation with constant freezing (PCF) where edges open at a constant rate and clusters freeze at a constant rate independently of their size. Our main result is that the infinite volume process can be constructed on any amenable vertex transitive graph. This is in sharp contrast to models of percolation with freezing previously introduced, where the limit is known not to exist. Our interest is in the study of the percolative properties of the final configuration as a function of the freezing rate. We also obtain more precise results in the case of trees. Surprisingly, the algebraic exponent for the cluster size depends on the degree, suggesting that there is no lower critical dimension for the model. Moreover, it is shown that finite clusters have algebraic tail decay, which is a signature of self-organised criticality. Partial results are obtained in further cases, and many open questions are discussed.
NASA Technical Reports Server (NTRS)
Sorensen, E
1940-01-01
The conventional axial blowers operate on the high-pressure principle. One drawback of this type of blower is the relatively low pressure head, which one attempts to overcome with axial blowers producing very high pressure at a given circumferential speed. The Schicht constant-pressure blower affords pressure ratios considerably higher than those of axial blowers of conventional design with approximately the same efficiency.
Jackson, Neal
2015-01-01
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H0 values of around 72-74 km s^-1 Mpc^-1, with typical errors of 2-3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s^-1 Mpc^-1 and typical errors of 1-2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
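The "mild discrepancy" can be expressed as the number of combined standard deviations separating the two classes of measurement. A sketch using representative midpoints of the ranges quoted in the review; the specific numbers 73.0 ± 2.5 and 67.5 ± 1.5 are illustrative choices within those ranges, not published values:

```python
def tension_sigma(h1, s1, h2, s2):
    """Number of standard deviations separating two independent
    measurements with Gaussian errors, combined in quadrature."""
    return abs(h1 - h2) / (s1**2 + s2**2) ** 0.5

# Illustrative object-based vs CMB-based values (km/s/Mpc)
sigma = tension_sigma(73.0, 2.5, 67.5, 1.5)
```

With these inputs the separation comes out just under 2σ, consistent with "mild discrepancy"; shrinking the error bars at fixed central values is what turns the same offset into a significant tension.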
NASA Astrophysics Data System (ADS)
Yongquan, Han
2016-10-01
The ideal gas state equation is not applicable to ordinary gases; it should be applied to the electromagnetic "gas", that is, to radiation. Radiation should be the ultimate (or initial) state of changes of matter, and the universe is filled with radiation; that is, the ideal gas equation of state is suitable for the singular point and for the universe. One may object that no vessel can accommodate radiation, but that is because an ordinary container is too small: if the radius of the container were the distance light travels in an hour, would you still think it cannot accommodate radiation? Modern science determines the present radius of the universe to be about 10^27 m. Assuming the universe is a sphere, its volume is approximately V = 4.19 × 10^81 cubic meters; taking the radiation temperature of the universe (the cosmic microwave background temperature, closest to the average temperature of the universe) as T = 3.15 K and the radiation pressure as P = 5 × 10^-6 N/m^2, the ideal gas equation of state gives PV/T = constant ≈ 6 × 10^75 for the universe.
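The arithmetic in this abstract can be checked directly; a one-liner using the quoted values, taking the figures at face value without endorsing the argument:

```python
V = 4.19e81   # quoted volume of the universe, m^3
T = 3.15      # quoted CMB temperature, K
P = 5e-6      # quoted radiation pressure, N/m^2

ratio = P * V / T   # the abstract's PV/T "constant"
```

The product evaluates to roughly 6.7 × 10^75, which rounds to the ~6 × 10^75 the abstract quotes.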
Varying constants quantum cosmology
Leszczyńska, Katarzyna; Balcerzak, Adam; Dabrowski, Mariusz P. E-mail: abalcerz@wmf.univ.szczecin.pl
2015-02-01
We discuss minisuperspace models within the framework of varying-physical-constants theories including a Λ-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using specific ansätze for the variability of the constants: c(a) = c_0 a^n and G(a) = G_0 a^q. We find that most of the varying-c and varying-G minisuperspace potentials are of the tunneling type, which allows the use of the WKB approximation of quantum mechanics. Using this method we show that the probability of tunneling of the universe ''from nothing'' (a = 0) to a Friedmann geometry with scale factor a_t is large for growing-c models and strongly suppressed for diminishing-c models. For varying G, the probability of tunneling is large for diminishing G, while it is small for increasing G. In general, both varying c and varying G change the probability of tunneling in comparison to universe models with standard matter content (cosmological term, dust, radiation).
Rare Isotopes and Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Brown, B. Alex; Engel, Jonathan; Haxton, Wick; Ramsey-Musolf, Michael; Romalis, Michael; Savard, Guy
2009-01-01
Experiments searching for new interactions in nuclear beta decay / Klaus P. Jungmann -- The beta-neutrino correlation in sodium-21 and other nuclei / P. A. Vetter ... [et al.] -- Nuclear structure and fundamental symmetries / B. Alex Brown -- Schiff moments and nuclear structure / J. Engel -- Superallowed nuclear beta decay: recent results and their impact on V[symbol] / J. C. Hardy and I. S. Towner -- New calculation of the isospin-symmetry breaking correction to superallowed Fermi beta decay / I. S. Towner and J. C. Hardy -- Precise measurement of the [symbol]H to [symbol]He mass difference / D. E. Pinegar ... [et al.] -- Limits on scalar currents from the 0+ to 0+ decay of [symbol]Ar and isospin breaking in [symbol]Cl and [symbol]Cl / A. Garcia -- Nuclear constraints on the weak nucleon-nucleon interaction / W. C. Haxton -- Atomic PNC theory: current status and future prospects / M. S. Safronova -- Parity-violating nucleon-nucleon interactions: what can we learn from nuclear anapole moments? / B. Desplanques -- Proposed experiment for the measurement of the anapole moment in francium / A. Perez Galvan ... [et al.] -- The Radon-EDM experiment / Tim Chupp for the Radon-EDM collaboration -- The lead radius experiment (PREX) and parity violating measurements of neutron densities / C. J. Horowitz -- Nuclear structure aspects of Schiff moment and search for collective enhancements / Naftali Auerbach and Vladimir Zelevinsky -- The interpretation of atomic electric dipole moments: Schiff theorem and its corrections / C.-P. Liu -- T-violation and the search for a permanent electric dipole moment of the mercury atom / M. D. Swallows ... [et al.] -- The new concept for FRIB and its potential for fundamental interactions studies / Guy Savard -- Collinear laser spectroscopy and polarized exotic nuclei at NSCL / K. Minamisono -- Environmental dependence of masses and coupling constants / M. Pospelov.
Petrowsky, Matt; Fleshman, Allison; Frech, Roger
2012-05-17
The temperature dependence of ionic conductivity and the static dielectric constant is examined for 0.30 m TbaTf- or LiTf-1-alcohol solutions. Above ambient temperature, the conductivity increases with temperature to a greater extent in electrolytes whose salt has a charge-protected cation. Below ambient temperature, the dielectric constant changes only slightly with temperature in electrolytes whose salt has a cation that is not charge-protected. The compensated Arrhenius formalism is used to describe the temperature-dependent conductivity in terms of the contributions from both the exponential prefactor σ0 and the Boltzmann factor exp(-Ea/RT). This analysis explains why the conductivity decreases with increasing temperature above 65 °C for the LiTf-dodecanol electrolyte. At higher temperatures, the decrease in the exponential prefactor is greater than the increase in the Boltzmann factor.
Beyond lensing by the cosmological constant
NASA Astrophysics Data System (ADS)
Faraoni, Valerio; Lapierre-Léonard, Marianne
2017-01-01
The long-standing problem of whether the cosmological constant directly affects the deflection of light by a gravitational lens is reconsidered. We use a new approach based on the Hawking quasilocal mass of a sphere grazed by light rays and on its splitting into local and cosmological parts. Previous literature, restricted to the cosmological constant, is extended to any form of dark energy accelerating the universe in which the gravitational lens is embedded.
Fundamental experiments in velocimetry
Briggs, Matthew Ellsworth; Hull, Larry; Shinas, Michael
2009-01-01
One can understand what velocimetry does and does not measure by understanding a few fundamental experiments. Photon Doppler Velocimetry (PDV) is an interferometer that will produce fringe shifts when the length of one of the legs changes, so we might expect the fringes to change whenever the distance from the probe to the target changes. However, by making PDV measurements of tilted moving surfaces, we have shown that fringe shifts from diffuse surfaces are actually measured only from the changes caused by the component of velocity along the beam. This is an important simplification in the interpretation of PDV results, arising because surface roughness randomizes the scattered phases.
Fundamental research data base
NASA Technical Reports Server (NTRS)
1983-01-01
A fundamental research data base was created on a single 9-track 1600 BPI tape containing ground truth, image, and Badhwar profile feature data for 17 North Dakota, South Dakota, and Minnesota agricultural sites. Each site is 5 × 6 nautical miles in area. Image data are provided for a minimum of four acquisition dates for each site, and all four images have been registered to one another. A list of the order of the files on the tape and the dates of acquisition is provided.
Lubowitz, James H; Provencher, Matthew T; Brand, Jefferson C; Rossi, Michael J; Poehling, Gary G
2015-06-01
In 2015, Henry P. Hackett, Managing Editor, Arthroscopy, retires, and Edward A. Goss, Executive Director, Arthroscopy Association of North America (AANA), retires. Association is a positive constant, in a time of change. With change comes a need for continuing education, research, and sharing of ideas. While the quality of education at AANA and ISAKOS is superior and most relevant, the unique reason to travel and meet is the opportunity to interact with innovative colleagues. Personal interaction best stimulates new ideas to improve patient care, research, and teaching. Through our network, we best create innovation.
Testing Our Fundamental Assumptions
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-06-01
Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times: [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival-time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these
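The special-relativistic delay mentioned in this abstract can be illustrated with a short sketch. For a photon with a small rest energy mc² ≪ E, v ≈ c(1 − (mc²)²/2E²), so two photons of energies E1 < E2 traversing a distance D arrive apart by Δt ≈ (D/2c)(mc²)²(1/E1² − 1/E2²). All numbers below are illustrative assumptions, not values from the article:

```python
# Toy estimate of the energy-dependent delay if photons had rest mass.
C = 2.998e8          # speed of light, m/s

def massive_photon_delay(distance_m, mc2_eV, e1_eV, e2_eV):
    """Arrival-time difference (s) between photons of energies e1 < e2 (eV)
    travelling distance_m, for photon rest energy mc2_eV with mc2 << E."""
    return distance_m / (2.0 * C) * mc2_eV**2 * (1.0 / e1_eV**2 - 1.0 / e2_eV**2)

# Assumed scenario: a burst at ~1 Gpc (~3.086e25 m), photons at 10 keV and
# 1 MeV, photon rest energy at an assumed 1e-18 eV scale.
dt = massive_photon_delay(3.086e25, 1e-18, 1e4, 1e6)
print(f"delay = {dt:.3e} s")
```

The delay is minuscule for such a tiny rest energy, which is why these tests favour the most distant, highest-energy transients.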
NASA Astrophysics Data System (ADS)
Pisacane, Vincent L.
2005-06-01
Fundamentals of Space Systems was developed to satisfy two objectives: the first is to provide a text suitable for use in an advanced undergraduate or beginning graduate course in both space systems engineering and space system design. The second is to be a primer and reference book for space professionals wishing to broaden their capabilities to develop, manage the development of, or operate space systems. The authors of the individual chapters are practicing engineers who have had extensive experience in developing sophisticated experimental and operational spacecraft systems, in addition to experience teaching the subject material. The text presents the fundamentals of all the subsystems of a spacecraft mission and includes illustrative examples drawn from actual experience to enhance the learning experience. It includes a chapter on each of the relevant major disciplines and subsystems, including space systems engineering, space environment, astrodynamics, propulsion and flight mechanics, attitude determination and control, power systems, thermal control, configuration management and structures, communications, command and telemetry, data processing, embedded flight software, survivability and reliability, integration and test, mission operations, and the initial conceptual design of a typical small spacecraft mission.
Thermodynamics fundamentals of energy conversion
NASA Astrophysics Data System (ADS)
Dan, Nicolae
The work reported in Chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics and electrical phenomena related to the conversion of one form of energy to another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat-generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of windings must not exceed a specified level, the power output is lower and the efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increase. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of the chapter the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in this chapter is based on exergy analysis and represents a departure from the line of inquiry that was followed in Chapters 3-4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways. In the "growth" method the
Pommé, S; Stroh, H; Altzitzoglou, T; Paepen, J; Van Ammel, R; Kossert, K; Nähle, O; Keightley, J D; Ferreira, K M; Verheyen, L; Bruggeman, M
2017-09-07
Some authors have raised doubt about the invariability of decay constants, which would invalidate the exponential-decay law and the foundation on which the common measurement system for radioactivity is based. Claims were made about a new interaction - the fifth force - by which neutrinos could affect decay constants, thus predicting changes in decay rates in correlation with variations of the solar neutrino flux. Their argument is based on the observation of permille-sized annual modulations in particular decay rate measurements, as well as transient oscillations at frequencies near 11 year(-1) and 12.7 year(-1), which they speculatively associate with dynamics of the solar interior. In this work, 12 data sets of precise long-term decay rate measurements have been investigated for the presence of systematic modulations at frequencies between 0.08 and 20 year(-1). Besides small annual effects, no common oscillations could be observed among α, β(-), β(+) or EC decaying nuclides. The amplitudes of oscillations fitted to residuals from exponential decay do not exceed 3 times their standard uncertainty, which varies from 0.00023% to 0.023%. This contradicts the assertion that 'neutrino-induced' beta decay provides information about the deep solar interior. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
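The kind of analysis described here, fitting small periodic modulations to residuals from exponential decay, can be sketched as a linear least-squares fit of a·sin(2πft) + b·cos(2πft) at a fixed trial frequency. The synthetic data and the trial frequency below are assumptions for illustration, not the paper's actual pipeline:

```python
import math
import random

def fit_oscillation(times, residuals, freq):
    """Least-squares fit of a*sin(2*pi*f*t) + b*cos(2*pi*f*t) to the
    residuals at a fixed trial frequency; returns the fitted amplitude."""
    s = [math.sin(2.0 * math.pi * freq * t) for t in times]
    c = [math.cos(2.0 * math.pi * freq * t) for t in times]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    sy = sum(x * y for x, y in zip(s, residuals))
    cy = sum(x * y for x, y in zip(c, residuals))
    det = ss * cc - sc * sc          # solve the 2x2 normal equations
    a = (sy * cc - cy * sc) / det
    b = (cy * ss - sy * sc) / det
    return math.hypot(a, b)

# Synthetic 10-year record sampled every ~3.65 days: a 0.1-permille annual
# modulation buried in noise (illustrative numbers, not the paper's data).
random.seed(0)
times = [i * 0.01 for i in range(1000)]                  # years
residuals = [1e-4 * math.sin(2.0 * math.pi * t) + random.gauss(0.0, 2e-5)
             for t in times]

amp = fit_oscillation(times, residuals, 1.0)             # trial f = 1 year^-1
print(f"fitted annual amplitude: {amp:.2e}")
```

Scanning such a fit over a frequency grid (here 0.08 to 20 year(-1)) and comparing fitted amplitudes with their uncertainties is the essence of the modulation search.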
Biochemical Engineering Fundamentals
ERIC Educational Resources Information Center
Bailey, J. E.; Ollis, D. F.
1976-01-01
Discusses a biochemical engineering course that is offered as part of a chemical engineering curriculum and includes topics that influence the behavior of man-made or natural microbial or enzyme reactors. (MLH)
Division I: Fundamental Astronomy
NASA Astrophysics Data System (ADS)
McCarthy, Dennis D.; Klioner, Sergei A.; Vondrák, Jan; Evans, Dafydd Wyn; Hohenkerk, Catherine Y.; Hosokawa, Mizuhiko; Huang, Cheng-Li; Kaplan, George H.; Knežević, Zoran; Manchester, Richard N.; Morbidelli, Alessandro; Petit, Gérard; Schuh, Harald; Soffel, Michael H.; Zacharias, Norbert
2012-04-01
The goal of the division is to address the scientific issues that were developed at the 2009 IAU General Assembly in Rio de Janeiro. These are: •Astronomical constants-Gaussian gravitational constant, Astronomical Unit, GMSun, geodesic precession-nutation •Astronomical software •Solar System Ephemerides-Pulsar research-Comparison of dynamical reference frames •Future Optical Reference Frame •Future Radio Reference Frame •Exoplanets-Detection-Dynamics •Predictions of Earth orientation •Units of measurements for astronomical quantities in relativistic context •Astronomical units in the relativistic framework •Time-dependent ecliptic in the GCRS •Asteroid masses •Review of space missions •Detection of gravitational waves •VLBI on the Moon •Real time electronic access to UT1-UTC. In pursuit of these goals Division I members have made significant scientific and organizational progress, and are organizing a Joint Discussion on Space-Time Reference Systems for Future Research at the 2012 IAU General Assembly. The details of Division activities and references are provided in the individual Commission and Working Group reports in this volume. A comprehensive list of references related to the work of the Division is available at the IAU Division I website at http://maia.usno.navy.mil/iaudiv1/.
Fundamentals of gel dosimeters
NASA Astrophysics Data System (ADS)
McAuley, K. B.; Nasr, A. T.
2013-06-01
Fundamental chemical and physical phenomena that occur in Fricke gel dosimeters, polymer gel dosimeters, micelle gel dosimeters and genipin gel dosimeters are discussed. Fricke gel dosimeters are effective even though their radiation sensitivity depends on oxygen concentration. Oxygen contamination can cause severe problems in polymer gel dosimeters, even when THPC is used. Oxygen leakage must be prevented between manufacturing and irradiation of polymer gels, and internal calibration methods should be used so that contamination problems can be detected. Micelle gel dosimeters are promising due to their favourable diffusion properties. The introduction of micelles to gel dosimetry may open up new areas of dosimetry research wherein a range of water-insoluble radiochromic materials can be explored as reporter molecules.
Fundamentals of zoological scaling
NASA Astrophysics Data System (ADS)
Lin, Herbert
1982-01-01
Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.
Uniaxial constant velocity microactuator
McIntyre, Timothy J.
1994-01-01
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling, causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment.
Jackson, Neal
2007-01-01
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. In the last 20 years, much progress has been made and estimates now range between 60 and 75 km s(-1) Mpc(-1), with most now between 70 and 75 km s(-1) Mpc(-1), a huge improvement over the factor-of-2 uncertainty which used to prevail. A further improvement, to a generally agreed margin of error of a few percent rather than the current 10%, would be vital input to much other interesting cosmology. There are several programmes which are likely to lead us to this point in the next 10 years.
Beiu, V.
1997-04-01
In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus is on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but little is known about the links among them. The authors start by presenting known results and try to establish connections between them. These show that very difficult problems are faced--exponential growth in either space (i.e., precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper presents a solution for lowering some constants by playing on the depth-size tradeoff.
NASA Technical Reports Server (NTRS)
Stevens, F W
1924-01-01
This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship, pv=rt for gaseous equilibrium conditions, to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.
Tully, R B
1993-06-01
Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that the Hubble constant H0 = 90 ± 10 km s(-1) Mpc(-1) [1 parsec (pc) = 3.09 × 10(16) m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars.
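One way to see why H0 = 90 km s(-1) Mpc(-1) is hard to reconcile with the ages of stars: the corresponding Hubble time 1/H0 is only about 11 Gyr, comparable to or below the ages then estimated for the oldest globular clusters. A minimal sketch of the unit conversion (the parsec value is the one quoted in the abstract; the seconds-per-year constant is a standard value, not from the abstract):

```python
PC_M = 3.09e16            # metres per parsec, as quoted in the abstract
YEAR_S = 3.156e7          # seconds per year (standard value)

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 in Gyr for H0 given in km s^-1 Mpc^-1."""
    h0_si = h0_km_s_mpc * 1e3 / (1e6 * PC_M)   # convert to s^-1
    return 1.0 / h0_si / YEAR_S / 1e9

print(f"H0 = 90: t_H = {hubble_time_gyr(90):.1f} Gyr")   # Tully's value
print(f"H0 = 70: t_H = {hubble_time_gyr(70):.1f} Gyr")
```

A lower H0 stretches the Hubble time and eases the tension with stellar ages, which is the crux of the closing sentence above.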
Uniaxial constant velocity microactuator
McIntyre, T.J.
1994-06-07
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment is disclosed. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 10 figs.
Tatara, T; Tsuzaki, K
2000-07-01
A study is conducted to determine whether the extracellular fluid (ECF) volume fraction and the equivalent dielectric constant of the cell membrane, epsilon m, derived from the dielectric properties of the human body, can track the progression of surgical tissue injury. Frequency-dependent dielectric constants and electrical conductivities of body segments are obtained at surgical (trunk) and non-surgical sites (arm and leg) from five patients who have undergone oesophageal resection, before and at the end of surgery and on the day after the operation. The ECF volume fraction and the equivalent epsilon m of body segments are estimated by fitting the dielectric data for body segments to the cell suspension model incorporating fat tissue, and their time-course changes are compared between body segments. By the day after the operation, the estimated ECF volume fraction has increased in all body segments compared with that before surgery, by 0.13 in the arm, 0.16 in the trunk and 0.14 in the leg (p < 0.05), indicating postoperative fluid accumulation in the extracellular space. In contrast, the estimated equivalent epsilon m shows a different time course between body segments on the day after the operation, characterised by a higher change ratio of epsilon m of the trunk (1.34 +/- 0.66, p < 0.05) from that of the arm (0.66 +/- 0.34) and leg (0.61 +/- 0.11). The results suggest that the equivalent epsilon m of a body segment at a surgical site can track pathophysiological cell changes following surgical tissue injury.
NASA Astrophysics Data System (ADS)
1984-01-01
The 1984 CPEM—the world's leading international biennial conference for electromagnetic metrology and related fundamental constants—will be held on 20-24 August, 1984, at Delft University of Technology, The Netherlands. Papers are requested for CPEM 84 which describe original work, not published or previously presented, covering the design, performance or application of electromagnetic measurements, techniques, instruments or systems. In cooperation with the relevant commission of the International Union of Pure and Applied Physics (IUPAP) the Conference Committee has decided that topics on fundamental constants related to electromagnetic measurements will also be part of CPEM 84. All papers concerned with EM-measurements and related fundamental constants will be considered. Papers in the following fields are regarded as particularly appropriate for this conference: EM-based fundamental constants and standards; direct current and low frequency; time and frequency; antennas and fields; microwaves and millimeter waves; (micro)computer-aided measurements; infrared, visible and ultraviolet radiation; electro-optics, fibre optics; lasers; cryo-electronics; technical calibration services. The conference language will be English. Authors are requested to submit a summary (500-1000 words) along with an abstract (maximum 50 words) to facilitate paper selection by the programme committee. The summary must describe clearly what new and significant results have been obtained and why the results are important. Summaries must be received on or before 1 February, 1984 and must be sent to Prof. dr. H Postma, Technical Programme Chairman CPEM 84, Delft University of Technology, PO Box 5046, NL-2600 GA Delft, The Netherlands. Authors will be notified before 15 May, 1984 whether their papers are accepted and informed of the manner of presentation and possible publication in the IEEE Trans. Instrum. Meas. conference issue.
NASA Astrophysics Data System (ADS)
Steele, A. G.; Meija, J.; Sanchez, C. A.; Yang, L.; Wood, B. M.; Sturgeon, R. E.; Mester, Z.; Inglis, A. D.
2012-02-01
The next revision to the International System of Units will emphasize the relationship between the base units (kilogram, metre, second, ampere, kelvin, candela and mole) and fundamental constants of nature (the speed of light, c, the Planck constant, h, the elementary charge, e, the Boltzmann constant, kB, the Avogadro constant, NA, etc). The redefinition cannot proceed without consistency between two complementary metrological approaches to measuring h: a 'physics' approach, using watt balances and the equivalence principle between electrical and mechanical force, and a 'chemistry' approach that can be viewed as determining the mass of a single atom of silicon. We report the first high precision physics and chemistry results that agree within 12 parts per billion: h (watt balance) = 6.626 070 63(43) × 10(-34) J s and h (silicon) = 6.626 070 55(21) × 10(-34) J s. When combined with values determined by other metrology laboratories, this work helps to constrain our knowledge of h to 20 parts per billion, moving us closer to a redefinition of the metric system used around the world.
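The quoted 12-parts-per-billion agreement follows directly from the two central values; a quick check on the abstract's numbers:

```python
# Central values as quoted in the abstract (uncertainties omitted)
h_watt_balance = 6.62607063e-34   # J s, 'physics' (watt balance) value
h_silicon = 6.62607055e-34        # J s, 'chemistry' (silicon) value

relative_diff_ppb = abs(h_watt_balance - h_silicon) / h_silicon * 1e9
print(f"agreement: {relative_diff_ppb:.1f} ppb")
```

The relative difference evaluates to roughly 12 ppb, matching the agreement stated in the abstract.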
NASA Technical Reports Server (NTRS)
Lotz, R.; Crandall, S. H.
1973-01-01
The fundamental equation of statistical energy analysis (SEA) states that the average power flow between two coupled vibrating systems is proportional to the difference in their average modal energies. Under certain circumstances it is possible to estimate the proportionality constant by modifying system boundary conditions on the separated systems and calculating or measuring changes in the systems. Newland's estimate, based upon blocking part of the system, is reexamined, and limitations are discussed. Three alternative methods which circumvent blocking are presented. These were applied to predict power flow in experiments on coupled beams and on coupled plates wherein power flow through the coupling was measured directly as a product of force times velocity. The measurements support the fundamental SEA relation, including the null power point where the average modal energies are equal.
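The fundamental SEA relation described above, power flow proportional to the difference in average modal energies, can be written as a one-line function. The proportionality constant beta is precisely the quantity the estimation methods in the abstract target; the numbers below are illustrative assumptions, not measured values:

```python
def sea_power_flow(beta, e1, n1, e2, n2):
    """Average power flow from system 1 to system 2: proportional (beta) to
    the difference in average modal energies E/n (fundamental SEA relation)."""
    return beta * (e1 / n1 - e2 / n2)

# Null power point: equal average modal energies (E1/n1 == E2/n2) give
# zero net flow, the condition verified in the coupled-beam experiments.
print(sea_power_flow(0.5, 4.0, 2.0, 6.0, 3.0))
print(sea_power_flow(0.5, 6.0, 2.0, 6.0, 3.0))
```

The second call shows net flow from the system with the higher average modal energy, as the relation predicts.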
NASA Technical Reports Server (NTRS)
Kuehl, H.
1947-01-01
After defining the aims and requirements to be set for a control system of gas-turbine power plants for aircraft, the report will deal with devices that prevent the quantity of fuel supplied per unit of time from exceeding the value permissible at a given moment. The general principles of the actuation of the adjustable parts of the power plant are also discussed.
NASA Astrophysics Data System (ADS)
Petitjean, Patrick; Wang, F. Y.; Wu, X. F.; Wei, J. J.
2016-12-01
Gamma-ray bursts (GRBs) are short, intense flashes at cosmological distances and the most luminous explosions in the Universe. Their high luminosities make them detectable out to the edge of the visible universe, so they are unique tools to probe the properties of the high-redshift universe, including the cosmic expansion and dark energy, the star formation rate, the reionization epoch and the metal evolution of the Universe. First, they can be used to constrain the history of cosmic acceleration and the evolution of dark energy in a redshift range hardly achievable by other cosmological probes. Second, long GRBs are believed to be formed by the collapse of massive stars, so they can be used to derive the high-redshift star formation rate, which cannot be probed by current observations. Moreover, the use of GRBs as cosmological tools could unveil the reionization history and metal evolution of the Universe, the properties of the intergalactic medium (IGM) and the nature of the first stars in the early universe. Beyond that, GRB high-energy photons can be applied to constrain Lorentz invariance violation (LIV) and to test Einstein's Equivalence Principle (EEP). In this paper, we review progress on GRB cosmology and the fundamental physics probed by GRBs.
Internal machining accomplished at constant radii
NASA Technical Reports Server (NTRS)
Gollihugh, T. E.
1966-01-01
Device machines fluid passages in workpieces at constant radii through two adjacent surfaces that are at included angles up to approximately 120 degrees. This technique has been used extensively in fabricating engine parts where close control of fluid flow is a requirement.
Gravitational clock: A proposed experiment for the measurement of the gravitational constant G
NASA Technical Reports Server (NTRS)
Smalley, L. L.
1975-01-01
The increased importance and fundamental significance of accurately measuring the gravitational constant G are discussed, along with recent and proposed experimental measurements of G. The method of using mutually gravitating bodies in the clock mode in a drag-free satellite is described. A proposed satellite experiment based on a flat-plate/spherical-mass oscillator combines mathematical and experimental convenience most simply. It is estimated that accuracies of 1 part in 1,000,000 are easily obtainable by careful fabrication of parts. The use of cryogenic techniques, thin films, and superconductors allows the accuracy to be increased by two or three orders of magnitude or more. These measurements could ultimately reach the level of 1 part in 10^11, at which point time variations, and other variations, in G could be observed.
TASI Lectures on the cosmological constant
Bousso, Raphael
2007-08-30
The energy density of the vacuum, Lambda, is at least 60 orders of magnitude smaller than several known contributions to it. Approaches to this problem are tightly constrained by data ranging from elementary observations to precision experiments. Absent overwhelming evidence to the contrary, dark energy can only be interpreted as vacuum energy, so the venerable assumption that Lambda=0 conflicts with observation. The possibility remains that Lambda is fundamentally variable, though constant over large spacetime regions. This can explain the observed value, but only in a theory satisfying a number of restrictive kinematic and dynamical conditions. String theory offers a concrete realization through its landscape of metastable vacua.
Deuteron charge radius and Rydberg constant from spectroscopy data in atomic deuterium
NASA Astrophysics Data System (ADS)
Pohl, Randolf; Nez, François; Udem, Thomas; Antognini, Aldo; Beyer, Axel; Fleurbaey, Hélène; Grinin, Alexey; Hänsch, Theodor W.; Julien, Lucile; Kottmann, Franz; Krauth, Julian J.; Maisenbacher, Lothar; Matveev, Arthur; Biraben, François
2017-04-01
We give a pedagogical description of the method used to extract the charge radii and Rydberg constant from laser spectroscopy in regular hydrogen (H) and deuterium (D) atoms, which is part of the CODATA least-squares adjustment (LSA) of the fundamental physical constants. We give a deuteron charge radius r_d from D spectroscopy alone of 2.1415(45) fm. This value is independent of the measurements that lead to the proton charge radius, and five times more accurate than the value found in CODATA Adjustment 10. The improvement is due to the use of a value for the 1S-2S transition in atomic deuterium which can be inferred from published data or found in a PhD thesis.
Fundamental constraints on two-time physics
NASA Astrophysics Data System (ADS)
Piceno, E.; Rosado, A.; Sadurní, E.
2016-10-01
We show that generalizations of classical and quantum dynamics with two times lead to a fundamentally constrained evolution. At the level of classical physics, Newton's second law is extended and exactly integrated in a (1 + 2) -dimensional space, leading to effective single-time evolution for any initial condition. The cases 2 + 2 and 3 + 2 are also analyzed. In the domain of quantum mechanics, we follow strictly the hypothesis of probability conservation by extending the Heisenberg picture to unitary evolution with two times. As a result, the observability of two temporal axes is constrained by a generalized uncertainty relation involving level spacings, total duration of the effect and Planck's constant.
Topological Quantization in Units of the Fine Structure Constant
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H. Dennis; Zhang, Shou-Cheng
2011-11-11
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α = e²/ħc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
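As a quick numerical aside, the fine structure constant quoted above can be evaluated from the SI-defined constants; this sketch uses current CODATA values (the SI form of the Gaussian expression α = e²/ħc) and recovers the familiar 1/137:

```python
import math

# Evaluating the fine structure constant from CODATA SI values.
# In SI units alpha = e^2 / (4*pi*eps0*hbar*c); the abstract writes
# the Gaussian-units form alpha = e^2 / (hbar*c).
e    = 1.602176634e-19    # elementary charge, C (exact since 2019)
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 299792458          # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072974, ~137.036
```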
Interfaces at equilibrium: A guide to fundamentals.
Marmur, Abraham
2016-05-20
The fundamentals of the thermodynamics of interfaces are reviewed and concisely presented. The discussion starts with a short review of the elements of bulk thermodynamics that are also relevant to interfaces. It continues with the interfacial thermodynamics of two-phase systems, including the definition of interfacial tension and adsorption. Finally, the interfacial thermodynamics of three-phase (wetting) systems is discussed, including the topic of non-wettable surfaces. A clear distinction is made between equilibrium conditions, in terms of minimizing energies (internal, Gibbs or Helmholtz), and equilibrium indicators, in terms of measurable, intrinsic properties (temperature, chemical potential, pressure). It is emphasized that the equilibrium indicators are the same whatever energy is minimized, if the boundary conditions are properly chosen. Also, to avoid a common confusion, a distinction is made between systems of constant volume and systems with drops of constant volume.
NASA Astrophysics Data System (ADS)
1995-08-01
about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is to measure its redshift, calculate its velocity from the redshift, and divide this velocity by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). With the universal expansion rate described by the Hubble constant (H0 = 20 km/sec per million lightyears, as found by some studies), this velocity indicates a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively ``nearby'' objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time, and it cannot be excluded that the universal expansion rate, i.e. the Hubble constant, was different at earlier epochs. This means that unless we know how the Hubble constant changes with time, we cannot determine reliable distances to distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such a change, or the lack of one, will provide unique information about the time elapsed since the Universe began to expand (the ``Big Bang''), that is, the age of the Universe, and also about its ultimate fate. The Deceleration Parameter q0: Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0). Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf.
also the Appendix at the
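The simple distance estimate described in this record (velocity divided by the Hubble constant) can be sketched numerically. The redshift-to-velocity conversion below uses the relativistic Doppler convention, which is an assumption (the record does not state which convention was used); the other figures are those quoted in the text:

```python
# Naive Hubble-law distance estimate for SN 1995K, using the figures
# quoted in the text. Valid only for relatively nearby objects, where
# the expansion rate can be treated as constant.

C_KM_S = 300_000          # speed of light, km/s (rounded as in the text)
H0 = 20                   # Hubble constant, km/s per million light-years

def recession_velocity(z):
    """Relativistic Doppler conversion from redshift to velocity (km/s).
    The text quotes ~116,000 km/s for z = 0.478; the exact figure
    depends on the convention used."""
    f = (1 + z) ** 2
    return C_KM_S * (f - 1) / (f + 1)

def hubble_distance(v_km_s):
    """Distance in millions of light-years from v = H0 * d."""
    return v_km_s / H0

v = 116_000               # km/s, value quoted in the text for z = 0.478
print(hubble_distance(v)) # -> 5800.0 million light-years
```

As the record stresses, this linear estimate breaks down at large redshift, which is exactly why the deceleration parameter q0 matters.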
Precision Measurement of the Newtonian Gravitational Constant by Atom Interferometry
NASA Astrophysics Data System (ADS)
Rosi, G.; D'Amico, G.; Tino, G. M.; Cacciapuoti, L.; Prevedelli, M.; Sorrentino, F.
We report on the latest determination of the Newtonian gravitational constant G using our atom interferometry gravity gradiometer. After a short introduction to the G measurement issue, we provide a description of the experimental method employed, followed by a discussion of the experimental results in terms of sensitivity and systematic effects. Finally, prospects for future cold-atom-based experiments devoted to the measurement of this fundamental constant are presented.
Improving Estimated Optical Constants With MSTM and DDSCAT Modeling
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.
2015-12-01
We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra, such as those from the RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function is relatively smaller than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope; thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected, and we integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long
Fundamentals of Space Medicine
NASA Astrophysics Data System (ADS)
Clément, G.
2003-10-01
As of today, a total of more than 240 human space flights have been completed, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This book presents in a readable text the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of the sensory-motor, cardiovascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training, and in-flight health monitoring, countermeasures, and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination
Fundamentals of Space Medicine
NASA Astrophysics Data System (ADS)
Clément, Gilles
2005-03-01
A total of more than 240 human space flights have been completed to date, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This readable text presents the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of the sensory-motor, cardiovascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training, and in-flight health monitoring, countermeasures, and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination of both the
Astronomers Gain Clues About Fundamental Physics
NASA Astrophysics Data System (ADS)
2005-12-01
An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. [Image: The Robert C. Byrd Green Bank Telescope. Credit: NRAO/AUI/NSF] "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions," said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO), in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light, and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever," said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as
Fundamentals of process neuropsychology.
Brown, J W
1998-11-01
An examination of the whole-to-part transition over phases, from potential to actual in the specification of a concrete entity, as in the momentary mind/brain state, reveals patterns of change that can be considered a first approximation to the foundational laws of cognition. These laws, which amount to a theory of universal change, apply as well to the becoming or actualization of non-cognitive entities. Thus, the thesis is advanced that the mental and the physical actualize a generic process and that a theory of this process, process monism, is a metaphysics of the antecedents of occasions of fact or the laws of change that deliver objects. The commonality of the mental and the physical lies in the conceptuality of the duration of becoming and its continuity with the duration of the conscious present. The before/after relation that characterizes the phase-transitions in a non-cognitive entity is the seed of the past/present relation in consciousness. The connectedness of past and present arises as a feeling of the relation of antecedent phases immanent in a concrete particular. The theory rejects as regressive the elimination of consciousness by a reduction to the material, or the reverse, in idealism, as well as an emergence of consciousness from material states. A deep current of connectedness runs from the nature of conscious phenomena to the categories of existence at the level of the atom.
Is Planck's quantization constant unique?
NASA Astrophysics Data System (ADS)
Livadiotis, George
2016-07-01
A cornerstone of Quantum Mechanics is the existence of a non-zero least action, the Planck constant. However, the basic concepts and theoretical developments of Quantum Mechanics are independent of its specific numerical value. A different constant h_*, similar to the Planck constant h but ~12 orders of magnitude larger, characterizes plasmas. The study of more than 50 different geophysical, space, and laboratory plasmas provided the first evidence for the universality and the quantum nature of h_*, revealing that it is a new quantization constant. The recent results show the diagnostics for determining whether plasmas are characterized by the Planck constant or by the new quantization constant, compounding the challenge of reconciling both quantization constants in quantum mechanics.
Wakefields in a Dielectric Tube with Frequency Dependent Dielectric Constant
Siemann, R.H.; Chao, A.W.; /SLAC
2005-05-27
Laser driven dielectric accelerators could operate at a fundamental mode frequency where consideration must be given to the frequency dependence of the dielectric constant when calculating wakefields. Wakefields are calculated for a frequency dependence that arises from a single atomic resonance. Causality is considered, and the effects on the short range wakefields are calculated.
Fundamentals and Techniques of Nonimaging Optics
O'Gallagher, J. J.; Winston, R.
2003-07-10
This is the final report describing a long term basic research program in nonimaging optics that has led to major advances in important areas, including solar energy, fiber optics, illumination techniques, light detectors, and a great many other applications. The term ''nonimaging optics'' refers to the optics of extended sources in systems for which image forming is not important, but effective and efficient collection, concentration, transport, and distribution of light energy is. Although some of the most widely known developments of the early concepts have been in the field of solar energy, a broad variety of other uses have emerged. Most important, under the auspices of this program in fundamental research in nonimaging optics established at the University of Chicago with support from the Office of Basic Energy Sciences at the Department of Energy, the field has become very dynamic, with new ideas and concepts continuing to develop, while applications of the early concepts continue to be pursued. While the subject began as part of classical geometrical optics, it has been extended subsequently to the wave optics domain. Particularly relevant to potential new research directions are recent developments in the formalism of statistical and wave optics, which may be important in understanding energy transport on the nanoscale. Nonimaging optics permits the design of optical systems that achieve the maximum possible concentration allowed by physical conservation laws. The earliest designs were constructed by optimizing the collection of the extreme rays from a source to the desired target: the so-called ''edge-ray'' principle. Later, new concentrator types were generated by placing reflectors along the flow lines of the ''vector flux'' emanating from lambertian emitters in various geometries. A few years ago, a new development occurred with the discovery that making the design edge-ray a functional of some other system parameter permits the construction of whole
An Evaluation of Fundamental Schools.
ERIC Educational Resources Information Center
Weber, Larry J.; And Others
1984-01-01
When compared with regular schools in the same district, fundamental school students performed as well as or better than regular school students; fundamental schools rated better on learning climate, discipline, and suspensions; and there were no differences in student self-concept. (Author/BW)
Rosen, M D
2005-09-30
On the Nova Laser at LLNL, we demonstrated many of the key elements required for assuring that the next laser, the National Ignition Facility (NIF), will drive an Inertial Confinement Fusion (ICF) target to ignition. The indirect drive (sometimes referred to as ''radiation drive'') approach converts laser light to x-rays inside a gold cylinder, which then acts as an x-ray ''oven'' (called a hohlraum) to drive the fusion capsule in its center. On Nova we've demonstrated good understanding of the temperatures reached in hohlraums and of the ways to control the uniformity with which the x-rays drive the spherical fusion capsules. In these lectures we review the physics of these laser-heated hohlraums and recent attempts at optimizing their performance, and then return to the ICF problem in particular to discuss the scaling of ICF gain with scale size and to compare indirect- vs. direct-drive gains. In ICF, spherical capsules containing deuterium and tritium (DT) -- the heavy isotopes of hydrogen -- are imploded, creating conditions of high temperature and density similar to those in the cores of stars, as required for initiating the fusion reaction. When DT fuses, an alpha particle (the nucleus of a helium atom) and a neutron are created, releasing large amounts of energy. If the surrounding fuel is sufficiently dense, the alpha particles are stopped and can heat it, allowing a self-sustaining fusion burn to propagate radially outward, and a high-gain fusion micro-explosion ensues. To create those conditions, the outer surface of the capsule is heated (either directly by a laser or indirectly by laser-produced x-rays) to cause rapid ablation and outward expansion of the capsule material. A rocket-like reaction to that outward flowing heated material leads to an inward implosion of the remaining part of the capsule shell. The pressure generated on the outside of the capsule can reach nearly 100 megabar (100 million times atmospheric pressure [1 bar = 10^6 cgs
The Search for Universal Constants and the Birth of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Robotti, Nadia; Badino, Massimiliano
The origin of quantum theory and Max Planck's theoretical work are without doubt two of the most frequently quoted episodes in the history of quantum physics, for the obvious reason that they represented the first steps in its formulation. Paradoxically, however, there are relatively few specific studies of Planck, and those differ on a range of questions. In our opinion this is due to the extremely synthetic nature of some of Planck's papers, and especially the "fundamentals" of October and December 1900. Faced with such brevity, a number of historians of science and philosophers have preferred to give a comprehensive analysis of the landmarks in Planck's work, often resorting to a more or less retrospective reconstruction process rather than attempting to build an all-embracing vision of Planck's work as a whole. In this paper we have therefore attempted to retrace Planck's steps from 1899 to 1900. An analysis of this type shows that Planck's work has a profound internal unity throughout the entire period leading up to the discovery of the "quantum of energy". In our opinion, a key to interpreting the mutual relationships between the various parts and stages of the theory in an intelligible manner is provided by Planck's interest in universal constants. This interest was grounded in two factors: 1) universal constants gave the entire theory a precise physical meaning; 2) they could be used to build a universal system of units of measurement. In particular, we show that various pairs of constants are a clear feature of Planck's treatment of the blackbody problem throughout the period in question, and that for Planck the appearance of these constants in the distribution law represented a fundamental criterion, so much so that it inevitably played a key role in what has been defined as the crucial moment of the entire process - the decision to use a probabilistic definition of entropy.
QCD coupling constants and VDM
Erkol, G.; Ozpineci, A.; Zamiralov, V. S.
2012-10-23
QCD sum rules for the coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, with the use of the vector-meson-dominance (VDM) model, related to the coupling constants. The role of VDM as a criterion of the mutual validity of the sum rules is considered.
Constant-Pressure Hydraulic Pump
NASA Technical Reports Server (NTRS)
Galloway, C. W.
1982-01-01
Constant output pressure in gas-driven hydraulic pump would be assured in new design for gas-to-hydraulic power converter. With a force-multiplying ring attached to gas piston, expanding gas would apply constant force on hydraulic piston even though gas pressure drops. As a result, pressure of hydraulic fluid remains steady, and power output of the pump does not vary.
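The geometric idea in this tech brief can be illustrated with a rough calculation: for the hydraulic output pressure to stay constant while the gas pressure drops, the ring's mechanical advantage must rise in inverse proportion. All areas and pressures below are illustrative assumptions, not values from the brief:

```python
# Sketch of the force-multiplying-ring idea: as the gas pressure drops,
# the ring's mechanical advantage must rise so that the force on the
# hydraulic piston (and hence the output pressure) stays constant.
# All numbers are illustrative assumptions.

A_GAS = 0.010             # gas piston area, m^2 (assumed)
A_HYD = 0.002             # hydraulic piston area, m^2 (assumed)
P_OUT = 10.0e6            # desired constant hydraulic pressure, Pa (assumed)

def required_advantage(p_gas):
    """Mechanical advantage the ring must supply at gas pressure p_gas
    so that the hydraulic output pressure equals P_OUT."""
    f_needed = P_OUT * A_HYD          # constant force on hydraulic piston
    f_gas = p_gas * A_GAS             # force available from the gas
    return f_needed / f_gas

for p in (4.0e6, 3.0e6, 2.0e6):      # gas pressure dropping over a stroke
    print(p, round(required_advantage(p), 2))
```

Halving the gas pressure doubles the required advantage, which is the behavior the ring geometry has to deliver over the stroke.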
Precision measurement of the Newtonian gravitational constant using cold atoms.
Rosi, G; Sorrentino, F; Cacciapuoti, L; Prevedelli, M; Tino, G M
2014-06-26
About 300 experiments have so far tried to determine the value of the Newtonian gravitational constant, G, but large discrepancies in the results have made it impossible to know its value precisely. The weakness of the gravitational interaction and the impossibility of shielding the effects of gravity make it very difficult to measure G while keeping systematic effects under control. Most previous experiments were based on the torsion pendulum or torsion balance scheme, as in the experiment by Cavendish in 1798, and in all cases macroscopic masses were used. Here we report the precise determination of G using laser-cooled atoms and quantum interferometry. We obtain the value G = 6.67191(99) × 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 150 parts per million (the combined standard uncertainty is given in parentheses). Our value differs by 1.5 combined standard deviations from the current recommended value of the Committee on Data for Science and Technology. A conceptually different experiment such as ours helps to identify the systematic errors that have proved elusive in previous experiments, thus improving confidence in the value of G. There is no definitive relationship between G and the other fundamental constants, and there is no theoretical prediction for its value against which to test experimental results. Improving the precision with which we know G is not only of pure metrological interest; it is also important because of the key role that G plays in theories of gravitation, cosmology, particle physics, and astrophysics, and in geophysical models.
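A quick cross-check of the quoted numbers: the parenthesized figure implies a relative uncertainty near 150 ppm, and the stated 1.5-sigma tension can be reproduced if one supplies the CODATA 2010 recommended value (the CODATA figures below are an assumption supplied for the check, not taken from the abstract):

```python
# Consistency check on the quoted result G = 6.67191(99) x 10^-11:
# the parenthesized figure is the combined standard uncertainty in the
# last digits, so the relative uncertainty should be ~150 ppm.
G     = 6.67191e-11       # m^3 kg^-1 s^-2
sigma = 0.00099e-11       # combined standard uncertainty

rel_ppm = sigma / G * 1e6
print(round(rel_ppm))     # -> 148, i.e. ~150 parts per million

# Distance from the CODATA 2010 recommended value, in combined
# standard deviations (the abstract quotes 1.5 sigma). The CODATA
# figures are an assumption here.
G_codata = 6.67384e-11
sigma_codata = 0.00080e-11
combined = (sigma**2 + sigma_codata**2) ** 0.5
print(round(abs(G - G_codata) / combined, 1))  # -> 1.5
```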
Ablative Thermal Protection System Fundamentals
NASA Technical Reports Server (NTRS)
Beck, Robin A. S.
2013-01-01
This is the presentation for a short course on the fundamentals of ablative thermal protection systems. It covers the definition of ablation, description of ablative materials, how they work, how to analyze them and how to model them.
Clinical fundamentals for radiation oncologists.
Yang, Jack
2011-11-01
Clinical fundamentals for radiation oncologists. Hasan Murshed. Medical Physics Publishing, Madison, WI, 2011. 680 pp. (soft cover), Price: $90.00. 978-1-930524-43-9. © 2011 American Association of Physicists in Medicine.
Fundamental principles of particle detectors
Fernow, R.C.
1988-01-01
This paper goes through the fundamental physics of particle-matter interactions that is necessary for the detection of these particles with detectors. A listing of 41 concepts and detector principles is given. 14 refs., 11 figs.
Hydrogenlike highly charged ions for tests of the time independence of fundamental constants.
Schiller, S
2007-05-04
Hyperfine transitions in the electronic ground state of cold, trapped hydrogenlike highly charged ions have attractive features for use as frequency standards because the majority of systematic frequency shifts are smaller by orders of magnitude compared to many microwave and optical frequency standards. Frequency measurements of these transitions hold promise for significantly improved laboratory tests of local position invariance of the electron and quark masses.
Fundamental physics at the threshold of discovery
NASA Astrophysics Data System (ADS)
Toro, Natalia
This thesis is divided into two parts: one driven by theory, the other by experiment. The first two chapters consider two model-building challenges: the little hierarchy of supersymmetry and the slowness of confinement in Randall-Sundrum models. In the third chapter, we turn to the question of determining the nature of fundamental physics at the TeV scale from LHC data. Crucial to this venture is a characterization of models of new physics. We present On-Shell Effective Theories (OSETs), a characterization of hadron collider data in terms of masses, production cross sections, and decay modes of new particles. We argue that such a description can likely be obtained from ≲ 1 year of LHC data, and in many scenarios is an essential intermediate step in describing fundamental physics at the TeV scale.
Fundamental mechanisms of micromachine reliability
De Boer, Maarten P.; Sniegowski, Jeffry J.; Knapp, James A.; Redmond, James M.; Michalske, Terry A.; Mayer, Thomas K.
2000-01-01
Due to extreme surface-to-volume ratios, adhesion and friction are critical properties for the reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology, and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated-circuit process compatibility, low stress, high strength, and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy, and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion, as reported in Chapter 1. In Chapters 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that the adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e., it does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is independent of
Fluid property programs. Part 3. Program determines gas constants
Meehan, D.N.
1980-11-24
A calculator program written for the HP 67/97 programmable calculator uses gas-gravity data to quickly determine the pseudocritical properties of a reservoir gas, with corrections for the presence of N2, CO2, and H2S. The program is based on equations for pressure and temperature developed by Standing and Katz and by Wichert and Aziz.
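The gas-gravity correlations of Standing and the sour-gas correction of Wichert and Aziz referenced here are standard textbook forms; a minimal Python sketch of the calculation (using commonly quoted coefficients, not the actual HP 67/97 listing, so treat the numbers as illustrative) might look like:

```python
def pseudocriticals(gas_gravity, y_co2=0.0, y_h2s=0.0):
    """Estimate pseudocritical pressure (psia) and temperature (degR)
    of a reservoir gas from its gravity, with a Wichert-Aziz-style
    correction for CO2 and H2S mole fractions (coefficients assumed)."""
    # Standing-type correlations for a dry hydrocarbon gas
    tpc = 168.0 + 325.0 * gas_gravity - 12.5 * gas_gravity ** 2
    ppc = 677.0 + 15.0 * gas_gravity - 37.5 * gas_gravity ** 2
    # Wichert-Aziz sour-gas correction (degR)
    a = y_co2 + y_h2s
    b = y_h2s
    eps = 120.0 * (a ** 0.9 - a ** 1.6) + 15.0 * (b ** 0.5 - b ** 4)
    tpc_corr = tpc - eps
    ppc_corr = ppc * tpc_corr / (tpc + b * (1.0 - b) * eps)
    return ppc_corr, tpc_corr

ppc, tpc = pseudocriticals(0.7, y_co2=0.05, y_h2s=0.02)
```

For a sweet gas (no CO2 or H2S) the correction term vanishes and the Standing-type correlations are returned unchanged.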
Distributed detection with multiple sensors: Part I - fundamentals
Viswanathan, R.; Varshney, P.K.
1997-01-01
In this paper, basic results on distributed detection are reviewed. In particular, the authors consider the parallel and the serial architectures in some detail and discuss the decision rules obtained from their optimization based on the Neyman-Pearson (NP) criterion and the Bayes formulation. For conditionally independent sensor observations, the optimality of the likelihood ratio test (LRT) at the sensors is established. General comments on several important issues are made including the computational complexity of obtaining the optimal solutions, the design of detection networks with more general topologies, and applications to different areas.
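For the parallel architecture with conditionally independent observations, each sensor applies a local likelihood ratio test and the fusion center combines the resulting one-bit decisions. A hypothetical Python sketch (Gaussian shift-in-mean observations, an OR fusion rule, and an arbitrary local threshold, all chosen for illustration) shows the structure:

```python
import random

def local_lrt_decision(x, threshold=0.5):
    # For H1: x ~ N(1,1) vs H0: x ~ N(0,1), the likelihood ratio is
    # monotone in x, so the LRT reduces to a threshold test on x.
    return 1 if x > threshold else 0

def fuse_or(decisions):
    # OR fusion rule: declare H1 if any sensor declares H1
    return 1 if any(decisions) else 0

random.seed(0)
n_trials, n_sensors = 10000, 3
detections = 0
for _ in range(n_trials):
    obs = [random.gauss(1.0, 1.0) for _ in range(n_sensors)]  # H1 true
    detections += fuse_or([local_lrt_decision(x) for x in obs])
pd = detections / n_trials  # empirical system detection probability
```

With these numbers each sensor detects with probability about 0.69, and the OR rule over three sensors lifts the system detection probability above 0.95; the Neyman-Pearson design problem is choosing the local thresholds and fusion rule jointly under a false-alarm constraint.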
Fundamental ignition study for material fire safety improvement, part 2
NASA Technical Reports Server (NTRS)
Paciorek, K. L.; Kratzer, R. H.; Kaufman, J.
1971-01-01
The autoignition behavior of polymeric compositions in oxidizing media was investigated, as well as the nature and relative concentration of the volatiles produced during oxidative decomposition culminating in combustion. The materials investigated were Teflon, Fluorel KF-2140 raw gum and its compounded versions Refset and Ladicote, 45B3 intumescent paint, and Ames isocyanurate foam. The majority of the tests were conducted using a stagnation burner arrangement which provided a laminar gas flow and allowed the sample block and gas temperatures to be varied independently. The oxidizing atmospheres were essentially air and oxygen, although in the case of the Fluorel family of materials, due to partial blockage of the gas inlet system, some tests were performed unintentionally in enriched air (not oxygen). The 45B3 paint was not amenable to sampling in a dynamic system, due to its highly intumescent nature. Consequently, selected experiments were conducted using a sealed tube technique in both air and oxygen media.
Fundamental performance differences of CMOS and CCD imagers: part V
NASA Astrophysics Data System (ADS)
Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff
2013-02-01
Previous papers delivered over the last decade have documented developmental progress made on large-pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried-channel CCD fabricated on a CMOS process line, 2) new data products generated by high-performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large-pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on an ultra-large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS/CCD proton and electron radiation damage data for dose levels up to 10 Mrad, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.
Visual design for the user interface, Part 1: Design fundamentals.
Lynch, P J
1994-01-01
Digital audiovisual media and computer-based documents will be the dominant forms of professional communication in both clinical medicine and the biomedical sciences. The design of highly interactive multimedia systems will shortly become a major activity for biocommunications professionals. The problems of human-computer interface design are intimately linked with graphic design for multimedia presentations and on-line document systems. This article outlines the history of graphic interface design and the theories that have influenced the development of today's major graphic user interfaces.
Fundamental ignition study for material fire safety improvement, part 1
NASA Technical Reports Server (NTRS)
Paciorek, K. L.; Zung, L. B.
1970-01-01
The investigation of preignition, ignition, and combustion characteristics of Delrin (acetate-terminated polyformaldehyde) and Teflon (polytetrafluoroethylene) resins in air and oxygen is presented. The determination of ignition limits and their dependence on temperature and the oxidizing media, as well as the analyses of the volatiles produced, were studied. Tests were conducted in argon, an inert medium in which only purely pyrolytic reactions can take place, using the stagnation burner arrangement designed and constructed for this purpose. A theoretical treatment of the ignition and combustion phenomena was devised. In the case of Delrin, the ignition limits and ignition delays are apparently independent of the gas (air, oxygen) temperatures. The results indicate that hydrogen is the ignition triggering agent. Teflon ignition limits were established in oxygen only.
Fundamental performance differences between CMOS and CCD imagers: Part II
NASA Astrophysics Data System (ADS)
Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff
2007-09-01
A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting - Fano-noise limited soft x-ray applications. Digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency and dark current generation. Radiation damage data taken for the imagers is also reported.
Fundamentals of Mathematics, Part 1. Extended Time Frame. Experimental Edition.
ERIC Educational Resources Information Center
Goldberg, Judy, Ed.
This curriculum guide is an adaptation for students who need to proceed more slowly with new concepts and who also require additional reinforcement. The materials have been designed to assist the teacher in developing plans to be utilized in a variety of classroom settings. The guide can be used to develop both individual and group lessons. In…
Avogadro's Number and Avogadro's Constant
ERIC Educational Resources Information Center
Davies, R. O.
1973-01-01
Discusses three possible methods of thinking about the implications of the definitions of the Avogadro constant and number. Indicates that there is only one way to arrive at a simple and standard conclusion. (CC)
Oxygen Michaelis constants for tyrosinase.
Rodríguez-López, J N; Ros, J R; Varón, R; García-Cánovas, F
1993-01-01
The Michaelis constant of tyrosinase for oxygen in the presence of monophenols and o-diphenols, which generate a cyclizable o-quinone, has been studied. This constant depends on the nature of the monophenol and o-diphenol and is always lower in the presence of the former than of the latter. From the mechanism proposed for tyrosinase and from its kinetic analysis [Rodríguez-López, J. N., Tudela, J., Varón, R., García-Carmona, F. and García-Cánovas, F. (1992) J. Biol. Chem. 267, 3801-3810] a quantitative ratio has been established between the Michaelis constants for oxygen in the presence of monophenols and their o-diphenols. This ratio is used for the determination of the Michaelis constant for oxygen with monophenols when its value cannot be calculated experimentally. PMID:8352753
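The Michaelis constant enters through the usual Michaelis-Menten rate law; a one-line Python sketch (with illustrative, not measured, parameter values) shows why a lower Km for oxygen, as observed with monophenols, means the enzyme reaches half-saturation at a lower oxygen concentration:

```python
def mm_rate(vmax, km, s):
    """Michaelis-Menten rate v = Vmax*[S] / (Km + [S]).
    At [S] = Km the rate is exactly Vmax/2, so a lower Km for O2
    means half-saturation at a lower oxygen concentration."""
    return vmax * s / (km + s)

# Hypothetical comparison at the same O2 level: a lower Km gives a
# rate closer to saturation
v_mono = mm_rate(1.0, 0.02, 0.05)  # lower Km (monophenol case)
v_di = mm_rate(1.0, 0.08, 0.05)    # higher Km (o-diphenol case)
```
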
Fundamental Physics from Observations of White Dwarf Stars
NASA Astrophysics Data System (ADS)
Bainbridge, M. B.; Barstow, M. A.; Reindl, N.; Barrow, J. D.; Webb, J. K.; Hu, J.; Preval, S. P.; Holberg, J. B.; Nave, G.; Tchang-Brillet, L.; Ayres, T. R.
2017-03-01
Variations in fundamental constants provide an important test of theories of grand unification. Potentially, white dwarf spectra allow us to directly observe variation in fundamental constants at locations of high gravitational potential. We study hot, metal-polluted white dwarf stars, combining far-UV spectroscopic observations, atomic physics, atmospheric modelling and fundamental physics in the search for variation in the fine-structure constant. This registers as small but measurable shifts in the observed wavelengths of highly ionized Fe and Ni lines when compared to laboratory wavelengths. Measurements of these shifts were performed by Berengut et al. (2013) using high-resolution STIS spectra of G191-B2B, demonstrating the validity of the method. We have extended this work by: (a) using new (high-precision) laboratory wavelengths, (b) refining the analysis methodology (incorporating robust techniques from previous studies towards quasars), and (c) enlarging the sample of white dwarf spectra. A successful detection would be the first direct measurement of a gravitational field effect on a bare constant of nature. We describe our approach and present preliminary results.
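The wavelength shifts described here are conventionally parametrized by a sensitivity coefficient q for each transition, with omega = omega_0 + q*x and x = (alpha/alpha_0)^2 - 1. A short Python sketch (the wavenumber, q, and fractional variation below are purely illustrative, not values from this study) shows how such a shift would be computed:

```python
def shifted_wavenumber(omega0, q, dalpha_over_alpha):
    """Transition wavenumber (cm^-1) under a fractional change in the
    fine-structure constant, using the standard parametrization
    omega = omega0 + q*x with x = (alpha/alpha0)^2 - 1."""
    x = (1.0 + dalpha_over_alpha) ** 2 - 1.0
    return omega0 + q * x

# Hypothetical numbers: a far-UV line at 70000 cm^-1 with sensitivity
# q = 2000 cm^-1 and a fractional variation of 1e-5
omega = shifted_wavenumber(70000.0, 2000.0, 1e-5)
```

For small variations x is approximately 2*(delta alpha/alpha), so the shift scales linearly with the fractional variation, and transitions with large |q| (such as those of highly ionized Fe and Ni) are the most sensitive probes.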
Recommending a value for the Newtonian gravitational constant.
Wood, Barry M
2014-10-13
The primary objective of the CODATA Task Group on Fundamental Constants is 'to periodically provide the scientific and technological communities with a self-consistent set of internationally recommended values of the basic constants and conversion factors of physics and chemistry based on all of the relevant data available at a given point in time'. I discuss why the availability of these recommended values is important and how it simplifies and improves science. I outline the process of determining the recommended values and introduce the principles that are used to deal with discrepant results. In particular, I discuss the specific challenges posed by the present situation of gravitational constant experimental results and how these principles were applied to the most recent 2010 recommended value. Finally, I speculate about what may be expected for the next recommended value of the gravitational constant scheduled for evaluation in 2014.
The fundamental plane correlations for globular clusters
NASA Technical Reports Server (NTRS)
Djorgovski, S.
1995-01-01
In the parameter space whose axes include a radius (core, or half-light), a surface brightness (central, or average within the half-light radius), and the central projected velocity dispersion, globular clusters lie on a two-dimensional surface (a plane, if the logarithmic quantities are used). This is analogous to the 'fundamental plane' of elliptical galaxies. The implied bivariate correlations are the best now known for globular clusters. The derived scaling laws for the core properties imply that cluster cores are fully virialized, homologous systems, with a constant (M/L) ratio. The corresponding scaling laws on the half-light scale are different, but are nearly identical to those derived from the 'fundamental plane' of ellipticals. This may be due to the range of cluster concentrations, which are correlated with other parameters. A similar explanation for elliptical galaxies may be viable. These correlations provide new empirical constraints for models of globular cluster formation and evolution, and may also be usable as rough distance-indicator relations for globular clusters.
Defining the fundamentals of care.
Kitson, Alison; Conroy, Tiffany; Wengstrom, Yvonne; Profetto-McGrath, Joanne; Robertson-Malt, Suzi
2010-08-01
A three-stage process is being undertaken to investigate the fundamentals of care. Stage One (reported here) involves the use of a meta-narrative review methodology to undertake a thematic analysis, categorization and synthesis of selected contents extracted from seminal texts relating to nursing practice. Stage Two will involve a search for evidence to inform the fundamentals of care and a refinement of the review method. Stage Three will extend the reviews of the elements defined as fundamentals of care. This introductory paper covers the following aspects: the conceptual basis upon which nursing care is delivered; how the fundamentals of care have been defined in the literature and in practice; an argument that physiological aspects of care, self-care elements and aspects of the environment of care are central to the conceptual refinement of the term fundamentals of care; and that efforts to systematize such information will enhance overall care delivery through improvements in patient safety and quality initiatives in health systems.
Constant fields and constant gradients in open ionic channels.
Chen, D P; Barcilon, V; Eisenberg, R S
1992-01-01
Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between an ion and the charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID:1376159
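The "traditional constant field equations" recovered as a limit here are the Goldman-Hodgkin-Katz relations; a minimal Python sketch of the GHK current for one ion species (permeability and concentration values in any use would be illustrative) is:

```python
import math

F = 96485.0   # C/mol, Faraday constant
R = 8.314     # J/(mol K), gas constant
T = 293.0     # K, assumed temperature

def ghk_current(P, z, V, c_in, c_out):
    """Goldman-Hodgkin-Katz (constant-field) current for one ion
    species: V in volts, concentrations in consistent units."""
    if abs(V) < 1e-9:
        # Limit V -> 0: u/(1 - exp(-u)) -> 1
        return P * z * F * (c_in - c_out)
    u = z * F * V / (R * T)
    return P * z * F * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

def nernst(z, c_in, c_out):
    """Reversal potential (V) for a single permeant ion."""
    return (R * T) / (z * F) * math.log(c_out / c_in)
```

At the Nernst reversal potential the concentration term in the numerator vanishes, so the predicted current is zero, which is the check the abstract's "reversal potential vs. concentration" plots exercise.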
Effective cosmological constant induced by stochastic fluctuations of Newton's constant
NASA Astrophysics Data System (ADS)
de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi
2016-09-01
We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.
Geophysics Fatally Flawed by False Fundamental Philosophy
NASA Astrophysics Data System (ADS)
Myers, L. S.
2004-05-01
For two centuries scientists have failed to realize Laplace's nebular hypothesis (1796) of Earth's creation is false. As a consequence, geophysicists today are misinterpreting and miscalculating many fundamental aspects of the Earth and Solar System. Why scientists have deluded themselves for so long is a mystery. The greatest error is the assumption Earth was created 4.6 billion years ago as a molten protoplanet in its present size, shape and composition. This assumption ignores daily accretion of more than 200 tons/day of meteorites and dust, plus unknown volumes of solar insolation that created coal beds and other biomass that increased Earth's mass and diameter over time! Although the volume added daily is minuscule compared with Earth's total mass, logic and simple addition mandates an increase in mass, diameter and gravity. Increased diameter from accretion is proved by Grand Canyon stratigraphy that shows a one kilometer increase in depth and planetary radius at a rate exceeding three meters (10 ft) per Ma from start of the Cambrian (540 Ma) to end of the Permian (245 Ma), each layer deposited onto Earth's surface. This is unequivocal evidence of passive external growth by accretion, part of a dual growth and expansion process called "Accreation" (creation by accretion). Dynamic internal core expansion, the second stage of Accreation, did not commence until the protoplanet reached spherical shape at 500-600 km diameter. At that point, gravity-powered compressive heating initiated core melting and internal expansion. Expansion quickly surpassed the external accretion growth rate and produced surface volcanoes to relieve explosive internal tectonic pressure and transfer excess mass (magma) to the surface. Then, 200-250 Ma, expansion triggered Pangaea's breakup, first sundering Asia and Australia to form the Pacific Ocean, followed by North and South America to form the Atlantic Ocean, by the mechanism of midocean ridges, linear underwater
Effect of Fundamental Frequency on Judgments of Electrolaryngeal Speech
ERIC Educational Resources Information Center
Nagle, Kathy F.; Eadie, Tanya L.; Wright, Derek R.; Sumida, Yumi A.
2012-01-01
Purpose: To determine (a) the effect of fundamental frequency (f0) on speech intelligibility, acceptability, and perceived gender in electrolaryngeal (EL) speakers, and (b) the effect of known gender on speech acceptability in EL speakers. Method: A 2-part study was conducted. In Part 1, 34 healthy adults provided speech recordings using…
Frequency-constant Q, unity and disorder
Hargreaves, N.D.
1995-12-31
In exploration geophysics we obtain information about the earth by observing its response to different types of applied force. The response can cover the full range of possible Q values (where Q, the quality factor, is a measure of energy dissipation), from close to infinity in the case of deep crustal seismic to close to 0 in the case of many electromagnetic methods. When Q is frequency-constant, however, the various types of response have a common scaling behavior and can be described as being self-affine. The wave equation then takes on a generalised form, changing from the standard wave equation at Q = ∞ to the diffusion equation at Q = 0, via lossy, diffusive propagation at intermediate Q values. Solutions of this wave-diffusion equation at any particular Q value can be converted to an equivalent set of results for any other Q value. In particular it is possible to convert from diffusive to wave propagation by a mapping from Q < ∞ to Q = ∞. In the context of seismic sounding this is equivalent to applying inverse Q-filtering; in a more general context the mapping integrates different geophysical observations by referencing them to the common result at Q = ∞. The self-affinity of the observations for frequency-constant Q is an expression of scale invariance in the fundamental physical properties of the medium of propagation, this being the case whether the mechanism of diffusive propagation is scattering or intrinsic attenuation. Scale invariance, or fractal scaling, is a general property of disordered systems; the assumption of frequency-constant Q not only implies a unity between different geophysical observations, but also suggests that it is the disordered nature of the earth's sub-surface that is the unifying factor.
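For frequency-constant Q, each harmonic component decays as exp(-π f t / Q), and inverse Q-filtering applies the reciprocal gain to map the response back toward the lossless (Q = ∞) case. A minimal Python sketch of this amplitude-only picture (ignoring the accompanying dispersion a causal Q model also requires) is:

```python
import math

def constant_q_amplitude(f, t, q, a0=1.0):
    """Amplitude of a harmonic component of frequency f (Hz) after
    travel time t (s) through a medium with frequency-constant Q."""
    return a0 * math.exp(-math.pi * f * t / q)

def inverse_q_gain(f, t, q):
    """Gain of an amplitude-only inverse Q filter: the mapping from
    finite Q back toward the lossless Q = infinity result."""
    return math.exp(math.pi * f * t / q)

# A 30 Hz component after 2 s two-way time in a Q = 100 medium
a = constant_q_amplitude(30.0, 2.0, 100.0)
```

Applying the inverse gain restores the component exactly in this idealization; in practice the exponentially growing gain must be capped to avoid amplifying noise at high f*t/Q.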
Simplified fundamental force and mass measurements
NASA Astrophysics Data System (ADS)
Robinson, I. A.
2016-08-01
The watt balance relates force or mass to the Planck constant h, the metre and the second. It enables the forthcoming redefinition of the unit of mass within the SI by measuring the Planck constant in terms of mass, length and time with an uncertainty of better than 2 parts in 10^8. To achieve this, existing watt balances require complex and time-consuming alignment adjustments, limiting their use to a few national metrology laboratories. This paper describes a simplified construction and operating principle for a watt balance which eliminates the need for the majority of these adjustments and is readily scalable using either electromagnetic or electrostatic actuators. It is hoped that this will encourage more widespread use of the technique for a wide range of measurements of force or mass: for example, thrust measurements for space applications, which would require only measurements of electrical quantities and velocity/displacement.
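The watt balance principle equates mechanical and electrical (virtual) power: in the weighing mode the coil current balances the weight (mg = BLI), and in the moving mode the same coil generates a voltage U = BLv, so the hard-to-measure geometric factor BL cancels. A minimal Python sketch with purely illustrative numbers:

```python
def watt_balance_mass(U, I, g, v):
    """Mass inferred from a watt-balance measurement: weighing mode
    gives mg = BL*I, moving mode gives U = BL*v, so BL cancels and
    m = U*I / (g*v). All quantities in SI units."""
    return U * I / (g * v)

# Hypothetical values: 1 V induced at 2 mm/s coil velocity, 10 mA
# balancing current, local gravitational acceleration 9.81 m/s^2
m = watt_balance_mass(1.0, 0.010, 9.81, 0.002)  # kg
```

Because U and I are measured against the Josephson and quantum Hall effects, the same relation links the mass to the Planck constant, which is what the SI redefinition exploits.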
Astrophysical probes of fundamental physics
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.
2009-10-01
I review the motivation for varying fundamental couplings and discuss how these measurements can be used to constrain fundamental physics scenarios that would otherwise be inaccessible to experiment. I highlight the current controversial evidence for varying couplings and present some new results. Finally I focus on the relation between varying couplings and dark energy, and explain how varying coupling measurements might be used to probe the nature of dark energy, with some advantages over standard methods. In particular I discuss what can be achieved with future spectrographs such as ESPRESSO and CODEX.
Critique of Coleman's Theory of the Vanishing Cosmological Constant
NASA Astrophysics Data System (ADS)
Susskind, Leonard
In these lectures I would like to review some of the criticisms of the Coleman wormhole theory of the vanishing cosmological constant. In particular, I would like to focus on the most fundamental assumption, that the path integral over topologies defines a probability for the cosmological constant which has the form EXP(A), with A being the Baum-Hawking-Coleman saddle point. Coleman argues that the euclidean path integral over all geometries may be dominated by special configurations which consist of large smooth "spheres" connected by any number of narrow wormholes. Formally summing up such configurations gives a very divergent expression for the path integral…
Altinay, Gokhan; Macdonald, R Glen
2012-03-08
The recombination rate constants for the reactions NH2(X2B1) + NH2(X2B1) + M → N2H4 + M and NH2(X2B1) + H + M → NH3 + M, where M was CH4, C2H6, CO2, CF4, or SF6, were measured in the same experiment over pressure ranges of 1-20 and 7-20 Torr, respectively, at 296 ± 2 K. The NH2 radical was produced by the 193 nm laser photolysis of NH3. Both NH2 and NH3 were monitored simultaneously following the photolysis laser pulse. High-resolution time-resolved absorption spectroscopy was used to monitor the temporal dependence of both species: NH2 on the (1)2(21) ← (1)3(31) rotational transition of the (0,7,0)A2A1 ← (0,0,0)X2B1 electronic transition near 675 nm and NH3 in the IR on either of the inversion doublets of the qQ3(3) rotational transition of the ν1 fundamental near 2999 nm. The NH2 self-recombination clearly exhibited falloff behavior for the third-body collision partners used in this work. The pressure dependences of the NH2 self-recombination rate constants were fit using Troe's parametrization scheme, k_inf, k_0, and F_cent, with k_inf = 7.9 × 10^-11 cm^3 molecule^-1 s^-1, the theoretical value calculated by Klippenstein et al. (J. Phys. Chem. A 113, 10241). The individual Troe parameters were CH4, k_0(CH4) = 9.4 × 10^-29 and F_cent(CH4) = 0.61; C2H6, k_0(C2H6) = 1.5 × 10^-28 and F_cent(C2H6) = 0.80; CO2, k_0(CO2) = 8.6 × 10^-29 and F_cent(CO2) = 0.66; CF4, k_0(CF4) = 1.1 × 10^-28 and F_cent(CF4) = 0.55; and SF6, k_0(SF6) = 1.9 × 10^-28 and F_cent(SF6) = 0.52, where the units of k_0 are cm^6 molecule^-2 s^-1. The NH2 + H + M reaction rate constant was assumed to be in the three-body pressure regime, and the association rate constants were CH4, (6.0 ± 1.8) × 10^-30; C2H6, (1.1 ± 0.41) × 10^-29; CO2, (6.5 ± 1.8) × 10^-30; CF4, (8.3 ± 1.7) × 10^-30; and SF6, (1.4 ± 0.30) × 10^-29, with units cm^6 molecule^-2 s^-1; the systematic and experimental errors are given at the 2σ confidence level.
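The falloff behavior described here follows Troe's parametrization: the reduced pressure Pr = k0[M]/k_inf interpolates between the low- and high-pressure limits, with a broadening factor built from F_cent. A Python sketch using the simplified symmetric broadening form (the exact broadening expression used in the fits may differ) is:

```python
import math

def troe_rate(k0, kinf, fcent, m_conc):
    """Pressure-dependent rate constant in Troe's parametrization
    (simplified symmetric broadening form). k0 in cm^6 molecule^-2 s^-1,
    kinf in cm^3 molecule^-1 s^-1, m_conc in molecule cm^-3."""
    pr = k0 * m_conc / kinf                 # reduced pressure
    log_pr = math.log10(pr)
    n = 0.75 - 1.27 * math.log10(fcent)     # broadening width
    log_f = math.log10(fcent) / (1.0 + (log_pr / n) ** 2)
    return kinf * (pr / (1.0 + pr)) * 10.0 ** log_f

# NH2 self-recombination in CH4 bath gas using the parameters quoted
# above, at roughly 10 Torr and 296 K ([M] ~ 3.3e17 molecule cm^-3)
k = troe_rate(9.4e-29, 7.9e-11, 0.61, 3.3e17)
```

At this pressure the reaction sits well inside the falloff region: the computed k is a few times smaller than k_inf, consistent with the falloff behavior reported for all five bath gases.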
Vibrational force constants for acetaldehyde
NASA Astrophysics Data System (ADS)
Nikolova, B.
1990-05-01
The vibrational force field of ethanal (acetaldehyde), CH3CHO, is refined by using procedures with differential increments for the force constants (Commun. Dep. Chem., Bulg. Acad. Sci., 21/3 (1988) 433). The characteristic general valence force constants of the high-dimensional symmetry classes of ethanal, A' of tenth and A″ of fifth order, are determined from the experimental assignment of bands. The low barrier to hindered internal rotation about the single carbon-carbon bond is quantitatively estimated on the grounds of normal vibrational analysis.
Cosmologies with variable gravitational constant
Narlikar, J. V.
1983-03-01
In 1937 Dirac presented an argument, based on the so-called large dimensionless numbers, which led him to the conclusion that the Newtonian gravitational constant G changes with epoch. Towards the end of the last century Ernst Mach had given plausible arguments to link the property of inertia of matter to the large-scale structure of the universe. Mach's principle also leads to cosmological models with a variable gravitational constant. Three cosmologies which predict a variable G are discussed in this paper from both theoretical and observational points of view.
On flows having constant vorticity
NASA Astrophysics Data System (ADS)
Roberts, Paul H.; Wu, Cheng-Chin
2011-10-01
Constant vorticity flows of a uniform fluid in a rigid ellipsoidal container rotating at a variable rate are considered. These include librationally driven and precessionally driven flows. The well-known Poincaré solution for precessionally driven flow in a spheroid is generalized to an ellipsoid with unequal principal axes. The dynamic stability of these flows is investigated, and of other flows in which the angular velocity of the container is constant in time. Solutions for the Chandler wobble are discussed. The role of an invariant, called here the Helmholtzian, is examined.
Cosmological constant from quantum spacetime
NASA Astrophysics Data System (ADS)
Majid, Shahn; Tao, Wen-Qing
2015-06-01
We show that a hypothesis that spacetime is quantum with coordinate algebra [x_i, t] = λ_P x_i, and spherical symmetry under rotations of the x_i, essentially requires in the classical limit that the spacetime metric is the Bertotti-Robinson metric, i.e., a solution of Einstein's equations with a cosmological constant and a non-null electromagnetic field. Our arguments do not give the value of the cosmological constant or the Maxwell field strength, but they cannot both be zero. We also describe the quantum geometry and the full moduli space of metrics that can emerge as classical limits from this algebra.
Museum Techniques in Fundamental Education.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Paris (France).
Some museum techniques and methods can be used in fundamental educational programs without elaborate buildings or equipment; exhibitions should be based on valid presumptions and should take into account the "common sense" beliefs of people for whom the exhibit is designed. They can be used profitably in the economic development of local…
Fundamentals of the Slide Library.
ERIC Educational Resources Information Center
Boerner, Susan Zee
This paper is an introduction to the fundamentals of the art (including architecture) slide library, with some emphasis on basic procedures of the science slide library. Information in this paper is particularly relevant to the college, university, and museum slide library. Topics addressed include: (1) history of the slide library; (2) duties of…
Light as a Fundamental Particle
ERIC Educational Resources Information Center
Weinberg, Steven
1975-01-01
Presents two arguments concerning the role of the photon. One states that the photon is just another particle distinguished by a particular value of charge, spin, mass, lifetime, and interaction properties. The second states that the photon plays a fundamental role with a deep relation to ultimate formulas of physics. (GS)
Fundamentals of Welding. Teacher Edition.
ERIC Educational Resources Information Center
Fortney, Clarence; And Others
These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…
Status of Fundamental Physics Program
NASA Technical Reports Server (NTRS)
Lee, Mark C.
2003-01-01
Update of the Fundamental Physics Program. JEM/EF slip: 2-year delay. Reduced budget. Community support and advocacy led by Professor Nick Bigelow. Reprogramming led by Fred O'Callaghan/JPL team. LTMPF M1 mission (DYNAMX and SUMO). PARCS. Carrier re-baselined on JEM/EF.
Environmental Law: Fundamentals for Schools.
ERIC Educational Resources Information Center
Day, David R.
This booklet outlines the environmental problems most likely to arise in schools. An overview provides a fundamental analysis of environmental issues rather than comprehensive analysis and advice. The text examines the concerns that surround superfund cleanups, focusing on the legal framework, and furnishes some practical pointers, such as what to…
Brake Fundamentals. Automotive Articulation Project.
ERIC Educational Resources Information Center
Cunningham, Larry; And Others
Designed for secondary and postsecondary auto mechanics programs, this curriculum guide contains learning exercises in seven areas: (1) brake fundamentals; (2) brake lines, fluid, and hoses; (3) drum brakes; (4) disc brake system and service; (5) master cylinder, power boost, and control valves; (6) parking brakes; and (7) troubleshooting. Each…
Fundamentals of Environmental Education. Report.
ERIC Educational Resources Information Center
1976
An outline of fundamental definitions, relationships, and human responsibilities related to environment provides a basis from which a variety of materials, programs, and activities can be developed. The outline can be used in elementary, secondary, higher education, or adult education programs. The framework is based on principles of the science…
Programs for Fundamentals of Chemistry.
ERIC Educational Resources Information Center
Gallardo, Julio; Delgado, Steven
This document provides computer programs, written in BASIC PLUS, for presenting fundamental or remedial college chemistry students with chemical problems in a computer assisted instructional program. Programs include instructions, a sample run, and 14 separate practice sessions covering: mathematical operations, using decimals, solving…
Chronometric cosmology and fundamental fermions
Segal, I. E.
1982-01-01
It is proposed that the fundamental fermions of nature are modeled by fields on the chronometric cosmos that are not precisely spinors but become such only in the nonchronometric limit. The imbedding of the scale-extended Poincaré group in the linearizer of the Minkowskian conformal group defines such fields, by induction. PMID:16593266
Fundamentals of Microelectronics Processing (VLSI).
ERIC Educational Resources Information Center
Takoudis, Christos G.
1987-01-01
Describes a 15-week course in the fundamentals of microelectronics processing in chemical engineering, which emphasizes the use of very large scale integration (VLSI). Provides a listing of the topics covered in the course outline, along with a sample of some of the final projects done by students. (TW)
Lighting Fundamentals. Monograph Number 13.
ERIC Educational Resources Information Center
Locatis, Craig N.; Gerlach, Vernon S.
Using an accompanying, specified film that consists of 10-second pictures separated by blanks, the learner can, with the 203-step, self-correcting questions and answers provided in this program, come to understand the fundamentals of lighting in photography. The learner should, by the end of the program, be able to describe and identify the…
FUNdamental Movement in Early Childhood.
ERIC Educational Resources Information Center
Campbell, Linley
2001-01-01
Noting that the development of fundamental movement skills is basic to children's motor development, this booklet provides a guide for early childhood educators in planning movement experiences for children between 4 and 8 years. The booklet introduces a wide variety of appropriate practices to promote movement skill acquisition and increased…
The Fundamental Manifold of Spheroids
NASA Astrophysics Data System (ADS)
Zaritsky, Dennis; Gonzalez, Anthony H.; Zabludoff, Ann I.
2006-02-01
We present a unifying empirical description of the structural and kinematic properties of all spheroids embedded in dark matter halos. We find that the intracluster stellar spheroidal components of galaxy clusters, which we call cluster spheroids (CSphs) and which are typically 100 times the size of normal elliptical galaxies, lie on a ``fundamental plane'' as tight as that defined by elliptical galaxies (rms in effective radius of ~0.07) but having a different slope. The slope, as measured by the coefficient of the logσ term, declines significantly and systematically between the fundamental planes of ellipticals, brightest cluster galaxies (BCGs), and CSphs. We attribute this decline primarily to a continuous change in Me/Le, the mass-to-light ratio within the effective radius re, with spheroid scale. The magnitude of the slope change requires that it arise principally from differences in the relative distributions of luminous and dark matter, rather than from stellar population differences such as in age and metallicity. By expressing the Me/Le term as a function of σ in the simple derivation of the fundamental plane and requiring the behavior of that term to mimic the observed nonlinear relationship between logMe/Le and logσ, we simultaneously fit a two-dimensional manifold to the measured properties of dwarf elliptical and elliptical galaxies, BCGs, and CSphs. The combined data have an rms scatter in logre of 0.114 (0.099 for the combination of ellipticals, BCGs, and CSphs), which is modestly larger than each fundamental plane has alone, but which includes the scatter introduced by merging different studies done in different filters by different investigators. This ``fundamental manifold'' fits the structural and kinematic properties of spheroids that span a factor of 100 in σ and 1000 in re. While our mathematical form is neither unique nor derived from physical principles, the tightness of the fit leaves little room for improvement by other unification
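The plane-fitting step described above can be sketched numerically. This is a minimal illustration only, using the standard two-observable form of the fundamental plane with invented coefficients, ranges, and noise level rather than the paper's measured samples: fit log re = a·log σ + b·log Ie + c by least squares to synthetic data and report the rms scatter in log re, the statistic quoted in the abstract.

```python
import random, math

# Illustrative sketch: fit a "fundamental plane" to synthetic spheroid
# data and measure the rms scatter in log_re. All numbers here are
# invented for the demonstration, not taken from the paper.
random.seed(0)
n = 400
a_true, b_true, c_true, noise = 1.24, -0.82, 0.2, 0.07
data = []
for _ in range(n):
    ls = random.uniform(1.8, 2.6)          # log velocity dispersion
    li = random.uniform(2.0, 4.0)          # log surface brightness
    lr = a_true*ls + b_true*li + c_true + random.gauss(0.0, noise)
    data.append((ls, li, lr))

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    m = [row[:] + [rhs] for row, rhs in zip(m, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f*y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Normal equations for least squares on the basis (log_sigma, log_Ie, 1)
basis = [(ls, li, 1.0) for ls, li, _ in data]
ata = [[sum(b[i]*b[j] for b in basis) for j in range(3)] for i in range(3)]
aty = [sum(b[i]*lr for b, (_, _, lr) in zip(basis, data)) for i in range(3)]
a, b, c = solve3(ata, aty)
rms = math.sqrt(sum((lr - (a*ls + b*li + c))**2 for ls, li, lr in data) / n)
print(a, b, c, rms)   # coefficients near the inputs, rms near 0.07
```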
The quadrupole coupling constant of HNC. [hydrogen isocyanide hyperfine structure measurements
NASA Technical Reports Server (NTRS)
Snyder, L. E.; Hollis, J. M.; Buhl, D.
1977-01-01
The letter reports resolved measurements of the quadrupole hyperfine structure of HNC (hydrogen isocyanide). These measurements were made in the direction of the cool interstellar dust cloud L134, and were used to make an experimental determination of a fundamental spectroscopic constant of HNC, its quadrupole coupling constant.
van Gemert, M J; Lucassen, G W; Welch, A J
1996-08-01
The thermal response of a semi-infinite medium in air, irradiated by laser light in a cylindrical geometry, cannot be accurately approximated by single radial and axial time constants for heat conduction. This report presents an analytical treatment of heat conduction in which the thermal response is expressed in terms of distributions over radial and axial time constants. The source term for heat production is written as the product of a Gaussian-shaped radial term and an exponentially shaped axial term. The two terms are expanded in integrals over eigenfunctions of the radial and axial parts of the Laplace heat conduction operator. The result is a double integral over the coupled distributions of the two time constants, giving the temperature rise as a function of time and of axial and radial position. The distribution of axial time constants is a homogeneous, slowly decreasing function of spatial frequency (v), indicating that a single axial time constant cannot reasonably characterize axial heat conduction. The distribution of radial time constants is centred on a distinguished maximum in spatial frequency (lambda) close to the single radial time constant value used previously, which suggests that a single radial time constant may be a useful concept for characterizing radial heat conduction. Special cases have been evaluated analytically, such as short and long irradiation times, axial or radial heat conduction (shallow or deep penetrating laser beams) and, especially, thermal relaxation (cooling) of the tissue. For shallow penetrating laser beams the asymptotic cooling rate is confirmed to be proportional to [t^0.5 - (t - tL)^0.5], which approaches 1/t^0.5 for t >> tL, where t is the time and tL is the laser pulse duration. For deep penetrating beams it is proportional to 1/(t - tL). For intermediate penetration, i.e. penetration depths about equal to spot size diameters, it is proportional to 1/(t - tL)^1.5. The double integral has been evaluated
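The quoted shallow-beam asymptote is easy to check numerically. A small sketch in arbitrary units (t and tL below are illustrative placeholders, not tied to any tissue or laser in the report):

```python
import math

def shallow_cooling(t, t_l):
    """Shallow-beam cooling form: proportional to sqrt(t) - sqrt(t - tL)."""
    return math.sqrt(t) - math.sqrt(t - t_l)

t_l = 1.0
# For t >> tL the expression tends to t_L / (2*sqrt(t)), i.e. ~ 1/sqrt(t):
t = 100.0
exact = shallow_cooling(t, t_l)
approx = t_l / (2.0 * math.sqrt(t))
print(exact, approx)   # the two agree to within a fraction of a percent
```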
Variations of the solar constant
Sofia, S.
1981-12-01
The variations in data received from rocket-borne and balloon-borne instruments are discussed. Indirect techniques to measure and monitor the solar constant are presented. Emphasis is placed on the correlation of data from the Solar Maximum Mission and the Nimbus 7 satellites. Abstracts of individual items from the workshop were prepared separately for the data base.
ERIC Educational Resources Information Center
Ford, T. A.
1979-01-01
In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…
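The extraction of a force constant from a fundamental vibrational wavenumber follows the harmonic-oscillator relation ν̃ = (1/2πc)·√(k/μ), i.e. k = μ(2πcν̃)². A hedged sketch using textbook values for H³⁵Cl, not data from this particular laboratory project:

```python
import math

C_CM = 2.998e10    # speed of light, cm/s
AMU = 1.6605e-27   # atomic mass unit, kg

def force_constant(nu_tilde_cm, mu_kg):
    """Harmonic force constant (N/m) from a fundamental wavenumber (cm^-1)."""
    omega = 2.0 * math.pi * C_CM * nu_tilde_cm   # angular frequency, rad/s
    return mu_kg * omega ** 2

# H35Cl as an illustrative diatomic: nu_tilde ~ 2886 cm^-1
mu_hcl = (1.008 * 34.969) / (1.008 + 34.969) * AMU   # reduced mass, kg
k_hcl = force_constant(2886.0, mu_hcl)
print(f"k(HCl) ~ {k_hcl:.0f} N/m")   # ~ 481 N/m
```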
The 1% concordance Hubble constant
Bennett, C. L.; Larson, D.; Weiland, J. L.; Hinshaw, G.
2014-10-20
The determination of the Hubble constant has been a central goal in observational astrophysics for nearly a hundred years. Extraordinary progress has occurred in recent years on two fronts: the cosmic distance ladder measurements at low redshift and cosmic microwave background (CMB) measurements at high redshift. The CMB is used to predict the current expansion rate through a best-fit cosmological model. Complementary progress has been made with baryon acoustic oscillation (BAO) measurements at relatively low redshifts. While BAO data do not independently determine a Hubble constant, they are important for constraints on possible solutions and checks on cosmic consistency. A precise determination of the Hubble constant is of great value, but it is more important to compare the high and low redshift measurements to test our cosmological model. Significant tension would suggest either uncertainties not accounted for in the experimental estimates or the discovery of new physics beyond the standard model of cosmology. In this paper we examine in detail the tension between the CMB, BAO, and cosmic distance ladder data sets. We find that these measurements are consistent within reasonable statistical expectations and we combine them to determine a best-fit Hubble constant of 69.6 ± 0.7 km s⁻¹ Mpc⁻¹. This value is based upon WMAP9+SPT+ACT+6dFGS+BOSS/DR11+H₀/Riess; we explore alternate data combinations in the text. The combined data constrain the Hubble constant to 1%, with no compelling evidence for new physics.
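Combining independent estimates, as the paper describes, reduces at its simplest to an inverse-variance weighted average. A sketch with placeholder numbers, not the paper's actual likelihood analysis:

```python
import math

def combine(values_sigmas):
    """Inverse-variance weighted mean of (value, sigma) pairs."""
    w = [1.0 / s**2 for _, s in values_sigmas]
    mean = sum(v * wi for (v, _), wi in zip(values_sigmas, w)) / sum(w)
    sigma = 1.0 / math.sqrt(sum(w))
    return mean, sigma

# Two hypothetical H0 estimates (km/s/Mpc), chosen only to illustrate
# how a tighter measurement dominates the combination:
h0, err = combine([(70.0, 2.2), (69.3, 0.8)])
print(f"H0 = {h0:.1f} +/- {err:.1f} km/s/Mpc")
```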
Fundamentals of Managing Reference Collections
ERIC Educational Resources Information Center
Singer, Carol A.
2012-01-01
Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…
Constant-bandwidth constant-temperature hot-wire anemometer.
Ligeza, P
2007-07-01
A constant-temperature anemometer (CTA) enables the measurement of fast-changing velocity fluctuations. In the classical CTA design, the transmission band is a function of flow velocity. This is a minor drawback when the mean flow velocity does not change significantly, but it can lead to dynamic errors when the flow velocity varies over a considerable range. A modification is outlined whereby an adaptive controller is incorporated in the CTA system so that the anemometer's transmission band remains constant as a function of flow velocity. For that purpose, a second feedback loop is provided, and the output signal from the anemometer regulates the controller's parameters so that the transmission bandwidth remains constant. The mathematical model of the CTA that has been developed, together with model testing data, allows a thorough evaluation of the proposed solution. The modified anemometer can be used in measurements of high-frequency variable flows over a wide range of velocities. The proposed modification allows the minimization of dynamic measurement errors.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... Regulation Supplement: Release of Fundamental Research Information (DFARS Case 2012-D054) AGENCY: Defense... relating to the release of fundamental research information. This rule was previously published as part of... fundamental research projects and not safeguarding. This rule was initiated to implement guidance provided...
DOE Fundamentals Handbook: Classical Physics
Not Available
1992-06-01
The Classical Physics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of physical forces and their properties. The handbook includes information on the units used to measure physical properties; vectors, and how they are used to show the net effect of various forces; Newton's Laws of motion, and how to use these laws in force and motion applications; and the concepts of energy, work, and power, and how to measure and calculate the energy involved in various applications. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility systems and equipment.
Astrophysical Probes of Fundamental Physics
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.
I review the theoretical motivation for varying fundamental couplings and discuss how these measurements can be used to constrain a number of fundamental physics scenarios that would otherwise be inaccessible to experiment. As a case study I focus on the relation between varying couplings and dark energy, and explain how varying-coupling measurements can be used to probe the nature of dark energy, with important advantages over the standard methods. Assuming that the current observational evidence for varying α and μ is correct, a several-sigma detection of dynamical dark energy is feasible within a few years, using currently operational ground-based facilities. With forthcoming instruments like CODEX, a high-accuracy reconstruction of the equation of state may be possible all the way up to redshift z ∼ 4.
Fundamental neutron physics at LANSCE
Greene, G.
1995-10-01
Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously by the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.
Omura, Yoshiaki; Lu, Dominic P; Jones, Marilyn; O'Young, Brian; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu
2011-01-01
The expression of the longevity gene, Sirtuin 1, was non-invasively measured using the Electro-Magnetic Field (EMF) resonance phenomenon between a known amount of polyclonal antibody of the C-terminal of Sirtuin 1 and the Sirtuin 1 molecule inside the body. Our measurement of over 100 human adult males and females, ranging from 20 to 122 years old, indicated that the majority of subjects had Sirtuin 1 levels of 5-10 pg BDORT units in most parts of the body. When Sirtuin 1 was less than 1 pg, the majority of the people had various degrees of tumors or other serious diseases. When Sirtuin 1 levels were less than 0.25 pg BDORT units, a high incidence of AIDS was also detected. Very few people had Sirtuin 1 levels of over 25 pg BDORT units in most parts of the body. We selected 7 internationally recognized supercentenarians who lived between 110 and 122 years old. To our surprise, most of their body Sirtuin 1 levels were between 2.5 and 10 pg BDORT units. However, by evaluating different parts of the brain, we found that both sides of the Hippocampus had a much higher amount of Sirtuin 1, between 25 and 100 pg BDORT units. With most subjects, Sirtuin 1 was found to be higher in the Hippocampus than in the rest of the body and remains relatively constant regardless of age. We found that Aspartame, plastic eye contact lenses, and asbestos in dental apparatuses, which reduce normal cell telomeres, also significantly reduce Sirtuin 1. In addition, we found that increasing normal cell telomeres by electrical or mechanical stimulation of True ST-36 increases the expression of the Sirtuin 1 gene in people in whom expression is low. This measurement of Sirtuin 1 in the Hippocampus has become a reliable indicator for detecting the potential longevity of an individual.
Microplasmas: from applications to fundamentals
NASA Astrophysics Data System (ADS)
Nguon, Olivier; Huang, Sisi; Gauthier, Mario; Karanassios, Vassili
2014-05-01
Microplasmas are receiving increasing attention in the scientific literature and in recent conferences. Yet, few analytical applications of microplasmas for elemental analysis using liquid samples have been described in the literature. To address this, we describe two applications: one involves the determination of Zn in microsamples of the metallo-enzyme Super Oxide Dismutase. The other involves determination of Pd-concentration in microsamples of Pd nanocatalysts. These applications demonstrate the potential of microplasmas and point to the need for future fundamental studies.
CP, T and fundamental interactions
NASA Astrophysics Data System (ADS)
Frère, Jean-Marie
2012-03-01
We discuss the importance of the CP (simultaneous particle-antiparticle and left-right permutation) and T (time reversal) symmetries in the context of fundamental interactions. We show that they may provide clues for going beyond the 4-D gauge interactions. We stress that T violation is not associated with a degradation (as in entropy), but is simply characterised by different trajectories.
The Not so Constant Gravitational "Constant" G as a Function of Quantum Vacuum
NASA Astrophysics Data System (ADS)
Maxmilian Caligiuri, Luigi
Gravitation is still the least understood among the fundamental forces of Nature. The ultimate physical origin of its ruling constant G could give key insights into this understanding. According to Einstein's Theory of General Relativity, a massive body determines a gravitational potential that alters the speed of light, the clock rate, and the particle size as a function of the distance from its own center. On the other hand, it has been shown that the presence of mass determines a modification of the Zero-Point Field (ZPF) energy density within its volume and in the space surrounding it. All these considerations strongly suggest that the constant G could also be expressed as a function of the quantum vacuum energy density, somehow depending on the distance from the mass whose presence modifies the ZPF energy structure. In this paper, starting from a constitutive medium-based picture of space, a model of the gravitational constant G is formulated as a function of Planck's time and the quantum vacuum energy density, in turn depending on the radial distance from the center of the mass originating the gravitational field, supposed spherically symmetric. According to this model, in which gravity arises from the unbalanced physical vacuum pressure, the gravitational "constant" G is not truly unchanging but varies slightly as a function of the distance from the mass that is the source of the gravitational potential itself. An approximate analytical form of this dependence is discussed. The proposed model, apart from potentially having deep theoretical consequences for the commonly accepted picture of physical reality (from cosmology to matter stability), could also give a theoretical basis for hitherto unthinkable applications related, for example, to the field of gravity control and space propulsion.
The spectroscopic constants and anharmonic force field of AgSH: An ab initio study.
Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang
2016-07-05
The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91, and MP2 methods, employing two basis sets, TZP and QZP. The calculated geometries, ground-state rotational constants, harmonic vibrational wavenumbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP results are in good agreement with experimental observation and that MP2/TZP is an advisable choice for studying the anharmonic force field of AgSH.
Low uncertainty Boltzmann constant determinations and the kelvin redefinition.
Fischer, J
2016-03-28
At its 25th meeting, the General Conference on Weights and Measures (CGPM) approved Resolution 1 'On the future revision of the International System of Units, the SI', which sets the path towards redefinition of four base units at the next CGPM in 2018. This constitutes a decisive advance towards the formal adoption of the new SI and its implementation. Kilogram, ampere, kelvin and mole will be defined in terms of fixed numerical values of the Planck constant, elementary charge, Boltzmann constant and Avogadro constant, respectively. The effect of the new definition of the kelvin referenced to the value of the Boltzmann constant k is that the kelvin is equal to the change of thermodynamic temperature T that results in a change of thermal energy kT by 1.380 65 × 10⁻²³ J. A value of the Boltzmann constant suitable for defining the kelvin is determined by fundamentally different primary thermometers such as acoustic gas thermometers, dielectric constant gas thermometers, noise thermometers and the Doppler broadening technique. Progress to date of the measurements and further perspectives are reported. Necessary conditions to be met before proceeding with changing the definition are given. The consequences of the new definition of the kelvin on temperature measurement are briefly outlined.
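The operational content of the new definition fits in a few lines: a temperature change ΔT corresponds to a thermal-energy change k·ΔT. A sketch using the rounded value quoted above (the exact value later fixed in the 2019 SI is 1.380 649 × 10⁻²³ J/K):

```python
K_BOLTZMANN = 1.38065e-23  # J/K, rounded value quoted in the text

def thermal_energy_change(d_t_kelvin):
    """Change in thermal energy kT for a temperature change dT."""
    return K_BOLTZMANN * d_t_kelvin

# One kelvin is the temperature change producing this energy change:
print(thermal_energy_change(1.0))  # 1.38065e-23 J
```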
How does Planck’s constant influence the macroscopic world?
NASA Astrophysics Data System (ADS)
Yang, Pao-Keng
2016-09-01
In physics, Planck’s constant is a fundamental physical constant accounting for the energy-quantization phenomenon in the microscopic world. The value of Planck’s constant also determines in which length scale the quantum phenomenon will become conspicuous. Some students think that if Planck’s constant were to have a larger value than it has now, the quantum effect would only become observable in a world with a larger size, whereas the macroscopic world might remain almost unchanged. After reasoning from some basic physical principles and theories, we found that doubling Planck’s constant might result in a radical change on the geometric sizes and apparent colors of macroscopic objects, the solar spectrum and luminosity, the climate and gravity on Earth, as well as energy conversion between light and materials such as the efficiency of solar cells and light-emitting diodes. From the discussions in this paper, students can appreciate how Planck’s constant affects various aspects of the world in which we are living now.
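One concrete instance of the size argument is the Bohr radius, a₀ = ε₀h²/(πmₑe²), which scales as h²: doubling Planck's constant would quadruple atomic sizes. A sketch with CODATA-rounded constants (this particular worked number is my illustration, not taken from the article):

```python
import math

EPS0 = 8.8541878e-12   # vacuum permittivity, F/m
M_E = 9.1093837e-31    # electron mass, kg
E = 1.60217663e-19     # elementary charge, C
H = 6.62607015e-34     # Planck constant, J s

def bohr_radius(h):
    """Bohr radius a0 = eps0 * h^2 / (pi * m_e * e^2)."""
    return EPS0 * h**2 / (math.pi * M_E * E**2)

a0 = bohr_radius(H)
print(a0)                      # ~5.29e-11 m
print(bohr_radius(2*H) / a0)   # 4.0: atoms four times larger
```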
Dielectric-constant gas thermometry
NASA Astrophysics Data System (ADS)
Gaiser, Christof; Zandt, Thorsten; Fellmuth, Bernd
2015-10-01
The principles, techniques and results from dielectric-constant gas thermometry (DCGT) are reviewed. Primary DCGT with helium has been used for measuring T-T90 below the triple point of water (TPW), where T is the thermodynamic temperature and T90 is the temperature on the international temperature scale of 1990 (ITS-90), and, in an inverse regime with T as input quantity, for determining the Boltzmann constant at the TPW. Furthermore, DCGT allows the determination of several important material properties including the polarizability of neon and argon as well as the virial coefficients of helium, neon, and argon. With interpolating DCGT (IDCGT), the ITS-90 has been approximated in the temperature range from 4 K to 25 K. An overview and uncertainty budget for each of these applications of DCGT is provided, accompanied by corroborating evidence from the literature or, for IDCGT, a CIPM key comparison.
Three pion nucleon coupling constants
NASA Astrophysics Data System (ADS)
Ruiz Arriola, E.; Amaro, J. E.; Navarro Pérez, R.
2016-08-01
There exist four pion-nucleon coupling constants, fπ⁰pp, -fπ⁰nn, fπ⁺pn/√2 and -fπ⁻np/√2, which coincide when up and down quark masses are identical and the electron charge is zero. While there is no reason why the pion-nucleon coupling constants should be identical in the real world, one expects that the small differences might be pinned down from a sufficiently large number of independent and mutually consistent data. Our discussion provides a rationale for our recent determination f_p² = 0.0759(4), f_0² = 0.079(1), f_c² = 0.0763(6), based on a partial-wave analysis of the 3σ self-consistent nucleon-nucleon Granada-2013 database comprising 6713 published data in the period 1950-2013.
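As a back-of-envelope illustration of how such differences are assessed (my own arithmetic on the quoted central values, not a result from the paper), one can express the spread between the neutral and proton couplings in units of their combined uncertainty:

```python
import math

# Quoted determinations: f_p^2 = 0.0759(4), f_0^2 = 0.079(1)
f_p2, s_p = 0.0759, 0.0004
f_02, s_0 = 0.079, 0.001

# Difference in units of the combined (quadrature) uncertainty:
z = abs(f_02 - f_p2) / math.hypot(s_p, s_0)
print(f"{z:.1f} sigma")   # ~2.9 sigma
```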
Renormalization constants from string theory.
NASA Astrophysics Data System (ADS)
di Vecchia, P.; Magnea, L.; Lerda, A.; Russo, R.; Marotta, R.
The authors review some recent results on the calculation of renormalization constants in Yang-Mills theory using open bosonic strings. The technology of string amplitudes, supplemented with an appropriate continuation off the mass shell, can be used to compute the ultraviolet divergences of dimensionally regularized gauge theories. The results show that the infinite tension limit of string amplitudes corresponds to the background field method in field theory.
Are the Constants of Nature Truly Constant? What Is the Real Material Space and Its Structure?
Luz Montero Garcia, Jose de la; Novoa Blanco, Jesus Francisco
2007-04-28
In a concise and simplified way, we present some matters of the authors' theories, the Unified Theory of the Physical and Mathematical Universal Constants and Quantum Cellular Structural Geometry, which together form a single theoretical body, MN2. This investigation has as its objective the search for the last cells that underlie the existence, unicity and harmony of matter, as well as its structural-formal and dynamic-functional diversity. The quantitative hypothesis demonstrated is that 'the World is one; but it is one Arithmetic-Geometric-Topological-Dimensional and Structural-Cellular-Dynamic one, simultaneously'. In the Frontiers of Fundamental Physics, such last cells are the cells of the Real Material Space itself, from whose whole accretion, interactive and staggered, all that exists at all hierarchic levels arises; below these cells it makes no sense to speak of structure and, therefore, of existence. The cells of the Real Material Space are its 'Atoms'. A Law of Planetary Systems, or '4th Kepler's Law', is also presented.
WHY IS THE SOLAR CONSTANT NOT A CONSTANT?
Li, K. J.; Xu, J. C.; Gao, P. X.; Yang, L. H.; Liang, H. F.; Zhan, L. S.
2012-03-10
In order to probe the mechanism of variations of the solar constant on the inter-solar-cycle scale, the total solar irradiance (TSI; the so-called solar constant) in the time interval of 1978 November 7 to 2010 September 20 is decomposed into three components through empirical mode decomposition and time-frequency analyses. The first component is the rotation signal, accounting for 42.31% of the total variation of TSI, which is understood to be mainly caused by large magnetic structures, including sunspot groups. The second is an annual-variation signal, accounting for 15.17% of the total variation, whose origin is not known at this point in time. Finally, the third is the inter-solar-cycle signal, accounting for 42.52%, which is inferred to be caused by the network magnetic elements in quiet regions, whose magnetic flux ranges from (4.27–38.01) × 10¹⁹ Mx.
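A quick consistency check on the quoted decomposition: the three fractional contributions should account for essentially all of the TSI variation.

```python
# Fractions of total TSI variation quoted in the abstract (percent):
fractions = {"rotation": 42.31, "annual": 15.17, "inter-cycle": 42.52}
total = sum(fractions.values())
print(round(total, 2))  # 100.0
```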
Dielectric constant of liquid alkanes and hydrocarbon mixtures
NASA Technical Reports Server (NTRS)
Sen, A. D.; Anicich, V. G.; Arakelian, T.
1992-01-01
The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mosotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.
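The Clausius-Mossotti mixing rule invoked above for the pentane-octane case can be sketched directly: each component contributes its volume-weighted polarizability term (ε−1)/(ε+2). The component permittivities below are illustrative round numbers, not the paper's measured values.

```python
def mix_dielectric(components):
    """Clausius-Mossotti mixture: components is a list of
    (volume_fraction, epsilon) pairs; returns the mixture epsilon."""
    k = sum(phi * (eps - 1.0) / (eps + 2.0) for phi, eps in components)
    return (1.0 + 2.0 * k) / (1.0 - k)

# 50/50 mixture of two nonpolar liquids with assumed permittivities
# in the typical alkane range:
eps = mix_dielectric([(0.5, 1.84), (0.5, 1.95)])
print(eps)   # close to, but not exactly, the volume-weighted average
```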
Fundamental Limits to Cellular Sensing
NASA Astrophysics Data System (ADS)
ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew
2016-03-01
In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
Short-range Fundamental forces
Antoniadis, I.; Baessler, S.; Buechner, M.; Fedorov, V. V.; Hoedl, S.; Lambrecht, A.; Nesvizhevsky, V.; Pignol, G.; Reynaud, S.; Sobolev, Yu.
2011-01-01
We consider theoretical motivations to search for extra short-range fundamental forces as well as experiments constraining their parameters. The forces could be of two types: (1) spin-independent forces; and (2) spin-dependent axion-like forces. Different experimental techniques are sensitive in respective ranges of characteristic distances. The techniques include measurements of gravity at short distances, searches for extra interactions on top of the Casimir force, precision atomic and neutron experiments. We focus on neutron constraints, thus the range of characteristic distances considered here corresponds to the range accessible for neutron experiments.
Fundamental Characteristics of Breather Hydrodynamics
NASA Astrophysics Data System (ADS)
Chabchoub, Amin
2014-05-01
The formation of oceanic rogue waves can be explained by the modulation instability of deep-water Stokes waves. In particular, being doubly-localized and amplifying the background wave amplitude by a factor of three or higher, the class of Peregrine-type breather solutions of the nonlinear Schrödinger equation (NLS) are considered to be appropriate models to describe extreme ocean wave dynamics. Here, we present an experimental validation of fundamental properties of the NLS within the context of Peregrine breather dynamics and we discuss the long-term behavior of such in time and space localized structures.
Solid Lubrication Fundamentals and Applications
NASA Technical Reports Server (NTRS)
Miyoshi, Kazuhisa
2001-01-01
Solid Lubrication Fundamentals and Applications describes the adhesion, friction, abrasion, and wear behavior of solid-film lubricants and related tribological materials, including diamond and diamond-like solid films. The book details the properties of solid surfaces, clean surfaces, and contaminated surfaces, as well as discussing the structures and mechanical properties of natural and synthetic diamonds; chemical-vapor-deposited diamond film; and surface design and engineering toward wear-resistant, self-lubricating diamond films and coatings. The author provides selection and design criteria as well as applications for synthetic and natural coatings in the commercial, industrial, and aerospace industries.
Reconstruction of fundamental SUSY parameters
P. M. Zerwas et al.
2003-09-25
We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e+e- linear colliders in the TeV energy range, combined with results from the LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history of delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection, to understanding the atmospheres of stars and planets, to constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
An Alcohol Test for Drifting Constants
NASA Astrophysics Data System (ADS)
Jansen, P.; Bagdonaite, J.; Ubachs, W.; Bethlem, H. L.; Kleiner, I.; Xu, L.-H.
2013-06-01
The Standard Model of physics is built on the fundamental constants of nature, yet it provides no explanation for their values, nor does it require their constancy over space and time. Molecular spectroscopy can address this issue. Recently, we found that microwave transitions in methanol are extremely sensitive to a variation of the proton-to-electron mass ratio μ, due to a fortuitous interplay between classically forbidden internal rotation and rotation of the molecule as a whole. In this talk, we will explain the origin of this effect and how the sensitivity coefficients in methanol are calculated. In addition, we set a limit on a possible cosmological variation of μ by comparing transitions in methanol observed in the early Universe with those measured in the laboratory. Based on radio-astronomical observations of PKS1830-211, we deduce a constraint of Δμ/μ = (0.0 ± 1.0) × 10^-7 at redshift z = 0.89, corresponding to a look-back time of 7 billion years. While this limit is more constraining and systematically more robust than previous ones, the methanol method opens a new search territory for probing μ-variation on cosmological timescales. P. Jansen, L.-H. Xu, I. Kleiner, W. Ubachs, and H. L. Bethlem, Phys. Rev. Lett. 106, 100801 (2011). J. Bagdonaite, P. Jansen, C. Henkel, H. L. Bethlem, K. M. Menten, and W. Ubachs, Science 339, 46 (2013).
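The methanol method described above rests on the relation Δν/ν = K_μ Δμ/μ, where K_μ is a transition's sensitivity coefficient. A minimal sketch of the inference step follows; the frequencies and K_μ value are placeholder numbers for illustration, not the actual PKS1830-211 data.

```python
# Hedged sketch: inferring a fractional mu-variation from a frequency
# comparison between an astronomical observation and a laboratory value,
# using delta_nu/nu = K_mu * delta_mu/mu. All numbers are invented.

def delta_mu_over_mu(nu_observed, nu_lab, k_mu):
    """Return delta_mu/mu implied by the fractional frequency shift."""
    return (nu_observed - nu_lab) / nu_lab / k_mu

# A transition with a large sensitivity coefficient (some methanol lines have
# |K_mu| of order tens) amplifies a tiny mu drift into a measurable shift.
shift = delta_mu_over_mu(nu_observed=12.178970e9, nu_lab=12.178597e9, k_mu=-32.0)
```

The larger |K_μ| is, the smaller the Δμ/μ implied by a given frequency offset, which is why the fortuitously enhanced methanol transitions are so constraining.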
Chandra Independently Determines Hubble Constant
NASA Astrophysics Data System (ADS)
2006-08-01
A critically important number that specifies the expansion rate of the Universe, the so-called Hubble constant, has been independently determined using NASA's Chandra X-ray Observatory. This new value matches recent measurements using other methods and extends their validity to greater distances, thus allowing astronomers to probe earlier epochs in the evolution of the Universe. "The reason this result is so significant is that we need the Hubble constant to tell us the size of the Universe, its age, and how much matter it contains," said Max Bonamente from the University of Alabama in Huntsville and NASA's Marshall Space Flight Center (MSFC) in Huntsville, Ala., lead author on the paper describing the results. "Astronomers absolutely need to trust this number because we use it for countless calculations." [Illustration of Sunyaev-Zeldovich Effect] The Hubble constant is calculated by measuring the speed at which objects are moving away from us and dividing by their distance. Most of the previous attempts to determine the Hubble constant have involved using a multi-step, or distance-ladder, approach in which the distance to nearby galaxies is used as the basis for determining greater distances. The most common approach has been to use a well-studied type of pulsating star known as a Cepheid variable, in conjunction with more distant supernovae, to trace distances across the Universe. Scientists using this method and observations from the Hubble Space Telescope were able to measure the Hubble constant to within 10%. However, only independent checks would give them the confidence they desired, considering that much of our understanding of the Universe hangs in the balance. [Chandra X-ray Image of MACS J1149.5+223] By combining X-ray data from Chandra with radio observations of galaxy clusters, the team determined the distances to 38 galaxy clusters ranging from 1.4 billion to 9.3 billion
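The definition stated above (recession speed divided by distance) is simple enough to show directly; the velocity and distance below are made-up example numbers, not the Chandra team's data.

```python
# Toy illustration of the Hubble constant's definition: H0 = v / d,
# conventionally quoted in km/s per megaparsec. Example values are invented.

def hubble_constant(velocity_km_s, distance_mpc):
    """Return H0 in km/s/Mpc from a recession velocity and a distance."""
    return velocity_km_s / distance_mpc

h0 = hubble_constant(velocity_km_s=7000.0, distance_mpc=100.0)  # -> 70.0
```

The hard part, as the abstract explains, is not this division but obtaining distances without stacking rung upon rung of a distance ladder, which is what the Sunyaev-Zeldovich technique provides.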
Stability of fundamental couplings: A global analysis
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.; Pinho, A. M. M.
2017-01-01
Astrophysical tests of the stability of fundamental couplings are becoming an increasingly important probe of new physics. Motivated by the recent availability of new and stronger constraints we update previous works testing the consistency of measurements of the fine-structure constant α and the proton-to-electron mass ratio μ =mp/me (mostly obtained in the optical/ultraviolet) with combined measurements of α , μ and the proton gyromagnetic ratio gp (mostly in the radio band). We carry out a global analysis of all available data, including the 293 archival measurements of Webb et al. and 66 more recent dedicated measurements, and constraining both time and spatial variations. While nominally the full data sets show a slight statistical preference for variations of α and μ (at up to two standard deviations), we also find several inconsistencies between different subsets, likely due to hidden systematics and implying that these statistical preferences need to be taken with caution. The statistical evidence for a spatial dipole in the values of α is found at the 2.3 sigma level. Forthcoming studies with facilities such as ALMA and ESPRESSO should clarify these issues.
Prion 2005: Between Fundamentals and Society's Needs.
Treiber, Carina
2006-01-25
Prion diseases for the most part affect individuals older than 60 years of age and share features with other diseases characterized by protein deposits in the brain, such as Alzheimer's disease and Parkinson's disease. The international conference "Prion 2005: Between Fundamentals and Society's Needs," organized by the German Transmissible Spongiform Encephalopathies Research Platform, aimed to integrate and coordinate the research efforts of participants to better achieve prevention, treatment, control, and management of prion diseases, including Creutzfeldt-Jakob disease and fatal familial insomnia in humans. Several main topics were discussed, such as the molecular characteristics of prion strains, the cell biology of cellular and pathogenic forms of the prion proteins, the pathogenesis of the diseases they cause, emerging problems, and promising approaches for therapy and new diagnostic tools. The presentations at the Prion 2005 conference provided new insights in both basic and applied research, which will have broad implications for society's needs.
Cosmological constant and local gravity
Bernabeu, Jose; Espinoza, Catalina; Mavromatos, Nick E.
2010-04-15
We discuss the linearization of Einstein equations in the presence of a cosmological constant, by expanding the solution for the metric around a flat Minkowski space-time. We demonstrate that one can find consistent solutions to the linearized set of equations for the metric perturbations, in the Lorentz gauge, which are not spherically symmetric, but they rather exhibit a cylindrical symmetry. We find that the components of the gravitational field satisfying the appropriate Poisson equations have the property of ensuring that a scalar potential can be constructed, in which both contributions, from ordinary matter and Λ > 0, are attractive. In addition, there is a novel tensor potential, induced by the pressure density, in which the effect of the cosmological constant is repulsive. We also linearize the Schwarzschild-de Sitter exact solution of Einstein's equations (due to a generalization of Birkhoff's theorem) in the domain between the two horizons. We manage to transform it first to a gauge in which the 3-space metric is conformally flat and, then, make an additional coordinate transformation leading to the Lorentz gauge conditions. We compare our non-spherically symmetric solution with the linearized Schwarzschild-de Sitter metric, when the latter is transformed to the Lorentz gauge, and we find agreement. The resulting metric, however, does not acquire a proper Newtonian form in terms of the unique scalar potential that solves the corresponding Poisson equation. Nevertheless, our solution is stable, in the sense that the physical energy density is positive.
Detector Fundamentals for Reachback Analysts
Karpius, Peter Joseph; Myers, Steven Charles
2016-08-03
This presentation is a part of the DHS LSS spectroscopy course and provides an overview of the following concepts: detector system components, intrinsic and absolute efficiency, resolution and linearity, and operational issues and limits.
Stability constant estimator user's guide
Hay, B.P.; Castleton, K.J.; Rustad, J.R.
1996-12-01
The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
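The trend-based estimation described above can be sketched as a linear free-energy relationship: across a ligand series, log K values for one metal ion often correlate linearly with those for a reference metal, so a least-squares line fitted to known pairs can fill a gap. This is a generic sketch of the idea, not SCE's actual algorithm, and all numbers are invented.

```python
# Hedged sketch of trend-based stability constant estimation: fit a line to
# known (log K_reference, log K_target) pairs, then predict a missing value.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# log K of 1:1 complexes for a ligand series: reference metal vs. target metal
log_k_ref    = [2.0, 4.0, 6.0, 8.0]
log_k_target = [2.5, 4.1, 5.9, 7.7]
slope, intercept = fit_line(log_k_ref, log_k_target)

# Predicted log K with the target metal for a ligand whose log K_ref = 5.0
estimate = slope * 5.0 + intercept
```

Estimates of this kind are exactly the sort the abstract describes: good enough to fill gaps in thermodynamic databases and to flag reported values that fall far off the trend line.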
Holographic dark energy with cosmological constant
NASA Astrophysics Data System (ADS)
Hu, Yazhou; Li, Miao; Li, Nan; Zhang, Zhenhui
2015-08-01
Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ωhde are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find the model yields χ2min=426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordant ΛCDM model (431.35). At 68.3% CL, we obtain -0.07<ΩΛ0<0.68 and correspondingly 0.04<Ωhde0<0.79, implying at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.
Fundamental plant biology enabled by the space shuttle.
Paul, Anna-Lisa; Wheeler, Ray M; Levine, Howard G; Ferl, Robert J
2013-01-01
The relationship between fundamental plant biology and space biology was especially synergistic in the era of the Space Shuttle. While all terrestrial organisms are influenced by gravity, the impact of gravity as a tropic stimulus in plants has been a topic of formal study for more than a century. And while plants were parts of early space biology payloads, it was not until the advent of the Space Shuttle that the science of plant space biology enjoyed expansion that truly enabled controlled, fundamental experiments that removed gravity from the equation. The Space Shuttle presented a science platform that provided regular science flights with dedicated plant growth hardware and crew trained in inflight plant manipulations. Part of the impetus for plant biology experiments in space was the realization that plants could be important parts of bioregenerative life support on long missions, recycling water, air, and nutrients for the human crew. However, a large part of the impetus was that the Space Shuttle enabled fundamental plant science essentially in a microgravity environment. Experiments during the Space Shuttle era produced key science insights on biological adaptation to spaceflight and especially plant growth and tropisms. In this review, we present an overview of plant science in the Space Shuttle era with an emphasis on experiments dealing with fundamental plant growth in microgravity. This review discusses general conclusions from the study of plant spaceflight biology enabled by the Space Shuttle by providing historical context and reviews of select experiments that exemplify plant space biology science.
Little, Max A; Jones, Nick S
2011-11-08
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
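One of the PWC denoising methods named above, the iterated running median, is simple enough to sketch directly: a median window preserves jumps while flattening noise, and iterating it drives the signal toward a piecewise constant "root". The synthetic signal below is invented for illustration.

```python
# Minimal sketch of PWC denoising by iterated running medians: medians are
# robust to outliers and do not smear step edges the way a moving average does.
from statistics import median

def running_median(signal, half_window=2):
    """One pass of a running median, truncating the window at the edges."""
    n = len(signal)
    return [median(signal[max(0, i - half_window):min(n, i + half_window + 1)])
            for i in range(n)]

def iterated_running_median(signal, half_window=2, passes=5):
    """Apply the running median repeatedly, approximating a root signal."""
    for _ in range(passes):
        signal = running_median(signal, half_window)
    return signal

noisy = [0.1, -0.2, 0.0, 0.2, -0.1,   # level near 0
         5.1, 4.8, 5.0, 5.2, 4.9]     # level near 5, after a jump
clean = iterated_running_median(noisy)
```

After a few passes the two plateaus are flat to within the noise scale while the jump between samples 4 and 5 survives, which is the defining requirement the paper's generalized functional formalizes.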
Fundamental Travel Demand Model Example
NASA Technical Reports Server (NTRS)
Hanssen, Joel
2010-01-01
Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.
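One fundamental step such an example model must implement is trip distribution; a common choice is the singly-constrained gravity model, sketched below. The zone productions, attractions, impedance function, and cost matrix are invented example inputs, not data from the paper.

```python
# Hedged sketch of gravity-model trip distribution, a standard step in a
# passenger travel demand model: T[i][j] = P_i * A_j f(c_ij) / sum_k A_k f(c_ik),
# with an exponential impedance f(c) = exp(-beta * c). Example data invented.
from math import exp

def gravity_distribution(productions, attractions, cost, beta=0.1):
    """Distribute each origin zone's produced trips among destination zones."""
    trips = []
    for i, p in enumerate(productions):
        weights = [a * exp(-beta * cost[i][j]) for j, a in enumerate(attractions)]
        total = sum(weights)
        trips.append([p * w / total for w in weights])
    return trips

productions = [100.0, 200.0]        # trips produced by each origin zone
attractions = [300.0, 100.0]        # relative attractiveness of destinations
cost = [[5.0, 10.0],                # travel cost (e.g. minutes) origin -> dest
        [10.0, 5.0]]
t = gravity_distribution(productions, attractions, cost)
```

Because the model is singly constrained, each row of the trip table sums to that zone's productions; calibrating beta against observed trip-length data is part of the pre-model analysis the paper emphasizes.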
Cognition is … Fundamentally Cultural
Bender, Andrea; Beller, Sieghard
2013-01-01
A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition—on all of Marr’s levels of description—we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding. PMID:25379225
Fundamental concepts of quantum chaos
NASA Astrophysics Data System (ADS)
Robnik, M.
2016-09-01
We review the fundamental concepts of quantum chaos in Hamiltonian systems. The quantum evolution of bound systems does not possess the sensitive dependence on initial conditions, and thus no chaotic behaviour occurs, whereas the study of the stationary solutions of the Schrödinger equation in the quantum phase space (Wigner functions) reveals precise analogy of the structure of the classical phase portrait. We analyze the regular eigenstates associated with invariant tori in the classical phase space, and the chaotic eigenstates associated with the classically chaotic regions, and the corresponding energy spectra. The effects of quantum localization of the chaotic eigenstates are treated phenomenologically, resulting in Brody-like level statistics, which can be found also at very high-lying levels, while the coupling between the regular and the irregular eigenstates due to tunneling, and of the corresponding levels, manifests itself only in low-lying levels.
Fundamental reaction pathways during coprocessing
Stock, L.M.; Gatsis, J.G.
1992-12-01
The objective of this research was to investigate the fundamental reaction pathways in coal petroleum residuum coprocessing. Once the reaction pathways are defined, further efforts can be directed at improving those aspects of the chemistry of coprocessing that are responsible for the desired results such as high oil yields, low dihydrogen consumption, and mild reaction conditions. We decided to carry out this investigation by looking at four basic aspects of coprocessing: (1) the effect of fossil fuel materials on promoting reactions essential to coprocessing such as hydrogen atom transfer, carbon-carbon bond scission, and hydrodemethylation; (2) the effect of varied mild conditions on the coprocessing reactions; (3) determination of dihydrogen uptake and utilization under severe conditions as a function of the coal or petroleum residuum employed; and (4) the effect of varied dihydrogen pressure, temperature, and residence time on the uptake and utilization of dihydrogen and on the distribution of the coprocessed products. Accomplishments are described.
Quantum repeaters: fundamental and future
NASA Astrophysics Data System (ADS)
Li, Yue; Hua, Sha; Liu, Yu; Ye, Jun; Zhou, Quan
2007-04-01
An overview of the quantum repeater techniques based on entanglement distillation and swapping is provided. Beginning with a brief history and the basic concepts of quantum repeaters, the article primarily focuses on the communication model based on the quantum repeater techniques, which mainly consists of two fundamental modules: the Entanglement Distillation module and the Swapping module. The realizations of entanglement distillation are discussed, including Bernstein's Procrustean method, entanglement concentration, and the CNOT-purification method, among others. The schemes for implementing swapping, which include swapping based on Bell-state measurement and swapping in cavity QED, are also introduced. A comparison between these realizations and evaluations of them are then presented. Finally, the article discusses current experimental schemes for quantum repeaters and documents some remaining problems and emerging trends in this field.
Fundamental base closure environmental principles
Yim, R.A.
1994-12-31
Military base closures present a paradox. The rate, scale, and timing of military base closures are historically unique, yet each base itself typically does not present unique problems. Thus, the challenge is to design innovative solutions to base redevelopment and remediation issues while simultaneously adopting common, streamlined, or pre-approved strategies for shared problems. The author presents six environmental principles that are fundamental to base closure: remediation, not cleanup; remediation will impact reuse; reuse will impact remediation; remediation and reuse must be coordinated; environmental contamination must be evaluated like any other initial physical constraint on development, not as an overlay after plans are created; and remediation will impact development, financing, and marketability.
Cognition is … Fundamentally Cultural.
Bender, Andrea; Beller, Sieghard
2013-03-01
A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition-on all of Marr's levels of description-we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding.
Fundamentals of Neurogastroenterology: Basic Science.
Vanner, Stephen; Greenwood-Van Meerveld, Beverley; Mawe, Gary; Shea-Donohue, Terez; Verdu, Elena F; Wood, Jackie; Grundy, David
2016-02-18
This review examines the fundamentals of neurogastroenterology that may underlie the pathophysiology of functional GI disorders (FGIDs). It was prepared by an invited committee of international experts and represents an abbreviated version of their consensus document that will be published in its entirety in the forthcoming book and online version entitled ROME IV. It emphasizes recent advances in our understanding of the enteric nervous system, sensory physiology underlying pain, and stress signaling pathways. There is also a focus on neuroimmune signaling and intestinal barrier function, given the recent evidence implicating the microbiome, diet, and mucosal immune activation in FGIDs. Together, these advances provide a host of exciting new targets to identify and treat FGIDs and new areas for future research into their pathophysiology. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
Fundamental enabling issues in nanotechnology
Floro, Jerrold Anthony; Foiles, Stephen Martin; Hearne, Sean Joseph; Hoyt, Jeffrey John; Seel, Steven Craig; Webb III, Edmund Blackburn; Morales, Alfredo Martin; Zimmerman, Jonathan A.
2007-10-01
To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in this close a fashion. Experiments revealed an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also support the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in
Omnidirectional antenna having constant phase
Sena, Matthew
2017-04-04
Various technologies presented herein relate to constructing and/or operating an antenna having an omnidirectional electrical field of constant phase. The antenna comprises an upper plate made up of multiple conductive rings, a lower ground-plane plate, a plurality of grounding posts, a conical feed, and a radio frequency (RF) feed connector. The upper plate has a multi-ring configuration comprising a large outer ring and four smaller rings of equal size located within the outer ring. The large outer ring and the four smaller rings have the same cross-section. The grounding posts ground the upper plate to the lower plate while maintaining a required spacing/parallelism therebetween.
Philicities, Fugalities, and Equilibrium Constants.
Mayr, Herbert; Ofial, Armin R
2016-05-17
The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known, and the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is a prominent example of such exceptions. In a reaction series, the Leffler-Hammond parameter α = δΔG(⧧)/δΔG° describes the fraction of the change in the Gibbs energy of reaction, which is reflected in the change of the Gibbs energy of activation. It has long been considered a measure for the position of the transition state; thus, an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered to be indicative of a late transition state. Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because complementary kinetic and thermodynamic data have been available for only a few other reaction series. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics
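The Leffler-Hammond parameter α described above can be estimated from a reaction series as the least-squares slope of activation energies against reaction Gibbs energies. The following sketch uses synthetic illustrative numbers (our construction, not data from the Account), built with α = 0.3:

```python
# Synthetic reaction series: estimate alpha = d(ΔG‡)/d(ΔG°) as a regression slope.
dG0 = [-10.0, -5.0, 0.0, 5.0, 10.0]      # kJ/mol, Gibbs energies of reaction (illustrative)
dGact = [50.0 + 0.3 * g for g in dG0]    # kJ/mol, activation energies constructed with alpha = 0.3

n = len(dG0)
mean_x = sum(dG0) / n
mean_y = sum(dGact) / n
# Ordinary least-squares slope of ΔG‡ versus ΔG°:
alpha = sum((x - mean_x) * (y - mean_y) for x, y in zip(dG0, dGact)) \
        / sum((x - mean_x) ** 2 for x in dG0)
print(round(alpha, 2))   # → 0.3
```

With real kinetic and thermodynamic data for a series, the same slope would be computed from measured ΔG‡ and ΔG° values.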
Constant magnification optical tracking system
NASA Technical Reports Server (NTRS)
Frazer, R. E. (Inventor)
1982-01-01
A constant magnification optical tracking system for continuous tracking of a moving object is described. In the tracking system, a traveling objective lens maintains a fixed relationship with an object to be optically tracked. The objective lens was chosen to provide a collimated light beam oriented in the direction of travel of the moving object. A reflective surface is attached to the traveling objective lens for reflecting an image of the moving object. The object to be tracked is a free-falling object which is located at the focal point of the objective lens for at least a portion of its free-fall path. A motor and control means is provided for maintaining the traveling objective lens in a fixed relationship relative to the free-falling object, thereby keeping the free-falling object at the focal point and centered on the axis of the traveling objective lens throughout its entire free-fall path.
Dissociation constant of nitric acid
NASA Astrophysics Data System (ADS)
Levanov, A. V.; Isaikina, O. Ya.; Lunin, V. V.
2017-07-01
The composition of nitric acid solutions is investigated by means of Raman spectroscopy (RS). The results are compared to critically selected data from other authors. The value of the thermodynamic dissociation constant in an aqueous nitric acid solution at 25°C, K_a = [H⁺][NO₃⁻](γ'_±)² / ([HNO₃] γ'_HNO₃) = 35.5 ± 1.5 M, is determined by analyzing an extensive set of reliable and consistent literature and original data. Expressions for the dependences of the activity coefficient of undissociated HNO₃ molecules (γ'_HNO₃) and the mean ionic activity coefficient (γ'_± = (γ'_H⁺ γ'_NO₃⁻)^(1/2)) on the stoichiometric concentration of nitric acid in the range of 0-18 M are found.
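As an illustrative check (not part of the original study), the reported K_a can be used to estimate the degree of dissociation of nitric acid, neglecting the activity coefficients (γ' ≈ 1, a rough approximation valid only in dilute solution):

```python
import math

def dissociation_fraction(c_total, Ka=35.5):
    """Fraction of HNO3 dissociated at analytic concentration c_total (mol/L).

    Solves Ka = x**2 / (c_total - x) for x = [H+] = [NO3-], with all
    activity coefficients set to 1 (ideal-solution approximation).
    """
    # Physical (non-negative) root of x^2 + Ka*x - Ka*c_total = 0:
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * c_total)) / 2.0
    return x / c_total

# At 1 M, nitric acid is almost fully dissociated in this approximation:
print(round(dissociation_fraction(1.0), 3))   # → 0.973
```

The fraction decreases with concentration, consistent with the appearance of undissociated HNO₃ molecules in concentrated solutions that RS detects.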
BOOK REVIEWS: Quantum Mechanics: Fundamentals
NASA Astrophysics Data System (ADS)
Whitaker, A.
2004-02-01
This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a
NASA Astrophysics Data System (ADS)
Hönerlage, B.
This document is part of Volume 44 `Semiconductors', Subvolume A `New Data and Updates for I-VII, III-V, III-VI and IV-VI Compounds' of Landolt-Börnstein Group III `Condensed Matter'. It contains data on AgF (silver fluoride), Element System Ag-F.
NASA Astrophysics Data System (ADS)
Hönerlage, B.
This document is part of Subvolume D 'New Data and Updates for IV-IV; III-V; II-VI and I-VII Compounds; their Mixed Crystals and Diluted Magnetic Semiconductors' of Volume 44 'Semiconductors' of Landolt-Börnstein - Group III 'Condensed Matter'.
How Do Fundamental Christians Deal with Depression?
ERIC Educational Resources Information Center
Spinney, Douglas Harvey
1991-01-01
Provides explanation of developmental dynamics in experience of fundamental Christians that provoke reactive depression. Describes depressant retardant defenses against depression that have been observed in Christian fundamental subculture. Suggests four counseling strategies for helping fundamentalists. (Author/ABL)
Simulating Supercapacitors: Can We Model Electrodes As Constant Charge Surfaces?
Merlet, Céline; Péan, Clarisse; Rotenberg, Benjamin; Madden, Paul A; Simon, Patrice; Salanne, Mathieu
2013-01-17
Supercapacitors based on an ionic liquid electrolyte and graphite or nanoporous carbon electrodes are simulated using molecular dynamics. We compare a simplified electrode model in which a constant, uniform charge is assigned to each carbon atom with a realistic model in which a constant potential is applied between the electrodes (the carbon charges are allowed to fluctuate). We show that the simulations performed with the simplified model do not provide a correct description of the properties of the system. First, the structure of the adsorbed electrolyte is partly modified. Second, dramatic differences are observed for the dynamics of the system during transient regimes. In particular, upon application of a constant potential difference, the electric current created across the cell follows Ohm's law and the associated Joule heating produces a physically reasonable temperature increase, whereas unphysically high temperatures are rapidly reached when constant charges are assigned to each carbon atom.
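The conceptual difference between the two electrode models can be sketched with a toy lumped-circuit picture (our illustration, not the molecular dynamics method of the paper): as the electrolyte rearranges, the effective capacitance changes; a constant-potential electrode lets its charge fluctuate to track this, while a constant-charge electrode forces the potential to drift instead.

```python
# Toy lumped-capacitor illustration (NOT an MD simulation): electrolyte
# rearrangement is modelled as a slowly growing capacitance C(t).
def simulate(steps=100, V=1.0, q0=1.0):
    qs_const_potential = []   # electrode charge when potential V is held fixed
    vs_const_charge = []      # electrode potential when charge q0 is held fixed
    for t in range(steps):
        C = 1.0 + 0.5 * t / steps   # capacitance grows as ions adsorb
        qs_const_potential.append(C * V)    # charge fluctuates: q = C(t) * V
        vs_const_charge.append(q0 / C)      # potential drifts:  V = q0 / C(t)
    return qs_const_potential, vs_const_charge

qs, vs = simulate()
# Constant potential: charge rises with C(t); constant charge: potential sags.
```

In the real simulations the fluctuating charges are obtained self-consistently at every step, but the qualitative point is the same: fixing the charges removes the electrode's ability to respond to the electrolyte, which is what distorts both structure and transient dynamics.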
Future Fundamental Combustion Research for Aeropropulsion Systems.
1985-01-01
Future Fundamental Combustion Research for Aeropropulsion Systems. Edward J. Mularz, Propulsion Laboratory, AVSCOM Research and Technology Laboratories, National Aeronautics and Space Administration, Lewis Research Center, Cleveland, OH.
Is There a Cosmological Constant?
NASA Astrophysics Data System (ADS)
Kochanek, Christopher
2002-07-01
The grant contributed to the publication of 18 refereed papers and 5 conference proceedings. The primary uses of the funding have been for page charges, travel for invited talks related to the grant research, and the support of a graduate student, Charles Keeton. The refereed papers address four of the primary goals of the proposal: (1) the statistics of radio lenses as a probe of the cosmological model (#1), (2) the role of spiral galaxies as lenses (#3), (3) the effects of dust on statistics of lenses (#7, #8), and (4) the role of groups and clusters as lenses (#2, #6, #10, #13, #15, #16). Four papers (#4, #5, #11, #12) address general issues of lens models, calibrations, and the relationship between lens galaxies and nearby galaxies. One considered cosmological effects in lensing X-ray sources (#9), and two addressed issues related to the overall power spectrum and theories of gravity (#17, #18). Our theoretical studies combined with the explosion in the number of lenses and the quality of the data obtained for them is greatly increasing our ability to characterize and understand the lens population. We can now firmly conclude both from our study of the statistics of radio lenses and our survey of extinctions in individual lenses that the statistics of optically selected quasars were significantly affected by extinction. However, the limits on the cosmological constant remain at lambda < 0.65 at a 2-sigma confidence level, which is in mild conflict with the results of the Type Ia supernova surveys. We continue to find that neither spiral galaxies nor groups and clusters contribute significantly to the production of gravitational lenses. The lack of group and cluster lenses is strong evidence for the role of baryonic cooling in increasing the efficiency of galaxies as lenses compared to groups and clusters of higher mass but lower central density. Unfortunately for the ultimate objective of the proposal, improved constraints on the cosmological constant, the next
Is There a Cosmological Constant?
NASA Technical Reports Server (NTRS)
Kochanek, Christopher; Oliversen, Ronald J. (Technical Monitor)
2002-01-01
The grant contributed to the publication of 18 refereed papers and 5 conference proceedings. The primary uses of the funding have been for page charges, travel for invited talks related to the grant research, and the support of a graduate student, Charles Keeton. The refereed papers address four of the primary goals of the proposal: (1) the statistics of radio lenses as a probe of the cosmological model (#1), (2) the role of spiral galaxies as lenses (#3), (3) the effects of dust on statistics of lenses (#7, #8), and (4) the role of groups and clusters as lenses (#2, #6, #10, #13, #15, #16). Four papers (#4, #5, #11, #12) address general issues of lens models, calibrations, and the relationship between lens galaxies and nearby galaxies. One considered cosmological effects in lensing X-ray sources (#9), and two addressed issues related to the overall power spectrum and theories of gravity (#17, #18). Our theoretical studies combined with the explosion in the number of lenses and the quality of the data obtained for them is greatly increasing our ability to characterize and understand the lens population. We can now firmly conclude both from our study of the statistics of radio lenses and our survey of extinctions in individual lenses that the statistics of optically selected quasars were significantly affected by extinction. However, the limits on the cosmological constant remain at lambda < 0.65 at a 2-sigma confidence level, which is in mild conflict with the results of the Type Ia supernova surveys. We continue to find that neither spiral galaxies nor groups and clusters contribute significantly to the production of gravitational lenses. The lack of group and cluster lenses is strong evidence for the role of baryonic cooling in increasing the efficiency of galaxies as lenses compared to groups and clusters of higher mass but lower central density. Unfortunately for the ultimate objective of the proposal, improved constraints on the cosmological constant, the next
Fundamental Performance on Disc Type Thermomagnetic Engine
NASA Astrophysics Data System (ADS)
Takahashi, Yutaka; Matsuzawa, Tomohiro; Nishikawa, Masahiro
This paper describes the fundamental performance of the disc type thermomagnetic engine. The disc type engine was designed to decrease the eddy-current braking loss. Performance characteristics such as power, torque, and loss were measured and compared with those of the cylindrical engine for the same volume of temperature-sensitive magnetic material. The eddy-current braking loss is 0.04 W, which corresponds to 1/30 of the loss in the cylindrical engine at a rotation speed of 0.4 rps with maximum power output. The total loss, including partial losses due to friction, hydraulic effects, and eddy-current braking, is 0.9 W in the disc type engine and 1.8 W in the cylindrical engine; the total loss in the disc type engine is thus reduced to 50% of that in the cylindrical engine under the same conditions. The maximum output power is 6.0 W at a rotation speed of 0.4 rps in the disc type engine, about 1.6 times larger than that of the cylindrical engine. The eddy-current braking loss in the disc type engine is 0.7% of the maximum output power, a negligible effect in this engine. The power per unit volume of disc has its maximum value at a disc width of 40 mm. The clearance between discs is set to 1 mm to keep the working-fluid flow condition constant. The rotor thickness comprises the clearance and the disc thickness; the power per unit rotor thickness also has its maximum value at a disc thickness of 0.5 mm. A thermomagnetic engine with these optimum conditions can be designed using the results. For a fixed permanent-magnet size, the disc type engine generates higher output power than the cylindrical engine because it makes more effective use of the magnetic field.
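The loss figures quoted above are mutually consistent, which can be verified with a few lines of arithmetic (all values taken directly from the abstract):

```python
# Values quoted in the abstract (disc engine at 0.4 rps, maximum power output):
eddy_loss_disc = 0.04    # W, eddy-current braking loss in the disc engine
eddy_loss_cyl = eddy_loss_disc * 30   # abstract: disc loss is 1/30 of cylindrical
total_loss_disc = 0.9    # W
total_loss_cyl = 1.8     # W
p_max_disc = 6.0         # W, maximum output power of the disc engine

print(total_loss_disc / total_loss_cyl)              # 0.5 -> "50% of the cylindrical total loss"
print(round(100 * eddy_loss_disc / p_max_disc, 1))   # 0.7 -> "0.7% of the maximum output power"
```

Both quoted ratios (50% and 0.7%) follow directly from the stated loss and power values.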
Fundamental Scaling Laws in Nanophotonics.
Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J
2016-11-21
The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of "smaller-is-better" has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoelectronic device performance scales non-monotonically with device length due to the various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches based on three types of optical resonators; microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length-scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index-modulation (modulator), absorption (detector) or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap, akin to the International Technology Roadmap for Semiconductors.
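The non-monotonic scaling described here can be illustrated with a deliberately simplified cost model (our construction, not the authors' analysis): suppose the energy cost per bit has one term that grows with device length L (e.g. electrical capacitance) and one that grows as the device shrinks (e.g. weakening light-matter interaction), E(L) = a*L + b/L, which has a minimum at L* = sqrt(b/a) rather than improving monotonically with miniaturization.

```python
import math

def energy_per_bit(L, a=1.0, b=100.0):
    """Toy cost model with illustrative coefficients a, b (arbitrary units):
    a*L   -- electrical (capacitive) cost, grows with length
    b/L   -- optical cost, grows as the device shrinks below the interaction length
    """
    return a * L + b / L

L_star = math.sqrt(100.0 / 1.0)    # analytic optimum of the toy model: L* = 10
samples = {L: energy_per_bit(L) for L in (1, 5, 10, 50, 200)}
# The sampled minimum sits at the intermediate length L = 10, not at the
# smallest device: miniaturizing past L* makes the toy device worse.
```

The paper's device-by-device analysis replaces these toy terms with physically derived expressions, but the qualitative tradeoff structure is the same.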
Fundamental Scaling Laws in Nanophotonics
NASA Astrophysics Data System (ADS)
Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J.
2016-11-01
The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoelectronic device performance scales non-monotonically with device length due to the various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches based on three types of optical resonators; microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length-scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index-modulation (modulator), absorption (detector) or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap, akin to the International Technology Roadmap for Semiconductors.
Information physics fundamentals of nanophotonics.
Naruse, Makoto; Tate, Naoya; Aono, Masashi; Ohtsu, Motoichi
2013-05-01
Nanophotonics has been extensively studied with the aim of unveiling and exploiting light-matter interactions that occur at a scale below the diffraction limit of light, and recent progress made in experimental technologies--both in nanomaterial fabrication and characterization--is driving further advancements in the field. From the viewpoint of information, on the other hand, novel architectures, design and analysis principles, and even novel computing paradigms should be considered so that we can fully benefit from the potential of nanophotonics. This paper examines the information physics aspects of nanophotonics. More specifically, we present some fundamental and emergent information properties that stem from optical excitation transfer mediated by optical near-field interactions and the hierarchical properties inherent in optical near-fields. We theoretically and experimentally investigate aspects such as unidirectional signal transfer, energy efficiency and networking effects, among others, and we present their basic theoretical formalisms and describe demonstrations of practical applications. A stochastic analysis of light-assisted material formation is also presented, where an information-based approach provides a deeper understanding of the phenomena involved, such as self-organization. Furthermore, the spatio-temporal dynamics of optical excitation transfer and its inherent stochastic attributes are utilized for solution searching, paving the way to a novel computing paradigm that exploits coherent and dissipative processes in nanophotonics.
Levitated Optomechanics for Fundamental Physics
NASA Astrophysics Data System (ADS)
Rashid, Muddassar; Bateman, James; Vovrosh, Jamie; Hempston, David; Ulbricht, Hendrik
2015-05-01
Optomechanics with levitated nano- and microparticles is believed to form a platform for testing fundamental principles of quantum physics, as well as find applications in sensing. We will report on a new scheme to trap nanoparticles, which is based on a parabolic mirror with a numerical aperture of 1. Combined with achromatic focussing, the setup is a cheap and straightforward solution to trapping nanoparticles for further study. Here, we report on the latest progress made in experimentation with levitated nanoparticles; these include the trapping of 100 nm nanodiamonds (with NV-centres) down to 1 mbar as well as the trapping of 50 nm silica spheres down to 10⁻⁴ mbar without any form of feedback cooling. We will also report on the progress to implement feedback stabilisation of the centre of mass motion of the trapped particle using digital electronics. Finally, we argue that such a stabilised particle trap can be the particle source for a nanoparticle matterwave interferometer. We will present our Talbot interferometer scheme, which holds promise to test the quantum superposition principle in the new mass range of 10⁶ amu. EPSRC, John Templeton Foundation.
Fundamental Physics within Complex Plasmas
NASA Astrophysics Data System (ADS)
Douglass, Angela Michelle
In this work, both experimental and numerical methods are used to investigate several of the fundamental processes and assumptions commonly found in an earth-based radio-frequency (RF) complex plasma discharge. First, the manner in which the dust particle charge varies with the particle's height above the powered electrode is investigated. Knowledge of the dust particle charge is required to understand nearly all complex plasma experiments since it affects the dust particle's levitation height and particle-particle interactions. A fluid model which includes effects due to ion flow and electron depletion (which are significant dust charging effects within the sheath where the particles levitate) is employed to determine the plasma parameters required to calculate the dust particle charge. Second, the levitation limits of the dust particles and the structure of the sheath are investigated. The CASPER GEC RF reference cell is used to perform two experiments: one to measure the dust levitation height as a function of applied RF voltage and one to determine the electric force profile. The fluid model is then used to interpret the experimental results. This study provides a better understanding of the sheath structure and particle behavior within the sheath, and provides a new, in situ experimental method for locating the approximate height of the sheath edge in any dusty plasma system. Finally, both molecular dynamics (MD) simulations and an experiment are employed to determine the physical processes that a complex plasma system goes through as it rapidly transitions from a liquid to solid state.
Fundamental theory of crystal decomposition
NASA Astrophysics Data System (ADS)
Kunz, A. B.
1991-05-01
Lattice defects in or on crystalline materials determine many technologically important properties. Reliable computerized simulations of such defects are of potential value, and may be expected to contribute to a fundamental understanding of the physical processes that determine the structure and properties of these materials. In the case of point defects, it is attractive to use quantum mechanics to describe the region of the crystal in proximity to the defect, perhaps embedding this region in an external potential determined by some auxiliary principle. The hope here is that the structural response of the lattice to the point defect may then be described by some method which is simpler than the quantum mechanical method used to describe the point defect itself. In the present work, the development begins with the case of non-metals. In many prior studies of such systems, a classical shell model based upon point charges and masses interacting through simple parameterized potentials has been successful in correlating perfect lattice equilibrium data with the ground state properties of defects. Therefore, the study was begun by choosing to think of the embedding lattice in terms of the classical shell model. It was found that it is possible to retain the functional form of the shell model, but determine all needed parameters from the quantum mechanical calculation, and to augment this functional form with appropriate angular potentials as well.
Fundamental Scaling Laws in Nanophotonics
Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J.
2016-01-01
The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoelectronic device performance scales non-monotonically with device length due to the various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches based on three types of optical resonators; microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length-scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index-modulation (modulator), absorption (detector) or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap, akin to the International Technology Roadmap for Semiconductors. PMID:27869159
Gas cell neutralizers (Fundamental principles)
Fuehrer, B.
1985-06-01
Neutralizing an ion beam of the size and energy levels involved in the neutral-particle-beam program represents a considerable extension of the state of the art of neutralizer technology. Many different media (e.g., solid, liquid, gas, plasma, photons) can be used to strip the hydrogen ion of its extra electron. A large, multidisciplinary R&D effort will no doubt be required to sort out all of the pros and cons of these various techniques. The purpose of this particular presentation is to discuss some basic configurations and fundamental principles of the gas type of neutralizer cell. Particular emphasis is placed on the "Gasdynamic Free-Jet" neutralizer, since this configuration has the potential of being much shorter than other types of gas cells (in the beam direction) and could operate in a nearly continuous mode (CW) if necessary. These were important considerations in the ATSU design, which is discussed in some detail in the second presentation, entitled "ATSU Point Design".
Do goldfish miss the fundamental?
NASA Astrophysics Data System (ADS)
Fay, Richard R.
2003-10-01
The perception of harmonic complexes was studied in goldfish using classical respiratory conditioning and a stimulus generalization paradigm. Groups of animals were initially conditioned to several harmonic complexes with a fundamental frequency (f0) of 100 Hz. In some cases the f0 component was present, and in other cases the f0 component was absent. After conditioning, animals were tested for generalization to novel harmonic complexes having different f0's, some with f0 present and some with f0 absent. Generalization gradients always peaked at 100 Hz, indicating that the pitch value of the conditioning complexes was consistent with the f0, whether or not f0 was present in the conditioning or test complexes. Thus, goldfish do not miss the fundamental with respect to a pitch-like perceptual dimension. However, generalization gradients tended to have different skirt slopes for the f0-present and f0-absent conditioning and test stimuli. This suggests that goldfish distinguish between f0-present and f0-absent stimuli, probably on the basis of a timbre-like perceptual dimension. These and other results demonstrate that goldfish respond to complex sounds as if they possessed perceptual dimensions similar to pitch and timbre as defined for human and other vertebrate listeners. [Work supported by NIH/NIDCD.]
Fundamental studies of fusion plasmas
Aamodt, R.E.; Catto, P.J.; D'Ippolito, D.A.; Myra, J.R.; Russell, D.A.
1992-05-26
The major portion of this program is devoted to critical ICH phenomena. The topics include edge physics, fast wave propagation, ICH-induced high-frequency instabilities, and a preliminary antenna design for Ignitor. This research was strongly coordinated with the world's experimental and design teams at JET, Culham, ORNL, and Ignitor. The results have been widely publicized at both general scientific meetings and topical workshops, including the specialty workshop on ICRF design and physics sponsored by Lodestar in April 1992. The combination of theory, empirical modeling, and engineering design in this program makes this research particularly important for the design of future devices and for the understanding and performance projections of present tokamak devices. Additionally, the development of a diagnostic of runaway electrons on TEXT has proven particularly useful for the fundamental understanding of energetic electron confinement. This work has led to a better quantitative basis for quasilinear theory and for assessing the role of magnetic vs. electrostatic field fluctuations in electron transport. An APS invited talk was given on this subject, and collaboration with PPPL personnel was also initiated. Ongoing research on these topics will continue for the remainder of the contract period, and the strong collaborations are expected to continue, enhancing both the relevance of the work and its immediate impact on areas needing critical understanding.
Hyperbolic metamaterials: fundamentals and applications.
Shekhar, Prashant; Atkinson, Jonathan; Jacob, Zubin
2014-01-01
Metamaterials are nano-engineered media with designed properties beyond those available in nature with applications in all aspects of materials science. In particular, metamaterials have shown promise for next generation optical materials with electromagnetic responses that cannot be obtained from conventional media. We review the fundamental properties of metamaterials with hyperbolic dispersion and present the various applications where such media offer potential for transformative impact. These artificial materials support unique bulk electromagnetic states which can tailor light-matter interaction at the nanoscale. We present a unified view of practical approaches to achieve hyperbolic dispersion using thin film and nanowire structures. We also review current research in the field of hyperbolic metamaterials such as sub-wavelength imaging and broadband photonic density of states engineering. The review introduces the concepts central to the theory of hyperbolic media as well as nanofabrication and characterization details essential to experimentalists. Finally, we outline the challenges in the area and offer a set of directions for future work.
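As a sketch of what "hyperbolic dispersion" means here: for extraordinary (TM-polarized) waves in a uniaxial effective medium with optic axis along z, transverse permittivity ε_t, and axial permittivity ε_z, the isofrequency surface is (sign and labeling conventions vary across the literature):

```latex
% Isofrequency surface of extraordinary waves in a uniaxial effective medium
\frac{k_x^2 + k_y^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_t} = \frac{\omega^2}{c^2}
```

When ε_t and ε_z share a sign this surface is a closed ellipsoid; when they have opposite signs it opens into a hyperboloid, formally admitting very large wavevectors, which underlies the broadband density-of-states engineering and sub-wavelength imaging discussed above.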
Fundamental approach to dipmeter analysis
Enderlin, M.B.; Hansen, D.K.T.
1988-01-01
Historically, in dipmeter analysis, depositional patterns are delineated for environmental, structural, and stratigraphic interpretations. The proposed method is a fundamental approach using raw data measurements from the dipmeter sonde to help the geologist describe subsurface structures on a stratigraphic scale. Raw data are available at the well site, require no post-processing, are cost effective, easy to use, require only a basic understanding of sedimentary features and can be combined with computed results. A case study illustrates the reconstruction of sedimentary features from a raw data log recorded by a six-arm dipmeter. The dipmeter is a wireline tool with a series of evenly spaced, focused electrodes applied to the circumference of the borehole wall. The raw data are presented as curves representing the electrode response and tool orientation. In outcrop, the geologist usually can see an entire sedimentary feature in a large perspective, that is, with the surrounding landscape. Therefore, a large range of features can be resolved. However, in the borehole environment the perspective is reduced to the borehole diameter, thus reducing the range of recognizable features. In this study, a table was assembled that identifies the features distinguished by the proposed method as a function of borehole diameter.
On the fundamental role of dynamics in quantum physics
NASA Astrophysics Data System (ADS)
Hofmann, Holger F.
2016-05-01
Quantum theory expresses the observable relations between physical properties in terms of probabilities that depend on the specific context described by the "state" of a system. However, the laws of physics that emerge at the macroscopic level are fully deterministic. Here, it is shown that the relation between quantum statistics and deterministic dynamics can be explained in terms of ergodic averages over complex valued probabilities, where the fundamental causality of motion is expressed by an action that appears as the phase of the complex probability multiplied with the fundamental constant ħ. Importantly, classical physics emerges as an approximation of this more fundamental theory of motion, indicating that the assumption of a classical reality described by differential geometry is merely an artefact of an extrapolation from the observation of macroscopic dynamics to a fictitious level of precision that does not exist within our actual experience of the world around us. It is therefore possible to completely replace the classical concepts of trajectories with the more fundamental concept of action phase probabilities as a universally valid description of the deterministic causality of motion that is observed in the physical world.
Prospects for Fundamental Symmetry Tests with Polyatomic Molecules
NASA Astrophysics Data System (ADS)
Berger, Robert; Isaev, Timur
2013-06-01
Special features of polyatomic molecules make them attractive candidates in the search for violation of fundamental symmetries and variation of fundamental constants [1, 2]. We discuss the possibility of searching for the nuclear spin-dependent space-parity violating (NSD-PV) interaction in closed-shell and open-shell polyatomic molecules. The parameter W_{a} of the effective molecular spin-rotational Hamiltonian, characterising the strength of the NSD-PV interaction in open-shell linear molecules, is discussed, and approaches for its calculation are outlined. In addition, possibilities for detecting NSD-PV in chiral molecules via NMR and MW spectroscopy are presented. REFERENCES: C. Stoeffler et al., Phys. Chem. Chem. Phys. 13 (3), 2011; M. Quack, J. Stohner and M. Willeke, Annu. Rev. Phys. Chem. 59, 2008; J. Bagdonaite et al., Science 339 (6115), 2013.
PMN-PT nanowires with a very high piezoelectric constant.
Xu, Shiyou; Poirier, Gerald; Yao, Nan
2012-05-09
An effective way to increase the output voltage (or power) of piezoelectric nanogenerators is to utilize a material with higher piezoelectric constants. Here we report the synthesis of novel piezoelectric 0.72Pb(Mg(1/3)Nb(2/3))O(3)-0.28PbTiO(3) (PMN-PT) nanowires using a hydrothermal process. The unpoled single-crystal PMN-PT nanowires show a piezoelectric constant (d(33)) up to 381 pm/V, with an average value of 373 ± 5 pm/V. This is about 15 times higher than the maximum reported value for 1-D ZnO nanostructures and 3 times higher than the largest reported value for 1-D PZT nanostructures. These PMN-PT nanostructures show good potential for use as the fundamental building block of higher-power nanogenerators, high-sensitivity nanosensors, and large-strain nanoactuators.
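For a rough sense of scale, the converse piezoelectric effect relates strain to applied field as S = d33 × E. The sketch below uses the average d33 reported above; the driving field and wire length are illustrative assumptions, not values from the paper.

```python
# Converse piezoelectric effect: strain S = d33 * E (illustrative numbers).
d33 = 373e-12      # m/V, average value reported for the PMN-PT nanowires
E_field = 1.0e6    # V/m, illustrative driving field (assumption)
strain = d33 * E_field

# Elongation of a hypothetical 1-micrometer-long wire at this field:
length = 1.0e-6    # m
elongation = strain * length
print(strain, elongation)
```

At this field a 1 μm wire would elongate by roughly 0.4 nm, which is why a higher d33 translates directly into higher nanogenerator output.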
Induced cosmological constant and other features of asymmetric brane embedding
Shtanov, Yuri; Sahni, Varun; Shafieloo, Arman; Toporensky, Alexey E-mail: varun@iucaa.ernet.in E-mail: lesha@xray.sai.msu.ru
2009-04-15
We investigate the cosmological properties of an 'induced gravity' brane scenario in the absence of mirror symmetry with respect to the brane. We find that brane evolution can proceed along one of four distinct branches. By contrast, when mirror symmetry is imposed, only two branches exist, one of which represents the self-accelerating brane, while the other is the so-called normal branch. This model incorporates many of the well-known possibilities of brane cosmology including phantom acceleration (w < -1), self-acceleration, transient acceleration, quiescent singularities, and cosmic mimicry. Significantly, the absence of mirror symmetry also provides an interesting way of inducing a sufficiently small cosmological constant on the brane. A small (positive) {Lambda}-term in this case is induced by a small asymmetry in the values of bulk fundamental constants on the two sides of the brane.
Fundamentals of Middle Atmospheric Dynamics
1989-03-31
from year to year. These trends are partly a result of man-made perturbations and involve an intricate interplay of chemical, radiative and dynamical...processes on the one hand, and radiation and chemistry on the other. (4) Accomplishments since 1 April 1988: Dr Dritschel has continued to make important...appear in the Quarterly Journal of the Royal Meteorological Society, probably early next year. Dr McIntyre was seconded to the field phase of the NASA/NOAA
Fundamental Aspects of Pressuremeter Testing.
1987-04-30
[OCR-garbled abstract excerpt; the only recoverable fragments are a citation to "Earth and Earth Supported Structures, Vol. 2, Purdue University, West Lafayette, Indiana, pp. 1-54" and to Caceci, M. S. and Cacheris, W. P.]
Fundamental Principles of Proper Space Kinematics
NASA Astrophysics Data System (ADS)
Wade, Sean
It is desirable to understand the movement of both matter and energy in the universe based upon fundamental principles of space and time. Time dilation and length contraction are features of Special Relativity derived from the observed constancy of the speed of light. Quantum Mechanics asserts that motion in the universe is probabilistic and not deterministic. While the practicality of these dissimilar theories is well established through widespread application, inconsistencies in their marriage persist, marring their utility and preventing their full expression. After identifying an error in perspective, the current theories are tested by modifying logical assumptions to eliminate paradoxical contradictions. Analysis of simultaneous frames of reference leads to a new formulation of space and time that predicts the motion of both kinds of particles. Proper Space is a real, three-dimensional space clocked by proper time that is undergoing a densification at the rate of c. Coordinate transformations to a familiar object space and a mathematical stationary space clarify the counterintuitive aspects of Special Relativity. These symmetries demonstrate that within the local universe stationary observers are a forbidden frame of reference; all is in motion. In lieu of Quantum Mechanics and Uncertainty, the use of the imaginary number i is restricted to application in the labeling of mass as either material or immaterial. This material phase difference accounts for both the perceived constant velocity of light and its apparent statistical nature. The application of Proper Space Kinematics will advance more accurate representations of microscopic, macroscopic, and cosmological processes and serve as a foundation for further study and reflection, thereafter leading to greater insight.
Fundamental structures of dynamic social networks
Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune
2016-01-01
Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision. PMID:27555584
The dependency of timbre on fundamental frequency
NASA Astrophysics Data System (ADS)
Marozeau, Jeremy; de Cheveigné, Alain; McAdams, Stephen; Winsberg, Suzanne
2003-11-01
The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In experiment I subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In experiment II the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0's, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, suggesting further that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently from differences of pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0's.
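The MDS step described above can be illustrated with a minimal classical (Torgerson) scaling routine. This is a generic sketch, not the specific MDS variant used in the study, and the input configuration is synthetic rather than perceptual data.

```python
# Classical (Torgerson) MDS: embed a dissimilarity matrix in low dimension.
import numpy as np

def classical_mds(D, dims=2):
    """Embed a symmetric dissimilarity matrix D into `dims` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]      # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Synthetic "stimuli" placed in a known 2-D space, then scaled back from
# their pairwise distances:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D_rec, D, atol=1e-8))
```

Because this synthetic input is exactly Euclidean, the two-dimensional embedding reproduces the dissimilarities to machine precision; real timbre dissimilarity data would only be approximated.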
The Boundary Function Method. Fundamentals
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-03-01
The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.
NASA Astrophysics Data System (ADS)
Ozkanlar, Abdullah; Rodriguez, Jorge H.
2009-03-01
Some (bio)chemical reactions are non-adiabatic processes whereby the total spin angular momentum, before and after the reaction, is not conserved. These are named spin-forbidden reactions. The application of spin density functional theory (SDFT) to the prediction of rate constants is a challenging task of fundamental and practical importance. We apply non-adiabatic transition state theory in conjunction with SDFT to predict the rate constant of the spin-forbidden dihydrogen binding to iron tetracarbonyl. To model the surface hopping probability between singlet and triplet states, the Landau-Zener formalism is used. The lowest energy point for singlet-triplet crossing, known as the minimum energy crossing point (MECP), was located and used to compute, in a semi-quantum approach, reaction rate constants at 300 K. The predicted rates are in good agreement with experiment. In addition, we present results which are relevant to the ligand binding reactions of metalloproteins. This work is supported in part by NSF via CAREER award CHE-0349189 (JHR).
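The Landau-Zener estimate of the surface-hopping probability invoked above can be sketched as follows. The coupling, velocity, and gradient-difference values are illustrative assumptions, not the numbers used in this work.

```python
# Single-passage Landau-Zener hopping probability at a spin-state crossing.
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def lz_hop_probability(h12, velocity, dF):
    """Probability of switching spin surfaces on one pass through the MECP.

    h12: electronic (spin-orbit) coupling at the crossing, in J
    velocity: nuclear velocity through the crossing seam, in m/s
    dF: magnitude of the difference of the two surface gradients, in J/m
    """
    return 1.0 - math.exp(-2.0 * math.pi * h12 ** 2 / (HBAR * velocity * dF))

# Illustrative magnitudes: ~50 cm^-1 coupling, a thermal-scale velocity,
# and a gradient difference of order 1 eV/Angstrom.
h12 = 50 * 1.98645e-23   # 50 cm^-1 converted to J (1 cm^-1 = 1.98645e-23 J)
p = lz_hop_probability(h12, velocity=500.0, dF=1.0e-9)
print(p)
```

With these illustrative numbers the single-passage hopping probability comes out on the order of 0.1; it grows with the square of the spin-orbit coupling and falls as the nuclei traverse the crossing region faster.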
Life, the Universe, and everything—42 fundamental questions
NASA Astrophysics Data System (ADS)
Allen, Roland E.; Lidström, Suzy
2017-01-01
In The Hitchhiker’s Guide to the Galaxy, by Douglas Adams, the Answer to the Ultimate Question of Life, the Universe, and Everything is found to be 42—but the meaning of this is left open to interpretation. We take it to mean that there are 42 fundamental questions which must be answered on the road to full enlightenment, and we attempt a first draft (or personal selection) of these ultimate questions, on topics ranging from the cosmological constant and origin of the Universe to the origin of life and consciousness.