Science.gov

Sample records for fundamental constants part

  1. How fundamental are fundamental constants?

    NASA Astrophysics Data System (ADS)

    Duff, M. J.

    2015-01-01

    I argue that the laws of physics should be independent of one's choice of units or measuring apparatus. This is the case if they are framed in terms of dimensionless numbers such as the fine structure constant, α. For example, the standard model of particle physics has 19 such dimensionless parameters whose values all observers can agree on, irrespective of what clock, rulers or scales they use to measure them. Dimensional constants, on the other hand, such as ħ, c, G, e and k, are merely human constructs whose number and values differ from one choice of units to the next. In this sense, only dimensionless constants are 'fundamental'. Similarly, the possible time variation of dimensionless fundamental 'constants' of nature is operationally well defined and a legitimate subject of physical enquiry. By contrast, the time variation of dimensional constants such as ħ or c, on which a good many (in my opinion, confusing) papers have been written, is a unit-dependent phenomenon on which different observers might disagree depending on their apparatus. All these confusions disappear if one asks only unit-independent questions. We provide a selection of opposing opinions in the literature and respond accordingly.
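
    As an illustration of the point about unit independence, the fine structure constant can be written (in SI notation) as a combination of dimensional constants whose individual values depend on the chosen units while the combination does not; a minimal sketch of this standard definition:

      \alpha \;=\; \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \;\approx\; \frac{1}{137.036}

    Rescaling the units changes e, ε₀, ħ and c individually, but leaves α, and any law framed in terms of it, unchanged.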

  2. Fundamental Physical Constants

    National Institute of Standards and Technology Data Gateway

    SRD 121 CODATA Fundamental Physical Constants (Web, free access)   This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants, the International System of Units (SI), which is the modern metric system, and expressing the uncertainty of measurement results.

  3. Wall of fundamental constants

    SciTech Connect

    Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe

    2011-02-15

    We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilatonlike scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not provide a variation in α relative to the terrestrial value, depending on our relative position with respect to the wall. This wall could resolve the contradiction between claims of a variation of α based on Keck/Hires data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants.

  4. Connecting Fundamental Constants

    SciTech Connect

    Di Mario, D.

    2008-05-29

    A model for a black hole electron is built from three basic constants only: h, c and G. The result is a description of the electron with its mass and charge. The nature of this black hole seems to fit the properties of the Planck particle, and new relationships among basic constants are possible. The time dilation factor in a black hole associated with a variable gravitational field would appear to us as a charge; on the other hand, the Planck time acts as a time gap that drastically limits what we are able to measure, and its dimension will appear in some quantities. This is why the Planck time is numerically very close to the gravitational/electric force ratio in an electron: the difference, disregarding a π√2 factor, is only 0.2%. This is not a coincidence: it is always the same particle, and the small difference is between a rotating and a non-rotating particle. The determination of its rotational speed yields accurate numbers for many quantities, including the fine structure constant and the electron magnetic moment.

  5. Derivatives of imidopyrophosphoric acids as extractants. Part I. The preparation and fundamental constants of tetraalkylimidopyrophosphoric acids

    SciTech Connect

    Preez, J.G.H. du; Knabl, K.U.; Krueger, L.; Brecht, B.J.A.M. van

    1992-12-01

    Improved methods for the preparation of imidotetraalkylpyrophosphates are reported. The dissociation constant of the water-soluble tetraethyl analogue was determined by potentiometric titration. The partition constant and aggregation constant of the tetradodecyl analogue were determined by two-phase EMF potentiometric titration, the data from which were processed with a sophisticated general optimization technique. This method also made it possible to obtain a species distribution curve for the organic phase as a function of the pH of the aqueous phase. 17 refs., 7 figs., 4 tabs.
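
    The dissociation-constant determination described above amounts to fitting an acid-base equilibrium model to potentiometric titration data. A minimal single-phase sketch in Python, using hypothetical data and the Henderson-Hasselbalch approximation rather than the authors' two-phase model or their optimization code:

      # Hypothetical example: estimate the pKa of a monoprotic weak acid from
      # potentiometric titration data (Henderson-Hasselbalch approximation).
      import numpy as np
      from scipy.optimize import curve_fit

      def hh_model(f_neutralized, pKa):
          """pH predicted at a given fraction of acid neutralized (0 < f < 1)."""
          return pKa + np.log10(f_neutralized / (1.0 - f_neutralized))

      # Fraction of acid neutralized and measured pH (illustrative values only).
      f = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
      pH = np.array([4.05, 4.28, 4.48, 4.66, 4.84, 5.03, 5.26])

      (pKa_fit,), cov = curve_fit(hh_model, f, pH, p0=[4.5])
      print(f"fitted pKa = {pKa_fit:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")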

  6. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities, part I: Fundamental analysis and diagnostics.

    SciTech Connect

    Sankaran, Ramanan; Mason, Scott D.; Chen, Jacqueline H.; Hawkes, Evatt R.; Im, Hong G.

    2005-01-01

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.

  7. New Quasar Studies Keep Fundamental Physical Constant Constant

    NASA Astrophysics Data System (ADS)

    2004-03-01

    Very Large Telescope sets stringent limit on possible variation of the fine-structure constant over cosmological time Summary Detecting or constraining the possible time variations of fundamental physical constants is an important step toward a complete understanding of basic physics and hence the world in which we live. It is a step in which astrophysics proves most useful. Previous astronomical measurements of the fine structure constant - the dimensionless number that determines the strength of interactions between charged particles and electromagnetic fields - suggested that this particular constant is increasing very slightly with time. If confirmed, this would have very profound implications for our understanding of fundamental physics. New studies, conducted using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at Paranal (Chile), secured new data with unprecedented quality. These data, combined with a very careful analysis, have provided the strongest astronomical constraints to date on the possible variation of the fine structure constant. They show that, contrary to previous claims, no evidence exists for a time variation of this fundamental constant. PR Photo 07/04: Relative Changes with Redshift of the Fine Structure Constant (VLT/UVES) A fine constant To explain the Universe and to represent it mathematically, scientists rely on so-called fundamental constants or fixed numbers. The fundamental laws of physics, as we presently understand them, depend on about 25 such constants. Well-known examples are the gravitational constant, which defines the strength of the force acting between two bodies, such as the Earth and the Moon, and the speed of light. One of these constants is the so-called "fine structure constant", alpha = 1/137.03599958, a combination of the electrical charge of the electron, the Planck constant and the speed of light. The fine structure constant describes how electromagnetic forces hold
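
    The press release quotes alpha = 1/137.03599958 as a combination of the electron charge, the Planck constant and the speed of light; a short Python check of that combination using the CODATA values bundled with scipy (the exact recommended value depends on the installed scipy release):

      # Recompute the fine structure constant from e, hbar, c and epsilon_0.
      from scipy.constants import e, hbar, c, epsilon_0, pi, fine_structure

      alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
      print(f"1/alpha (recomputed)         = {1/alpha:.8f}")
      print(f"1/alpha (scipy CODATA value) = {1/fine_structure:.8f}")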

  8. New Quasar Studies Keep Fundamental Physical Constant Constant

    NASA Astrophysics Data System (ADS)

    2004-03-01

    fundamental constant at play here, alpha. However, the observed distribution of the elements is consistent with calculations assuming that the value of alpha at that time was precisely the same as the value today. Over the 2 billion years, the change in alpha therefore has to be smaller than about 2 parts per 100 million. If present at all, this is a rather small change indeed. But what about changes much earlier in the history of the Universe? To measure this we must find means to probe still further into the past. And this is where astronomy can help. Because, even though astronomers can't generally do experiments, the Universe itself is a huge atomic physics laboratory. By studying very remote objects, astronomers can look back over a long time span. In this way it becomes possible to test the values of the physical constants when the Universe had only 25% of its present age, that is, about 10,000 million years ago. Very far beacons To do so, astronomers rely on spectroscopy - the measurement of the properties of light emitted or absorbed by matter. When the light from a flame is observed through a prism, a rainbow is visible. When sprinkling salt on the flame, distinct yellow lines are superimposed on the usual colours of the rainbow, so-called emission lines. Putting a gas cell between the flame and the prism, one instead sees dark lines superimposed on the rainbow: these are absorption lines. The wavelengths of these emission and absorption lines are directly related to the energy levels of the atoms in the salt or in the gas. Spectroscopy thus allows us to study atomic structure. The fine structure of atoms can be observed spectroscopically as the splitting of certain energy levels in those atoms. So if alpha were to change over time, the emission and absorption spectra of these atoms would change as well. One way to look for any changes in the value of alpha over the history of the Universe is therefore to measure the spectra of distant quasars, and compare the wavelengths of
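
    The passage above refers to probing the constants when the Universe had only about 25% of its present age, roughly 10,000 million years ago; a small sketch of that lookback-time bookkeeping, assuming a Planck 2018 cosmology and an illustrative absorber redshift of z = 2 (neither is taken from the text):

      # Illustrative lookback-time bookkeeping for a quasar absorber at z = 2.
      from astropy.cosmology import Planck18

      z = 2.0                                   # assumed absorber redshift
      age_then = Planck18.age(z)                # age of the Universe at redshift z
      age_now = Planck18.age(0)                 # present age of the Universe
      frac = float(age_then / age_now)
      print(f"age at z={z}: {age_then.value:.2f} Gyr ({frac:.0%} of present age)")
      print(f"lookback time: {Planck18.lookback_time(z).value:.2f} Gyr")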

  9. Fundamental Constants and Tests with Simple Atoms

    NASA Astrophysics Data System (ADS)

    Tan, Joseph

    2015-05-01

    Precise measurements with simple atoms provide stringent tests of physical laws, improving the accuracy of fundamental constants--a set of which will be selected to fully define the proposed New International System of Units. This talk focuses on the atomic constants (namely, the Rydberg constant, the fine-structure constant, and the proton charge radius), discussing the impact of the proton radius obtained from the Lamb-shift measurements in muonic hydrogen. Significant discrepancies persist despite years of careful examination: the slightly smaller proton radius obtained from muonic hydrogen requires the Rydberg constant and the fine-structure constant to have values that disagree significantly with the CODATA recommendations. After giving a general overview, I will discuss our effort to produce one-electron ions in Rydberg states, to enable a different test of theory and measurement of the Rydberg constant.

  10. Man's Size in Terms of Fundamental Constants.

    ERIC Educational Resources Information Center

    Press, William H.

    1980-01-01

    Reviews calculations that derive an order of magnitude expression for the size of man in terms of fundamental constants, assuming that man satisfies these three properties: he is made of complicated molecules; he requires an atmosphere which is not hydrogen and helium; he is as large as possible. (CS)

  11. Black hole constraints on varying fundamental constants.

    PubMed

    MacGibbon, Jane H

    2007-08-10

    We apply the generalized second law of thermodynamics and derive upper limits on the variation in the fundamental constants. The maximum variation in the electronic charge permitted for black holes accreting and emitting in the present cosmic microwave background corresponds to a variation in the fine-structure constant of Δα/α ≈ 2 × 10⁻²³ per second. This value matches the variation measured by Webb et al. [Phys. Rev. Lett. 82, 884 (1999); Phys. Rev. Lett. 87, 091301 (2001)] using absorption lines in the spectra of distant quasars and suggests the variation mechanism may be a coupling between the electron and the cosmic photon background. PMID:17930813
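
    The quoted rate of about 2 × 10⁻²³ per second can be compared with the quasar results by accumulating it over a cosmological look-back time; a back-of-the-envelope sketch, assuming a representative look-back time of ~10 Gyr for the Webb et al. absorbers (a figure not stated in the abstract):

      # Compare a per-second drift bound with a fractional change over ~10 Gyr.
      rate_per_s = 2e-23            # |d(alpha)/dt| / alpha from the abstract
      lookback_gyr = 10.0           # assumed representative look-back time
      seconds_per_gyr = 3.156e16    # ~1 Gyr expressed in seconds

      delta_alpha_over_alpha = rate_per_s * lookback_gyr * seconds_per_gyr
      print(f"implied |delta alpha / alpha| over {lookback_gyr:.0f} Gyr: "
            f"{delta_alpha_over_alpha:.1e}")   # ~6e-6, i.e. of order 1e-5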

  12. PREFACE: Fundamental Constants in Physics and Metrology

    NASA Astrophysics Data System (ADS)

    Klose, Volkmar; Kramer, Bernhard

    1986-01-01

    This volume contains the papers presented at the 70th PTB Seminar, the second on the subject "Fundamental Constants in Physics and Metrology", which was held at the Physikalisch-Technische Bundesanstalt in Braunschweig from October 21 to 22, 1985. About 100 participants from the universities and various research institutes of the Federal Republic of Germany took part in the meeting. Besides a number of review lectures on broader subjects there was a poster session with a variety of topical contributed papers, ranging from the theory of the quantum Hall effect to reports on the status of the metrological experiments at the PTB. In addition, the participants were offered the possibility to visit the PTB laboratories during the course of the seminar. During the preparation of the meeting we noticed that most of the general subjects to be discussed in the lectures are of great importance in connection with metrological experiments and should be made accessible to the scientific community. This eventually resulted in the idea of publishing the papers in a regular journal. We are grateful to the editor of Metrologia for providing this opportunity. We have included quite a number of papers from basic physical research. For example, certain aspects of high-energy physics and quantum optics, as well as the many-faceted role of Sommerfeld's fine-structure constant, are covered. We think that questions such as "What are the intrinsic fundamental parameters of nature?" or "What are we doing when we perform an experiment?" can shed new light on the art of metrology and, potentially, lead to new ideas. This appears especially necessary as we notice the increasing importance of the fundamental constants and macroscopic quantum effects for the definition and the realization of the physical units. In some cases we have reached a point where the limitations of our knowledge of a fundamental constant and

  13. Search for a Variation of Fundamental Constants

    NASA Astrophysics Data System (ADS)

    Ubachs, W.

    2013-06-01

    Since the days of Dirac, scientists have speculated about the possibility that the laws of nature, and the fundamental constants appearing in those laws, are not rock-solid and eternal but may be subject to change in time or space. Such a scenario of evolving constants might provide an answer to the deepest puzzle of contemporary science, namely why the conditions in our local Universe allow for extreme complexity: the fine-tuning problem. In the past decade it has been established that spectral lines of atoms and molecules, which can currently be measured at ever-higher accuracies, form an ideal test ground for probing drifting constants. This has brought the subject from the realm of metaphysics to that of experimental science. In particular, the spectra of molecules are sensitive probes of a variation of the proton-electron mass ratio μ, either on a cosmological time scale or on a laboratory time scale. A comparison can be made between spectra of molecular hydrogen observed in the laboratory and at high redshift (z=2-3), using the Very Large Telescope (Paranal, Chile) and the Keck telescope (Hawaii). This puts a constraint on a varying mass ratio Δμ/μ at the 10⁻⁵ level. The optical work can also be extended to include CO molecules. Further, a novel direction will be discussed: it was discovered that molecules exhibiting hindered internal rotation have spectral lines in the radio spectrum that are extremely sensitive to a varying proton-electron mass ratio. Such lines in the spectrum of methanol were recently observed with the radio telescope in Effelsberg (Germany). F. van Weerdenburg, M.T. Murphy, A.L. Malec, L. Kaper, W. Ubachs, Phys. Rev. Lett. 106, 180802 (2011). A. Malec, R. Buning, M.T. Murphy, N. Milutinovic, S.L. Ellison, J.X. Prochaska, L. Kaper, J. Tumlinson, R.F. Carswell, W. Ubachs, Mon. Not. Roy. Astron. Soc. 403, 1541 (2010). E.J. Salumbides, M.L. Niu, J. Bagdonaite, N. de Oliveira, D. Joyeux, L. Nahon, W. Ubachs, Phys. Rev. A 86, 022510
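
    In the molecular-hydrogen comparison described above, each transition i carries a sensitivity coefficient K_i, and a nonzero Δμ/μ appears as a K-dependent shift of the apparent redshifts; a schematic Python sketch of the standard reduced-redshift fit, with hypothetical line redshifts and coefficients rather than the published data:

      # Schematic reduced-redshift analysis: zeta_i = (z_i - z_mean)/(1 + z_mean)
      # is regressed against the sensitivity coefficients K_i; the slope estimates dmu/mu.
      import numpy as np

      K = np.array([-0.010, 0.005, 0.019, 0.031, 0.044])   # hypothetical K_i
      z = np.array([2.6593406, 2.6593410, 2.6593414,       # hypothetical z_i
                    2.6593417, 2.6593421])

      z_mean = z.mean()
      zeta = (z - z_mean) / (1.0 + z_mean)
      slope, intercept = np.polyfit(K, zeta, 1)             # slope ~ dmu/mu
      print(f"dmu/mu estimate: {slope:.1e}")                # of order 1e-5 here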

  14. Spatial and temporal variations of fundamental constants

    NASA Astrophysics Data System (ADS)

    Levshakov, S. A.; Agafonova, I. I.; Molaro, P.; Reimers, D.

    2010-11-01

    Spatial and temporal variations in the electron-to-proton mass ratio, μ, and in the fine-structure constant, α, are not present in the Standard Model of particle physics, but they arise quite naturally in grand unification theories, multidimensional theories and, in general, when a coupling of light scalar fields to baryonic matter is considered. The light scalar fields are usually attributed to a negative-pressure substance permeating the entire visible Universe and known as dark energy. This substance is thought to be responsible for the cosmic acceleration at low redshifts, z < 1. A strong dependence of μ and α on the ambient matter density is predicted by chameleon-like scalar field models. Calculations of atomic and molecular spectra show that different transitions have different sensitivities to changes in fundamental constants. Thus, by measuring the relative line positions, ΔV, between such transitions one can probe the hypothetical variability of physical constants. In particular, interstellar molecular clouds can be used to test the matter density dependence of μ, since the gas density in these clouds is ~15 orders of magnitude lower than that in the terrestrial environment. We use the best quality radio spectra of the inversion transition of NH3 (J,K)=(1,1) and rotational transitions of other molecules to estimate the radial velocity offsets, ΔV ≡ Vrot - Vinv. The obtained value of ΔV shows a statistically significant positive shift of 23 ± 4_stat ± 3_sys m s⁻¹ (1σ). Interpreted in terms of a variation of the electron-to-proton mass ratio, this gives Δμ/μ = (22 ± 4_stat ± 3_sys) × 10⁻⁹. A strong constraint on the variation of the quantity F = α²/μ in the Milky Way is found from comparison of the fine-structure transition J=1-0 in atomic carbon C i with the low-J rotational lines in carbon monoxide ¹³CO arising in interstellar molecular clouds: |ΔF/F| < 3 × 10⁻⁷. This yields |Δα/α| < 1.5 × 10⁻⁷ at z = 0. Since extragalactic absorbers have gas densities
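
    The conversion from the measured velocity offset ΔV to Δμ/μ uses the difference in sensitivity coefficients between the NH3 inversion transition and the rotational lines; a quick check of the quoted numbers, assuming the commonly used sensitivity difference of about 3.46 for this method (a value not given in the abstract itself):

      # Convert the velocity offset between NH3 inversion and rotational lines
      # into a fractional change of the electron-to-proton mass ratio.
      c_km_s = 299792.458        # speed of light in km/s
      delta_V_m_s = 23.0         # measured offset from the abstract, in m/s
      delta_K = 3.46             # assumed sensitivity difference (K_inv - K_rot)

      dmu_over_mu = (delta_V_m_s / 1000.0) / (c_km_s * delta_K)
      print(f"dmu/mu ~ {dmu_over_mu:.1e}")   # ~2.2e-8, matching the 22e-9 above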

  15. Resource Letter FC-1: The physics of fundamental constants

    NASA Astrophysics Data System (ADS)

    Mohr, Peter J.; Newell, David B.

    2010-04-01

    This Resource Letter provides a guide to the literature on the physics of fundamental constants and their values as determined within the International System of Units (SI). Journal articles, books, and websites that provide relevant information are surveyed. Literature on redefining the SI in terms of exact values of fundamental constants is also included.

  16. Fundamental constants: The teamwork of precision

    NASA Astrophysics Data System (ADS)

    Myers, Edmund G.

    2014-02-01

    A new value for the atomic mass of the electron is a link in a chain of measurements that will enable a test of the standard model of particle physics with better than part-per-trillion precision. See Letter p.467

  17. Differential Mobility Spectrometry: Preliminary Findings on Determination of Fundamental Constants

    NASA Technical Reports Server (NTRS)

    Limero, Thomas; Cheng, Patti; Boyd, John

    2007-01-01

    The electron capture detector (ECD) has been used for 40+ years (1) to derive fundamental constants such as a compound's electron affinity. Given this historical perspective, it is not surprising that differential mobility spectrometry (DMS) might be used in a like manner. This paper will present data from a gas chromatography (GC)-DMS instrument that illustrates the potential capability of this device to derive fundamental constants for electron-capturing compounds. Potential energy curves will be used to provide possible explanation of the data.

  18. The physical basis of natural units and truly fundamental constants

    NASA Astrophysics Data System (ADS)

    Hsu, L.; Hsu, J. P.

    2012-01-01

    The natural unit system, in which the values of fundamental constants such as c and ℏ are set equal to one and all quantities are expressed in terms of a single unit, is usually introduced as a calculational convenience. However, we demonstrate that this system of natural units has a physical justification as well. We discuss and review the natural units, including definitions for each of the seven base units in the International System of Units (SI) in terms of a single unit. We also review the fundamental constants, which can be classified as units-dependent or units-independent. Units-independent constants, whose values are not determined by human conventions of units, may be interpreted as inherent constants of nature.

  19. Planck intermediate results. XXIV. Constraints on variations in fundamental constants

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombo, L. P. L.; Couchot, F.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J. M.; Dole, H.; Doré, O.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G. W.; Prunet, S.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Ristorcelli, I.; Rocha, G.; Roudier, G.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Uzan, J.-P.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2015-08-01

    Any variation in the fundamental physical constants, more particularly in the fine structure constant, α, or in the mass of the electron, me, affects the recombination history of the Universe and causes an imprint on the cosmic microwave background angular power spectra. We show that the Planck data allow one to improve the constraint on the time variation of the fine structure constant at redshift z ~ 10³ by about a factor of 5 compared to WMAP data, as well as to break the degeneracy with the Hubble constant, H0. In addition to α, we can set a constraint on the variation in the mass of the electron, me, and on the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other cosmological probes. We conclude that independent time variations of the fine structure constant and of the mass of the electron are constrained by Planck to Δα/α = (3.6 ± 3.7) × 10⁻³ and Δme/me = (4 ± 11) × 10⁻³ at the 68% confidence level. We also investigate the possibility of a spatial variation of the fine structure constant. The relative amplitude of a dipolar spatial variation in α (corresponding to a gradient across our Hubble volume) is constrained to be δα/α = (-2.4 ± 3.7) × 10⁻². Appendices are available in electronic form at http://www.aanda.org

  20. Early universe constraints on time variation of fundamental constants

    SciTech Connect

    Landau, Susana J.; Mosquera, Mercedes E.; Scoccola, Claudia G.; Vucetich, Hector

    2008-10-15

    We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, the cosmic microwave background, and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant α and the Higgs vacuum expectation value without assuming any theoretical framework. A variation in the Higgs vacuum expectation value leads to a variation in the electron mass, among other effects. Along the same line, we study the variation of α and the electron mass m_e. In a purely phenomenological fashion, we derive a relationship between both variations.

  1. Dynamical dark energy and variation of fundamental "constants"

    NASA Astrophysics Data System (ADS)

    Stern, Steffen

    2008-12-01

    In this thesis we study the influence of a possible variation of fundamental "constants" on the process of Big Bang Nucleosynthesis (BBN). Our findings are combined with further studies on variations of constants in other physical processes to constrain models of grand unification (GUT) and quintessence. We find that the 7Li problem of BBN can be ameliorated if one allows for varying constants, with varying light quark masses showing a particularly strong influence. Furthermore, we show that recent studies of varying constants are in contradiction with each other and with BBN in the framework of six exemplary GUT scenarios, if one assumes monotonic variation with time. We conclude that there is strong tension between recent claims of varying constants; hence either some claims have to be revised, or much more sophisticated GUT relations (and/or non-monotonic variations) are realized in nature. The methods introduced in this thesis prove to be powerful tools to probe regimes well beyond the Standard Model of particle physics or the concordance model of cosmology, which are currently inaccessible to experiments. Once the first irrefutable proofs of varying constants are available, our method will allow for probing the consistency of models beyond the standard theories, such as GUT or quintessence, and also the compatibility between these models.

  2. Fundamental constants and cosmic vacuum: The micro and macro connection

    NASA Astrophysics Data System (ADS)

    Fritzsch, Harald; Solà, Joan

    2015-06-01

    The idea that the vacuum energy density ρΛ could be time-dependent is a most reasonable one in the expanding Universe; in fact, much more reasonable than just a rigid cosmological constant for the entire cosmic history. Being ρΛ = ρΛ(t) dynamical, it offers a possibility to tackle the cosmological constant problem in its various facets. Furthermore, for a long time (most prominently since Dirac’s first proposal on a time variable gravitational coupling) the possibility that the fundamental “constants” of Nature are slowly drifting with the cosmic expansion has been continuously investigated. In the last two decades, and specially in recent times, mounting experimental evidence attests that this could be the case. In this paper, we consider the possibility that these two groups of facts might be intimately connected, namely that the observed acceleration of the Universe and the possible time variation of the fundamental constants are two manifestations of the same underlying dynamics. We call it: the “micro and macro connection”, and on its basis we expect that the cosmological term in Einstein’s equations, Newton’s coupling and the masses of all the particles in the Universe, both the dark matter (DM) particles and the ordinary baryons and leptons, should all drift with the cosmic expansion. Here, we discuss specific cosmological models realizing such possibility in a way that preserves the principle of covariance of general relativity (GR).

  3. CONSTRAINING FUNDAMENTAL CONSTANT EVOLUTION WITH H I AND OH LINES

    SciTech Connect

    Kanekar, N.; Langston, G. I.; Stocke, J. T.; Carilli, C. L.; Menten, K. M.

    2012-02-20

    We report deep Green Bank Telescope spectroscopy in the redshifted H I 21 cm and OH 18 cm lines from the z = 0.765 absorption system toward PMN J0134-0931. A comparison between the 'satellite' OH 18 cm line redshifts, or between the redshifts of the H I 21 cm and 'main' OH 18 cm lines, is sensitive to changes in different combinations of three fundamental constants: the fine structure constant α, the proton-electron mass ratio μ ≡ m_p/m_e, and the proton g-factor g_p. We find that the satellite OH 18 cm lines are not perfectly conjugate, with both different line shapes and stronger 1612 MHz absorption than 1720 MHz emission. This implies that the satellite lines of this absorber are not suitable to probe fundamental constant evolution. A comparison between the redshifts of the H I 21 cm and OH 18 cm lines, via a multi-Gaussian fit, yields the strong constraint [ΔF/F] = (-5.2 ± 4.3) × 10⁻⁶, where F ≡ g_p[μα²]^1.57 and the error budget includes contributions from both statistical and systematic errors. We thus find no evidence for a change in the constants between z = 0.765 and the present epoch. Incorporating the constraint [Δμ/μ] < 3.6 × 10⁻⁷ from another absorber at a similar redshift and assuming that fractional changes in g_p are much smaller than those in α, we obtain [Δα/α] = (-1.7 ± 1.4) × 10⁻⁶ over a look-back time of 6.7 Gyr.
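
    Since F ≡ g_p[μα²]^1.57, small fractional changes combine linearly as ΔF/F = Δg_p/g_p + 1.57 Δμ/μ + 3.14 Δα/α. A short sketch of how a Δα/α of the quoted size follows when the g_p and μ terms are treated as negligible, as the abstract assumes:

      # Propagate the measured dF/F into dalpha/alpha for F = g_p * (mu * alpha**2)**1.57,
      # neglecting fractional changes in g_p and mu (as assumed above).
      dF_over_F = -5.2e-6
      dF_over_F_err = 4.3e-6
      exponent_alpha = 2 * 1.57          # alpha enters F as alpha**3.14

      dalpha_over_alpha = dF_over_F / exponent_alpha
      dalpha_err = dF_over_F_err / exponent_alpha
      print(f"dalpha/alpha ~ {dalpha_over_alpha:.1e} +/- {dalpha_err:.1e}")
      # ~ -1.7e-6 +/- 1.4e-6, in line with the value quoted in the abstract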

  4. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
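
    The two approaches compared above can be sketched concretely: a weighted least-squares (location-scale) estimate with Birge-ratio inflation of the uncertainty, versus a simple random-effects estimate with an added between-measurement variance. A minimal Python illustration, using illustrative values and uncertainties that loosely resemble determinations of a constant such as G but are not actual data:

      # Weighted mean with Birge-ratio inflation vs. a simple random-effects
      # (DerSimonian-Laird type) estimate, for illustrative data.
      import numpy as np

      x = np.array([6.67430, 6.67408, 6.67554, 6.67191])   # illustrative values
      u = np.array([0.00015, 0.00024, 0.00016, 0.00099])   # standard uncertainties

      w = 1.0 / u**2
      xbar = np.sum(w * x) / np.sum(w)                      # weighted mean
      u_int = 1.0 / np.sqrt(np.sum(w))                      # internal uncertainty
      chi2 = np.sum(w * (x - xbar) ** 2)
      birge = np.sqrt(chi2 / (len(x) - 1))                  # Birge ratio
      u_wls = u_int * max(1.0, birge)                       # inflate if R_B > 1

      # Random effects: add a between-measurement variance tau^2 (method of moments).
      tau2 = max(0.0, (chi2 - (len(x) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1.0 / (u**2 + tau2)
      xbar_re = np.sum(w_re * x) / np.sum(w_re)
      u_re = 1.0 / np.sqrt(np.sum(w_re))

      print(f"WLS + Birge   : {xbar:.5f} +/- {u_wls:.5f} (R_B = {birge:.2f})")
      print(f"Random effects: {xbar_re:.5f} +/- {u_re:.5f}")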

  5. Violation of fundamental symmetries and variation of fundamental constants in atomic phenomena

    SciTech Connect

    Flambaum, V. V.

    2007-06-13

    We present a review of recent works on variation of fundamental constants and violation of parity in atoms and nuclei. Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental 'constants' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We describe recent works devoted to the variation of the fine structure constant α, the strong interaction and the fundamental masses (Higgs vacuum). There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation consists in the comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between very close atomic and molecular energy levels. A new idea is to build a 'nuclear' clock based on a UV transition in the thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude. Measurements of violation of the fundamental symmetries parity (P) and time reversal (T) in atoms allow one to test unification theories in atomic experiments. We have developed an accurate method of many-body calculations: all-orders summation of dominating diagrams in the residual e-e interaction. To calculate QED radiative corrections to energy levels and electromagnetic amplitudes in many-electron atoms and molecules we derived the 'radiative potential' and the low-energy theorem. This method is simple and can be easily incorporated into any many-body theory approach. Using the radiative correction and many-body calculations we obtained the PNC amplitude E_PNC = -0.898(1 ± 0.5%) × 10⁻¹¹ iea_B(-Q_W/N). From the measurements of the PNC amplitude we extracted the Cs weak charge Q_W = -72.66(29)_exp(36)_theor. The

  6. Magnetic vortex filaments, universal scale invariants, and the fundamental constants

    SciTech Connect

    Lerner, E.J.

    1986-12-01

    An explanation for the observed scale invariants in the universe is presented. Force-free magnetic vortex filaments are proposed to play a crucial role in the formation of superclusters, clusters, galaxies, and stars by initiating gravitational compression. The critical velocities involved in vortex formation are shown to explain the observed constant orbital velocities of clusters, galaxies, and stars. A second scale invariant, nr = C, where n is particle density and r is the average distance between objects, is also noted here and explained by the model. The model predicts a maximum size for magnetic vortices, which is comparable to the dimensions of the observable universe, and a density for such vortices which is close to that actually observed, eliminating any theoretical need for missing mass. On this basis, an alternative cosmology to that of the "Big Bang" is presented, one which provides a much better fit to recent observations of large-scale structure and motion. The model suggests scale invariants between microscopic and cosmological scales, leading to the derivation of simple analytical expressions for the fundamental constants G, m_p/m_e, and e²/hc. We conclude that these expressions indicate the existence of vortex phenomena on the particle level.

  7. A Fundamental Breakdown. Part II: Manipulative Skills

    ERIC Educational Resources Information Center

    Townsend, J. Scott; Mohr, Derek J.

    2005-01-01

    In the May, 2005, issue of "TEPE," the "Research to Practice" section initiated a two-part series focused on assessing fundamental locomotor and manipulative skills. The series was generated in response to research by Pappa, Evanggelinou, & Karabourniotis (2005), recommending that curricular programming in physical education at the elementary…

  8. Search for variation of fundamental constants and violations of fundamental symmetries using isotope comparisons

    SciTech Connect

    Berengut, J. C.; Flambaum, V. V.; Kava, E. M.

    2011-10-15

    Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper, we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark-mass variation, in a number of isotopes of experimental interest including ²⁰¹,¹⁹⁹Hg and ⁸⁷,⁸⁵Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions on variation in nuclear radius, which in turn is proportional to any change in quark mass. Our calculations of expectation values of proton and neutron spin in nuclei are also needed to interpret measurements of violations of fundamental symmetries.

  9. Corrections to fundamental constants from photoelectric observations of lunar occultations

    NASA Astrophysics Data System (ADS)

    Rossello, G.

    1982-12-01

    A catalog of photoelectric occultations, which are more accurate than visual observations, is presented along with an analysis of the occultations intended to correct the FK4 stellar reference frame and lunar theory constants. A constant correction at the epoch 1969.0 of +0.87 ± 0.06 to the FK4 system is consistent with those obtained by other authors, and the corrections to the semidiameter and parallactic inequality are in accord with values recently obtained by Morrison and Appleby (1981).

  10. Big Bang nucleosynthesis as a probe of varying fundamental ``constants''

    NASA Astrophysics Data System (ADS)

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2007-11-01

    We analyze the effect of variation of fundamental couplings and mass scales on primordial nucleosynthesis in a systematic way. The first step establishes the response of primordial element abundances to the variation of a large number of nuclear physics parameters, including nuclear binding energies. We find a strong influence of the n-p mass difference, of the nucleon mass and of A = 3,4,7 binding energies. A second step relates the nuclear parameters to the parameters of the Standard Model of particle physics. The deuterium, and, above all, 7Li abundances depend strongly on the average light quark mass. We calculate the behaviour of abundances when variations of fundamental parameters obey relations arising from grand unification. We also discuss the possibility of a substantial shift in the lithium abundance while the deuterium and 4He abundances are only weakly affected.

  11. Fundamental Constants as Monitors of Particle Physics and Dark Energy

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger

    2016-03-01

    This contribution considers the constraints on particle physics and dark energy parameter space imposed by the astronomical observational constraints on the variation of the proton to electron mass ratio μ and the fine structure constant α. These constraints impose limits on the temporal variation of these parameters on a time scale greater than half the age of the universe, a time scale inaccessible to laboratory facilities such as the Large Hadron Collider. The limits on the variation of μ and α constrain combinations of the QCD scale, the Higgs VEV and the Yukawa coupling on the particle physics side, and a combination of the temporal variation of a rolling scalar field and its coupling to the constants on the dark energy side.

  12. Progress in the accuracy of the fundamental physical constants: 2010 CODATA recommended values

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2013-09-01

    Every four years, the CODATA Task Group on Fundamental Constants presents tables of recommended values of the fundamental physical constants. Recently, the 2010 CODATA recommended values (Mohr P J, Taylor B N, and Newell D B "CODATA recommended values of the fundamental physical constants: 2010" Rev. Mod. Phys. 84 1527 (2012)), based on global data up to 31 December 2010, were published. In the present review, we briefly analyze the new recommended values, as well as the new original data on which the determination is based. To facilitate the consideration, the data are subdivided into several groups. New original theoretical and experimental results are discussed for each group separately. Special attention is paid to experimental and theoretical progress in the determination of the Rydberg constant R∞, the electron-proton mass ratio m_e/m_p, the fine-structure constant α, the Planck constant h, the Boltzmann constant k, the Newtonian constant of gravitation G, and the anomalous magnetic moment of the muon a_μ. In conclusion, the prospects of redefining units of the International System (SI) in terms of fundamental physical constants, which is currently under active discussion by the metrological community, are considered. The very possibility and efficiency of a practical realization of such a redefinition directly depends on the status of the determination of the fundamental constants.
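
    The CODATA recommended values discussed here are also what ships with common scientific libraries; a short sketch of looking up several of the constants named above via scipy.constants (note that the bundled adjustment is whichever CODATA release the installed scipy version carries, not necessarily the 2010 one reviewed here):

      # Look up CODATA recommended values as (value, unit, standard uncertainty).
      from scipy.constants import physical_constants

      for name in ["Rydberg constant",
                   "electron-proton mass ratio",
                   "fine-structure constant",
                   "Planck constant",
                   "Boltzmann constant",
                   "Newtonian constant of gravitation"]:
          value, unit, uncertainty = physical_constants[name]
          print(f"{name}: {value:g} {unit} (u = {uncertainty:g})")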

  13. Constraints on alternate universes: stars and habitable planets with different fundamental constants

    NASA Astrophysics Data System (ADS)

    Adams, Fred C.

    2016-02-01

    This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. We focus on the fine structure constant α and the gravitational structure constant αG, and find the region in the α-αG plane that supports working stars and habitable planets. This work is motivated, in part, by the possibility that different versions of the laws of physics could be realized within other universes. The following constraints are enforced: [A] long-lived stable nuclear burning stars exist, [B] planetary surface temperatures are hot enough to support chemical reactions, [C] stellar lifetimes are long enough to allow biological evolution, [D] planets are massive enough to maintain atmospheres, [E] planets are small enough in mass to remain non-degenerate, [F] planets are massive enough to support sufficiently complex biospheres, [G] planets are smaller in mass than their host stars, and [H] stars are smaller in mass than their host galaxies. This paper delineates the portion of the α-αG plane that satisfies all of these constraints. The results indicate that viable universes—with working stars and habitable planets—can exist within a parameter space where the structure constants α and αG vary by several orders of magnitude. These constraints also provide upper bounds on the structure constants (α,αG) and their ratio. We find the limit αG/α ≲ 10⁻³⁴, which shows that habitable universes must have a large hierarchy between the strengths of the gravitational force and the electromagnetic force.
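
    For orientation, the gravitational structure constant in our own Universe is αG = Gm_p²/(ħc) ≈ 6 × 10⁻³⁹, so αG/α sits far below the quoted bound; a quick numerical check, taking the conventional proton-mass definition of αG (the abstract does not spell out its convention):

      # Evaluate alpha_G = G * m_p**2 / (hbar * c) and compare with alpha.
      from scipy.constants import G, proton_mass, hbar, c, fine_structure

      alpha_G = G * proton_mass**2 / (hbar * c)
      print(f"alpha_G       = {alpha_G:.2e}")                     # ~5.9e-39
      print(f"alpha_G/alpha = {alpha_G / fine_structure:.2e}")    # ~8e-37 < 1e-34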

  14. [Aerosinusitis: part 1: Fundamentals, pathophysiology and prophylaxis].

    PubMed

    Weber, R; Kühnel, T; Graf, J; Hosemann, W

    2014-01-01

    The relevance of aerosinusitis stems from the high number of flight passengers and the impaired fitness for work of the flight personnel. The frontal sinus is more frequently affected than the maxillary sinus and the condition generally occurs during descent. Sinonasal diseases and anatomic variations leading to obstruction of paranasal sinus ventilation favor the development of aerosinusitis. This Continuing Medical Education (CME) article is based on selective literature searches of the PubMed database (search terms: "aerosinusitis", "barosinusitis", "barotrauma" AND "sinus", "barotrauma" AND "sinusitis", "sinusitis" AND "flying" OR "aviator"). Additionally, currently available monographs and further articles that could be identified based on the publication reviews were also included. Part 1 presents the pathophysiology, symptoms, risk factors, epidemiology and prophylaxis of aerosinusitis. In part 2, diagnosis, conservative and surgical treatment will be discussed. PMID:24337391

  15. Variation of Fundamental Constants from the Big Bang to Atomic Clocks:. Theory and Observations

    NASA Astrophysics Data System (ADS)

    Flambaum, V. V.; Berengut, J. C.

    2009-04-01

    Theories unifying gravity with other interactions suggest the possibility of temporal and spatial variation of the fundamental "constants" in an expanding Universe. In this review we discuss the effects of variation of the fine-structure constant and fundamental masses on measurements covering the lifespan of the Universe from a few minutes after Big Bang to the present time. Measurements give controversial results, including some hints for variation in Big Bang nucleosynthesis and quasar absorption spectra data. Furthermore there are very promising methods to search for the variation of fundamental constants by comparison of different atomic clocks. Huge enhancements of the relative variation effects happen in transitions between accidentally degenerate nuclear, atomic, and molecular energy levels.

  16. Effects of the variation of fundamental constants on Pop III stellar evolution

    SciTech Connect

    Coc, A.; Descouvemont, P.; Uzan, J.-Ph.; Vangioni, E.

    2010-08-12

    The effect of variations of the fundamental constants on the thermonuclear rate of the triple-alpha reaction, ⁴He(αα, γ)¹²C, that bridges the gap between ⁴He and ¹²C, is investigated. We have followed the evolution of 15 and 60 M⊙ zero-metallicity stellar models, up to the end of core helium burning. They are assumed to be representative of the first (Population III) stars. The calculated oxygen and carbon abundances resulting from helium burning can then be used to constrain the variation of the fundamental constants.

  17. Protonated nitrous oxide, NNOH+: fundamental vibrational frequencies and spectroscopic constants from quartic force fields.

    PubMed

    Huang, Xinchuan; Fortenberry, Ryan C; Lee, Timothy J

    2013-08-28

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B0 and C0 rotational constants are within 6 MHz of their experimental values and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH+ is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH+ and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the interstellar medium and the laboratory. PMID:24007003

  18. Protonated Nitrous Oxide, NNOH(+): Fundamental Vibrational Frequencies and Spectroscopic Constants from Quartic Force Fields

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.

    2013-01-01

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B0 and C0 rotational constants are within 6 MHz of their experimental values and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH+ is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH+ and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.

  19. Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+.

    PubMed

    Biesheuvel, J; Karr, J-Ph; Hilico, L; Eikema, K S E; Ubachs, W; Koelemeij, J C J

    2016-01-01

    The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886

  20. Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+

    PubMed Central

    Biesheuvel, J.; Karr, J.-Ph.; Hilico, L.; Eikema, K. S. E.; Ubachs, W.; Koelemeij, J. C. J.

    2016-01-01

    The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886

  1. Gyromagnetic factors and atomic clock constraints on the variation of fundamental constants

    SciTech Connect

    Luo Feng; Olive, Keith A.; Uzan, Jean-Philippe

    2011-11-01

    We consider the effect of the coupled variations of fundamental constants on the nucleon magnetic moment. The nucleon g-factor enters into the interpretation of the measurements of variations in the fine-structure constant, α, in both the laboratory (through atomic clock measurements) and in astrophysical systems (e.g. through measurements of the 21 cm transitions). A null result can be translated into a limit on the variation of a set of fundamental constants, which is usually reduced to α. However, in specific models, particularly unification models, changes in α are always accompanied by corresponding changes in other fundamental quantities such as the QCD scale, Λ_QCD. This work tracks the changes in the nucleon g-factors induced by changes in Λ_QCD and the light quark masses. In principle, these coupled variations can improve the bounds on the variation of α by an order of magnitude from existing atomic clock and astrophysical measurements. Unfortunately, the calculation of the dependence of g-factors on fundamental parameters is notoriously model-dependent.

  2. Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?

    PubMed

    Stadnik, Y V; Flambaum, V V

    2015-11-13

    We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ, which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ, produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial ⁴He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial ⁴He abundance. PMID:26613429
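
    As a schematic illustration of the mechanism described above (the notation is illustrative; Λ'γ denotes an effective coupling scale and is not taken from the abstract): a quadratic coupling of the oscillating field to the photon shifts α by an amount proportional to ϕ², which contains both a slowly evolving piece and a piece oscillating at twice the field's Compton frequency:

      \phi(t) \simeq \phi_0 \cos(m_\phi t), \qquad
      \frac{\delta\alpha}{\alpha} \;\propto\; \frac{\phi^{2}}{\Lambda_\gamma'^{\,2}}
        \;=\; \frac{\phi_0^{2}}{2\,\Lambda_\gamma'^{\,2}}\left[1 + \cos(2 m_\phi t)\right]

    The first term, through the cosmological evolution of the amplitude ϕ0, gives the "slow" drift, while the second gives oscillating variations at frequency 2m_ϕ; these are the two effects that the BBN and dysprosium spectroscopy data above constrain.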

  3. Dependence of macrophysical phenomena on the values of the fundamental constants

    NASA Astrophysics Data System (ADS)

    Press, W. H.; Lightman, A. P.

    1983-12-01

    Using simple arguments, it is considered how the fundamental constants determine the scales of various macroscopic phenomena, including the properties of solid matter; the distinction between rocks, asteroids, planets, and stars; the conditions on habitable planets; the length of the day and year; and the size and athletic ability of human beings. Most of the results, where testable, are accurate to within a couple of orders of magnitude.

  4. A search for varying fundamental constants using hertz-level frequency measurements of cold CH molecules

    PubMed Central

    Truppe, S.; Hendricks, R.J.; Tokunaga, S.K.; Lewandowski, H.J.; Kozlov, M.G.; Henkel, Christian; Hinds, E.A.; Tarbutt, M.R.

    2013-01-01

    Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α=(0.3±1.1) × 10−7, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ=(−0.7±2.2) × 10−7. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439

  5. A search for varying fundamental constants using hertz-level frequency measurements of cold CH molecules.

    PubMed

    Truppe, S; Hendricks, R J; Tokunaga, S K; Lewandowski, H J; Kozlov, M G; Henkel, Christian; Hinds, E A; Tarbutt, M R

    2013-01-01

    Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α=(0.3 ± 1.1) × 10⁻⁷, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ=(-0.7 ± 2.2) × 10⁻⁷. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439

  6. Competing bounds on the present-day time variation of fundamental constants

    SciTech Connect

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2009-04-15

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The 'winner in sensitivity' depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  7. A Different Look at Dark Energy and the Time Variation of Fundamental Constants

    SciTech Connect

    Weinstein, Marvin; /SLAC

    2011-02-07

    This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lemaitre-Robertson-Walker (FRW) cosmology implies very different things than it does for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to per Planck volume), and the relationship of the vacuum energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints, knowing the difference between w and minus one allows one to predict the time variation of w. It is argued that this is a robust result, and that if this prediction fails, the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-moving cutoff implies a predictable time variation of fundamental constants. This is accompanied by a general discussion of why this is so, of the strongest phenomenological limits on this predicted variation, and of which limits are in tension with the idea of a co-moving cutoff. It is pointed out, however, that a careful comparison of the predicted time variation of fundamental constants is not possible without restricting to a particular model field theory, which is not done in this paper.

  8. Spectroscopy of antiprotonic helium atoms and its contribution to the fundamental physical constants

    PubMed Central

    Hayano, Ryugo S.

    2010-01-01

    Antiprotonic helium atom, a metastable neutral system consisting of an antiproton, an electron and a helium nucleus, was serendipitously discovered, and has been studied at CERN’s antiproton decelerator facility. Its transition frequencies have recently been measured to nine digits of precision by laser spectroscopy. By comparing these experimental results with three-body QED calculations, the antiproton-to-electron mass ratio was determined as 1836.152674(5). This result contributed to the CODATA recommended values of the fundamental physical constants. PMID:20075605

  9. METHODOLOGICAL NOTES: On the redefinition of the kilogram and ampere in terms of fundamental physical constants

    NASA Astrophysics Data System (ADS)

    Karshenboim, Savelii G.

    2006-09-01

    In the summer of 2005, a meeting of the Consultative Committee for Units of the International Committee on Weights and Measures took place. One of the topics discussed at the meeting was a possible redefinition of the kilogram in terms of fundamental physical constants — a question of relevance to a wide circle of specialists, from school teachers to physicists performing research in a great variety of fields. In this paper, the current situation regarding this question is briefly reviewed and its discussion at the Consultative Committee for Units and other bodies involved is covered. Other issues related to the International System of Units (SI) and broached at the meeting are also discussed.

  10. Fundamentals of Physics, Part 5 (Chapters 38-44)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-05-01

    Chapter 38. Photons and Matter Waves. Chapter 39. More About Matter Waves. Chapter 40. All About Atoms. Chapter 41. Conduction of Electricity in Solids. Chapter 42. Nuclear Physics. Chapter 43. Energy from the Nucleus. Chapter 44. Quarks, Leptons, and the Big Bang. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions, Exercises, and Problems. Index.

  11. Manifestations of Dark matter and variation of the fundamental constants in atomic and astrophysical phenomena

    NASA Astrophysics Data System (ADS)

    Flambaum, Victor

    2016-05-01

    Low-mass bosonic dark matter particles produced after the Big Bang form a classical field and/or topological defects. In contrast to traditional dark matter searches, effects produced by the interaction of ordinary matter with this field and these defects may be first order in the underlying interaction strength, rather than second or fourth order as in a traditional dark matter search. This may give a huge advantage, since the dark matter interaction constant is extremely small. The interaction between the density of the dark matter particles and ordinary matter produces both 'slow' cosmological evolution and oscillating variations of the fundamental constants, including the fine structure constant alpha and particle masses. Recent atomic dysprosium spectroscopy measurements and the primordial helium abundance data allowed us to improve on existing constraints on the quadratic interactions of scalar dark matter with the photon, electron and light quarks by up to 15 orders of magnitude. Limits on the linear and quadratic interactions of dark matter with the W and Z bosons have been obtained for the first time. In addition to traditional methods to search for the variation of the fundamental constants (atomic clocks, quasar spectra, Big Bang nucleosynthesis, etc.), we discuss variations in the phase shifts produced in laser/maser interferometers (such as the giant LIGO, Virgo, GEO600 and TAMA300 detectors, and the table-top silicon cavity and sapphire interferometers), changes in pulsar rotational frequencies (which may already have been observed in pulsar glitches), non-gravitational lensing of cosmic radiation and the time delay of pulsar signals. Other effects of dark matter and dark energy include apparent violations of fundamental symmetries: oscillating or transient atomic electric dipole moments, precession of electron and nuclear spins about the direction of Earth's motion through an axion condensate, axion-mediated spin-gravity couplings, and violation of Lorentz invariance.

  12. Constraints on Changes in Fundamental Constants from a Cosmologically Distant OH Absorber or Emitter

    SciTech Connect

    Kanekar, N.; Carilli, C.L.; Langston, G.I.; Rocha, G.; Combes, F.; Subrahmanyan, R.; Stocke, J.T.; Menten, K.M.; Briggs, F.H.; Wiklind, T.

    2005-12-31

    We have detected the four 18 cm OH lines from the z ≈ 0.765 gravitational lens toward PMN J0134-0931. The 1612 and 1720 MHz lines are in conjugate absorption and emission, providing a laboratory to test the evolution of fundamental constants over a large lookback time. We compare the HI and OH main line absorption redshifts of the different components in the z ≈ 0.765 absorber and the z ≈ 0.685 lens toward B0218+357 to place stringent constraints on changes in F ≡ g_p[α²/μ]^1.57. We obtain [ΔF/F] = (0.44 ± 0.36 (stat) ± 1.0 (syst)) × 10⁻⁵, consistent with no evolution over the redshift range 0 < z ≤ 0.7.

  13. Constraints on changes in fundamental constants from a cosmologically distant OH absorber or emitter.

    PubMed

    Kanekar, N; Carilli, C L; Langston, G I; Rocha, G; Combes, F; Subrahmanyan, R; Stocke, J T; Menten, K M; Briggs, F H; Wiklind, T

    2005-12-31

    We have detected the four 18 cm OH lines from the z ≈ 0.765 gravitational lens toward PMN J0134-0931. The 1612 and 1720 MHz lines are in conjugate absorption and emission, providing a laboratory to test the evolution of fundamental constants over a large lookback time. We compare the HI and OH main line absorption redshifts of the different components in the z ≈ 0.765 absorber and the z ≈ 0.685 lens toward B0218+357 to place stringent constraints on changes in F ≡ g_p[α²/μ]^1.57. We obtain [ΔF/F] = (0.44 ± 0.36 (stat) ± 1.0 (syst)) × 10⁻⁵, consistent with no evolution over the redshift range 0 < z ≤ 0.7. The measurements have a 2σ sensitivity of [Δα/α] < 6.7 × 10⁻⁶ or [Δμ/μ] < 1.4 × 10⁻⁵ to fractional changes in α and μ over a period of approximately 6.5 Gyr, half the age of the Universe. These are among the most sensitive constraints on changes in μ. PMID:16486334
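    The single-parameter sensitivities quoted in these two records follow from straightforward error propagation on F ≡ g_p[α²/μ]^1.57. The sketch below reproduces them from the measured ΔF/F, assuming g_p is held fixed, only one of α or μ varies at a time, and the statistical and systematic errors are combined in quadrature (that last step is an assumption of this sketch).

      import math

      # Measured constraint from the record above: [dF/F] = (0.44 +/- 0.36 (stat) +/- 1.0 (syst)) x 1e-5
      stat, syst = 0.36e-5, 1.0e-5
      two_sigma_F = 2.0 * math.hypot(stat, syst)   # ~2.1e-5, errors combined in quadrature (assumption)

      # F = g_p * (alpha**2 / mu)**1.57, so d ln F = d ln g_p + 1.57*(2 d ln alpha - d ln mu).
      # Holding g_p fixed and letting only one parameter vary at a time:
      dalpha_over_alpha = two_sigma_F / (2 * 1.57)   # ~6.8e-6 (record quotes < 6.7e-6)
      dmu_over_mu       = two_sigma_F / 1.57         # ~1.4e-5 (record quotes < 1.4e-5)

      print(f"2-sigma |dF/F|   ~ {two_sigma_F:.2e}")
      print(f"alpha-only bound ~ {dalpha_over_alpha:.2e}")
      print(f"mu-only bound    ~ {dmu_over_mu:.2e}")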

  14. Diagnostic and interventional musculoskeletal ultrasound: part 1. Fundamentals.

    PubMed

    Smith, Jay; Finnoff, Jonathan T

    2009-01-01

    Musculoskeletal ultrasound involves the use of high-frequency sound waves to image soft tissues and bony structures in the body for the purposes of diagnosing pathology or guiding real-time interventional procedures. Recently, an increasing number of physicians have integrated musculoskeletal ultrasound into their practices to facilitate patient care. Technological advancements, improved portability, and reduced costs continue to drive the proliferation of ultrasound in clinical medicine. This increased interest creates a need for education pertaining to all aspects of musculoskeletal ultrasound. The primary purpose of this article is to review diagnostic ultrasound technology and its potential clinical applications in the evaluation and treatment of patients with neurologic and musculoskeletal disorders. After reviewing this article, physicians should be able to (1) list the advantages and disadvantages of ultrasound compared with other available imaging modalities, (2) describe how ultrasound machines produce images using sound waves, (3) discuss the steps necessary to acquire and optimize an ultrasound image, (4) understand the different ultrasound appearances of tendons, nerves, muscles, ligaments, blood vessels, and bones, and (5) identify multiple applications for diagnostic and interventional musculoskeletal ultrasound in musculoskeletal practice. Part 1 of this 2-part article reviews the fundamentals of clinical ultrasonographic imaging, including relevant physics, equipment, training, image optimization, and scanning principles for diagnostic and interventional purposes. PMID:19627875

  15. Fundamentals of Physics, Part 1 (Chapters 1-11)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    10-8 Torque. 10-9 Newton's Second Law for Rotation. 10-10 Work and Rotational Kinetic Energy. Review & Summary. Questions. Problems. Chapter 11. Rolling, Torque, and Angular Momentum. When a jet-powered car became supersonic in setting the land-speed record, what was the danger to the wheels? 11-1 What Is Physics? 11-2 Rolling as Translation and Rotation Combined. 11-3 The Kinetic Energy of Rolling. 11-4 The Forces of Rolling. 11-5 The Yo-Yo. 11-6 Torque Revisited. 11-7 Angular Momentum. 11-8 Newton's Second Law in Angular Form. 11-9 The Angular Momentum of a System of Particles. 11-10 The Angular Momentum of a Rigid Body Rotating About a Fixed Axis. 11-11 Conservation of Angular Momentum. 11-12 Precession of a Gyroscope. Review & Summary. Questions. Problems. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  16. Data stewardship - a fundamental part of the scientific method (Invited)

    NASA Astrophysics Data System (ADS)

    Foster, C.; Ross, J.; Wyborn, L. A.

    2013-12-01

    This paper emphasises the importance of data stewardship as a fundamental part of the scientific method, and the need to effect cultural change to ensure engagement by earth scientists. It is differentiated from the science of data stewardship per se. Earth System science generates vast quantities of data, and in the past, data analysis has been constrained by compute power, such that sub-sampling of data often provided the only way to reach an outcome. This is analogous to Kahneman's System 1 heuristic, with its simplistic and often erroneous outcomes. The development of HPC has liberated earth sciences such that the complexity and heterogeneity of natural systems can be utilised in modelling at any scale, global, or regional, or local; for example, movement of crustal fluids. Paradoxically, now that compute power is available, it is the stewardship of the data that is presenting the main challenges. There is a wide spectrum of issues: from effectively handling and accessing acquired data volumes [e.g. satellite feeds per day/hour]; through agreed taxonomy to effect machine to machine analyses; to idiosyncratic approaches by individual scientists. Except for the latter, most agree that data stewardship is essential. Indeed it is an essential part of the science workflow. As science struggles to engage and inform on issues of community importance, such as shale gas and fraccing, all parties must have equal access to data used for decision making; without that, there will be no social licence to operate or indeed access to additional science funding (Heidorn, 2008). The stewardship of scientific data is an essential part of the science process; but often it is regarded, wrongly, as entirely in the domain of data custodians or stewards. Geoscience Australia has developed a set of six principles that apply to all science activities within the agency: Relevance to Government; Collaborative science; Quality science; Transparent science; Communicated science; Sustained

  17. Fundamentals of Trapped Ion Mobility Spectrometry Part II: Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Silveira, Joshua A.; Michelmann, Karsten; Ridgeway, Mark E.; Park, Melvin A.

    2016-04-01

    Trapped ion mobility spectrometry (TIMS) is a new high resolution (R up to ~300) separation technique that utilizes an electric field to hold ions stationary against a moving gas. Recently, an analytical model for TIMS was derived and, in part, experimentally verified. A central, but not yet fully explored, component of the model involves the fluid dynamics at work. The present study characterizes the fluid dynamics in TIMS using simulations and ion mobility experiments. Results indicate that subsonic laminar flow develops in the analyzer, with pressure-dependent gas velocities between ~120 and 170 m/s measured at the position of ion elution. One of the key philosophical questions addressed is: how can mobility be measured in a dynamic system wherein the gas is expanding and its velocity is changing? We noted previously that the analytically useful work is primarily done on ions as they traverse the electric field gradient plateau in the analyzer. In the present work, we show that the position-dependent change in gas velocity on the plateau is balanced by a change in pressure and temperature, ultimately resulting in near position-independent drag force. That the drag force, and related variables, are nearly constant allows for the use of relatively simple equations to describe TIMS behavior. Nonetheless, we derive a more comprehensive model, which accounts for the spatial dependence of the flow variables. Experimental resolving power trends were found to be in close agreement with the theoretical dependence of the drag force, thus validating another principal component of TIMS theory.

  18. Fundamentals of Trapped Ion Mobility Spectrometry Part II: Fluid Dynamics.

    PubMed

    Silveira, Joshua A; Michelmann, Karsten; Ridgeway, Mark E; Park, Melvin A

    2016-04-01

    Trapped ion mobility spectrometry (TIMS) is a new high resolution (R up to ~300) separation technique that utilizes an electric field to hold ions stationary against a moving gas. Recently, an analytical model for TIMS was derived and, in part, experimentally verified. A central, but not yet fully explored, component of the model involves the fluid dynamics at work. The present study characterizes the fluid dynamics in TIMS using simulations and ion mobility experiments. Results indicate that subsonic laminar flow develops in the analyzer, with pressure-dependent gas velocities between ~120 and 170 m/s measured at the position of ion elution. One of the key philosophical questions addressed is: how can mobility be measured in a dynamic system wherein the gas is expanding and its velocity is changing? We noted previously that the analytically useful work is primarily done on ions as they traverse the electric field gradient plateau in the analyzer. In the present work, we show that the position-dependent change in gas velocity on the plateau is balanced by a change in pressure and temperature, ultimately resulting in near position-independent drag force. That the drag force, and related variables, are nearly constant allows for the use of relatively simple equations to describe TIMS behavior. Nonetheless, we derive a more comprehensive model, which accounts for the spatial dependence of the flow variables. Experimental resolving power trends were found to be in close agreement with the theoretical dependence of the drag force, thus validating another principal component of TIMS theory. PMID:26864793

  19. Fundamentals of Physics, Part 4 (Chapters 34-38)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-04-01

    of Time. 37-6 The Relativity of Length. 37-7 The Lorentz Transformation. 37-8 Some Consequences of the Lorentz Equations. 37-9 The Relativity of Velocities. 37-10 Doppler Effect for Light. 37-11 A New Look at Momentum. 37-12 A New Look at Energy. Review & Summary. Questions. Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  20. Fundamental constants in the theory of two-dimensional uniform spanning trees

    NASA Astrophysics Data System (ADS)

    Poghosyan, V. S.; Priezzhev, V. B.

    2016-06-01

    Three characteristics of two-dimensional uniform spanning trees are nontrivially related to one another: the average density of a sandpile, the looping constant of a square lattice, and the return probability of a loop-erased random walk. We briefly trace the long history of the discovery of their unexpected rational values.

  1. Rovibrational spectroscopic constants and fundamental vibrational frequencies for isotopologues of cyclic and bent singlet HC2N isomers

    SciTech Connect

    Inostroza, Natalia; Fortenberry, Ryan C.; Lee, Timothy J.; Huang, Xinchuan

    2013-12-01

    Through established, highly accurate ab initio quartic force fields, a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A′ HCCN.

  2. Rovibrational Spectroscopic Constants and Fundamental Vibrational Frequencies for Isotopologues of Cyclic and Bent Singlet HC2N isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Fortenberry, Ryan C.; Huang, Xinchuan; Lee, Timothy J.

    2013-01-01

    Through established, highly-accurate ab initio quartic force fields (QFFs), a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1 to 3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly-dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A′ HCCN.
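    For orientation, the VPT2 step mentioned in these two records turns harmonic frequencies ω_i and anharmonicity constants x_ij into fundamentals through the standard non-resonant expression ν_i = ω_i + 2x_ii + ½ Σ_{j≠i} x_ij. The snippet below evaluates that formula with purely illustrative numbers (not the HCCN force field).

      import numpy as np

      # Standard VPT2 expression for fundamental frequencies (non-resonant case):
      #   nu_i = omega_i + 2*x[i,i] + 0.5 * sum_{j != i} x[i,j]
      # The numbers below are illustrative placeholders, not the HCCN quartic force field.
      omega = np.array([3300.0, 1700.0, 900.0])      # harmonic frequencies, cm^-1
      x = np.array([[-60.0, -15.0, -5.0],            # anharmonicity constants, cm^-1
                    [-15.0, -10.0, -3.0],
                    [ -5.0,  -3.0, -2.0]])

      def vpt2_fundamentals(omega, x):
          n = len(omega)
          nu = omega + 2.0 * np.diag(x)              # diagonal anharmonic correction
          for i in range(n):
              nu[i] += 0.5 * (x[i].sum() - x[i, i])  # off-diagonal couplings
          return nu

      print(vpt2_fundamentals(omega, x))             # anharmonic fundamentals, cm^-1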

  3. On uniform constants of strong uniqueness in Chebyshev approximations and fundamental results of N. G. Chebotarev

    NASA Astrophysics Data System (ADS)

    Marinov, Anatolii V.

    2011-06-01

    In the problem of the best uniform approximation of a continuous real-valued function f ∈ C(Q) in a finite-dimensional Chebyshev subspace M ⊂ C(Q), where Q is a compactum, one studies the positivity of the uniform strong uniqueness constant γ(N) = inf{γ(f) : f ∈ N}. Here γ(f) stands for the strong uniqueness constant of an element f_M ∈ M of best approximation of f, that is, the largest constant γ > 0 such that the strong uniqueness inequality ‖f − φ‖ ≥ ‖f − f_M‖ + γ‖f_M − φ‖ holds for any φ ∈ M. We obtain a characterization of the subsets N ⊂ C(Q) for which there is a neighbourhood O(N) of N satisfying the condition γ(O(N)) > 0. The pioneering results of N. G. Chebotarev were published in 1943 and concerned the sharpness of the minimum in minimax problems and the strong uniqueness of algebraic polynomials of best approximation. They seem to have been neglected by the specialists, and we discuss them in detail.

  4. STOL aircraft transient ground effects. Part 1: Fundamental analytical study

    NASA Technical Reports Server (NTRS)

    Goldhammer, M. I.; Crowder, J. P.; Smyth, D. N.

    1975-01-01

    The first phases of a fundamental analytical study of STOL ground effects were presented. Ground effects were studied in two dimensions to establish the importance of nonlinear effects, to examine transient aspects of ascent and descent near the ground, and to study the modelling of the jet impingement on the ground. Powered lift system effects were treated using the jet-flap analogy. The status of a three-dimensional jet-wing ground effect method was presented. It was shown, for two-dimensional unblown airfoils, that the transient effects are small and are primarily due to airfoil/freestream/ground orientation rather than to unsteady effects. The three-dimensional study showed phenomena similar to the two-dimensional results. For unblown wings, the wing/freestream/ground orientation effects were shown to be of the same order of magnitude as for unblown airfoils. This may be used to study the nonplanar, nonlinear, jet-wing ground effect.

  5. New Limits on Coupling of Fundamental Constants to Gravity Using ⁸⁷Sr Optical Lattice Clocks

    SciTech Connect

    Blatt, S.; Ludlow, A. D.; Campbell, G. K.; Thomsen, J. W.; Zelevinsky, T.; Boyd, M. M.; Ye, J.; Baillard, X.; Fouche, M.; Le Targat, R.; Brusch, A.; Lemonde, P.; Takamoto, M.; Hong, F.-L.; Katori, H.; Flambaum, V. V.

    2008-04-11

    The ¹S₀-³P₀ clock transition frequency ν_Sr in neutral ⁸⁷Sr has been measured relative to the Cs standard by three independent laboratories in Boulder, Paris, and Tokyo over the last three years. The agreement on the 1×10⁻¹⁵ level makes ν_Sr the best agreed-upon optical atomic frequency. We combine periodic variations in the ⁸⁷Sr clock frequency with ¹⁹⁹Hg⁺ and H-maser data to test local position invariance by obtaining the strongest limits to date on gravitational-coupling coefficients for the fine-structure constant α, electron-proton mass ratio μ, and light quark mass. Furthermore, after ¹⁹⁹Hg⁺, ¹⁷¹Yb⁺, and H, we add ⁸⁷Sr as the fourth optical atomic clock species to enhance constraints on yearly drifts of α and μ.

  6. Identification of Parts Failures. FOS: Fundamentals of Service.

    ERIC Educational Resources Information Center

    John Deere Co., Moline, IL.

    This parts failures identification manual is one of a series of power mechanics texts and visual aids covering theory of operation, diagnosis of trouble problems, and repair of automotive and off-the-road construction and agricultural equipment. Materials provide basic information with many illustrations for use by vocational students and teachers…

  7. Frequency ratio of two optical clock transitions in 171Yb+ and constraints on the time variation of fundamental constants.

    PubMed

    Godun, R M; Nisbet-Jones, P B R; Jones, J M; King, S A; Johnson, L A M; Margolis, H S; Szymaniec, K; Lea, S N; Bongs, K; Gill, P

    2014-11-21

    Singly ionized ytterbium, with ultranarrow optical clock transitions at 467 and 436 nm, is a convenient system for the realization of optical atomic clocks and tests of present-day variation of fundamental constants. We present the first direct measurement of the frequency ratio of these two clock transitions, without reference to a cesium primary standard, and using the same single ion of 171Yb+. The absolute frequencies of both transitions are also presented, each with a relative standard uncertainty of 6×10⁻¹⁶. Combining our results with those from other experiments, we report a threefold improvement in the constraint on the time variation of the proton-to-electron mass ratio, μ̇/μ = 0.2(1.1)×10⁻¹⁶ yr⁻¹, along with an improved constraint on time variation of the fine structure constant, α̇/α = −0.7(2.1)×10⁻¹⁷ yr⁻¹. PMID:25479482

  8. Fundamentals of Physics, Part 1 (Chapters 1-11)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    Chapter 1. Measurement. How does the appearance of a new type of cloud signal changes in Earth's atmosphere? 1-1 What Is Physics? 1-2 Measuring Things. 1-3 The International System of Units. 1-4 Changing Units. 1-5 Length. 1-6 Time. 1-7 Mass. Review & Summary. Problems. Chapter 2. Motion Along a Straight Line. What causes whiplash injury in rear-end collisions of cars? 2-1 What Is Physics? 2-2 Motion. 2-3 Position and Displacement. 2-4 Average Velocity and Average Speed. 2-5 Instantaneous Velocity and Speed. 2-6 Acceleration. 2-7 Constant Acceleration: A Special Case. 2-8 Another Look at Constant Acceleration. 2-9 Free-Fall Acceleration. 2-10 Graphical Integration in Motion Analysis. Review & Summary. Questions. Problems. Chapter 3. Vectors. How does an ant know the way home with no guiding clues on the desert plains? 3-2 Vectors and Scalars. 3-3 Adding Vectors Geometrically. 3-4 Components of Vectors. 3-5 Unit Vectors. 3-6 Adding Vectors by Components. 3-7 Vectors and the Laws of Physics. 3-8 Multiplying Vectors. Review & Summary. Questions. Problems. Chapter 4. Motion in Two and Three Dimensions. In a motorcycle jump for record distance, where does the jumper put the second ramp? 4-1 What Is Physics? 4-2 Position and Displacement. 4-3 Average Velocity and Instantaneous Velocity. 4-4 Average Acceleration and Instantaneous Acceleration. 4-5 Projectile Motion. 4-6 Projectile Motion Analyzed. 4-7 Uniform Circular Motion. 4-8 Relative Motion in One Dimension. 4-9 Relative Motion in Two Dimensions. Review & Summary. Questions. Problems. Chapter 5. Force and Motion-I. When a pilot takes off from an aircraft carrier, what causes the compulsion to fly the plane into the ocean? 5-1 What Is Physics? 5-2 Newtonian Mechanics. 5-3 Newton's First Law. 5-4 Force. 5-5 Mass. 5-6 Newton's Second Law. 5-7 Some Particular Forces. 5-8 Newton's Third Law. 5-9 Applying Newton's Laws. Review & Summary. Questions. Problems. Chapter 6. Force and Motion-II. Can a Grand Prix race car be driven

  9. Determining the translational part of the fundamental group of an infra-solvmanifold of type (R)

    NASA Astrophysics Data System (ADS)

    Dekimpe, Karel

    1997-11-01

    In a recent paper, K. B. Lee introduced the notion of an infra-solvmanifold of type (R). These manifolds are completely determined by their fundamental group Π. Such a Π is a finite extension of a lattice Γ of a solvable Lie group of type (R) and this lattice Γ is called the translational part of Π. Having fixed an abstract group Π occurring as the fundamental group of an infra-solvmanifold of type (R), it seems to be hard to describe, in a formal algebraic language, which subgroup of Π is the translational part. In his paper Lee formulated a conjecture which would solve this problem, however, we show that this conjecture fails. Nevertheless, by defining a concept of eigenvalues for automorphisms of certain solvable groups (both Lie groups and discrete groups), we are able to prove a new theorem, characterizing completely the translational part of the fundamental group of an infra-solvmanifold of type (R).

  10. Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts

    PubMed Central

    Tamosiunaite, Minija; Sutterlütti, Rahel M.; Stein, Simon C.; Wörgötter, Florentin

    2015-01-01

    Objects usually consist of parts and the question arises whether there are perceptual features which allow breaking down an object into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex to concave surface transitions. Here we are using machine vision to produce convex segments from 3D-scenes. We assume that a fundamental part is one, which we can easily name while at the same time there is no natural subdivision possible into smaller parts. Hence in this experiment we presented the computer vision generated segments to our participants and asked whether they can identify and name them. Additionally we control against segmentation reliability and we find a clear trend that reliable convex segments have a high degree of name-ability. In addition, we observed that using other image-segmentation methods will not yield nameable entities. This indicates that convex-concave surface transition may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantical link to our everyday language as there are no names for them. PMID:26441797

  11. Digital image processing: a primer for JVIR authors and readers: part 1: the fundamentals.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-10-01

    Online submission of manuscripts will be mandatory for most journals in the near future. To prepare authors for this requirement and to acquaint readers with this new development, herein the basics of digital image processing are described. From the fundamentals of digital image architecture, through acquisition, editing, and storage of digital images, the steps necessary to prepare an image for online submission are reviewed. In this article, the first of a three-part series, the structure of the digital image is described. In subsequent articles, the acquisition and editing of digital images will be reviewed. PMID:14551267

  12. Enhanced effects of variation of the fundamental constants in laser interferometers and application to dark-matter detection

    NASA Astrophysics Data System (ADS)

    Stadnik, Y. V.; Flambaum, V. V.

    2016-06-01

    We outline laser interferometer measurements to search for variation of the electromagnetic fine-structure constant α and particle masses (including a nonzero photon mass). We propose a strontium optical lattice clock—silicon single-crystal cavity interferometer as a small-scale platform for these measurements. Our proposed laser interferometer measurements, which may also be performed with large-scale gravitational-wave detectors, such as LIGO, Virgo, GEO600, or TAMA300, may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.

  13. Call to Adopt a Nominal Set of Astrophysical Parameters and Constants to Improve the Accuracy of Fundamental Physical Properties of Stars

    NASA Astrophysics Data System (ADS)

    Harmanec, Petr; Prša, Andrej

    2011-08-01

    The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R_⊙ and luminosity L_⊙ by the nominal values that are by definition exact and expressed in SI units: 1 R_⊙^N = 6.95508 × 10⁸ m and 1 L_⊙^N = 3.846 × 10²⁶ W; (2) computing stellar masses in terms of M_⊙ by noting that the measurement error of the product GM_⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using the derived values M_⊙^2010 = 1.988547 × 10³⁰ kg and T_⊙^2010 = 5779.57 K; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need and demonstrate the advantages of such a paradigm shift.
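    Items (2) and (3) can be made concrete with a short sketch: orbital dynamics delivers GM directly via Kepler's third law, so masses can be carried in solar units through GM_⊙ and converted to kilograms only at the very end with the quoted M_⊙ value. The orbital inputs below are arbitrary placeholders, and the GM_⊙ value is a standard one assumed here rather than taken from the record.

      import math

      # Derived value quoted in the record above, and an assumed standard GM_sun (SI units)
      M_SUN_2010 = 1.988547e30     # kg
      GM_SUN = 1.32712440e20       # m^3 s^-2; the product GM is known far better than G itself

      # Illustrative binary system (placeholder numbers, not a real measurement)
      a = 1.5e11                   # semi-major axis of the relative orbit, m
      P = 2.0e7                    # orbital period, s

      # Kepler's third law gives the total GM of the system directly:
      GM_total = 4.0 * math.pi**2 * a**3 / P**2

      # Work in solar units via GM (no value of G needed) ...
      m_total_solar = GM_total / GM_SUN
      # ... and only convert to kilograms at the end, if required:
      m_total_kg = m_total_solar * M_SUN_2010

      print(f"GM_total = {GM_total:.4e} m^3/s^2")
      print(f"M_total  = {m_total_solar:.4f} M_sun = {m_total_kg:.4e} kg")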

  14. The Effect of Approximating Some Molecular Integrals in Coupled-Cluster Calculations: Fundamental Frequencies and Rovibrational Spectroscopic Constants of Cyclopropenylidene

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.

    2005-01-01

    The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, denoted CCSD(T), has been used, in conjunction with approximate integral techniques, to compute highly accurate rovibrational spectroscopic constants of cyclopropenylidene, C3H2. The approximate integral technique was proposed in 1994 by Rendell and Lee in order to avoid disk storage and input/output bottlenecks, and today it will also significantly aid in the development of algorithms for distributed memory, massively parallel computer architectures. It is shown in this study that use of approximate integrals does not impact the accuracy of CCSD(T) calculations. In addition, the most accurate spectroscopic data yet for C3H2 is presented based on a CCSD(T)/cc-pVQZ quartic force field that is modified to include the effects of core-valence electron correlation. Cyclopropenylidene is of great astronomical and astrobiological interest because it is the smallest aromatic ringed compound to be positively identified in the interstellar medium, and is thus involved in the prebiotic processing of carbon and hydrogen.

  15. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities. I. Fundamental analysis and diagnostics

    SciTech Connect

    Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.

    2006-04-15

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed. (author)

  16. Superposition of super-integrable pseudo-Euclidean potentials in N = 2 with a fundamental constant of motion of arbitrary order in the momenta

    SciTech Connect

    Campoamor-Stursberg, R.

    2014-04-15

    It is shown that for any α, β ∈ ℝ and k ∈ ℤ, the Hamiltonian H_k = p₁p₂ − α q₂^(2k+1) q₁^(−2k−3) − (β/2) q₂^k q₁^(−k−2) is super-integrable, possessing fundamental constants of motion of degrees 2 and 2k + 2 in the momenta.

  17. Recent developments in modeling of hot rolling processes: Part I - Fundamentals

    NASA Astrophysics Data System (ADS)

    Hirt, Gerhard; Bambach, Markus; Seuren, Simon; Henke, Thomas; Lohmar, Johannes

    2013-05-01

    The numerical simulation of industrial rolling processes has gained substantial relevance over the past decades. A large variety of models have been put forward to simulate single and multiple rolling passes taking various interactions between the process, the microstructure evolution and the rolling mill into account. On the one hand, these include sophisticated approaches which couple models on all scales from the product's microstructure level up to the elastic behavior of the roll stand. On the other hand, simplified but fast models are used for on-line process control and automatic pass schedule optimization. This publication gives a short overview of the fundamental equations used in modeling of hot rolling of metals. Part II of this paper will present selected applications of hot rolling simulations.

  18. Reduction of iron-oxide-carbon composites: part I. Estimation of the rate constants

    SciTech Connect

    Halder, S.; Fruehan, R.J.

    2008-12-15

    A new ironmaking concept using iron-oxide-carbon composite pellets has been proposed, which involves the combination of a rotary hearth furnace (RHF) and an iron bath smelter. This part of the research focuses on studying the two primary chemical kinetic steps. Efforts have been made to experimentally measure the kinetics of the carbon gasification by CO2 and wustite reduction by CO by isolating them from the influence of heat- and mass-transport steps. A combined reaction model was used to interpret the experimental data and determine the rate constants. Results showed that the reduction is likely to be influenced by the chemical kinetics of both carbon oxidation and wustite reduction at the temperatures of interest. Devolatilized wood-charcoal was observed to be a far more reactive form of carbon in comparison to coal-char. Sintering of the iron-oxide at the high temperatures of interest was found to exert a considerable influence on the reactivity of wustite by virtue of altering the internal pore surface area available for the reaction. Sintering was found to be predominant for highly porous oxides and less of an influence on the denser ores. It was found using an indirect measurement technique that the rate constants for wustite reduction were higher for the porous iron-oxide than dense hematite ore at higher temperatures (> 1423 K). Such an indirect mode of measurement was used to minimize the influence of sintering of the porous oxide at these temperatures.

  19. Constraining the Evolution of the Fundamental Constants with a Solid-State Optical Frequency Reference Based on the ²²⁹Th Nucleus

    SciTech Connect

    Rellergert, Wade G.; Hudson, Eric R.; DeMille, D.; Greco, R. R.; Hehlen, M. P.; Torgerson, J. R.

    2010-05-21

    We describe a novel approach to directly measure the energy of the narrow, low-lying isomeric state in ²²⁹Th. Since nuclear transitions are far less sensitive to environmental conditions than atomic transitions, we argue that the ²²⁹Th optical nuclear transition may be driven inside a host crystal with a high transition Q. This technique might also allow for the construction of a solid-state optical frequency reference that surpasses the short-term stability of current optical clocks, as well as improved limits on the variability of fundamental constants. Based on analysis of the crystal lattice environment, we argue that a precision (short-term stability) of 3×10⁻¹⁷ < Δf/f < 1×10⁻¹⁵ after 1 s of photon collection may be achieved with a systematic-limited accuracy (long-term stability) of Δf/f ≈ 2×10⁻¹⁶. Improvement by 10²-10³ of the constraints on the variability of several important fundamental constants also appears possible.

  20. 40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    40 CFR Part 265, Appendix VI (2014 edition) lists compounds with a Henry's law constant less than 0.1 Y/X, by compound name and CAS number (e.g., Acetaldol, CAS 107-89-1; Acetamide, CAS 60-35-5; ...).

  1. 40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    40 CFR Part 265, Appendix VI (2011 edition) lists compounds with a Henry's law constant less than 0.1 Y/X, by compound name and CAS number (e.g., Acetaldol, CAS 107-89-1; Acetamide, CAS 60-35-5; ...).

  2. Reduction of Iron-Oxide-Carbon Composites: Part I. Estimation of the Rate Constants

    NASA Astrophysics Data System (ADS)

    Halder, S.; Fruehan, R. J.

    2008-12-01

    A new ironmaking concept using iron-oxide-carbon composite pellets has been proposed, which involves the combination of a rotary hearth furnace (RHF) and an iron bath smelter. This part of the research focuses on studying the two primary chemical kinetic steps. Efforts have been made to experimentally measure the kinetics of the carbon gasification by CO2 and wüstite reduction by CO by isolating them from the influence of heat- and mass-transport steps. A combined reaction model was used to interpret the experimental data and determine the rate constants. Results showed that the reduction is likely to be influenced by the chemical kinetics of both carbon oxidation and wüstite reduction at the temperatures of interest. Devolatilized wood-charcoal was observed to be a far more reactive form of carbon in comparison to coal-char. Sintering of the iron-oxide at the high temperatures of interest was found to exert a considerable influence on the reactivity of wüstite by virtue of altering the internal pore surface area available for the reaction. Sintering was found to be predominant for highly porous oxides and less of an influence on the denser ores. It was found using an indirect measurement technique that the rate constants for wüstite reduction were higher for the porous iron-oxide than dense hematite ore at higher temperatures (>1423 K). Such an indirect mode of measurement was used to minimize the influence of sintering of the porous oxide at these temperatures.
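    Rate constants of this kind, once extracted at several temperatures, are conventionally summarized by an Arrhenius fit, ln k = ln A − E_a/(RT). The sketch below shows only that bookkeeping step on synthetic numbers; it is not the combined reaction model or the measured data of this study.

      import numpy as np

      # Arrhenius form: k = A * exp(-Ea / (R T))  =>  ln k = ln A - (Ea/R) * (1/T)
      # Synthetic, illustrative data (not measurements from this study).
      R_GAS = 8.314                                        # J mol^-1 K^-1
      T = np.array([1273.0, 1373.0, 1473.0, 1573.0])       # K
      k = np.array([2.1e-4, 8.5e-4, 2.9e-3, 8.3e-3])       # s^-1 (made-up values)

      slope, intercept = np.polyfit(1.0 / T, np.log(k), 1) # linear fit of ln k vs 1/T
      Ea = -slope * R_GAS                                  # apparent activation energy, J/mol
      A = np.exp(intercept)                                # pre-exponential factor, s^-1

      print(f"Ea ~ {Ea/1000:.0f} kJ/mol, A ~ {A:.2e} s^-1")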

  3. Frequency Ratio of Two Optical Clock Transitions in ¹⁷¹Yb⁺ and Constraints on the Time Variation of Fundamental Constants

    NASA Astrophysics Data System (ADS)

    Godun, R. M.; Nisbet-Jones, P. B. R.; Jones, J. M.; King, S. A.; Johnson, L. A. M.; Margolis, H. S.; Szymaniec, K.; Lea, S. N.; Bongs, K.; Gill, P.

    2014-11-01

    Singly ionized ytterbium, with ultranarrow optical clock transitions at 467 and 436 nm, is a convenient system for the realization of optical atomic clocks and tests of present-day variation of fundamental constants. We present the first direct measurement of the frequency ratio of these two clock transitions, without reference to a cesium primary standard, and using the same single ion of ¹⁷¹Yb⁺. The absolute frequencies of both transitions are also presented, each with a relative standard uncertainty of 6×10⁻¹⁶. Combining our results with those from other experiments, we report a threefold improvement in the constraint on the time variation of the proton-to-electron mass ratio, μ̇/μ = 0.2(1.1)×10⁻¹⁶ yr⁻¹, along with an improved constraint on time variation of the fine structure constant, α̇/α = −0.7(2.1)×10⁻¹⁷ yr⁻¹.
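    Constraints of this type come from combining drift rates of frequency ratios, d ln R_i/dt = K_α,i (α̇/α) + K_μ,i (μ̇/μ), measured for transitions with different sensitivity coefficients. The sketch below solves such a system by weighted least squares for two hypothetical measurements; the K coefficients, drifts, and uncertainties are placeholders, not the published values.

      import numpy as np

      # d(ln R_i)/dt = K_alpha_i * (alpha-dot/alpha) + K_mu_i * (mu-dot/mu)
      # Hypothetical sensitivity coefficients and measured drifts (illustrative only).
      K = np.array([[6.0, 1.0],      # e.g. a strongly alpha-sensitive ratio
                    [1.0, 1.0]])     # e.g. a ratio sensitive to both alpha and mu
      drift = np.array([1.0e-17, -2.0e-17])   # measured d(ln R)/dt per year (placeholders)
      sigma = np.array([2.0e-17, 3.0e-17])    # 1-sigma uncertainties (placeholders)

      # Weighted least-squares solution for (alpha-dot/alpha, mu-dot/mu)
      W = np.diag(1.0 / sigma**2)
      cov = np.linalg.inv(K.T @ W @ K)
      best = cov @ K.T @ W @ drift
      err = np.sqrt(np.diag(cov))

      for name, b, e in zip(["alpha-dot/alpha", "mu-dot/mu"], best, err):
          print(f"{name} = {b:.2e} +/- {e:.2e} per yr")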

  4. High resolution infrared synchrotron study of CH2D81Br: ground state constants and analysis of the ν5, ν6 and ν9 fundamentals

    NASA Astrophysics Data System (ADS)

    Baldacci, A.; Stoppa, P.; Visinoni, R.; Wugt Larsen, R.

    2012-09-01

    The high resolution infrared absorption spectrum of CH2D81Br has been recorded by Fourier transform spectroscopy in the range 550-1075 cm⁻¹, with an unapodized resolution of 0.0025 cm⁻¹, employing a synchrotron radiation source. This spectral region is characterized by the ν6 (593.872 cm⁻¹), ν5 (768.710 cm⁻¹) and ν9 (930.295 cm⁻¹) fundamental bands. The ground state constants up to sextic centrifugal distortion terms have been obtained for the first time by ground-state combination differences from the three bands and subsequently employed for the evaluation of the excited state parameters. Watson's A-reduced Hamiltonian in the Ir representation has been used in the calculations. The ν6 = 1 level is essentially free from perturbation whereas the ν5 = 1 and ν9 = 1 states are mutually interacting through a-type Coriolis coupling. Accurate spectroscopic parameters of the three excited vibrational states and a high-order coupling constant which takes into account the interaction between ν5 and ν9 have been determined.

  5. 40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    40 CFR Part 265, Appendix VI (2012 edition), part of the interim status standards for owners and operators of hazardous waste treatment, storage, and disposal facilities, lists compounds with a Henry's law constant less than 0.1 Y/X.

  6. Fundamental two-stage formulation for Bayesian system identification, Part I: General theory

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui; Zhang, Feng-Liang

    2016-01-01

    Structural system identification is concerned with the determination of structural model parameters (e.g., stiffness, mass) based on measured response data collected from the subject structure. For linear structures, one popular strategy is to adopt a 'two-stage' approach. That is, modal identification (e.g., frequency, mode shape) is performed in Stage I, whose information is used for inferring the structural parameters in Stage II. Different variants of Bayesian two-stage formulations have been proposed in the past. A prediction error model is commonly introduced to build a link between Stages I and II, treating the most probable values of the natural frequencies and mode shapes identified in Stage I as 'data' for Stage II. This type of formulation, which casts a prediction error model through descriptive statistics, involves heuristics that distort the fundamental nature of the Bayesian approach, although it has appeared to be inevitable. In this paper, a fundamental theory is developed for the Bayesian two-stage problem. The posterior distribution of structural parameters is derived rigorously in terms of the information available in the problem, namely the prior distribution of structural parameters, the posterior distribution of modal parameters in Stage I and the distribution of modal parameters conditional on the structural parameters that connects Stages I and II. The theory reveals a fundamental principle that ensures no double-counting of prior information in the two-stage identification process. Mathematical statements are also derived that provide insights into the role of the structural modeling error. Beyond the original structural model identification problem that motivated the work, the developed theory can be applied in more general settings. In the companion paper, examples with synthetic and real experimental data are provided to illustrate the proposed theory.

  7. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

    An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension to the turbulence models utilized, to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem which extends parabolic methodology via the addition of a characteristic base wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets is presented.

  8. Fundamentals of log analysis. Part 10: Determining rock mechanical property values from log analysis

    SciTech Connect

    Hunt, E.R.; McCain, W.D. Jr.

    1997-10-01

    Correct design and execution of well completions, including hydraulic fracturing, can enhance a reservoir's productivity. Success in this optimization depends in part on being able to predict how hydraulic fracturing affects performance. Controls on the performance of a hydraulically fractured well are the fracture, reservoir characteristics and the well. This article will cover methods for obtaining values of in-situ stress in a specific rock layer and the in-situ stress profile, and determining Young's modulus.

  9. Estimation of brittleness index using dynamic and static elastic constants in the Haenam Basin, Southwestern Part of Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho; Song, Wonkyoung; Kim, Changryol; Ki, Jungseok

    2014-05-01

    One of the most important physical properties in the evaluation of shale gas is the set of elastic constants of the formation. Normally, elastic constants obtained from geophysical well logging and laboratory tests are used in the design of hydraulic fracturing. A three-inch-diameter borehole was drilled to a depth of 505 m for the evaluation of shale gas and was fully cored in the Haenam Basin, southwestern part of the Korean Peninsula. We performed various laboratory tests and geophysical well logging using a slim-hole logging system. The geophysical well logs include radioactive logs such as natural gamma, density, and neutron logs, monopole and dipole sonic logs, and image logs. The laboratory tests comprise axial compression tests, elastic wave velocity and density measurements, and static elastic constant measurements for 21 shale and sandstone cores. We analyzed the relationships between the physical properties from well logs and laboratory tests, as well as the static elastic constants from laboratory tests. With a sonic log using a monopole source with a main frequency of 23 kHz, the P-wave velocity was measured reliably. When using low-frequency dipole excitation, the signal-to-noise ratio of the measured shear wave was very low; when measuring in time mode at a fixed depth, however, the signal-to-noise ratio improved enough to discriminate the shear wave. P-wave velocities from the laboratory tests and the sonic log agreed well overall, but the S-wave velocities did not. The discrepancy between the laboratory tests and the sonic log is mainly due to the low signal-to-noise ratio of the sonic log data from the low-frequency dipole source, and measuring S-waves in a small-diameter borehole remains a challenge. The relationship between the P-wave velocity and the two dynamic elastic constants, Young's modulus and Poisson's ratio, shows a good correlation, as does the relationship between the static and dynamic elastic constants.
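    For reference, the dynamic elastic constants discussed here follow from the standard isotropic-elasticity relations between log-derived P- and S-wave velocities and bulk density. The snippet below applies those relations to illustrative shale-like values (not data from this survey).

      def dynamic_elastic_constants(vp, vs, rho):
          """Dynamic elastic constants from P-/S-wave velocities (m/s) and density (kg/m^3),
          using the standard isotropic elasticity relations."""
          mu = rho * vs**2                                       # shear modulus, Pa
          nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))   # Poisson's ratio
          E = 2.0 * mu * (1.0 + nu)                              # Young's modulus, Pa
          K = rho * (vp**2 - 4.0 * vs**2 / 3.0)                  # bulk modulus, Pa
          return E, nu, mu, K

      # Illustrative shale-like values (not measurements from this study)
      E, nu, mu, K = dynamic_elastic_constants(vp=3800.0, vs=2200.0, rho=2550.0)
      print(f"E ~ {E/1e9:.1f} GPa, Poisson ~ {nu:.2f}, G ~ {mu/1e9:.1f} GPa, K ~ {K/1e9:.1f} GPa")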

  10. Toward a fundamental theory of optimal feature selection: Part II - Implementation and computational complexity

    SciTech Connect

    Morgera, S.D.

    1987-01-01

    Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules.
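    The parallel form mentioned here rests on a standard property of centrosymmetric matrices: an orthogonal transformation built from the exchange matrix J splits the eigenproblem into two independent half-size problems. A minimal numerical check of that splitting, on a generic synthetic matrix rather than the classifier's covariance data:

      import numpy as np

      rng = np.random.default_rng(0)
      m = 4                       # half size; the full matrix is n x n
      n = 2 * m
      J = np.fliplr(np.eye(n))    # exchange (flip) matrix

      # Build a symmetric centrosymmetric matrix: A = A.T and J @ A @ J = A
      M = rng.standard_normal((n, n))
      A = (M + M.T) / 2.0
      A = (A + J @ A @ J) / 2.0

      # Orthogonal transform Q = (1/sqrt(2)) [[I, I], [J_m, -J_m]] block-diagonalizes A,
      # so the eigenproblem splits into two independent half-size problems.
      I_m, J_m = np.eye(m), np.fliplr(np.eye(m))
      Q = np.block([[I_m, I_m], [J_m, -J_m]]) / np.sqrt(2.0)
      T = Q.T @ A @ Q

      print("off-diagonal blocks ~ 0:",
            np.allclose(T[:m, m:], 0.0), np.allclose(T[m:, :m], 0.0))

      ev_full = np.sort(np.linalg.eigvalsh(A))
      ev_half = np.sort(np.concatenate([np.linalg.eigvalsh(T[:m, :m]),
                                        np.linalg.eigvalsh(T[m:, m:])]))
      print("eigenvalues agree:", np.allclose(ev_full, ev_half))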

  11. Scattering of the Fundamental Shear Horizontal Mode by Part-Thickness Surface-Breaking Cracks in AN Isotropic Plate

    NASA Astrophysics Data System (ADS)

    Rajagopal, P.; Lowe, M. J. S.

    2008-02-01

    The interaction of the fundamental shear horizontal (SH0) mode with cracks in isotropic plates in the context of array imaging using ultrasonic guided waves is a subject of continued interest to the authors. Previous work [1-3] in this regard has illuminated different aspects of the scattering of circular crested SH0 waves from through-cracks. In this paper, the relationship between the scattering from part- and through-thickness cracks is explored. First a framework for such a relationship is proposed, in which the scattering from part- and through-thickness cracks are related by a suitable correction factor. The limits of the model are then tested using results from FE simulations of the problem for different configurations.

  12. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype-phenotype maps.

    PubMed

    Greenbury, S F; Ahnert, S E

    2015-12-01

    Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype-phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into 'constrained' and 'unconstrained' sequences, in the broadest possible sense. As 'constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. 'Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with 'coding' and 'non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps. PMID:26609063
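
    The constrained/unconstrained idea can be enumerated in a toy map. The sketch below (Python) is not the Fibonacci GP map itself but a simplified stop-symbol map in the same spirit: the 'coding' part of a genotype is the prefix up to the first 0, and the remaining bits are 'non-coding' and neutral. Even this caricature reproduces the biased phenotype frequencies and the roughly logarithmic scaling of phenotypic robustness with phenotypic frequency mentioned above.

      import math
      from collections import defaultdict
      from itertools import product

      L = 10  # genotype length (arbitrary illustrative choice)

      def phenotype(g):
          """'Coding' part = prefix up to and including the first 0 (stop symbol);
          the remaining 'non-coding' bits do not affect the phenotype."""
          return g[:g.index(0) + 1] if 0 in g else g

      genotypes = list(product((0, 1), repeat=L))
      pheno_of = {g: phenotype(g) for g in genotypes}

      freq = defaultdict(int)      # genotypes per phenotype
      rob = defaultdict(float)     # summed genotypic robustness per phenotype
      for g in genotypes:
          p = pheno_of[g]
          freq[p] += 1
          kept = sum(pheno_of[g[:i] + (1 - g[i],) + g[i + 1:]] == p for i in range(L))
          rob[p] += kept / L

      # Phenotype length, frequency, mean robustness, log2(frequency): robustness
      # grows linearly with log2(frequency) in this toy map.
      for p in sorted(freq, key=freq.get, reverse=True)[:5]:
          print(len(p), freq[p], round(rob[p] / freq[p], 3), round(math.log2(freq[p]), 1))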

  13. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype–phenotype maps

    PubMed Central

    Greenbury, S. F.; Ahnert, S. E.

    2015-01-01

    Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype–phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into ‘constrained' and ‘unconstrained' sequences, in the broadest possible sense. As ‘constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. ‘Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with ‘coding' and ‘non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps. PMID:26609063

  14. Two-dimensional analytical solutions for chemical transport in aquifers. Part 1. Simplified solutions for sources with constant concentration. Part 2. Exact solutions for sources with constant flux rate

    SciTech Connect

    Shan, C.; Javandel, I.

    1996-05-01

    Analytical solutions are developed for modeling solute transport in a vertical section of a homogeneous aquifer. Part 1 of the series presents a simplified analytical solution for cases in which a constant-concentration source is located at the top (or the bottom) of the aquifer. The following transport mechanisms have been considered: advection (in the horizontal direction), transverse dispersion (in the vertical direction), adsorption, and biodegradation. In the simplified solution, however, longitudinal dispersion is assumed to be relatively insignificant with respect to advection, and has been neglected. Example calculations are given to show the movement of the contamination front, the development of concentration profiles, the mass transfer rate, and an application to determine the vertical dispersivity. The analytical solution developed in this study can be a useful tool in designing an appropriate monitoring system and an effective groundwater remediation method.
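
    In the limiting case described (steady advection in x, transverse dispersion in z, longitudinal dispersion neglected), the concentration below a constant-concentration source behaves like a one-dimensional diffusion profile in z with an effective exposure time x/v. The sketch below (Python with SciPy) evaluates that textbook-style complementary-error-function profile as an illustration of the mechanism only; it is not the paper's full solution with adsorption and biodegradation, and the parameter values are assumptions.

      import numpy as np
      from scipy.special import erfc

      v = 0.5      # horizontal pore velocity, m/day (assumed)
      Dt = 1e-3    # transverse dispersion coefficient, m^2/day (assumed)
      C0 = 1.0     # constant source concentration at the top of the aquifer (z = 0)

      def conc(x, z):
          """Advection in x, transverse dispersion in z, no longitudinal dispersion,
          no sorption or decay: C/C0 = erfc( z / (2*sqrt(Dt*x/v)) )."""
          return C0 * erfc(z / (2.0 * np.sqrt(Dt * x / v)))

      for x in (10.0, 50.0, 100.0):          # distances downgradient, m
          print(x, [round(conc(x, z), 3) for z in (0.0, 0.5, 1.0, 2.0)])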

  15. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities, Part II : Parametric study.

    SciTech Connect

    Sankaran, Ramanan; Chen, Jacqueline H.; Hawkes, Evatt R.; Pebay, Philippe Pierre

    2005-01-01

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.

  16. Fifth Fundamental Catalogue (FK5). Part 1: Basic fundamental stars (Fricke, Schwan, and Lederle 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.

  17. ON THE VARIATIONS OF FUNDAMENTAL CONSTANTS AND ACTIVE GALACTIC NUCLEUS FEEDBACK IN THE QUASI-STELLAR OBJECT HOST GALAXY RXJ0911.4+0551 at z = 2.79

    SciTech Connect

    Weiss, A.; Henkel, C.; Menten, K. M.; Walter, F.; Downes, D.; Cox, P.; Carilli, C. L.

    2012-07-10

    We report on sensitive observations of the CO(J = 7→6) and C I(³P₂ → ³P₁) transitions in the z = 2.79 QSO host galaxy RXJ0911.4+0551 using the IRAM Plateau de Bure interferometer. Our extremely high signal-to-noise spectra, combined with the narrow CO line width of this source (FWHM = 120 km s⁻¹), allow us to estimate sensitive limits on the spacetime variations of the fundamental constants using two emission lines. Our observations show that the C I and CO line shapes are in good agreement with each other but that the C I line profile is of the order of 10% narrower, presumably due to the lower opacity in the latter line. Both lines show faint wings with velocities up to ±250 km s⁻¹, indicative of a molecular outflow. As such, the data provide direct evidence for negative feedback in the molecular gas phase at high redshift. Our observations allow us to determine the observed frequencies of both transitions with so far unmatched accuracy at high redshift. The redshift difference between the CO and C I lines is sensitive to variations of ΔF/F with F = α²/μ, where α is the fine structure constant and μ is the electron-to-proton mass ratio. We find ΔF/F = (6.9 ± 3.7) × 10⁻⁶ at a look-back time of 11.3 Gyr, which, within the uncertainties, is consistent with no variation of the fundamental constants.
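
    The constraint quoted above comes from comparing the apparent redshifts of the two lines: to first order, and up to the sign convention, a fractional offset in F = α²/μ appears as a velocity offset between the CO and C I transitions,

      \frac{\Delta F}{F} \;\simeq\; \frac{z_{\mathrm{CO}} - z_{\mathrm{C\,I}}}{1 + z} \;=\; \frac{\Delta v}{c},

    so the quoted ΔF/F = (6.9 ± 3.7) × 10⁻⁶ corresponds to a line-of-sight velocity offset of order 2 km s⁻¹ at z = 2.79.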

  18. Polydimethylsiloxane-based permeation passive air sampler. Part I: Calibration constants and their relation to retention indices of the analytes.

    PubMed

    Seethapathy, Suresh; Górecki, Tadeusz

    2011-01-01

    A simple and cost effective permeation passive sampler equipped with a polydimethylsiloxane (PDMS) membrane was designed for the determination of time-weighted average (TWA) concentrations of volatile organic compounds (VOCs) in air. Permeation passive samplers have significant advantages over diffusive passive samplers, including insensitivity to moisture and high face velocities of air across the surface of the sampler. Calibration constants of the sampler towards 41 analytes belonging to alkane, aromatic hydrocarbon, chlorinated hydrocarbon, ester and alcohol groups were determined. The calibration constants allowed for the determination of the permeability of PDMS towards the selected analytes. They ranged from 0.026 cm² min⁻¹ for 1,1-dichloroethylene to 0.605 cm² min⁻¹ for n-octanol. Further, the mechanism of analyte transport across PDMS membranes allowed for the calibration constants of the sampler to be estimated from the linear temperature programmed retention indices (LTPRI) of the analytes, determined using GC columns coated with pure PDMS stationary phases. Statistical analysis using Student's t test indicated that there was no significant difference at the 95% probability level between the experimentally obtained calibration constants and those estimated using LTPRI for most analyte groups studied. This correlation allows the estimation of the calibration constants of compounds not known to be present at the time of sampler deployment, which makes it possible to determine parameters like total petroleum hydrocarbons in the vapor phase. PMID:21112594

  19. Vibrational Spectra and Force Constants of Symmetric Tops, IL. The ν₃ Fundamental of Unstable H₃SnCl, H₃SnBr, and H₃SnI Studied by High Resolution FT Spectroscopy of Monoisotopic Species

    NASA Astrophysics Data System (ADS)

    Bürger, Hans; Betzel, Martina

    1985-10-01

    Fourier transform far-infrared spectra of unstable stannyl chloride, bromide and iodide have been measured in the gas phase with a resolution of 0.04 cm⁻¹. At pressures below 10 mbar, their lifetimes at 0 °C in preconditioned cells were found to be 10-30 min. The ν₃ fundamentals and hot bands of the series (n + 1)ν₃ - nν₃ have been observed. Rotational J structure has been resolved for monoisotopic samples, and band origins ν₃⁰, anharmonicity constants x₃₃, α₃(B) and D_J⁰ values have been determined from the rovibrational analyses. The following ν₃⁰ values were obtained: H₃¹¹⁶Sn³⁵Cl 375.470 (5), H₃¹¹⁶Sn³⁷Cl 367.689 (6), H₃¹¹⁶Sn⁷⁹Br 263.566 (5) and H₃¹¹⁶SnI 209.759 (6) cm⁻¹.

  20. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 3; Constant Stress and Cyclic Stress Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  1. Theory versus experiment for the rotordynamic coefficients of annular gas seals. Part 2: Constant clearance and convergent-tapered geometry

    NASA Technical Reports Server (NTRS)

    Nelson, C. C.; Childs, D. W.; Nicks, C.; Elrod, D.

    1985-01-01

    The leakage and rotordynamic coefficients of constant-clearance and convergent-tapered annular gas seals were measured in an experimental test facility. The results are presented along with the theoretically predicted values. Of particular interest is the prediction that optimally tapered seals have significantly larger direct stiffness than straight seals. The experimental results verify this prediction. Generally the theory does quite well, but it fails to predict the large increase in direct stiffness when the fluid is pre-rotated.

  2. Karplus dependence of spin-spin coupling constants revisited theoretically. Part 1: second-order double perturbation theory.

    PubMed

    Rusakova, Irina L; Krivdin, Leonid B

    2013-11-01

    A double perturbation theory (DPT) formalism at the second-order level of approximation has been applied to examine the dihedral angle dependence of the Fermi-contact (FC) contribution to nuclear spin-spin coupling constants. The unperturbed wave function of the ground state in DPT was approximated by the Hartree-Fock Slater determinant, while the excited states were treated as single excited determinants. An analytical expression relating the FC term of vicinal proton-proton spin-spin coupling constants across the aliphatic single carbon-carbon bond to the dihedral angle describing internal rotation around the C-C bond in the ten-electron ten-orbital moiety H-C-C-H has been derived and analyzed. In particular, it has been shown that extrema of ³J(H,H) are observed at φ = πn, n = 0, ±1, ±2,…, which provides a theoretical background for the well-known semiempirical Karplus equation. PMID:24071769
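
    The semiempirical relation recovered here can be written as ³J(φ) = A cos²φ + B cos φ + C, whose derivative contains a sin φ factor and therefore vanishes at φ = πn, exactly the extrema noted above. A minimal numerical check (Python; the A, B, C values are typical textbook Karplus coefficients for H-C-C-H fragments, not results of this paper):

      import numpy as np

      A, B, C = 7.0, -1.0, 5.0   # Hz; illustrative Karplus coefficients (assumed)

      def karplus(phi):
          """3J(H,H) as a function of the H-C-C-H dihedral angle phi (radians)."""
          return A * np.cos(phi) ** 2 + B * np.cos(phi) + C

      def dkarplus(phi):
          """dJ/dphi = -sin(phi)*(2*A*cos(phi) + B); the sin factor vanishes at phi = n*pi."""
          return -np.sin(phi) * (2.0 * A * np.cos(phi) + B)

      for n in range(-2, 3):
          phi = n * np.pi
          print(f"phi = {n:+d}*pi   J = {karplus(phi):5.2f} Hz   dJ/dphi = {dkarplus(phi):.1e}")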

  3. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 2; Constant Stress Rate Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  4. Measurements in the turbulent boundary layer at constant pressure in subsonic and supersonic flow. Part 1: Mean flow

    NASA Technical Reports Server (NTRS)

    Collins, D. J.; Coles, D. E.; Hicks, J. W.

    1978-01-01

    Experiments were carried out to test the accuracy of laser Doppler instrumentation for measurement of Reynolds stresses in turbulent boundary layers in supersonic flow. Two facilities were used to study flow at constant pressure. In one facility, data were obtained on a flat plate at M_e = 0.1, with Re_θ up to 8,000. In the other, data were obtained on an adiabatic nozzle wall at M_e = 0.6, 0.8, 1.0, 1.3, and 2.2, with Re_θ = 23,000 and 40,000. The mean flow as observed using Pitot tube, Preston tube, and floating element instrumentation is described. Emphasis is on the use of similarity laws with Van Driest scaling and on the inference of the shearing stress profile and the normal velocity component from the equations of mean motion. The experimental data are tabulated.

  5. Characterization of high-power lithium-ion cells during constant current cycling. Part I. Cycle performance and electrochemical diagnostics

    SciTech Connect

    Shim, Joongpyo; Striebel, Kathryn A.

    2003-01-24

    Twelve-cm² pouch-type lithium-ion cells were assembled with graphite anodes, LiNi₀.₈Co₀.₁₅Al₀.₀₅O₂ cathodes and 1 M LiPF₆/EC/DEC electrolyte. These pouch cells were cycled at different depths of discharge (100 percent and 70 percent DOD) at room temperature to investigate cycle performance and pulse power capability. The capacity loss and power fade of the cells cycled over 100 percent DOD were significantly faster than for the cell cycled over 70 percent DOD. The overall cell impedance increased with cycling, although the ohmic resistance from the electrolyte was almost constant. From electrochemical analysis of each electrode after cycling, structural and/or impedance changes in the cathode are responsible for most of the capacity and power fade, not the consumption of cycleable Li by side reactions.

  6. Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors. Part 1; Experimental Investigation

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran

    1996-01-01

    This final report summarizes the major findings on the subject of 'Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors', performed from 1 April 1994 to 30 June 1996. Both experimental results from Task 1 and theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work performed and describes the test facility setup, data reduction techniques employed, and results of the test firings, including effects of operating conditions and fuel additives on solid fuel regression rate and thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes including gas/surface coupling, and radiation effect on regression rate. The numerical solution of the flowfield structure and condensed phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.

  7. Fabric transitions in quartz via viscoplastic self-consistent modeling part I: Axial compression and simple shear under constant strain

    NASA Astrophysics Data System (ADS)

    Morales, Luiz F. G.; Lloyd, Geoffrey E.; Mainprice, David

    2014-12-01

    Quartz is a common crustal mineral that deforms plastically over a wide range of temperatures and pressures, leading to the development of different types of crystallographic preferred orientation (CPO) patterns. In this contribution we present the results of extensive modeling of quartz fabric transitions via a viscoplastic self-consistent (VPSC) approach. For that, we have performed systematic simulations using different sets of relative critical resolved shear stresses for the main quartz slip systems. We have performed these simulations in axial compression and simple shear regimes under a constant Von Mises equivalent strain of 100% (γ = 1.73), assuming that the aggregates deformed exclusively by dislocation glide. Some of the predicted CPO patterns are similar to those observed in naturally and experimentally deformed quartz. Nevertheless, some classical CPO patterns usually interpreted as resulting from dislocation glide (e.g. Y-maxima due to prism <a> slip) are clearly not developed under the simulated conditions. In addition, we report new potential preferred orientation patterns that might occur under high temperature conditions, both in axial compression and simple shear. We have demonstrated that CPOs generated under axial compression are usually stronger than those predicted under simple shear, due to the continuous rotation observed in the latter simulations. The fabric strength depends essentially on the dominant active slip system; normally the strongest CPOs result from dominant basal slip in <a>, followed by rhomb <a> and prism [c] slip, whereas prism <a> slip does not produce strong fabrics. The opening angle of the quartz [0001] fabric used as a proxy for temperature seems to be reliable for deformation temperatures of ~400 °C, when the main slip systems have similar behaviors.

  8. Selectivity and delignification kinetics for oxidative short-term lime pretreatment of poplar wood, Part I: Constant-pressure.

    PubMed

    Sierra-Ramírez, Rocío; Garcia, Laura A; Holtzapple, Mark Thomas

    2011-07-01

    Kinetic models applied to oxygen bleaching of paper pulp focus on the degradation of polymers, either lignin or carbohydrates. Traditionally, they separately model different moieties that degrade at three different rates: rapid, medium, and slow. These models were successfully applied to lignin and carbohydrate degradation of poplar wood submitted to oxidative pretreatment with lime at the following conditions: temperature 110-180 °C, total pressure 7.9-21.7 bar, and excess lime loading of 0.5 g Ca(OH)₂ per gram dry biomass. These conditions were held constant for 1-6 h. The models properly fit the experimental data and were used to determine pretreatment selectivity in two fashions: differential and integral. By assessing selectivity, the detrimental effect of pretreatment on carbohydrates at high temperatures and at low lignin content was determined. The models can be used to identify pretreatment conditions that selectively remove lignin while preserving carbohydrates. Lignin removal ≥ 50% with glucan preservation ≥ 90% was observed for differential glucan selectivities between ∼10 and ∼30 g lignin degraded per gram glucan degraded. Pretreatment conditions complying with these reference values were preferably observed at 140 °C, total pressure ≥ 14.7 bar, and pretreatment times between 2 and 6 h depending on the total pressure (the higher the pressure, the less time). They were also observed at 160 °C, total pressures of 14.7 and 21.7 bar, and a pretreatment time of 2 h. Generally, at 110 °C lignin removal is insufficient and at 180 °C carbohydrates are not well preserved. PMID:21692196

  9. On the Global Uniqueness for the Einstein-Maxwell-Scalar Field System with a Cosmological Constant. Part 2. Structure of the Solutions and Stability of the Cauchy Horizon

    NASA Astrophysics Data System (ADS)

    Costa, João L.; Girão, Pedro M.; Natário, José; Silva, Jorge Drumond

    2015-11-01

    This paper is the second part of a trilogy dedicated to the following problem: given spherically symmetric characteristic initial data for the Einstein-Maxwell-scalar field system with a cosmological constant, with the data on the outgoing initial null hypersurface given by a subextremal Reissner-Nordström black hole event horizon, study the future extendibility of the corresponding maximal globally hyperbolic development as a "suitably regular" Lorentzian manifold. In the first paper of this sequence (Costa et al., Class Quantum Gravity 32:015017, 2015), we established well posedness of the characteristic problem with general initial data. In this second paper, we generalize the results of Dafermos (Ann Math 158:875-928, 2003) on the stability of the radius function at the Cauchy horizon by including a cosmological constant. This requires a considerable deviation from the strategy followed in Dafermos (Ann Math 158:875-928, 2003), focusing on the level sets of the radius function instead of the red-shift and blue-shift regions. We also present new results on the global structure of the solution when the free data is not identically zero in a neighborhood of the origin. In the third and final paper (Costa et al., On the global uniqueness for the Einstein-Maxwell-scalar field system with a cosmological constant. Part 3. Mass inflation and extendibility of the solutions. arXiv:1406.7261, 2015), we will consider the issue of mass inflation and extendibility of solutions beyond the Cauchy horizon.

  10. Nucleosynthesis and the variation of fundamental couplings

    SciTech Connect

    Mueller, Christian M.; Schaefer, Gregor; Wetterich, Christof

    2004-10-15

    We determine the influence of a variation of the fundamental 'constants' on the predicted helium abundance in Big Bang Nucleosynthesis. The analytic estimate is performed in two parts: the first step determines the dependence of the helium abundance on the nuclear physics parameters, while the second step relates those parameters to the fundamental couplings of particle physics. This procedure can incorporate in a flexible way the time variation of several couplings within a grand unified theory while keeping the nuclear physics computation separate from any GUT model dependence.
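
    The two-step procedure amounts to a chain rule: the response of the helium abundance Y to a shift in a fundamental coupling is the sum, over nuclear-physics parameters p_i, of the sensitivity of Y to p_i times the sensitivity of p_i to that coupling. A schematic sketch (Python; the numerical coefficients are placeholders for illustration, not the paper's values):

      # dlnY/dln(alpha) = sum_i (dlnY/dln p_i) * (dln p_i / dln alpha)
      # Placeholder sensitivities only -- NOT values from the paper.
      dlnY_dlnp = {"tau_n": 0.7, "Q_np": -1.5, "B_D": 1.4}       # step 1: nuclear physics
      dlnp_dlnalpha = {"tau_n": -0.2, "Q_np": 0.6, "B_D": -0.1}  # step 2: particle/GUT input

      dlnY_dlnalpha = sum(dlnY_dlnp[p] * dlnp_dlnalpha[p] for p in dlnY_dlnp)

      delta_alpha = 1e-4   # assumed fractional variation of the coupling
      print(dlnY_dlnalpha, dlnY_dlnalpha * delta_alpha)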

  11. Marketing fundamentals.

    PubMed

    Redmond, W H

    2001-01-01

    This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined. PMID:11401791

  12. Non-Dimensional Characterization of the Friction Stir/Spot Welding Process Using a Simple Couette Flow Model Part I: Constant Property Bingham Plastic Solution

    NASA Astrophysics Data System (ADS)

    Buck, Gregory A.; Langerman, Michael

    2004-06-01

    A simplified model for the material flow created during a friction stir/spot welding process has been developed using a boundary driven cylindrical Couette flow model with a specified heat flux at the inner cylinder for a Bingham plastic material. Non-dimensionalization of the constant property governing equations identified three parameters that influence the velocity and temperature fields. Analytic solutions to these equations are presented and some representative results from a parametric study (parameters chosen and varied over ranges expected for the welding of a wide variety of metals) are discussed. The results also provide an expression for the critical radius (location of vanishing material velocity) as functions of the relevant non-dimensional parameters. A final study was conducted in which values for the non-dimensional heat flux parameter were chosen to produce peak dimensional temperatures on the order of 80% of the melting temperature for a typical 2000 series aluminum. Under these conditions it was discovered that the ratio of the maximum rate of shear work within the material (viscous dissipation) to the rate of energy input at the boundary due to frictional heating, ranged from about 0.0005 % for the lowest pin tool rotation rate, to about 1.3 % for the highest tool rotation rate studied. Curve fits to previous Gleeble data taken for a number of aluminum alloys provide reasonable justification for the Bingham plastic constitutive model, and although these fits indicate a strong temperature dependence for critical flow stress and viscosity, this work provides a simple tool for more sophisticated model validation. Part II of this study will present numerical solutions for velocity and temperature fields resulting from the non-linear coupling of the momentum and energy equations created by temperature dependent transport properties.
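
    The critical radius of vanishing material velocity can be illustrated with the classical torque balance for cylindrical Couette flow: the shear stress decays as (r_i/r)², so a Bingham material stops shearing where the stress falls to its yield stress. The sketch below (Python) evaluates that generic estimate; it is not the paper's non-dimensional expression, and the numerical values are assumptions.

      import math

      r_i = 5.0e-3        # pin tool (inner cylinder) radius, m (assumed)
      tau_wall = 40.0e6   # shear stress at the pin surface, Pa (assumed)
      tau_yield = 10.0e6  # Bingham yield stress of the heated metal, Pa (assumed)

      # Torque balance in cylindrical Couette flow gives tau(r) = tau_wall * (r_i / r)**2;
      # the material shears only where tau(r) >= tau_yield.
      r_critical = r_i * math.sqrt(tau_wall / tau_yield)
      print(f"critical radius ~ {1e3 * r_critical:.1f} mm ({r_critical / r_i:.1f} pin radii)")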

  13. Formulas for determining rotational constants

    NASA Astrophysics Data System (ADS)

    Guelachvili, G.

    This document is part of Subvolume B `Linear Triatomic Molecules', Part 9, of Volume 20 `Molecular Constants mostly from Infrared Spectroscopy' of Landolt-Börnstein Group II `Molecules and Radicals'. Part of the introduction, it states formulas for determining rotational constants, band center, band origin, and quadrupole coupling. Specific comments relate to BHO (HBO) and COS (OCS).

  14. Food Service Fundamentals.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    Developed as part of the Marine Corps Institute (MCI) correspondence training program, this course on food service fundamentals is designed to provide a general background in the basic aspects of the food service program in the Marine Corps; it is adaptable for nonmilitary instruction. Introductory materials include specific information for MCI…

  15. Fundamentals of Library Instruction

    ERIC Educational Resources Information Center

    McAdoo, Monty L.

    2012-01-01

    Being a great teacher is part and parcel of being a great librarian. In this book, veteran instruction services librarian McAdoo lays out the fundamentals of the discipline in easily accessible language. Succinctly covering the topic from top to bottom, he: (1) Offers an overview of the historical context of library instruction, drawing on recent…

  16. Development of procedures for calculating stiffness and damping properties of elastomers in engineering applications. Part 2: Elastomer characteristics at constant temperature

    NASA Technical Reports Server (NTRS)

    Gupta, P. K.; Tessarzik, J. M.; Cziglenyi, L.

    1974-01-01

    Dynamic properties of a commercial polybutadiene compound were determined at a constant temperature of 32 °C by a forced-vibration resonant-mass type of apparatus. The constant thermal state of the elastomer was ensured by keeping the ambient temperature constant and by limiting the power dissipation in the specimen. Experiments were performed with both compression and shear specimens at several preloads (nominal strain varying from 0 to 5 percent), and the results are reported in terms of a complex stiffness as a function of frequency. Very weak frequency dependence is observed, and a simple power-law type of correlation is shown to represent the data well. Variations in the complex stiffness as a function of preload are also found to be small for both compression and shear specimens.

  17. Effect of a Nonplanar Melt-Solid Interface on Lateral Compositional Distribution during Unidirectional Solidification of a Binary Alloy with a Constant Growth Velocity V. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Wang, Jai-Ching; Watring, Dale A.; Lehoczky, Sandor L.; Su, Ching-Hua; Gillies, Don; Szofran, Frank

    1999-01-01

    Infrared detector materials, such as Hg(1-x)Cd(x)Te and Hg(1-x)Zn(x)Te, have energy gaps almost linearly proportional to their composition. Due to the wide separation of the liquidus and solidus curves of their phase diagrams, there is compositional segregation in both the axial and radial directions of these crystals when they are grown unidirectionally in a Bridgman system at a constant growth rate. It is important to understand the mechanisms that affect lateral segregation so that large crystals with uniform radial composition become possible. Following the treatment of Coriell et al., we have developed a theory to study the effect of a curved melt-solid interface shape on the lateral composition distribution. The system is treated as a cylindrical, azimuthally symmetric system with a curved melt-solid interface whose shape can be expressed as a linear combination of a series of Bessel functions. The results show that the melt-solid interface shape has a dominant effect on the lateral composition distribution of these systems. For small values of b, the solute concentration at the melt-solid interface scales linearly with the interface shape, with a proportionality constant equal to the product of b and (1 - k), where b = VR/D, with V the growth velocity, R the sample radius, D the diffusion constant and k the distribution constant. A detailed theory will be presented. A computer code has been developed, and simulations have been performed and compared with experimental results. These will be published in another paper.
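
    The scaling stated above can be written compactly: if δz(r) is the deflection of the interface from a plane, the lateral variation of the interface solute concentration is, to leading order,

      \frac{\delta C_s(r)}{\bar{C}_s} \;\approx\; b\,(1-k)\,\frac{\delta z(r)}{R}, \qquad b = \frac{V R}{D},

    a schematic reading of the proportionality quoted in the abstract; the exact prefactor and normalization follow from the Bessel-series solution of the full problem.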

  18. The Search for More Effective Methods of Teaching High-School Biology to Slow Learners Through Interaction Analysis, Part II. The Effects of Various Constant Teaching Patterns

    ERIC Educational Resources Information Center

    Citron, Irvin M.; Barnes, Cyrus W.

    1970-01-01

    Presents the procedures, results, and conclusions of a study designed to determine whether constant patterns of teaching of various kinds over an extended period could affect concept formation, problem solving, and total achievement of slow learners in a high school biology course. (LC)

  19. Measurement fundamentals

    SciTech Connect

    Webb, R.A.

    1995-12-01

    The need for accurate petroleum measurement is obvious. Petroleum measurement is the basis of commerce between oil producers, royalty owners, oil transporters, refiners, marketers, the Department of Revenue, and the motoring public. Furthermore, petroleum measurements are often used to detect operational problems or unwanted releases in pipelines, tanks, marine vessels, underground storage tanks, etc. Therefore, consistent, accurate petroleum measurement is an essential part of any operation. While there are several methods and different types of equipment used to perform petroleum measurement, the basic process stays the same. The basic measurement process is the act of comparing an unknown quantity to a known quantity in order to establish its magnitude. The process can be seen in a variety of forms, such as measuring for a first down in a football game, weighing meat and produce at the grocery, or the use of an automobile odometer.

  20. Modelling the fate of nonylphenolic compounds in the Seine River--part 1: determination of in-situ attenuation rate constants.

    PubMed

    Cladière, Mathieu; Bonhomme, Céline; Vilmin, Lauriane; Gasperi, Johnny; Flipo, Nicolas; Tassin, Bruno

    2014-01-15

    Assessing the fate of endocrine disrupting compounds (EDCs) in the environment is currently a key issue for determining their impacts on aquatic ecosystems. 4-Nonylphenol (4-NP) is a well-known EDC and results from the biodegradation of the surfactant nonylphenol ethoxylates (NPnEOs). The fate mechanisms of NPnEOs are well documented, but their rate constants have mainly been determined through laboratory experiments. This study aims at evaluating the in-situ fate of 4-NP, nonylphenol monoethoxylate (NP1EO) and nonylphenolic acetic acid (NP1EC). Two sampling campaigns were carried out on the Seine River in July and September 2011, along a 28 km transect downstream of Paris. The field measurements are used for the calibration of a sub-model of NPnEO fate included in a hydro-ecological model of the Seine River (ProSe). The timing of the sampling is based on the Seine River flow velocity in order to follow the same volume of water. Based on our results, in-situ attenuation rate constants of 4-NP, NP1EO and NP1EC are evaluated for both campaigns. These rate constants vary greatly. Although the attenuation rate constants in July are especially high (higher than 1 d⁻¹), those obtained in September are lower and consistent with the literature. This is probably due to the biogeochemical conditions in the Seine River. Indeed, the July sampling campaign took place at the end of an algal bloom leading to an unusual bacterial biomass, while the September campaign was carried out under typical biogeochemical conditions. Finally, the uncertainties on the measurements and on the calibration parameters are estimated through a sensitivity analysis. This study provides relevant information regarding the fate of biodegradable pollutants in an aquatic environment by coupling field measurements and a biogeochemical model. Such data may be very helpful in the future to better understand the fate of nonylphenolic compounds or any other pollutants at the basin scale. PMID:24100207
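
    The kind of in-situ attenuation rate constant calibrated here can be illustrated with a first-order balance on the tracked volume of water: if a parcel needs a time t to travel between two stations, then k = ln(C_upstream/C_downstream)/t. A minimal sketch (Python; the concentrations and travel time are invented for illustration, not measurements from the campaigns):

      import math

      def first_order_k(c_up, c_down, travel_time_days):
          """First-order in-stream attenuation rate constant (1/day) for a tracked parcel."""
          return math.log(c_up / c_down) / travel_time_days

      # Hypothetical 4-NP concentrations (ng/L) at the upstream and downstream ends of the
      # transect and an assumed travel time; none of these numbers come from the study.
      k = first_order_k(c_up=250.0, c_down=180.0, travel_time_days=1.2)
      print(f"apparent attenuation rate constant ~ {k:.2f} d^-1")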

  1. Dielectric Constant of Suspensions

    NASA Astrophysics Data System (ADS)

    Mendelson, Kenneth S.; Ackmann, James J.

    1997-03-01

    We have used a finite element method to calculate the dielectric constant of a cubic array of spheres. Extensive calculations support preliminary conclusions reported previously (K. Mendelson and J. Ackmann, Bull. Am. Phys. Soc. 41, 657 (1996)). At frequencies below 100 kHz the real part of the dielectric constant (ε') shows oscillations as a function of the volume fraction of the suspension. These oscillations disappear at low conductivities of the suspending fluid. Measurements of the dielectric constant (J. Ackmann et al., Ann. Biomed. Eng. 24, 58 (1996); H. Fricke and H. Curtis, J. Phys. Chem. 41, 729 (1937)) are not sufficiently sensitive to show oscillations but appear to be consistent with the theoretical results.

  2. Extending the Constant Power Speed Range of the Brushless DC Motor through Dual Mode Inverter Control -- Part I: Theory and Simulation

    SciTech Connect

    Lawler, J.S.

    2001-10-29

    An inverter topology and control scheme has been developed that can drive low-inductance, surface-mounted permanent magnet motors over the wide constant power speed range required in electric vehicle applications. This new controller is called the dual-mode inverter control (DMIC) [1]. The DMIC can drive either the permanent magnet synchronous machine (PMSM) with sinusoidal back emf, or the brushless dc machine (BDCM) with trapezoidal back emf, in the motoring and regenerative braking modes. In this paper we concentrate on the BDCM under high-speed motoring conditions. Simulation results show that if all motor and inverter loss mechanisms are neglected, the constant power speed range of the DMIC is infinite. The simulation results are supported by closed-form expressions for peak and rms motor current and average power, derived from an analytical solution of the differential equations governing the DMIC/BDCM drive for the lossless case. The analytical solution shows that the range of motor inductance that can be accommodated by the DMIC spans more than an order of magnitude, so that the DMIC is compatible with both low- and high-inductance BDCMs. Finally, a method is given for integrating the classical hysteresis-band current control, used for motor control below base speed, with the phase advance of the DMIC that is applied above base speed. The power versus speed performance of the DMIC is then simulated across the entire speed range.

  3. Uptake of metal ions by a new chelating ion exchange resin. Part 3: Protonation constants via potentiometric titration and solid state ³¹P NMR spectroscopy

    SciTech Connect

    Nash, K.L.; Rickert, P.G.; Muntean, J.V.; Alexandratos, S.D.

    1994-01-01

    A new chelating ion exchange resin which incorporates methylenediphosphonate, carboxylate, and sulfonate functional groups in a polystyrene-divinylbenzene matrix has been prepared. This resin exhibits exceptionally high affinity for polyvalent cations even from moderately acidic aqueous media. Metal ion coordination occurs primarily at the diphosphonate group, with the secondary binding sites contributing to charge neutralization when necessary and possible, and to increasing the hydrophilicity of the resin pores. In the present investigation, the protonation equilibria of the phosphonate groups in the resin are investigated via potentiometric titration and solid-state ³¹P NMR spectroscopy of the resin. Intrinsic equilibrium constants for the first two diphosphonate protonation reactions are pK₄ = 10.47 and pK₃ = 7.24. The last two protons added to the diphosphonate group are acidic, having pKₐ values less than 2.5. These protonation constants are consistent with those reported previously for monomer analog 1,1-diphosphonic acids. This result implies that thermodynamic data available in the literature can be used to predict the relative affinity of the resin for polyvalent cations. 17 refs., 2 figs., 3 tabs.

  4. Jumping on the Train of Personalized Medicine: A Primer for Non-Geneticist Clinicians: Part 2. Fundamental Concepts in Genetic Epidemiology

    PubMed Central

    Li, Aihua; Meyre, David

    2014-01-01

    With the decrease in sequencing costs, personalized genome sequencing will eventually become common in medical practice. We therefore write this series of three reviews to help non-geneticist clinicians to jump into the fast-moving field of personalized medicine. In the first article of this series, we reviewed the fundamental concepts in molecular genetics. In this second article, we cover the key concepts and methods in genetic epidemiology including the classification of genetic disorders, study designs and their implementation, genetic marker selection, genotyping and sequencing technologies, gene identification strategies, data analyses and data interpretation. This review will help the reader critically appraise a genetic association study. In the next article, we will discuss the clinical applications of genetic epidemiology in the personalized medicine area. PMID:25598767

  5. Fundamental Study of a Single Point Lean Direct Injector. Part I: Effect of Air Swirler Angle and Injector Tip Location on Spray Characteristics

    NASA Technical Reports Server (NTRS)

    Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.

    2015-01-01

    Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.

  6. Fundamental Study of a Single Point Lean Direct Injector. Part I: Effect of Air Swirler Angle and Injector Tip Location on Spray Characteristics

    NASA Technical Reports Server (NTRS)

    Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.

    2014-01-01

    Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 deg is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.

  7. Fundamentals of battery dynamics

    NASA Astrophysics Data System (ADS)

    Jossen, Andreas

    Modern applications, such as wireless communication systems or hybrid electric vehicles, operate with strong power fluctuations. For some applications, where the power frequencies are high (above some 10 or 100 Hz), it is possible to filter the high frequencies using passive components, though this results in additional costs. In other applications, where the dynamic time constants range up to some seconds, filtering cannot be done. Batteries are hence operated under dynamic loads. But what happens under these dynamic operating conditions? This paper describes the fundamentals of the dynamic characteristics of batteries over a frequency range from some MHz down to the mHz range. As the dynamic behaviour depends on the actual state of charge (SOC) and the state of health (SOH), it is possible to gain information on the battery state by analysing the dynamic behaviour. High dynamic loads can influence the battery temperature, the battery performance and the battery lifetime.

  8. Extending the Constant Power Speed Range of the Brushless DC Motor through Dual Mode Inverter Control -- Part II: Laboratory Proof-of-Principle

    SciTech Connect

    Lawler, J.S.

    2001-10-29

    Previous theoretical work has shown that when all loss mechanisms are neglected the constant power speed range (CPSR) of a brushless dc motor (BDCM) is infinite when the motor is driven by the dual-mode inverter control (DMIC) [1,2]. In a physical drive, losses, particularly speed-sensitive losses, will limit the CPSR to a finite value. In this paper we report the results of laboratory testing of a low-inductance, 7.5-hp BDCM driven by the DMIC. The speed rating of the test motor rotor limited the upper speed of the testing, and the results show that the CPSR of the test machine is greater than 6:1 when driven by the DMIC. Current wave shape, peak, and rms values remained controlled and within rating over the entire speed range. The laboratory measurements allowed the speed-sensitive losses to be quantified and incorporated into computer simulation models, which then accurately reproduce the results of lab testing. The simulator shows that the limiting CPSR of the test motor is 8:1. These results confirm that the DMIC is capable of driving low-inductance BDCMs over the wide CPSR that would be required in electric vehicle applications.

  9. Constant Communities in Complex Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh

    2013-05-01

    Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, thus merely changing the vertex order can alter their assignments to the community. However, there has been less study on how vertex ordering influences the results of the community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignment to communities are, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
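
    The idea of constant communities can be prototyped directly: run a modularity-based algorithm several times under different vertex orderings (here approximated by different random seeds of the Louvain method) and keep the groups of vertices that always land in the same community. A rough sketch, assuming networkx >= 2.8 for louvain_communities; this illustrates the concept and is not the authors' code:

      from itertools import combinations
      import networkx as nx

      G = nx.karate_club_graph()                 # small example graph
      runs = [nx.community.louvain_communities(G, seed=s) for s in range(10)]

      def labels(partition):
          """Map node -> community id for one partition."""
          return {v: i for i, block in enumerate(partition) for v in block}

      run_labels = [labels(p) for p in runs]

      # Two nodes are 'always together' if every run puts them in the same community.
      together = nx.Graph()
      together.add_nodes_from(G.nodes())
      for u, v in combinations(G.nodes(), 2):
          if all(lab[u] == lab[v] for lab in run_labels):
              together.add_edge(u, v)

      # Constant communities = connected components of the 'always together' graph.
      constant = [sorted(c) for c in nx.connected_components(together) if len(c) > 1]
      print(len(constant), constant)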

  10. Measurements in the Turbulent Boundary Layer at Constant Pressure in Subsonic and Supersonic Flow. Part 2: Laser-Doppler Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Dimotakis, P. E.; Collins, D. J.; Lang, D. B.

    1979-01-01

    A description of both the mean and the fluctuating components of the flow, and of the Reynolds stress as observed using a dual forward scattering laser-Doppler velocimeter is presented. A detailed description of the instrument and of the data analysis techniques were included in order to fully document the data. A detailed comparison was made between the laser-Doppler results and those presented in Part 1, and an assessment was made of the ability of the laser-Doppler velocimeter to measure the details of the flows involved.

  11. Planck Constant Determination from Power Equivalence

    NASA Astrophysics Data System (ADS)

    Newell, David B.

    2000-04-01

    Equating mechanical to electrical power links the kilogram, the meter, and the second to the practical realizations of the ohm and the volt derived from the quantum Hall and the Josephson effects, yielding an SI determination of the Planck constant. The NIST watt balance uses this power equivalence principle, and in 1998 it measured the Planck constant with a combined relative standard uncertainty of 8.7 × 10⁻⁸, the most accurate determination to date. The next generation of the NIST watt balance is now being assembled. Modifications to the experimental facilities have been made to reduce the uncertainty components from vibrations and electromagnetic interference. A vacuum chamber has been installed to reduce the uncertainty components associated with performing the experiment in air. Most of the apparatus is in place and diagnostic testing of the balance should begin this year. Once a combined relative standard uncertainty of one part in 10⁸ has been reached, the power equivalence principle can be used to monitor the possible drift of the artifact mass standard, the kilogram, and provide an accurate alternative definition of mass in terms of fundamental constants. *Electricity Division, Electronics and Electrical Engineering Laboratory, Technology Administration, U.S. Department of Commerce. Contribution of the National Institute of Standards and Technology, not subject to copyright in the U.S.
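
    The power equivalence can be summarized in one line: the mechanical power mgv of the moving-coil phase is equated to the electrical power UI, with the voltage traced to the Josephson effect and the current to a quantum-Hall-calibrated resistance, so that the measured product is proportional to the Planck constant. Schematically (the exactly known integer and frequency factors depend on the particular realization),

      m g v \;=\; U I \;=\; C\, f_1 f_2\, K_J^{-2} R_K^{-1} \;=\; \frac{C\, f_1 f_2}{4}\, h,
      \qquad K_J = \frac{2e}{h}, \quad R_K = \frac{h}{e^2},

    where f₁ and f₂ are the Josephson microwave frequencies and C collects the exactly known integers of the two quantum-electrical calibrations, so that measuring m, g and v yields h.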

  12. A Fundamental Breakdown. Part I: Locomotion

    ERIC Educational Resources Information Center

    Townsend, J. Scott; Mohr, Derek J.

    2005-01-01

    In an earlier issue of "TEPE" (January, 2005) the "Research to Practice" column examined the effects of a developmental curriculum on elementary-aged children's performance. Pappa, Evanggelinou, and Karabourniotis (2005) found support for a line of research suggesting that curricular programming should place a specific focus on development of…

  13. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Increased emphasis is placed on fundamental and generic research at Lewis Research Center with less systems development efforts. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long term technical needs of the aerospace industry. The main thrusts for this combustion fundamentals program area are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numeric techniques.

  14. Exchange Rates and Fundamentals.

    ERIC Educational Resources Information Center

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I(1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  15. Precision laser spectroscopy in fundamental studies

    NASA Astrophysics Data System (ADS)

    Kolachevsky, N. N.; Khabarova, K. Yu

    2014-12-01

    The role of precision spectroscopic measurements in the development of fundamental theories is discussed, with particular emphasis on the hydrogen atom, the simplest stable atomic system amenable to the accurate calculation of energy levels from quantum electrodynamics. Research areas that greatly benefited from the participation of the Lebedev Physical Institute are reviewed, including the violation of fundamental symmetries, the stability of the fine-structure constant α, and sensitive tests of quantum electrodynamics.

  16. Assessing uncertainty in physical constants

    NASA Astrophysics Data System (ADS)

    Henrion, Max; Fischhoff, Baruch

    1986-09-01

    Assessing the uncertainty due to possible systematic errors in a physical measurement unavoidably involves an element of subjective judgment. Examination of historical measurements and recommended values for the fundamental physical constants shows that the reported uncertainties have a consistent bias towards underestimating the actual errors. These findings are comparable to findings of persistent overconfidence in psychological research on the assessment of subjective probability distributions. Awareness of these biases could help in interpreting the precision of measurements, as well as provide a basis for improving the assessment of uncertainty in measurements.

  17. (In)validity of the constant field and constant currents assumptions in theories of ion transport.

    PubMed Central

    Syganow, A; von Kitzing, E

    1999-01-01

    Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480
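
    The constant-field assumption examined here is the one underlying the textbook Goldman-Hodgkin-Katz flux equation, which is the standard consequence of that approximation (quoted for reference; it is not a result derived in this abstract):

      J_S \;=\; P_S\,\frac{z_S^2 F^2 V_m}{R T}\;
            \frac{[S]_i - [S]_o\, e^{-z_S F V_m / R T}}{1 - e^{-z_S F V_m / R T}},

    with P_S the permeability, z_S the valence, V_m the membrane potential, and [S]_i, [S]_o the internal and external concentrations of the permeant species S.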

  18. Fundamentals of phosphate transfer.

    PubMed

    Kirby, Anthony J; Nome, Faruk

    2015-07-21

    Historically, the chemistry of phosphate transfer-a class of reactions fundamental to the chemistry of Life-has been discussed almost exclusively in terms of the nucleophile and the leaving group. Reactivity always depends significantly on both factors; but recent results for reactions of phosphate triesters have shown that it can also depend strongly on the nature of the nonleaving or "spectator" groups. The extreme stabilities of fully ionised mono- and dialkyl phosphate esters can be seen as extensions of the same effect, with one or two triester OR groups replaced by O(-). Our chosen lead reaction is hydrolysis-phosphate transfer to water: because water is the medium in which biological chemistry takes place; because the half-life of a system in water is an accepted basic index of stability; and because the typical mechanisms of hydrolysis, with solvent H2O providing specific molecules to act as nucleophiles and as general acids or bases, are models for reactions involving better nucleophiles and stronger general species catalysts. Not least those available in enzyme active sites. Alkyl monoester dianions compete with alkyl diester monoanions for the slowest estimated rates of spontaneous hydrolysis. High stability at physiological pH is a vital factor in the biological roles of organic phosphates, but a significant limitation for experimental investigations. Almost all kinetic measurements of phosphate transfer reactions involving mono- and diesters have been followed by UV-visible spectroscopy using activated systems, conveniently compounds with good leaving groups. (A "good leaving group" OR* is electron-withdrawing, and can be displaced to generate an anion R*O(-) in water near pH 7.) Reactivities at normal temperatures of P-O-alkyl derivatives-better models for typical biological substrates-have typically had to be estimated: by extended extrapolation from linear free energy relationships, or from rate measurements at high temperatures. Calculation is free

  19. On the Khinchin Constant

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor)

    1995-01-01

    We prove known identities for the Khinchin constant and develop new identities for the more general Hoelder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results.
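
    A minimal numerical sketch of the kind of rapidly converging zeta series described here, using one commonly quoted identity for ln K0; the code is only an illustration and is not taken from the paper.

```python
# Khinchin's constant K0 from a zeta-series identity:
#   ln K0 = (1/ln 2) * sum_{n>=1} (zeta(2n) - 1)/n * sum_{k=1}^{2n-1} (-1)^(k+1)/k
from mpmath import mp, mpf, zeta, log, exp

mp.dps = 30  # working precision (decimal digits)

def khinchin(terms=60):
    total = mpf(0)
    for n in range(1, terms + 1):
        inner = sum(mpf(-1) ** (k + 1) / k for k in range(1, 2 * n))
        total += (zeta(2 * n) - 1) / n * inner
    return exp(total / log(2))

print(khinchin())  # ~2.68545200106...
```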

  20. Solar constant secular changes

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.; Orosz, Jerome A.

    1990-01-01

    A recent model for solar constant secular changes is used to calculate a 'proxy' solar constant for: (1) the past four centuries, based upon the sunspot record, (2) the past nine centuries, based upon C-14 observations and their relation to solar activity, and (3) the next decade, based upon a dynamo theory model for the solar cycle. The proxy solar constant data is tabulated as it may be useful for climate modelers studying global climate changes.

  1. Optical constants of solid methane

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.

    1989-01-01

    Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented for the optical constants of solid methane in the 0.4 to 2.6 micron region. The constant k is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4.
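
    A sketch of the Lorentz-Lorenz scaling step described above: the specific refraction A = (n^2 - 1)/((n^2 + 2) rho) is treated as density independent, so n for the solid follows from the measured liquid value. The index and density values below are assumed, illustrative numbers, not values from the paper.

```python
# Scale a refractive index to a new density assuming constant specific
# refraction (Lorentz-Lorenz relation). Inputs are illustrative only.
import math

def lorentz_lorenz_scale(n_ref, rho_ref, rho_new):
    """Return the refractive index at density rho_new, given n_ref at rho_ref."""
    A = (n_ref**2 - 1.0) / ((n_ref**2 + 2.0) * rho_ref)
    x = A * rho_new
    return math.sqrt((1.0 + 2.0 * x) / (1.0 - x))

# assumed example values: n of liquid CH4 near 110 K, densities in g/cm^3
print(lorentz_lorenz_scale(n_ref=1.27, rho_ref=0.42, rho_new=0.52))  # ~1.34
```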

  2. Universal constants and equations of turbulent motion

    NASA Astrophysics Data System (ADS)

    Baumert, Helmut

    2011-11-01

    For turbulence at high Reynolds number we present an analogy with the kinetic theory of gases, with dipoles made of vortex tubes as frictionless, incompressible but deformable quasi-particles. Their movements are governed by Helmholtz' elementary vortex rules applied locally. A contact interaction or ``collision'' leads either to random scatter of a trajectory or to the formation of two likewise rotating, fundamentally unstable whirls forming a dissipative patch slowly rotating around its center of mass, the latter almost at rest. This approach predicts von Karman's constant as 1/sqrt(2 pi) = 0.399 and the spatio-temporal dynamics of energy-containing time and length scales controlling turbulent mixing [Baumert 2005, 2009]. A link to turbulence spectra has so far been missing. In the present contribution it is shown that the above image of dipole movements is compatible with Kolmogorov's spectra if dissipative patches, beginning as two likewise rotating eddies, evolve locally into a space-filling bearing in the sense of Herrmann [1990], i.e. into an ``Apollonian gear.'' Its parts and pieces are frictionless, excepting the dissipative scale of size zero. Our approach predicts the dimensionless pre-factor in the 3D Eulerian wavenumber spectrum (in terms of pi) as 1.8, and in the Lagrangian frequency spectrum as the integer number 2. Our derivations are free of empirical relations and rest on geometry, methods from many-particle physics, and on elementary conservation laws only. Department of the Navy Grant, ONR Global
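
    A trivial numerical check of the quoted prediction for von Karman's constant:

```python
# The predicted value 1/sqrt(2*pi), quoted above as 0.399.
import math
print(1.0 / math.sqrt(2.0 * math.pi))  # 0.39894...
```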

  3. Optical constants of solid methane

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.

    1990-01-01

    Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micrometer region. Deposition onto a substrate at 10 K produces glassy (semi-amorphous) material. Annealing this material at approximately 33 K for approximately 1 hour results in a crystalline material as seen by sharper, more structured bands and negligible background extinction due to scattering. The constant k is reported for both the amorphous and the crystalline (annealed) states. Typical values (at absorption maxima) are in the .001 to .0001 range. Below lambda = 1.1 micrometers the bands are too weak to be detected by transmission through the films less than or equal to 215 micrometers in thickness, employed in the studies to date. Using previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for

  4. Astronomical reach of fundamental physics.

    PubMed

    Burrows, Adam S; Ostriker, Jeremiah P

    2014-02-18

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law. PMID:24477692
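
    A zeroth-order sketch of the two mass scales referred to here, the Planck mass and the Chandrasekhar-type combination m_Pl^3/m_p^2; order-unity prefactors are omitted, in the spirit of the dimensional analysis described in the abstract.

```python
# Planck mass and the Chandrasekhar-type mass scale m_Pl^3 / m_p^2,
# with dimensionless prefactors of order unity omitted.
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
m_p = 1.67262192369e-27  # kg, proton mass
M_sun = 1.989e30         # kg

m_planck = math.sqrt(hbar * c / G)
m_chandra_scale = m_planck**3 / m_p**2

print(f"Planck mass    ~ {m_planck:.3e} kg")
print(f"m_Pl^3 / m_p^2 ~ {m_chandra_scale:.3e} kg "
      f"(~{m_chandra_scale / M_sun:.1f} M_sun, prefactor omitted)")
```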

  5. Astronomical reach of fundamental physics

    PubMed Central

    Burrows, Adam S.; Ostriker, Jeremiah P.

    2014-01-01

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law. PMID:24477692

  6. The cosmological constant problem

    SciTech Connect

    Dolgov, A.D.

    1989-05-01

    A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.

  7. Space Shuttle astrodynamical constants

    NASA Technical Reports Server (NTRS)

    Cockrell, B. F.; Williamson, B.

    1978-01-01

    Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project.

  8. Development of Monopole Interaction Models for Ionic Compounds. Part I: Estimation of Aqueous Henry’s Law Constants for Ions and Gas Phase pKa Values for Acidic Compounds

    EPA Science Inventory

    The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate Henry’s Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aq...

  9. Constant potential pulse polarography

    USGS Publications Warehouse

    Christie, J.H.; Jackson, L.L.; Osteryoung, R.A.

    1976-01-01

    The new technique of constant potential pulse polarography, in which all pulses are to be the same potential, is presented theoretically and evaluated experimentally. The response obtained is in the form of a faradaic current wave superimposed on a constant capacitative component. Results obtained with a computer-controlled system exhibit a capillary response current similar to that observed in normal pulse polarography. Calibration curves for Pb obtained using a modified commercial pulse polarographic instrument are in good accord with theoretical predictions.

  10. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
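
    As a toy illustration of the random-sampling and tally concepts covered in these notes (not code from RACER; the slab problem and cross section below are assumed for the example), consider estimating the uncollided transmission through a slab:

```python
# Toy Monte Carlo: estimate the uncollided transmission of particles through
# a slab of given thickness by sampling exponential flight distances with
# total macroscopic cross section sigma_t and tallying the survivors.
import math
import random

def uncollided_transmission(sigma_t, thickness, n_histories=100_000, seed=1):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        distance = -math.log(1.0 - rng.random()) / sigma_t  # exponential sample
        if distance > thickness:
            transmitted += 1
    return transmitted / n_histories

# analytic answer is exp(-sigma_t * thickness) ~ 0.1353 for these values
print(uncollided_transmission(sigma_t=1.0, thickness=2.0))
```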

  11. History and progress on accurate measurements of the Planck constant

    NASA Astrophysics Data System (ADS)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 108 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved
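
    A small sketch of how h is tied to the electrical constants mentioned above: with the Josephson constant K_J = 2e/h and the von Klitzing constant R_K = h/e^2, one has h = 4/(K_J^2 R_K) and e = 2/(K_J R_K). The numerical values below are rounded CODATA-style values, quoted only for illustration.

```python
# Recover h and e from the Josephson and von Klitzing constants.
K_J = 483597.8484e9   # Hz/V  (2e/h), rounded
R_K = 25812.80745     # ohm   (h/e^2), rounded

h = 4.0 / (K_J**2 * R_K)
e = 2.0 / (K_J * R_K)

print(f"h ~ {h:.8e} J s")  # ~6.626e-34 J s
print(f"e ~ {e:.8e} C")    # ~1.602e-19 C
```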

  12. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10(-34) J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10(8) from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  13. Fundamental symmetries and interactions—selected topics

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2015-11-01

    In the field of fundamental interactions and symmetries, numerous experiments are underway or planned in order to verify the standard model of particle physics, to search for possible extensions to it, or to exploit the standard model for extracting the most precise values of fundamental constants. We cover selected recent developments, in particular those which exploit stored and confined particles. Emphasis is on experiments with transformative character, i.e. those which may be able to guide and steer theoretical model building into new but defined directions. Among these are projects with antiprotons, muons and certain selected atoms and atomic nuclei.

  14. Fundamentals of fluid lubrication

    NASA Technical Reports Server (NTRS)

    Hamrock, Bernard J.

    1991-01-01

    The aim is to coordinate the topics of design, engineering dynamics, and fluid dynamics in order to aid researchers in the area of fluid film lubrication. The lubrication principles that are covered can serve as a basis for the engineering design of machine elements. The fundamentals of fluid film lubrication are presented clearly so that students that use the book will have confidence in their ability to apply these principles to a wide range of lubrication situations. Some guidance on applying these fundamentals to the solution of engineering problems is also provided.

  15. Fundamentals of fluid sealing

    NASA Technical Reports Server (NTRS)

    Zuk, J.

    1976-01-01

    The fundamentals of fluid sealing, including seal operating regimes, are discussed and the general fluid-flow equations for fluid sealing are developed. Seal performance parameters such as leakage and power loss are presented. Included in the discussion are the effects of geometry, surface deformations, rotation, and both laminar and turbulent flows. The concept of pressure balancing is presented, as are differences between liquid and gas sealing. Mechanisms of seal surface separation, fundamental friction and wear concepts applicable to seals, seal materials, and pressure-velocity (PV) criteria are discussed.

  16. Optical constants of solid methane

    SciTech Connect

    Khare, B.N.; Thompson, W.R.; Sagan, C. . Lab. for Planetary Studies); Arakawa, E.T.; Bruel, C.; Judish, J.P. ); Khanna, R.K. . Dept. of Chemistry and Biochemistry); Pollack, J.B. . Ames Research Center)

    1989-01-01

    Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH{sub 4} for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. We present preliminary results of the optical constants of solid methane for the 0.4 {mu}m to 2.6 {mu}m region. We report k for both the amorphous and the crystalline (annealed) states. Using our previously measured values of the real part of the refractive index, n, of liquid methane at 110{degree}K (Bull. Am. Phys. Soc.31, 700 (1986)) we compute n for solid methane using the Lorentz-Lorentz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH{sub 4}. 33 refs., 6 figs., 2 tabs.

  17. Unification of Fundamental Forces

    NASA Astrophysics Data System (ADS)

    Salam, Abdus; Taylor, Foreword by John C.

    2005-10-01

    Foreword John C. Taylor; 1. Unification of fundamental forces Abdus Salam; 2. History unfolding: an introduction to the two 1968 lectures by W. Heisenberg and P. A. M. Dirac Abdus Salam; 3. Theory, criticism, and a philosophy Werner Heisenberg; 4. Methods in theoretical physics Paul Adrien Maurice Dirac.

  18. Fundamentals of Diesel Engines.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    This student guide, one of a series of correspondence training courses designed to improve the job performance of members of the Marine Corps, deals with the fundamentals of diesel engine mechanics. Addressed in the three individual units of the course are the following topics: basic principles of diesel mechanics; principles, mechanics, and…

  19. Reading Is Fundamental, 1977.

    ERIC Educational Resources Information Center

    Smithsonian Institution, Washington, DC. National Reading is Fun-damental Program.

    Reading Is Fundamental (RIF) is a national, nonprofit organization designed to motivate children to read by making a wide variety of inexpensive books available to them and allowing the children to choose and keep books that interest them. This annual report for 1977 contains the following information on the RIF project: an account of the…

  20. Fundamentals of soil science

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study guide provides comments and references for professional soil scientists who are studying for the soil science fundamentals exam needed as the first step for certification. The performance objectives were determined by the Soil Science Society of America's Council of Soil Science Examiners...

  1. Homeschooling and Religious Fundamentalism

    ERIC Educational Resources Information Center

    Kunzman, Robert

    2010-01-01

    This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…

  2. Fundamentals of tribology

    SciTech Connect

    Suh, N.P.; Saka, N.

    1980-01-01

    This book presents the proceedings of the June 1978 International Conference on the Fundamentals of Tribology. The papers discuss the effects of surface topography and of the properties of materials on wear; friction, wear, and thermomechanical effects; wear mechanisms in metal processing; polymer wear; wear monitoring and prevention; and lubrication. (LCL)

  3. Fundamental research data base

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A fundamental research data base containing ground truth, image, and Badhwar profile feature data for 17 North Dakota, South Dakota, and Minnesota agricultural sites is described. Image data was provided for a minimum of four acquisition dates for each site and all four images were registered to one another.

  4. Laser Fundamentals and Experiments.

    ERIC Educational Resources Information Center

    Van Pelt, W. F.; And Others

    As a result of work performed at the Southwestern Radiological Health Laboratory with respect to lasers, this manual was prepared in response to the increasing use of lasers in high schools and colleges. It is directed primarily toward the high school instructor who may use the text for a short course in laser fundamentals. The definition of the…

  5. The Fundamental Property Relation.

    ERIC Educational Resources Information Center

    Martin, Joseph J.

    1983-01-01

    Discusses a basic equation in thermodynamics (the fundamental property relation), focusing on a logical approach to the development of the relation where effects other than thermal, compression, and exchange of matter with the surroundings are considered. Also demonstrates erroneous treatments of the relation in three well-known textbooks. (JN)
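
    For reference, when only thermal, compression, and matter-exchange effects are considered, the fundamental property relation is commonly written in the standard textbook form below (this form is not quoted from the article itself):

```latex
dU = T\,dS - P\,dV + \sum_i \mu_i\,dN_i
```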

  6. Fundamental electrode kinetics

    NASA Technical Reports Server (NTRS)

    Elder, J. P.

    1968-01-01

    Report presents the fundamentals of electrode kinetics and the methods used in evaluating the characteristic parameters of rapid-charge transfer processes at electrode-electrolyte interfaces. The concept of electrode kinetics is outlined, followed by the principles underlying the experimental techniques for the investigation of electrode kinetics.

  7. Basic Publication Fundamentals.

    ERIC Educational Resources Information Center

    Savedge, Charles E., Ed.

    Designed for students who produce newspapers and newsmagazines in junior high, middle, and elementary schools, this booklet is both a scorebook and a fundamentals text. The scorebook provides realistic criteria for judging publication excellence at these educational levels. All the basics for good publications are included in the text of the…

  8. A Legal Constant

    ERIC Educational Resources Information Center

    Taylor, Kelley R.

    2009-01-01

    The 21st century has brought many technological, social, and economic changes--nearly all of which have affected schools and the students, administrators, and faculty members who are in them. Luckily, as some things change, other things remain the same. Such is true with the fundamental legal principles that guide school administrators' actions…

  9. Elastic constants of calcite

    USGS Publications Warehouse

    Peselnick, L.; Robie, R.A.

    1962-01-01

    The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc and 25 °C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11=13.7, C33=8.11, C44=3.50, C12=4.82, C13=5.68, and C14=-2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
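
    For readers who want the quoted constants in SI form, a short sketch that assembles the stiffness matrix from the values above and converts them to GPa (1 dyn/cm^2 = 0.1 Pa, so 10^11 dyn/cm^2 = 10 GPa); the Voigt-matrix layout assumes the standard trigonal (calcite-type) symmetry.

```python
# Trigonal (calcite-type) stiffness matrix from the quoted constants,
# converted from units of 10^11 dyn/cm^2 to GPa.
import numpy as np

# values from the abstract, in units of 10^11 dyn/cm^2
C11, C33, C44, C12, C13, C14 = 13.7, 8.11, 3.50, 4.82, 5.68, -2.00
C66 = (C11 - C12) / 2.0  # holds for trigonal symmetry

C = np.array([
    [C11,  C12,  C13,  C14, 0.0,  0.0],
    [C12,  C11,  C13, -C14, 0.0,  0.0],
    [C13,  C13,  C33,  0.0, 0.0,  0.0],
    [C14, -C14,  0.0,  C44, 0.0,  0.0],
    [0.0,  0.0,  0.0,  0.0, C44,  C14],
    [0.0,  0.0,  0.0,  0.0, C14,  C66],
]) * 10.0  # -> GPa

print(C)
```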

  10. Enhancement of Compton scattering by an effective coupling constant

    SciTech Connect

    Barbiellini, Bernardo; Nicolini, Piero

    2011-08-15

    A robust thermodynamic argument shows that a small reduction of the effective coupling constant {alpha} of QED greatly enhances the low-energy Compton-scattering cross section and that the Thomson scattering length is connected to a fundamental scale {lambda}. A discussion provides a possible quantum interpretation of this enormous sensitivity to changes in the effective coupling constant {alpha}.

  11. Measuring Boltzmann's Constant with Carbon Dioxide

    ERIC Educational Resources Information Center

    Ivanov, Dragia; Nikolov, Stefan

    2013-01-01

    In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…

  12. Rotor-Liquid-Fundament System's Oscillation

    NASA Astrophysics Data System (ADS)

    Kydyrbekuly, A.

    The work is devoted to the oscillation and the stability of stationary rotation of a vertical, flexible, statically and dynamically unbalanced rotor with a cavity partly filled with liquid, set on an elastic frame fundament. Accounting for factors such as oscillation of the fundament, oscillation of the liquid, asymmetry in the installation of the rotor on the shaft, anisotropy of the shaft supports and of the fundament, static and dynamic unbalance of the rotor, external friction, and internal friction of the shaft allows the kinematic and dynamic characteristics of the system to be determined more precisely.

  13. On the role of the Avogadro constant in redefining SI units for mass and amount of substance

    NASA Astrophysics Data System (ADS)

    Leonard, B. P.

    2007-02-01

    There is a common misconception that the Avogadro constant is one of the fundamental constants of nature, in the same category as the speed of light, the Planck constant and the invariant masses of atomic-scale particles. Although the absolute mass of any specified atomic-scale entity is an invariant universal constant of nature, the Avogadro constant relating this to a macroscopic quantity is not. Rather, it is a man-made construct, designed by convention to define a convenient unit relating the atomic and macroscopic scales. The misportrayal seems to stem from the widespread use of the term 'fixed-Avogadro-constant' for describing a redefinition of the kilogram that is, in fact, based on a fixed atomic-scale particle mass. This paper endeavours to clarify the role of the Avogadro constant in current definitions of SI units for mass and amount of substance as well as recently proposed redefinitions of these units—in particular, those based on fixing the numerical values of the Planck and Avogadro constants, respectively. Precise definitions lead naturally to a rational, straightforward and intuitively obvious construction of appropriate (exactly defined) atomic-scale units for these quantities. And this, in turn, suggests a direct and easily comprehended two-part statement of the fixed-Planck-constant kilogram definition involving a well-understood and physically meaningful de Broglie-Compton frequency.
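
    A small numerical illustration of the de Broglie-Compton frequency mentioned above, nu = m c^2 / h; the exact present-day SI values of h and c are used purely for illustration.

```python
# de Broglie-Compton frequency of one kilogram: fixing h fixes the
# kilogram through nu = m * c**2 / h (and inversely m = h * nu / c**2).
h = 6.62607015e-34   # J s (exact in the revised SI)
c = 2.99792458e8     # m/s (exact)

m = 1.0  # kg
nu = m * c**2 / h
print(f"de Broglie-Compton frequency of 1 kg ~ {nu:.6e} Hz")  # ~1.356e50 Hz
```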

  14. XrayOpticsConstants

    Energy Science and Technology Software Center (ESTSC)

    2005-06-20

    This application (XrayOpticsConstants) is a tool for displaying X-ray and Optical properties for a given material, x-ray photon energy, and in the case of a gas, pressure. The display includes fields such as the photo-electric absorption attenuation length, density, material composition, index of refraction, and emission properties (for scintillator materials).

  15. Use of the ion exchange method for the determination of stability constants of trivalent metal complexes with humic and fulvic acids--part I: Eu3+ and Am3+ complexes in weakly acidic conditions.

    PubMed

    Wenming, Dong; Hongxia, Zhang; Meide, Huang; Zuyi, Tao

    2002-06-01

    The conditional stability constants for tracer concentrations of Eu(III) and Am(III) with a red earth humic acid (REHA), a red earth fulvic acid (REFA) and a fulvic acid from weathered coal (WFA) were determined at pH 5.2-6.4 (such values are similar to those in non-calcareous soils) in the presence of HAc/NaAc or NaNO3 by using the cation exchange method. It was found that 1:1 complexes were predominately formed in weakly acidic conditions. The total exchangeable proton capacities and the degrees of dissociation of these humic substances were determined by using a potentiometric titration method. The key parameters necessary for the experimental determination of the conditional stability constants of metal ions with humic substances in weakly acidic conditions by using the cation exchange method were discussed. The conditional stability constants of 1:1 complexes obtained in this paper were compared with the literature data of Am(III) determined by using the ion exchange method and the solvent extraction method and with the stability constants of 1:1 complexes of UO2(2+) and Th4+ with the same soil humic substances. These results indicate the great stability of bivalent UO2(2+), trivalent Eu3+, Am3+ and tetravalent Th4+ complexes with humic and fulvic acids in weakly acidic conditions. PMID:12102358

  16. Fundamentals of Polarized Light

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael

    2003-01-01

    The analytical and numerical basis for describing scattering properties of media composed of small discrete particles is formed by the classical electromagnetic theory. Although there are several excellent textbooks outlining the fundamentals of this theory, it is convenient for our purposes to begin with a summary of those concepts and equations that are central to the subject of this book and will be used extensively in the following chapters. We start by formulating Maxwell's equations and constitutive relations for time-harmonic macroscopic electromagnetic fields and derive the simplest plane-wave solution that underlies the basic optical idea of a monochromatic parallel beam of light. This solution naturally leads to the introduction of such fundamental quantities as the refractive index and the Stokes parameters. Finally, we define the concept of a quasi-monochromatic beam of light and discuss its implications.

  17. Fundamentals of Geophysics

    NASA Astrophysics Data System (ADS)

    Frohlich, Cliff

    Choosing an intermediate-level geophysics text is always problematic: What should we teach students after they have had introductory courses in geology, math, and physics, but little else? Fundamentals of Geophysics is aimed specifically at these intermediate-level students, and the author's stated approach is to construct a text “using abundant diagrams, a simplified mathematical treatment, and equations in which the student can follow each derivation step-by-step.” Moreover, for Lowrie, the Earth is round, not flat—the “fundamentals of geophysics” here are the essential properties of our Earth the planet, rather than useful techniques for finding oil and minerals. Thus this book is comparable in both level and approach to C. M. R. Fowler's The Solid Earth (Cambridge University Press, 1990).

  18. Fundamental limits on EMC

    NASA Astrophysics Data System (ADS)

    Showers, R. M.; Lin, S.-Y.; Schulz, R. B.

    1981-02-01

    Both fundamental and state-of-the-art limits are treated with emphasis on the former. Fundamental limits result from both natural and man-made electromagnetic noise which then affect two basic ratios, signal-to-noise (S/N) and extraneous-input-to-noise (I/N). Tolerable S/N values are discussed for both digital and analog communications systems. These lead to tolerable signal-to-extraneous-input (S/I) ratios, again for digital and analog communications systems, as well as radar and sonar. State-of-the-art limits for transmitters include RF noise emission, spurious emissions, and intermodulation. Receiver limits include adjacent-channel interactions, image, IF, and other spurious responses, including cross modulation, intermodulation, and desensitization. Unintentional emitters and receivers are also discussed. Coupling limitations between undesired sources and receptors are considered from mechanisms including radiation, induction, and conduction.

  19. Fundamental studies in geodynamics

    NASA Technical Reports Server (NTRS)

    Anderson, D. L.; Hager, B. H.; Kanamori, H.

    1981-01-01

    Research in fundamental studies in geodynamics continued in a number of fields including seismic observations and analysis, synthesis of geochemical data, theoretical investigation of geoid anomalies, extensive numerical experiments in a number of geodynamical contexts, and a new field seismic volcanology. Summaries of work in progress or completed during this report period are given. Abstracts of publications submitted from work in progress during this report period are attached as an appendix.

  20. The apparent fine-tuning of the cosmological, gravitational and fine structure constants

    NASA Astrophysics Data System (ADS)

    Eaves, Laurence

    2016-02-01

    A numerical coincidence relating the values of the cosmological, gravitational and electromagnetic fine structure constants is presented and discussed in relation to the apparent anthropic fine-tuning of these three fundamental constants of nature.

  1. Fundamentals of Structural Geology

    NASA Astrophysics Data System (ADS)

    Pollard, David D.; Fletcher, Raymond C.

    2005-09-01

    Fundamentals of Structural Geology provides a new framework for the investigation of geological structures by integrating field mapping and mechanical analysis. Assuming a basic knowledge of physical geology, introductory calculus and physics, it emphasizes the observational data, modern mapping technology, principles of continuum mechanics, and the mathematical and computational skills necessary to quantitatively map, describe, model, and explain deformation in Earth's lithosphere. By starting from the fundamental conservation laws of mass and momentum, the constitutive laws of material behavior, and the kinematic relationships for strain and rate of deformation, the authors demonstrate the relevance of solid and fluid mechanics to structural geology. This book offers a modern quantitative approach to structural geology for advanced students and researchers in structural geology and tectonics. It is supported by a website hosting images from the book, additional colour images, student exercises and MATLAB scripts. Solutions to the exercises are available to instructors. The book integrates field mapping using modern technology with the analysis of structures based on a complete mechanics. MATLAB is used to visualize physical fields and analytical results, and MATLAB scripts can be downloaded from the website to recreate textbook graphics and enable students to explore their choice of parameters and boundary conditions. The supplementary website hosts color images of outcrop photographs used in the text, supplementary color images, and images of textbook figures for classroom presentations. The textbook website also includes student exercises designed to instill the fundamental relationships and to encourage the visualization of the evolution of geological structures; solutions are available to instructors.

  2. Value of Fundamental Science

    NASA Astrophysics Data System (ADS)

    Burov, Alexey

    Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of the fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by the technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the ``mystery of the universe''. Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?

  3. Rare Isotopes and Fundamental Symmetries

    NASA Astrophysics Data System (ADS)

    Brown, B. Alex; Engel, Jonathan; Haxton, Wick; Ramsey-Musolf, Michael; Romalis, Michael; Savard, Guy

    2009-01-01

    Experiments searching for new interactions in nuclear beta decay / Klaus P. Jungmann -- The beta-neutrino correlation in sodium-21 and other nuclei / P. A. Vetter ... [et al.] -- Nuclear structure and fundamental symmetries / B. Alex Brown -- Schiff moments and nuclear structure / J. Engel -- Superallowed nuclear beta decay: recent results and their impact on V[symbol] / J. C. Hardy and I. S. Towner -- New calculation of the isospin-symmetry breaking correction to superallowed Fermi beta decay / I. S. Towner and J. C. Hardy -- Precise measurement of the [symbol]H to [symbol]He mass difference / D. E. Pinegar ... [et al.] -- Limits on scalar currents from the 0+ to 0+ decay of [symbol]Ar and isospin breaking in [symbol]Cl and [symbol]Cl / A. Garcia -- Nuclear constraints on the weak nucleon-nucleon interaction / W. C. Haxton -- Atomic PNC theory: current status and future prospects / M. S. Safronova -- Parity-violating nucleon-nucleon interactions: what can we learn from nuclear anapole moments? / B. Desplanques -- Proposed experiment for the measurement of the anapole moment in francium / A. Perez Galvan ... [et al.] -- The Radon-EDM experiment / Tim Chupp for the Radon-EDM collaboration -- The lead radius experiment (PREX) and parity violating measurements of neutron densities / C. J. Horowitz -- Nuclear structure aspects of Schiff moment and search for collective enhancements / Naftali Auerbach and Vladimir Zelevinsky -- The interpretation of atomic electric dipole moments: Schiff theorem and its corrections / C. -P. Liu -- T-violation and the search for a permanent electric dipole moment of the mercury atom / M. D. Swallows ... [et al.] -- The new concept for FRIB and its potential for fundamental interactions studies / Guy Savard -- Collinear laser spectroscopy and polarized exotic nuclei at NSCL / K. Minamisono -- Environmental dependence of masses and coupling constants / M. Pospelov.

  4. Can compactifications solve the cosmological constant problem?

    NASA Astrophysics Data System (ADS)

    Hertzberg, Mark P.; Masoumi, Ali

    2016-06-01

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ = 0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  5. Renormalization of Newton's constant

    NASA Astrophysics Data System (ADS)

    Falls, Kevin

    2015-12-01

    The problem of obtaining a gauge independent beta function for Newton's constant is addressed. By a specific parametrization of metric fluctuations a gauge independent functional integral is constructed for the semiclassical theory around an arbitrary Einstein space. The effective action then has the property that only physical polarizations of the graviton contribute, while all other modes cancel with the functional measure. We are then able to compute a gauge independent beta function for Newton's constant in d dimensions to one-loop order. No Landau pole is present provided Ng < 18, where Ng = d(d-3)/2 is the number of polarizations of the graviton. While adding a large number of matter fields can change this picture, the absence of a pole persists for the particle content of the standard model in four spacetime dimensions.
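
    A quick check of the polarization count quoted above and of the stated condition for the absence of a Landau pole:

```python
# Graviton polarization count N_g = d*(d-3)/2 and the condition N_g < 18.
for d in range(4, 11):
    n_g = d * (d - 3) // 2
    print(d, n_g, "no Landau pole" if n_g < 18 else "pole possible")
```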

  6. Varying constants quantum cosmology

    SciTech Connect

    Leszczyńska, Katarzyna; Balcerzak, Adam; Dabrowski, Mariusz P. E-mail: abalcerz@wmf.univ.szczecin.pl

    2015-02-01

    We discuss minisuperspace models within the framework of varying physical constants theories including Λ-term. In particular, we consider the varying speed of light (VSL) theory and varying gravitational constant theory (VG) using the specific ansätze for the variability of constants: c(a) = c{sub 0} a{sup n} and G(a)=G{sub 0} a{sup q}. We find that most of the varying c and G minisuperspace potentials are of the tunneling type which allows to use WKB approximation of quantum mechanics. Using this method we show that the probability of tunneling of the universe ''from nothing'' (a=0) to a Friedmann geometry with the scale factor a{sub t} is large for growing c models and is strongly suppressed for diminishing c models. As for G varying, the probability of tunneling is large for G diminishing, while it is small for G increasing. In general, both varying c and G change the probability of tunneling in comparison to the standard matter content (cosmological term, dust, radiation) universe models.

  7. The Hubble Constant

    NASA Astrophysics Data System (ADS)

    Jackson, Neal

    2015-09-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km s^-1 Mpc^-1, with typical errors of 2-3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s^-1 Mpc^-1 and typical errors of 1-2 km s^-1 Mpc^-1. The size of the remaining systematics indicate that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
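
    As a back-of-the-envelope illustration of the scale set by these numbers, the Hubble time 1/H_0 can be evaluated for the two representative values quoted above; the conversion factors used below are standard, rounded values.

```python
# Hubble time t_H = 1/H0 for representative values of the Hubble constant.
SEC_PER_GYR = 3.1557e16   # seconds in one Gyr (Julian years), rounded
KM_PER_MPC = 3.0857e19    # kilometres per megaparsec, rounded

for H0 in (67.0, 73.0):                   # km/s/Mpc
    H0_si = H0 / KM_PER_MPC               # 1/s
    t_hubble = 1.0 / H0_si / SEC_PER_GYR  # Gyr
    print(f"H0 = {H0} km/s/Mpc  ->  t_H = 1/H0 ~ {t_hubble:.1f} Gyr")
```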

  8. Fundamental experiments in velocimetry

    SciTech Connect

    Briggs, Matthew Ellsworth; Hull, Larry; Shinas, Michael

    2009-01-01

    One can understand what velocimetry does and does not measure by understanding a few fundamental experiments. Photon Doppler Velocimetry (PDV) is an interferometer that will produce fringe shifts when the length of one of the legs changes, so we might expect the fringes to change whenever the distance from the probe to the target changes. However, by making PDV measurements of tilted moving surfaces, we have shown that fringe shifts from diffuse surfaces are actually measured only from the changes caused by the component of velocity along the beam. This is an important simplification in the interpretation of PDV results, arising because surface roughness randomizes the scattered phases.
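
    A minimal sketch of the relation underlying such measurements: a surface moving with line-of-sight velocity v shifts the returned light so that the photodetector sees a beat frequency f = 2v/lambda. The 1550 nm wavelength below is an assumed, typical telecom value, not a detail taken from this abstract.

```python
# PDV beat frequency for a given line-of-sight velocity.
wavelength = 1550e-9  # m (assumed, typical telecom laser)

def pdv_beat_frequency(v_los):
    """Beat frequency (Hz) for line-of-sight velocity v_los (m/s)."""
    return 2.0 * v_los / wavelength

for v in (10.0, 100.0, 1000.0):  # m/s
    print(f"v = {v:7.1f} m/s  ->  f_beat ~ {pdv_beat_frequency(v)/1e6:.1f} MHz")
```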

  9. Fundamental research data base

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A fundamental research data base was created on a single 9-track 1600 BPI tape containing ground truth, image, and Badhwar profile feature data for 17 North Dakota, South Dakota, and Minnesota agricultural sites. Each site is 5x6 nm in area. Image data has been provided for a minimum of four acquisition dates for each site. All four images have been registered to one another. A list of the order of the files on tape and the dates of acquisition is provided.

  10. Fundamentals of electrokinetics

    NASA Astrophysics Data System (ADS)

    Kozak, M. W.

    The study of electrokinetics is a very mature field. Experimental studies date from the early 1800s, and acceptable theoretical analyses have existed since the early 1900s. The use of electrokinetics in practical field problems is more recent, but it is still quite mature. Most developments in the fundamental understanding of electrokinetics are in the colloid science literature. A significant and increasing divergence between the theoretical understanding of electrokinetics found in the colloid science literature and the theoretical analyses used in interpreting applied experimental studies in soil science and waste remediation has developed. The soil science literature has to date restricted itself to the use of very early theories, with their associated limitations. The purpose of this contribution is to review fundamental aspects of electrokinetic phenomena from a colloid science viewpoint. It is hoped that a bridge can be built between the two branches of the literature, from which both will benefit. Attention is paid to special topics such as the effects of overlapping double layers, applications in unsaturated soils, the influence of dispersivity, and the differences between electrokinetic theory and conductivity theory.

  11. Testing Our Fundamental Assumptions

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times: [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these

  12. Fundamental Atomtronic Circuit Elements

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; McIlvain, Brian; Lobb, Christopher; Hill, Wendell T., III

    2012-06-01

    Recent experiments with neutral superfluid gases have shown that it is possible to create atomtronic circuits analogous to existing superconducting circuits. The goals of these experiments are to create complex systems such as Josephson junctions. In addition, there are theoretical models for active atomtronic components analogous to diodes, transistors and oscillators. In order for any of these devices to function, an understanding of the more fundamental atomtronic elements is needed. Here we describe the first experimental realization of these more fundamental elements. We have created an atomtronic capacitor that is discharged through a resistance and inductance. We will discuss a theoretical description of the system that allows us to determine values for the capacitance, resistance and inductance. The resistance is shown to be analogous to the Sharvin resistance, and the inductance analogous to kinetic inductance in electronics. This atomtronic circuit is implemented with a thermal sample of laser cooled rubidium atoms. The atoms are confined using what we call free-space atom chips, a novel optical dipole trap produced using a generalized phase-contrast imaging technique. We will also discuss progress toward implementing this atomtronic system in a degenerate Bose gas.

  13. Unification of Fundamental Forces

    NASA Astrophysics Data System (ADS)

    Salam, Abdus

    1990-05-01

    This is an expanded version of the third Dirac Memorial Lecture, given in 1988 by the Nobel Laureate Abdus Salam. Salam's lecture presents an overview of the developments in modern particle physics from its inception at the turn of the century to the present theories seeking to unify all the fundamental forces. In addition, two previously unpublished lectures by Paul Dirac, and Werner Heisenberg are included. These lectures provide a fascinating insight into their approach to research and the developments in particle physics at that time. Nonspecialists, undergraduates and researchers will find this a fascinating book. It contains a clear introduction to the major themes of particle physics and cosmology by one of the most distinguished contemporary physicists.

  14. Fundamentals in Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Basdevant, Jean-Louis; Rich, James; Spiro, Michael

    This course on nuclear physics leads the reader to the exploration of the field from nuclei to astrophysical issues. Much nuclear phenomenology can be understood from simple arguments such as those based on the Pauli principle and the Coulomb barrier. This book is concerned with extrapolating from such arguments and illustrating nuclear systematics with experimental data. Starting with the basic concepts in nuclear physics, nuclear models, and reactions, the book covers nuclear decays and the fundamental electro-weak interactions, radioactivity, and nuclear energy. After the discussions of fission and fusion leading into nuclear astrophysics, there is a presentation of the latest ideas about cosmology. As a primer this course will lay the foundations for more specialized subjects. This book emerged from a series of topical courses the authors delivered at the Ecole Polytechnique and will be useful for graduate students and for scientists in a variety of fields.

  15. Fundamentals of zoological scaling

    NASA Astrophysics Data System (ADS)

    Lin, Herbert

    1982-01-01

    Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.
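
    A toy version of the scaling analysis the paper illustrates: fit an allometric power law y = a*M^b to animal data by least squares in log-log space. The data pairs below (body mass, skeletal mass) are invented for illustration only; the paper compares such fits against real zoological data.

        import math

        # Hypothetical (body mass in kg, skeletal mass in kg) pairs; illustrative numbers only.
        data = [(0.02, 0.0008), (1.0, 0.06), (70.0, 7.0), (4000.0, 900.0)]

        # Least-squares fit of log y = log a + b*log M, i.e. y = a * M**b.
        xs = [math.log(m) for m, _ in data]
        ys = [math.log(y) for _, y in data]
        n = len(data)
        xbar, ybar = sum(xs) / n, sum(ys) / n
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
        a = math.exp(ybar - b * xbar)
        print(f"fitted allometric law: y ~ {a:.3f} * M^{b:.2f}")
        # An exponent b > 1 would mean the skeleton claims a growing fraction of body mass
        # in larger animals, one of the qualitative conclusions such scaling arguments give.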

  16. Fundamentals of gel dosimeters

    NASA Astrophysics Data System (ADS)

    McAuley, K. B.; Nasr, A. T.

    2013-06-01

    Fundamental chemical and physical phenomena that occur in Fricke gel dosimeters, polymer gel dosimeters, micelle gel dosimeters and genipin gel dosimeters are discussed. Fricke gel dosimeters are effective even though their radiation sensitivity depends on oxygen concentration. Oxygen contamination can cause severe problems in polymer gel dosimeters, even when THPC is used. Oxygen leakage must be prevented between manufacturing and irradiation of polymer gels, and internal calibration methods should be used so that contamination problems can be detected. Micelle gel dosimeters are promising due to their favourable diffusion properties. The introduction of micelles to gel dosimetry may open up new areas of dosimetry research wherein a range of water-insoluble radiochromic materials can be explored as reporter molecules.

  17. Testing Our Fundamental Assumptions

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for such tests. Explaining different arrival times: [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival-time delay between photons of different energies from various astrophysical sources (the further away, the better), we can place constraints on these assumptions.
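
    As a hedged numerical aside on the "special-relativistic delay" entry above: if photons had a tiny rest mass m, their speed would be energy dependent, v/c ~ 1 - (mc^2)^2/(2E^2), so two photons emitted together from a source at distance D would arrive separated by roughly dt ~ (D/2c)(mc^2)^2(1/E_lo^2 - 1/E_hi^2). The numbers below are purely illustrative and ignore cosmological corrections, which matter for genuinely distant sources.

        # Order-of-magnitude sketch of an energy-dependent arrival delay for a
        # hypothetical nonzero photon rest mass (illustrative numbers only).
        c = 2.998e8                   # m/s
        eV = 1.602e-19                # J
        D = 1.0e9 * 9.461e15          # one billion light-years, in metres
        m_c2 = 1.0e-18 * eV           # assumed photon rest-mass energy: 1e-18 eV
        E_lo, E_hi = 1.0e5 * eV, 1.0e9 * eV   # a 100 keV and a 1 GeV photon

        dt = (D / (2.0 * c)) * m_c2**2 * (1.0 / E_lo**2 - 1.0 / E_hi**2)
        print(f"arrival-time difference ~ {dt:.2e} s")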

  18. Fundamentals of Plasma Physics

    NASA Astrophysics Data System (ADS)

    Bellan, Paul M.

    2008-07-01

    Preface; 1. Basic concepts; 2. The Vlasov, two-fluid, and MHD models of plasma dynamics; 3. Motion of a single plasma particle; 4. Elementary plasma waves; 5. Streaming instabilities and the Landau problem; 6. Cold plasma waves in a magnetized plasma; 7. Waves in inhomogeneous plasmas and wave energy relations; 8. Vlasov theory of warm electrostatic waves in a magnetized plasma; 9. MHD equilibria; 10. Stability of static MHD equilibria; 11. Magnetic helicity interpreted and Woltjer-Taylor relaxation; 12. Magnetic reconnection; 13. Fokker-Planck theory of collisions; 14. Wave-particle nonlinearities; 15. Wave-wave nonlinearities; 16. Non-neutral plasmas; 17. Dusty plasmas; Appendix A. Intuitive method for vector calculus identities; Appendix B. Vector calculus in orthogonal curvilinear coordinates; Appendix C. Frequently used physical constants and formulae; Bibliography; References; Index.

  19. The Hubble constant.

    PubMed

    Tully, R B

    1993-06-01

    Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that the Hubble constant H0 = 90 +/- 10 km s^-1 Mpc^-1 [1 parsec (pc) = 3.09 x 10^16 m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391
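
    A quick arithmetic check of what the quoted H0 = 90 km s^-1 Mpc^-1 implies for the expansion timescale 1/H0, which is the quantity that collides with stellar ages; the parsec conversion is the one given in the abstract.

        H0 = 90.0                   # km s^-1 Mpc^-1, value quoted in the abstract
        pc_m = 3.09e16              # metres per parsec, as given in the abstract
        Mpc_m = 1.0e6 * pc_m

        H0_si = H0 * 1.0e3 / Mpc_m                            # s^-1
        hubble_time_Gyr = 1.0 / H0_si / (3.156e7 * 1.0e9)     # (seconds per year) * 1e9
        print(f"H0 = {H0_si:.2e} s^-1, Hubble time 1/H0 = {hubble_time_Gyr:.1f} Gyr")
        # For a matter-dominated Omega = 1 universe the age is (2/3)/H0, about 7 Gyr here,
        # which is hard to reconcile with the oldest stars: the tension the abstract notes.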

  20. When constants are important

    SciTech Connect

    Beiu, V.

    1997-04-01

    In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but little is known about the links among them. The authors start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e., precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff.

  1. Uniaxial constant velocity microactuator

    DOEpatents

    McIntyre, Timothy J.

    1994-01-01

    A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment.

  2. A Constant Pressure Bomb

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1924-01-01

    This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship, pv=rt for gaseous equilibrium conditions, to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.

  3. The Hubble constant.

    PubMed Central

    Tully, R B

    1993-01-01

    Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that the Hubble constant H0 = 90 +/- 10 km s^-1 Mpc^-1 [1 parsec (pc) = 3.09 x 10^16 m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391

  4. Quenching fundamentals: Heat transfer

    SciTech Connect

    MacKenzie, D.S.; Totten, G.E.; Webster, G.M.

    1996-12-31

    Quenching is essentially a heat transfer problem. It is necessary to quench parts fast enough that adequate mechanical and corrosion properties are achieved, but not so fast that detrimental distortion and residual stresses are formed. In addition, non-uniform heat transfer across the surface of a part will produce thermal gradients which will also create distortion or residual stresses. In this paper, the role of agitation will be discussed in terms of the heat transfer coefficient. A brief review of the published heat transfer literature will be given, focusing on the effect of fluid flow on the heat transfer coefficient and its implications for quenching.

  5. GRBs and Fundamental Physics

    NASA Astrophysics Data System (ADS)

    Petitjean, Patrick; Wang, F. Y.; Wu, X. F.; Wei, J. J.

    2016-02-01

    Gamma-ray bursts (GRBs) are short, intense flashes at cosmological distances and are the most luminous explosions in the Universe. The high luminosities of GRBs make them detectable out to the edge of the visible universe. They are therefore unique tools to probe the properties of the high-redshift universe, including the cosmic expansion and dark energy, the star formation rate, the reionization epoch, and the metal evolution of the Universe. First, they can be used to constrain the history of cosmic acceleration and the evolution of dark energy in a redshift range hardly achievable by other cosmological probes. Second, long GRBs are believed to be formed by the collapse of massive stars, so they can be used to derive the high-redshift star formation rate, which cannot be probed by current observations. Moreover, the use of GRBs as cosmological tools could unveil the reionization history and metal evolution of the Universe, the properties of the intergalactic medium (IGM), and the nature of the first stars in the early universe. Beyond that, GRB high-energy photons can be used to constrain Lorentz invariance violation (LIV) and to test Einstein's Equivalence Principle (EEP). In this paper, we review the progress on GRB cosmology and fundamental physics probed by GRBs.

  6. Fundamentals of Atmospheric Radiation

    NASA Astrophysics Data System (ADS)

    Bohren, Craig F.; Clothiaux, Eugene E.

    2006-02-01

    This textbook fills a gap in the literature for teaching material suitable for students of atmospheric science and courses on atmospheric radiation. It covers the fundamentals of emission, absorption, and scattering of electromagnetic radiation from ultraviolet to infrared and beyond. Much of the book applies to planetary atmospheres in general. The authors are physicists and teach at the largest meteorology department in the US, at Penn State. Craig F. Bohren has taught the atmospheric radiation course there for the past 20 years without a textbook. Eugene Clothiaux has taken over and added to the course notes. Problems given in the text come from students, colleagues, and correspondents. The figures were designed especially for this book to ease comprehension. Discussions take a graded approach, with a thorough treatment of subjects such as single scattering by particles at different levels of complexity. The discussion of multiple scattering theory begins with piles of plates. This simple theory introduces concepts used in more advanced theories, such as optical thickness, single-scattering albedo, and the asymmetry parameter. The more complicated two-stream theory then takes the reader beyond the pile-of-plates theory. Ideal for advanced undergraduate and graduate students of atmospheric science.

  7. Fundamentals of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Tang, C. L.

    2005-06-01

    Quantum mechanics has evolved from a subject of study in pure physics to one with a wide range of applications in many diverse fields. The basic concepts of quantum mechanics are explained in this book in a concise and easy-to-read manner emphasising applications in solid state electronics and modern optics. Following a logical sequence, the book is focused on the key ideas and is conceptually and mathematically self-contained. The fundamental principles of quantum mechanics are illustrated by showing their application to systems such as the hydrogen atom, multi-electron ions and atoms, the formation of simple organic molecules and crystalline solids of practical importance. It leads on from these basic concepts to discuss some of the most important applications in modern semiconductor electronics and optics. Containing many homework problems and worked examples, the book is suitable for senior-level undergraduate and graduate level students in electrical engineering, materials science and applied physics. Key features: clear exposition of quantum mechanics written in a concise and accessible style; precise physical interpretation of the mathematical foundations of quantum mechanics; illustration of the important concepts and results by reference to real-world examples in electronics and optoelectronics; and homework problems and worked examples, with solutions available for instructors.

  8. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  9. Fundamentals of the Control of Gas-Turbine Power Plants for Aircraft. Part 2; Principles of Control Common to Jet, Turbine-Propeller Jet, and Ducted-Fan Jet Power Plants

    NASA Technical Reports Server (NTRS)

    Kuehl, H.

    1947-01-01

    After defining the aims and requirements to be set for a control system of gas-turbine power plants for aircraft, the report will deal with devices that prevent the quantity of fuel supplied per unit of time from exceeding the value permissible at a given moment. The general principles of the actuation of the adjustable parts of the power plant are also discussed.

  10. Quantum theory of the complex dielectric constant of free carriers in polar semiconductors

    SciTech Connect

    Jensen, B.

    1982-09-01

    The optical constants and reflectivity of a semiconductor are known as functions of the real and imaginary parts of the complex dielectric constant. The imaginary part of the complex dielectric constant e2 is proportional to the optical conductivity, which has recently been calculated from the quantum density matrix equation of motion. The expression obtained for e2 reduces to the Drude result, as obtained from the quasi-classical Boltzmann transport equation, in the limit of low frequencies and elastic scattering mechanisms, and to the quantum result found using time-dependent perturbation theory in the limit of high frequencies. This paper derives the real part of the complex dielectric constant e1 for a III-V or II-VI semiconductor with the band structure of the Kane theory, using the quantum density matrix method. The relation of e1 to the second-order perturbation energy of the system is shown, and the reflectivity is a minimum when the second-order perturbation energy vanishes. The quantum calculation for e1 gives approximately the same result as the Drude theory, except near the fundamental absorption edge, and reduces to the Drude result at low frequencies. Using the complex dielectric constant, the real and imaginary parts of the complex refractive index, the skin depth, the surface impedance, and the reflectivity are found. The plasma resonance is examined. The surface impedance and the skin depth are shown to reduce to the usual classical result in the limit that e1 = 0 and ωτ << 1, where ω is the angular frequency of the applied field and τ is the electron scattering time.
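
    A short sketch of the classical Drude limit that the abstract uses as its reference point: the complex dielectric constant of free carriers, its real and imaginary parts e1 and e2, and the resulting normal-incidence reflectivity. The carrier parameters below are hypothetical placeholders.

        import cmath

        def drude_epsilon(w, wp, tau, eps_inf=1.0):
            """Classical Drude dielectric function: eps(w) = eps_inf - wp**2 / (w**2 + 1j*w/tau)."""
            return eps_inf - wp**2 / (w**2 + 1j * w / tau)

        def reflectivity(eps):
            """Normal-incidence reflectivity from the complex refractive index n = sqrt(eps)."""
            n = cmath.sqrt(eps)
            return abs((n - 1.0) / (n + 1.0)) ** 2

        wp, tau = 1.0e14, 1.0e-13   # hypothetical plasma frequency (rad/s) and scattering time (s)
        for w in (0.3e14, 0.9e14, 1.1e14, 3.0e14):
            eps = drude_epsilon(w, wp, tau)
            print(f"w = {w:.1e}  e1 = {eps.real:+7.2f}  e2 = {eps.imag:6.3f}  R = {reflectivity(eps):.3f}")
        # e1 changes sign near the screened plasma frequency, where the reflectivity falls off
        # sharply: the plasma-resonance / reflectivity-minimum behaviour the abstract discusses.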

  11. Hydrogen molecular ions: new schemes for metrology and fundamental physics tests

    NASA Astrophysics Data System (ADS)

    Karr, Jean-Philippe; Patra, Sayan; Koelemeij, Jeroen C. J.; Heinrich, Johannes; Sillitoe, Nicolas; Douillet, Albane; Hilico, Laurent

    2016-06-01

    High-accuracy spectroscopy of hydrogen molecular ions has important applications for the metrology of fundamental constants and tests of fundamental theories. Up to now, the experimental resolution has not surpassed the part-per-billion range. We discuss two methods by which it could be improved by a huge factor. Firstly, the feasibility of Doppler-free quasidegenerate two-photon spectroscopy of trapped and sympathetically cooled ensembles of HD+ ions is discussed, and it is shown that rovibrational transitions may be detected with a good signal-to-noise ratio. Secondly, the performance of a molecular quantum-logic ion clock based on a single Be+-H2+ ion pair is analyzed in detail. Such a clock could allow testing the constancy of the proton-to-electron mass ratio at the 10^-17/yr level.

  12. Prediction and measurement of the proportionality constant in statistical energy analysis of structures.

    NASA Technical Reports Server (NTRS)

    Lotz, R.; Crandall, S. H.

    1973-01-01

    The fundamental equation of statistical energy analysis (SEA) states that the average power flow between two coupled vibrating systems is proportional to the difference in their average modal energies. Under certain circumstances it is possible to estimate the proportionality constant by modifying system boundary conditions on the separated systems and calculating or measuring changes in the systems. Newland's estimate, based upon blocking part of the system, is reexamined, and limitations are discussed. Three alternative methods which circumvent blocking are presented. These were applied to predict power flow in experiments on coupled beams and on coupled plates wherein power flow through the coupling was measured directly as a product of force times velocity. The measurements support the fundamental SEA relation, including the null power point where the average modal energies are equal.
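
    A minimal sketch of the fundamental SEA relation referred to above: the average power flow between two coupled subsystems is proportional to the difference of their average modal energies, and it vanishes at the null power point where those modal energies are equal. The proportionality constant beta below stands in for the quantity the paper's methods estimate; all numbers are placeholders.

        def sea_power_flow(E1, n1, E2, n2, beta):
            """Average power flow from subsystem 1 to 2: proportional to the difference
            of the average modal energies E/n (the fundamental SEA relation)."""
            return beta * (E1 / n1 - E2 / n2)

        beta = 0.05                      # placeholder proportionality constant
        for E1 in (2.0, 1.0, 0.5):       # total energy of subsystem 1, placeholder values
            P12 = sea_power_flow(E1, n1=10.0, E2=1.0, n2=10.0, beta=beta)
            print(f"E1/n1 = {E1 / 10.0:.2f}, E2/n2 = 0.10  ->  P12 = {P12:+.4f}")
        # P12 vanishes when E1/n1 == E2/n2: the "null power point" checked in the experiments.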

  13. Solar astrophysical fundamental parameters

    NASA Astrophysics Data System (ADS)

    Meftah, M.; Irbah, A.; Hauchecorne, A.

    2014-08-01

    The accurate determination of the solar photospheric radius has been an important problem in astronomy for many centuries. From the measurements made by the PICARD spacecraft during the transit of Venus in 2012, we obtained a solar radius of 696,156±145 kilometres. This value is consistent with recent measurements carried out outside the atmosphere. This observation leads us to propose a change of the canonical value obtained by Arthur Auwers in 1891. An accurate value for total solar irradiance (TSI) is crucial for the Sun-Earth connection, and represents another solar astrophysical fundamental parameter. Based on measurements collected from different space instruments over the past 35 years, the absolute value of the TSI, representative of a quiet Sun, has gradually decreased from 1,371 W.m-2 in 1978 to around 1,362 W.m-2 in 2013, mainly due to differences in radiometer calibration. Based on the PICARD data and in agreement with Total Irradiance Monitor measurements, we predicted the TSI input at the top of the Earth's atmosphere at a distance of one astronomical unit (149,597,870 kilometres) from the Sun to be 1,362±2.4 W.m-2, which may be proposed as a reference value. To conclude, from the measurements made by the PICARD spacecraft, we obtained a solar photospheric equator-to-pole radius difference value of 5.9±0.5 kilometres. This value is consistent with measurements made by different space instruments, and can be given as a reference value.
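
    One quick consistency check on the TSI reference value proposed above: multiplying 1,362 W.m-2 by the area of a sphere of radius one astronomical unit recovers the Sun's total luminosity, close to the standard figure of about 3.8 x 10^26 W.

        import math

        TSI = 1362.0              # W m^-2 at 1 au, reference value proposed in the abstract
        au = 1.49597870e11        # metres, one astronomical unit as quoted

        L_sun = 4.0 * math.pi * au**2 * TSI
        print(f"implied solar luminosity: {L_sun:.3e} W")   # ~3.83e26 W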

  14. TASI Lectures on the cosmological constant

    SciTech Connect

    Bousso, Raphael; Bousso, Raphael

    2007-08-30

    The energy density of the vacuum, Lambda, is at least 60 orders of magnitude smaller than several known contributions to it. Approaches to this problem are tightly constrained by data ranging from elementary observations to precision experiments. Absent overwhelming evidence to the contrary, dark energy can only be interpreted as vacuum energy, so the venerable assumption that Lambda=0 conflicts with observation. The possibility remains that Lambda is fundamentally variable, though constant over large spacetime regions. This can explain the observed value, but only in a theory satisfying a number of restrictive kinematic and dynamical conditions. String theory offers a concrete realization through its landscape of metastable vacua.

  15. Topological Quantization in Units of the Fine Structure Constant

    SciTech Connect

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H. Dennis; Zhang, Shou-Cheng; /Stanford U., Phys. Dept. /Stanford U., Materials Sci. Dept. /SLAC

    2011-11-11

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α = e²/ħc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
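
    For orientation, the quantization unit named above, the fine structure constant α = e²/ħc in Gaussian units, is about 1/137; a minimal check in SI units, where it reads α = e²/(4πε0ħc):

        import math

        # 2018 CODATA / 2019 SI values.
        e = 1.602176634e-19       # elementary charge, C (exact)
        hbar = 1.054571817e-34    # reduced Planck constant, J s
        c = 2.99792458e8          # speed of light, m/s (exact)
        eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

        alpha = e**2 / (4.0 * math.pi * eps0 * hbar * c)
        print(f"alpha = {alpha:.9f}  (1/alpha = {1.0 / alpha:.3f})")   # ~1/137.036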

  16. Gravitational clock: A proposed experiment for the measurement of the gravitational constant G

    NASA Technical Reports Server (NTRS)

    Smalley, L. L.

    1975-01-01

    The increased importance and the fundamental significance of accurately measuring the gravitational constant G are discussed, along with recent or proposed experimental measurements of G. The method of using mutually gravitating bodies in the clock mode in a drag-free satellite is described. A satellite experiment based on the proposed flat-plate spherical-mass oscillator combines the mathematical and experimental conveniences most simply. It is estimated that accuracies of 1 part in 1,000,000 are easily obtainable by careful fabrication of parts. The use of cryogenic techniques, thin films, and superconductors allows increased accuracies of two or three orders of magnitude or better. These measurements can be pushed to the level of 1 part in 10^11, at which point time variations, and other variations, in G can be observed.

  17. Fundamentals of Space Medicine

    NASA Astrophysics Data System (ADS)

    Clément, Gilles

    2005-03-01

    A total of more than 240 human space flights have been completed to date, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This readable text presents the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardio-vascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination of both the

  18. Fundamentals of Space Medicine

    NASA Astrophysics Data System (ADS)

    Clément, G.

    2003-10-01

    As of today, a total of more than 240 human space flights have been completed, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This book presents in a readable text the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardiovascular, bone and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination

  19. Beyond the Hubble Constant

    NASA Astrophysics Data System (ADS)

    1995-08-01

    about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is by measuring its redshift, calculate its velocity from the redshift and divide this by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). From the universal expansion rate, described by the Hubble constant (H0 = 20 km/sec per million lightyears as found by some studies), this velocity would indicate a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively ``nearby'' objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time and it is not excluded that the universal expansion rate, i.e. the Hubble constant, may have been different at earlier epochs. This means that unless we know the change of the Hubble constant with time, we cannot determine reliable distances of distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such change or lack of the same will provide unique information about the time elapsed since the Universe began to expand (the ``Big Bang''), that is, the age of the Universe and also its ultimate fate. The Deceleration Parameter q0 Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0). Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf. also the Appendix at the
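
    The press release's arithmetic for SN 1995K can be reproduced directly from the numbers it quotes, the recession velocity derived from z = 0.478 and the illustrative H0 of 20 km/sec per million lightyears:

        v = 116000.0       # km/s, recession velocity quoted for the host galaxy of SN 1995K
        H0 = 20.0          # km/s per million lightyears, the illustrative value used in the text

        distance_Mly = v / H0
        print(f"distance ~ {distance_Mly:.0f} million lightyears")   # ~5800, as in the text
        # Read naively as a light-travel time, that places the explosion ~5800 million years ago,
        # the simple calculation the text warns is only adequate for relatively nearby objects.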

  20. Astronomers Gain Clues About Fundamental Physics

    NASA Astrophysics Data System (ADS)

    2005-12-01

    An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. [Image: The Robert C. Byrd Green Bank Telescope. Credit: NRAO/AUI/NSF] "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions," said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO), in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever," said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as

  1. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function are relatively less than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope, thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  2. Can the imaginary part of permeability be negative?

    PubMed

    Markel, Vadim A

    2008-08-01

    When new composite optical materials are developed experimentally or studied in numerical simulations, it is essential to have a set of fundamental constraints that the optical constants of such materials must satisfy. In this paper I argue that positivity of the imaginary part of the magnetic permeability may not be one such constraint, particularly in naturally occurring diamagnetics and in artificial materials that exhibit a diamagnetic response to low-frequency or static magnetic fields. PMID:18850963

  3. Open inflation, the four form and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Turok, Neil; Hawking, S. W.

    1998-07-01

    Fundamental theories of quantum gravity such as supergravity include a four-form field strength, which contributes to the cosmological constant. The inclusion of such a field into our theory of open inflation [S.W. Hawking, N. Turok, Phys. Lett. B 425 (1998) 25] allows an anthropic solution to the cosmological constant problem in which the cosmological constant gives a small but non-negligible contribution to the density of today's universe. We include a discussion of the role of the singularity in our solution and a reply to Vilenkin's recent criticism.

  4. 15 CFR 734.8 - Information resulting from fundamental research.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... research. 734.8 Section 734.8 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... OF THE EXPORT ADMINISTRATION REGULATIONS § 734.8 Information resulting from fundamental research. (a) Fundamental research. Paragraphs (b) through (d) of this section and § 734.11 of this part provide...

  5. 15 CFR 734.8 - Information resulting from fundamental research.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... research. 734.8 Section 734.8 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... OF THE EXPORT ADMINISTRATION REGULATIONS § 734.8 Information resulting from fundamental research. (a) Fundamental research. Paragraphs (b) through (d) of this section and § 734.11 of this part provide...

  6. 15 CFR 734.8 - Information resulting from fundamental research.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... research. 734.8 Section 734.8 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... OF THE EXPORT ADMINISTRATION REGULATIONS § 734.8 Information resulting from fundamental research. (a) Fundamental research. Paragraphs (b) through (d) of this section and § 734.11 of this part provide...

  7. Discovering and Experiencing the Fundamental Theorem of Calculus.

    ERIC Educational Resources Information Center

    Rosenthal, Bill

    1992-01-01

    Offers calculus students and teachers the opportunity to motivate and discover the first Fundamental Theorem of Calculus (FTC) in an experimental, experiential, inductive, intuitive, vernacular-based manner. Starting from the observation that a distance traveled at a constant speed corresponds to the area inside a rectangle, the FTC is discovered,…
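
    A small numerical companion to the rectangle observation above, in the spirit of the article: summing speed times time over small steps (a Riemann sum) reproduces the change in position, which is exactly what the first FTC asserts, namely that the integral of s'(t) from a to b equals s(b) - s(a).

        # Riemann-sum illustration of the first Fundamental Theorem of Calculus,
        # using position s(t) = t**2, so the speed is s'(t) = 2*t.
        def speed(t):
            return 2.0 * t

        a, b, n = 1.0, 3.0, 100000
        dt = (b - a) / n
        area = sum(speed(a + (i + 0.5) * dt) * dt for i in range(n))   # area under the speed curve

        exact = b**2 - a**2    # s(b) - s(a)
        print(f"Riemann sum = {area:.6f}, s(b) - s(a) = {exact:.6f}")  # both ~8.0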

  8. Fundamentals and Techniques of Nonimaging

    SciTech Connect

    O'Gallagher, J. J.; Winston, R.

    2003-07-10

    This is the final report describing a long term basic research program in nonimaging optics that has led to major advances in important areas, including solar energy, fiber optics, illumination techniques, light detectors, and a great many other applications. The term ''nonimaging optics'' refers to the optics of extended sources in systems for which image forming is not important, but effective and efficient collection, concentration, transport, and distribution of light energy is. Although some of the most widely known developments of the early concepts have been in the field of solar energy, a broad variety of other uses have emerged. Most important, under the auspices of this program in fundamental research in nonimaging optics established at the University of Chicago with support from the Office of Basic Energy Sciences at the Department of Energy, the field has become very dynamic, with new ideas and concepts continuing to develop, while applications of the early concepts continue to be pursued. While the subject began as part of classical geometrical optics, it has been extended subsequently to the wave optics domain. Particularly relevant to potential new research directions are recent developments in the formalism of statistical and wave optics, which may be important in understanding energy transport on the nanoscale. Nonimaging optics permits the design of optical systems that achieve the maximum possible concentration allowed by physical conservation laws. The earliest designs were constructed by optimizing the collection of the extreme rays from a source to the desired target: the so-called ''edge-ray'' principle. Later, new concentrator types were generated by placing reflectors along the flow lines of the ''vector flux'' emanating from lambertian emitters in various geometries. A few years ago, a new development occurred with the discovery that making the design edge-ray a functional of some other system parameter permits the construction of whole

  9. Fundamentals of ICF Hohlraums

    SciTech Connect

    Rosen, M D

    2005-09-30

    On the Nova Laser at LLNL, we demonstrated many of the key elements required for assuring that the next laser, the National Ignition Facility (NIF), will drive an Inertial Confinement Fusion (ICF) target to ignition. The indirect drive (sometimes referred to as "radiation drive") approach converts laser light to x-rays inside a gold cylinder, which then acts as an x-ray "oven" (called a hohlraum) to drive the fusion capsule in its center. On Nova we've demonstrated good understanding of the temperatures reached in hohlraums and of the ways to control the uniformity with which the x-rays drive the spherical fusion capsules. In these lectures we will be reviewing the physics of these laser heated hohlraums, recent attempts at optimizing their performance, and then return to the ICF problem in particular to discuss scaling of ICF gain with scale size, and to compare indirect vs. direct drive gains. In ICF, spherical capsules containing Deuterium and Tritium (DT)--the heavy isotopes of hydrogen--are imploded, creating conditions of high temperature and density similar to those in the cores of stars required for initiating the fusion reaction. When DT fuses, an alpha particle (the nucleus of a helium atom) and a neutron are created, releasing large amounts of energy. If the surrounding fuel is sufficiently dense, the alpha particles are stopped and can heat it, allowing a self-sustaining fusion burn to propagate radially outward and a high gain fusion micro-explosion ensues. To create those conditions the outer surface of the capsule is heated (either directly by a laser or indirectly by laser produced x-rays) to cause rapid ablation and outward expansion of the capsule material. A rocket-like reaction to that outward flowing heated material leads to an inward implosion of the remaining part of the capsule shell. The pressure generated on the outside of the capsule can reach nearly 100 megabar (100 million times atmospheric pressure [1 bar = 10^6 cgs

  10. Constant crunch coordinates for black hole simulations

    NASA Astrophysics Data System (ADS)

    Gentle, Adrian P.; Holz, Daniel E.; Kheyfets, Arkady; Laguna, Pablo; Miller, Warner A.; Shoemaker, Deirdre M.

    2001-03-01

    We reinvestigate the utility of time-independent constant mean curvature foliations for the numerical simulation of a single spherically symmetric black hole. Each spacelike hypersurface of such a foliation is endowed with the same constant value of the trace of the extrinsic curvature tensor K. Of the three families of K-constant surfaces possible (classified according to their asymptotic behaviors), we single out a subfamily of singularity-avoiding surfaces that may be particularly useful, and provide an analytic expression for the closest approach such surfaces make to the singularity. We then utilize a nonzero shift to yield families of K-constant surfaces which (1) avoid the black hole singularity, and thus the need to excise the singularity, (2) are asymptotically null, aiding in gravity wave extraction, (3) cover the physically relevant part of the spacetime, (4) are well behaved (regular) across the horizon, and (5) are static under evolution, and therefore have no "grid stretching/sucking" pathologies. Preliminary numerical runs demonstrate that we can stably evolve a single spherically symmetric static black hole using this foliation. We wish to emphasize that this coordinatization produces K-constant surfaces for a single black hole spacetime that are regular, static, and stable throughout their evolution.

  11. QCD coupling constants and VDM

    SciTech Connect

    Erkol, G.; Ozpineci, A.; Zamiralov, V. S.

    2012-10-23

    QCD sum rules for the coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, with the use of the vector-meson-dominance model, related to the coupling constants. The role of VDM as a criterion for the mutual validity of the sum rules is considered.

  12. Ablative Thermal Protection System Fundamentals

    NASA Technical Reports Server (NTRS)

    Beck, Robin A. S.

    2013-01-01

    This is the presentation for a short course on the fundamentals of ablative thermal protection systems. It covers the definition of ablation, description of ablative materials, how they work, how to analyze them and how to model them.

  13. Precision measurement of the Newtonian gravitational constant using cold atoms.

    PubMed

    Rosi, G; Sorrentino, F; Cacciapuoti, L; Prevedelli, M; Tino, G M

    2014-06-26

    About 300 experiments have tried to determine the value of the Newtonian gravitational constant, G, so far, but large discrepancies in the results have made it impossible to know its value precisely. The weakness of the gravitational interaction and the impossibility of shielding the effects of gravity make it very difficult to measure G while keeping systematic effects under control. Most previous experiments were based on the torsion pendulum or torsion balance scheme, as in the experiment by Cavendish in 1798, and in all cases macroscopic masses were used. Here we report the precise determination of G using laser-cooled atoms and quantum interferometry. We obtain the value G = 6.67191(99) x 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 150 parts per million (the combined standard uncertainty is given in parentheses). Our value differs by 1.5 combined standard deviations from the current recommended value of the Committee on Data for Science and Technology. A conceptually different experiment such as ours helps to identify the systematic errors that have proved elusive in previous experiments, thus improving the confidence in the value of G. There is no definitive relationship between G and the other fundamental constants, and there is no theoretical prediction for its value, against which to test experimental results. Improving the precision with which we know G has not only a pure metrological interest, but is also important because of the key role that G has in theories of gravitation, cosmology, particle physics and astrophysics and in geophysical models. PMID:24965653
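
    The quoted numbers can be checked in a couple of lines: 0.00099/6.67191 is about 1.5 x 10^-4, i.e. roughly 150 parts per million, and comparing with the CODATA recommendation current at the time (assumed here to be the 2010 value, 6.67384(80) x 10^-11) reproduces the stated 1.5 combined standard deviations.

        import math

        G, uG = 6.67191e-11, 0.00099e-11          # value and uncertainty from the abstract
        print(f"relative uncertainty: {uG / G * 1e6:.0f} ppm")   # ~148 ppm, i.e. "150 ppm"

        # Comparison with the recommended value (assumption: CODATA 2010, 6.67384(80)e-11).
        G_rec, uG_rec = 6.67384e-11, 0.00080e-11
        sigma = abs(G - G_rec) / math.sqrt(uG**2 + uG_rec**2)
        print(f"difference from recommended value: {sigma:.1f} combined standard deviations")   # ~1.5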

  14. Precision atomic mass spectrometry with applications to fundamental constants, neutrino physics, and physical chemistry

    NASA Astrophysics Data System (ADS)

    Mount, Brianna J.; Redshaw, Matthew; Myers, Edmund G.

    2011-07-01

    We present a summary of precision atomic mass measurements of stable isotopes carried out at Florida State University. These include the alkalis 6Li, 23Na, 39,41K, 85,87Rb, 133Cs; the rare gas isotopes 84,86Kr and 129,130,132,136Xe; 17,18O, 19F, 28Si, 31P, 32S; and various isotope pairs of importance to neutrino physics, namely 74,76Se/74,76Ge, 130Xe/130Te, and 115In/115Sn. We also summarize our Penning trap measurements of the dipole moments of PH+ and HCO+.

  15. Fundamental mechanisms of micromachine reliability

    SciTech Connect

    DE BOER,MAARTEN P.; SNIEGOWSKI,JEFFRY J.; KNAPP,JAMES A.; REDMOND,JAMES M.; MICHALSKE,TERRY A.; MAYER,THOMAS K.

    2000-01-01

    Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is independent of

  16. Fundamental performance differences between CMOS and CCD imagers: Part II

    NASA Astrophysics Data System (ADS)

    Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff

    2007-09-01

    A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting - Fano-noise limited soft x-ray applications. Digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency and dark current generation. Radiation damage data taken for the imagers is also reported.
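
    A schematic of the digital correlated double sampling mentioned above: each pixel read is the difference between averaged signal-level samples and averaged reset-level samples, which removes the reset offset and suppresses low-frequency noise. The sample counts, offsets and noise figures below are invented for illustration.

        import random

        def digital_cds(reset_samples, signal_samples):
            """Digital correlated double sampling: average each level, then take the difference."""
            reset = sum(reset_samples) / len(reset_samples)
            signal = sum(signal_samples) / len(signal_samples)
            return signal - reset

        # Simulate one pixel read (arbitrary units; offset and noise values are illustrative).
        random.seed(0)
        reset_level, photo_signal, read_noise = 500.0, 120.0, 3.0
        n_samples = 16                                   # samples averaged per level
        reset = [reset_level + random.gauss(0.0, read_noise) for _ in range(n_samples)]
        signal = [reset_level + photo_signal + random.gauss(0.0, read_noise) for _ in range(n_samples)]

        print(f"estimated photo-signal: {digital_cds(reset, signal):.2f}  (true value 120.00)")
        # Averaging N samples per level cuts the white-noise contribution by about sqrt(N).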

  17. Fundamentals of Physics, Part 3 (Chapters 22-33)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-03-01

    Chapter 21. Electric Charge. Why do video monitors in surgical rooms increase the risk of bacterial contamination? 21-1 What Is Physics? 21-2 Electric Charge. 21-3 Conductors and Insulators. 21-4 Coulomb's Law. 21-5 Charge Is Quantized. 21-6 Charge Is Conserved. Review & Summary. Questions. Problems. Chapter 22. Electric Fields. What causes sprites, those brief flashes of light high above lightning storms? 22-1 What Is Physics? 22-2 The Electric Field. 22-3 Electric Field Lines. 22-4 The Electric Field Due to a Point Charge. 22-5 The Electric Field Due to an Electric Dipole. 22-6 The Electric Field Due to a Line of Charge. 22-7 The Electric Field Due to a Charged Disk. 22-8 A Point Charge in an Electric Field. 22-9 A Dipole in an Electric Field. Review & Summary. Questions. Problems. Chapter 23. Gauss' Law. How can lightning harm you even if it does not strike you? 23-1 What Is Physics? 23-2 Flux. 23-3 Flux of an Electric Field. 23-4 Gauss' Law. 23-5 Gauss' Law and Coulomb's Law. 23-6 A Charged Isolated Conductor. 23-7 Applying Gauss' Law: Cylindrical Symmetry. 23-8 Applying Gauss' Law: Planar Symmetry. 23-9 Applying Gauss' Law: Spherical Symmetry. Review & Summary. Questions. Problems. Chapter 24. Electric Potential. What danger does a sweater pose to a computer? 24-1 What Is Physics? 24-2 Electric Potential Energy. 24-3 Electric Potential. 24-4 Equipotential Surfaces. 24-5 Calculating the Potential from the Field. 24-6 Potential Due to a Point Charge. 24-7 Potential Due to a Group of Point Charges. 24-8 Potential Due to an Electric Dipole. 24-9 Potential Due to a Continuous Charge Distribution. 24-10 Calculating the Field from the Potential. 24-11 Electric Potential Energy of a System of Point Charges. 24-12 Potential of a Charged Isolated Conductor. Review & Summary. Questions. Problems. Chapter 25. Capacitance. How did a fire start in a stretcher being withdrawn from an oxygen chamber? 25-1 What Is Physics? 25-2 Capacitance. 25-3 Calculating the Capacitance. 25-4 Capacitors in Parallel and in Series. 25-5 Energy Stored in an Electric Field. 25-6 Capacitor with a Dielectric. 25-7 Dielectrics: An Atomic View. 25-8 Dielectrics and Gauss' Law. Review & Summary. Questions. Problems. Chapter 26. Current and Resistance. What precaution should you take if caught outdoors during a lightning storm? 26-1 What Is Physics? 26-2 Electric Current. 26-3 Current Density. 26-4 Resistance and Resistivity. 26-5 Ohm's Law. 26-6 A Microscopic View of Ohm's Law. 26-7 Power in Electric Circuits. 26-8 Semiconductors. 26-9 Superconductors. Review & Summary. Questions. Problems. Chapter 27. Circuits. How can a pit crew avoid a fire while fueling a charged race car? 27-1 What Is Physics? 27-2 "Pumping" Charges. 27-3 Work, Energy, and Emf. 27-4 Calculating the Current in a Single-Loop Circuit. 27-5 Other Single-Loop Circuits. 27-6 Potential Difference Between Two Points. 27-7 Multiloop Circuits. 27-8 The Ammeter and the Voltmeter. 27-9 RC Circuits. Review & Summary. Questions. Problems. Chapter 28. Magnetic Fields. How can a beam of fast neutrons, which are electrically neutral, be produced in a hospital to treat cancer patients? 28-1 What Is Physics? 28-2 What Produces a Magnetic Field? 28-3 The Definition of B. 28-4 Crossed Fields: Discovery of the Electron. 28-5 Crossed Fields: The Hall Effect. 28-6 A Circulating Charged Particle. 28-7 Cyclotrons and Synchrotrons. 28-8 Magnetic Force on a Current-Carrying Wire. 28-9 Torque on a Current Loop. 28-10 The Magnetic Dipole Moment. Review & Summary. Questions. Problems. Chapter 29. Magnetic Fields Due to Currents. How can the human brain produce a detectable magnetic field without any magnetic material? 29-1 What Is Physics? 29-2 Calculating the Magnetic Field Due to a Current. 29-3 Force Between Two Parallel Currents. 29-4 Ampere's Law. 29-5 Solenoids and Toroids. 29-6 A Current-Carrying Coil as a Magnetic Dipole. Review & Summary. Questions. Problems. Chapter 30. Induction and Inductance. How can the magnetic field used in an MRI scan cause a pati

  18. Fundamental ignition study for material fire safety improvement, part 1

    NASA Technical Reports Server (NTRS)

    Paciorek, K. L.; Zung, L. B.

    1970-01-01

    The investigation of the preignition, ignition, and combustion characteristics of Delrin (acetate terminated polyformaldehyde) and Teflon (polytetrafluoroethylene) resins in air and oxygen is presented. Ignition limits and their dependence on temperature and the oxidizing medium were determined, and the volatiles produced were analyzed. Tests were also conducted in argon, an inert medium in which only purely pyrolytic reactions can take place, using the stagnation burner arrangement designed and constructed for this purpose. A theoretical treatment of the ignition and combustion phenomena was devised. In the case of Delrin, the ignition and ignition delays are apparently independent of the gas (air, oxygen) temperatures. The results indicate that hydrogen is the ignition triggering agent. Teflon ignition limits were established in oxygen only.

  19. Fundamental ignition study for material fire safety improvement, part 2

    NASA Technical Reports Server (NTRS)

    Paciorek, K. L.; Kratzer, R. H.; Kaufman, J.

    1971-01-01

    The autoignition behavior of polymeric compositions in oxidizing media was investigated as well as the nature and relative concentration of the volatiles produced during oxidative decomposition culminating in combustion. The materials investigated were Teflon, Fluorel KF-2140 raw gum and its compounded versions Refset and Ladicote, 45B3 intumescent paint, and Ames isocyanurate foam. The majority of the tests were conducted using a stagnation burner arrangement which provided a laminar gas flow and allowed the sample block and gas temperatures to be varied independently. The oxidizing atmospheres were essentially air and oxygen, although in the case of the Fluorel family of materials, due to partial blockage of the gas inlet system, some tests were performed unintentionally in enriched air (not oxygen). The 45B3 paint was not amenable to sampling in a dynamic system, due to its highly intumescent nature. Consequently, selected experiments were conducted using a sealed tube technique both in air and oxygen media.

  20. Fundamentals of Physics, Part 2 (Chapters 12-20)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    Chapter 12 Equilibrium and Elasticity. What injury can occur to a rock climber hanging by a crimp hold? 12-1 What Is Physics? 12-2 Equilibrium. 12-3 The Requirements of Equilibrium. 12-4 The Center of Gravity. 12-5 Some Examples of Static Equilibrium. 12-6 Indeterminate Structures. 12-7 Elasticity. Review & Summary Questions Problems. Chapter 13 Gravitation. What lies at the center of our Milky Way galaxy? 13-1 What Is Physics? 13-2 Newton's Law of Gravitation. 13-3 Gravitation and the Principle of Superposition. 13-4 Gravitation Near Earth's Surface. 13-5 Gravitation Inside Earth. 13-6 Gravitational Potential Energy. 13-7 Planets and Satellites: Kepler's Laws. 13-8 Satellites: Orbits and Energy. 13-9 Einstein and Gravitation. Review & Summary Questions Problems. Chapter 14 Fluids. What causes ground effect in race car driving? 14-1 What Is Physics? 14-2 What Is a Fluid? 14-3 Density and Pressure. 14-4 Fluids at Rest. 14-5 Measuring Pressure. 14-6 Pascal's Principle. 14-7 Archimedes' Principle. 14-8 Ideal Fluids in Motion. 14-9 The Equation of Continuity. 14-10 Bernoulli's Equation. Review & Summary Questions Problems. Chapter 15 Oscillations. What is the "secret" of a skilled diver's high catapult in springboard diving? 15-1 What Is Physics? 15-2 Simple Harmonic Motion. 15-3 The Force Law for Simple Harmonic Motion. 15-4 Energy in Simple Harmonic Motion. 15-5 An Angular Simple Harmonic Oscillator. 15-6 Pendulums. 15-7 Simple Harmonic Motion and Uniform Circular Motion. 15-8 Damped Simple Harmonic Motion. 15-9 Forced Oscillations and Resonance. Review & Summary Questions Problems. Chapter 16 Waves--I. How can a submarine wreck be located by distant seismic stations? 16-1 What Is Physics? 16-2 Types of Waves. 16-3 Transverse and Longitudinal Waves. 16-4 Wavelength and Frequency. 16-5 The Speed of a Traveling Wave. 16-6 Wave Speed on a Stretched String. 16-7 Energy and Power of a Wave Traveling Along a String. 16-8 The Wave Equation. 16-9 The Principle of Superposition for Waves. 16-10 Interference of Waves. 16-11 Phasors. 16-12 Standing Waves. 16-13 Standing Waves and Resonance. Review & Summary Questions Problems. Chapter 17 Waves--II. How can an emperor penguin find its mate among thousands of huddled penguins? 17-1 What Is Physics? 17-2 Sound Waves. 17-3 The Speed of Sound. 17-4 Traveling Sound Waves. 17-5 Interference. 17-6 Intensity and Sound Level. 17-7 Sources of Musical Sound. 17-8 Beats. 17-9 The Doppler Effect. 17-10 Supersonic Speeds, Shock Waves. Review & Summary Questions Problems. Chapter 18 Temperature, Heat, and the First Law of Thermodynamics. How can a dead rattlesnake detect and strike a reaching hand? 18-1 What Is Physics? 18-2 Temperature. 18-3 The Zeroth Law of Thermodynamics. 18-4 Measuring Temperature. 18-5 The Celsius and Fahrenheit Scales. 18-6 Thermal Expansion. 18-7 Temperature and Heat. 18-8 The Absorption of Heat by Solids and Liquids. 18-9 A Closer Look at Heat and Work. 18-10 The First Law of Thermodynamics. 18-11 Some Special Cases of the First Law of Thermodynamics. 18-12 Heat Transfer Mechanisms. Review & Summary Questions Problems. Chapter 19 The Kinetic Theory of Gases. How can cooling steam inside a railroad tank car cause the car to be crushed? 19-1 What Is Physics? 19-2 Avogadro's Number. 19-3 Ideal Gases. 19-4 Pressure, Temperature, and RMS Speed. 19-5 Translational Kinetic Energy. 19-6 Mean Free Path. 19-7 The Distribution of Molecular Speeds. 19-8 The Molar Specific Heats of an Ideal Gas. 19-9 Degrees of Freedom and Molar Specific Heats. 19-10 A Hint of Quantum Theory. 19-11 The Adiabatic Expansion of an Ideal Gas. Review & Summary Questions Problems. Chapter 20 Entropy and the Second Law of Thermodynamics. Why is the popping of popcorn irreversible? 20-1 What Is Physics? 20-2 Irreversible Processes and Entropy. 20-3 Change in Entropy. 20-4 The Second Law of Thermodynamics. 20-5 Entropy in the Real World: Engines. 20-6 Entropy in the Real World: Refrigerators. 20-7 The Efficiencies of Real Engines. 20-8 A Statistical View of Entropy. Review &

  1. ESTIMATION OF IONIZATION CONSTANTS OF AZO DYES AND RELATED AROMATIC AMINES: ENVIRONMENTAL IMPLICATIONS

    EPA Science Inventory

    Ionization constants for 214 dye molecules were calculated from molecular structures using the chemical reactivity models developed in SPARC (SPARC Performs Automated Reasoning in Chemistry). These models used fundamental chemical structure theory to predict chemical reactivities ...

  2. The fundamental plane correlations for globular clusters

    NASA Technical Reports Server (NTRS)

    Djorgovski, S.

    1995-01-01

    In the parameter space whose axes include a radius (core, or half-light), a surface brightness (central, or average within the half-light radius), and the central projected velocity dispersion, globular clusters lie on a two-dimensional surface (a plane, if the logarithmic quantities are used). This is analogous to the 'fundamental plane' of elliptical galaxies. The implied bivariate correlations are the best now known for globular clusters. The derived scaling laws for the core properties imply that cluster cores are fully virialized, homologous systems, with a constant (M/L) ratio. The corresponding scaling laws on the half-light scale are different, but are nearly identical to those derived from the 'fundamental plane' of ellipticals. This may be due to the range of cluster concentrations, which are correlated with other parameters. A similar explanation for elliptical galaxies may be viable. These correlations provide new empirical constraints for models of globular cluster formation and evolution, and may also be usable as rough distance-indicator relations for globular clusters.

  3. Geophysics Fatally Flawed by False Fundamental Philosophy

    NASA Astrophysics Data System (ADS)

    Myers, L. S.

    2004-05-01

    For two centuries scientists have failed to realize Laplace's nebular hypothesis (1796) of Earth's creation is false. As a consequence, geophysicists today are misinterpreting and miscalculating many fundamental aspects of the Earth and Solar System. Why scientists have deluded themselves for so long is a mystery. The greatest error is the assumption Earth was created 4.6 billion years ago as a molten protoplanet in its present size, shape and composition. This assumption ignores daily accretion of more than 200 tons/day of meteorites and dust, plus unknown volumes of solar insolation that created coal beds and other biomass that increased Earth's mass and diameter over time! Although the volume added daily is minuscule compared with Earth's total mass, logic and simple addition mandates an increase in mass, diameter and gravity. Increased diameter from accretion is proved by Grand Canyon stratigraphy that shows a one kilometer increase in depth and planetary radius at a rate exceeding three meters (10 ft) per Ma from start of the Cambrian (540 Ma) to end of the Permian (245 Ma), each layer deposited onto Earth's surface. This is unequivocal evidence of passive external growth by accretion, part of a dual growth and expansion process called "Accreation" (creation by accretion). Dynamic internal core expansion, the second stage of Accreation, did not commence until the protoplanet reached spherical shape at 500-600 km diameter. At that point, gravity-powered compressive heating initiated core melting and internal expansion. Expansion quickly surpassed the external accretion growth rate and produced surface volcanoes to relieve explosive internal tectonic pressure and transfer excess mass (magma) to the surface. Then, 200-250 Ma, expansion triggered Pangaea's breakup, first sundering Asia and Australia to form the Pacific Ocean, followed by North and South America to form the Atlantic Ocean, by the mechanism of midocean ridges, linear underwater

  4. Simplified fundamental force and mass measurements

    NASA Astrophysics Data System (ADS)

    Robinson, I. A.

    2016-08-01

    The watt balance relates force or mass to the Planck constant h, the metre and the second. It enables the forthcoming redefinition of the unit of mass within the SI by measuring the Planck constant in terms of mass, length and time with an uncertainty of better than 2 parts in 10^8. To achieve this, existing watt balances require complex and time-consuming alignment adjustments, limiting their use to a few national metrology laboratories. This paper describes a simplified construction and operating principle for a watt balance which eliminates the need for the majority of these adjustments and is readily scalable using either electromagnetic or electrostatic actuators. It is hoped that this will encourage more widespread use of the technique for a wide range of measurements of force or mass, for example thrust measurements for space applications, which would require only measurements of electrical quantities and velocity/displacement.
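
    The watt-balance principle summarized above can be illustrated with a short numerical sketch. The relation m = UI/(gv) and the values chosen for U, I, g and v below are illustrative assumptions for demonstration only, not numbers from Robinson's instrument.

    ```python
    # Minimal sketch of the watt-balance principle, assuming idealized,
    # perfectly aligned weighing and moving phases (illustrative numbers only).

    def watt_balance_mass(U, I, g, v):
        """Weighing phase: m*g = B*L*I.  Moving phase: U = B*L*v.
        Eliminating the geometric factor B*L gives m = U*I / (g*v)."""
        return U * I / (g * v)

    # Hypothetical measured quantities (not from any real instrument):
    U = 1.018      # induced voltage in the moving phase, V
    I = 0.0098     # coil current in the weighing phase, A
    g = 9.80665    # local gravitational acceleration, m/s^2
    v = 0.002      # coil velocity, m/s

    m = watt_balance_mass(U, I, g, v)
    print(f"inferred mass: {m:.4f} kg")   # ~0.5086 kg for these numbers
    ```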

  5. Effective cosmological constant induced by stochastic fluctuations of Newton's constant

    NASA Astrophysics Data System (ADS)

    de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi

    2016-09-01

    We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.

  6. Effect of Fundamental Frequency on Judgments of Electrolaryngeal Speech

    ERIC Educational Resources Information Center

    Nagle, Kathy F.; Eadie, Tanya L.; Wright, Derek R.; Sumida, Yumi A.

    2012-01-01

    Purpose: To determine (a) the effect of fundamental frequency (f0) on speech intelligibility, acceptability, and perceived gender in electrolaryngeal (EL) speakers, and (b) the effect of known gender on speech acceptability in EL speakers. Method: A 2-part study was conducted. In Part 1, 34 healthy adults provided speech recordings using…

  7. Environmental Law: Fundamentals for Schools.

    ERIC Educational Resources Information Center

    Day, David R.

    This booklet outlines the environmental problems most likely to arise in schools. An overview provides a fundamental analysis of environmental issues rather than comprehensive analysis and advice. The text examines the concerns that surround superfund cleanups, focusing on the legal framework, and furnishes some practical pointers, such as what to…

  8. Fundamental Cycles of Cognitive Growth.

    ERIC Educational Resources Information Center

    Pegg, John

    Over recent years, various theories have arisen to explain and predict cognitive development in mathematics education. We focus on an underlying theme that recurs throughout such theories: a fundamental cycle of growth in the learning of specific concepts, which we frame within broader global theories of individual cognitive growth. Our purpose is…

  9. Fundamentals of the Slide Library.

    ERIC Educational Resources Information Center

    Boerner, Susan Zee

    This paper is an introduction to the fundamentals of the art (including architecture) slide library, with some emphasis on basic procedures of the science slide library. Information in this paper is particularly relevant to the college, university, and museum slide library. Topics addressed include: (1) history of the slide library; (2) duties of…

  10. Fundamentals of Environmental Education. Report.

    ERIC Educational Resources Information Center

    1976

    An outline of fundamental definitions, relationships, and human responsibilities related to environment provides a basis from which a variety of materials, programs, and activities can be developed. The outline can be used in elementary, secondary, higher education, or adult education programs. The framework is based on principles of the science…

  11. Lighting Fundamentals. Monograph Number 13.

    ERIC Educational Resources Information Center

    Locatis, Craig N.; Gerlach, Vernon S.

    Using an accompanying, specified film that consists of 10-second pictures separated by blanks, the learner can, with the 203-step, self-correcting questions and answers provided in this program, come to understand the fundamentals of lighting in photography. The learner should, by the end of the program, be able to describe and identify the…

  12. Fundamentals of Microelectronics Processing (VLSI).

    ERIC Educational Resources Information Center

    Takoudis, Christos G.

    1987-01-01

    Describes a 15-week course in the fundamentals of microelectronics processing in chemical engineering, which emphasizes the use of very large scale integration (VLSI). Provides a listing of the topics covered in the course outline, along with a sample of some of the final projects done by students. (TW)

  13. Brake Fundamentals. Automotive Articulation Project.

    ERIC Educational Resources Information Center

    Cunningham, Larry; And Others

    Designed for secondary and postsecondary auto mechanics programs, this curriculum guide contains learning exercises in seven areas: (1) brake fundamentals; (2) brake lines, fluid, and hoses; (3) drum brakes; (4) disc brake system and service; (5) master cylinder, power boost, and control valves; (6) parking brakes; and (7) trouble shooting. Each…

  14. Museum Techniques in Fundamental Education.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    Some museum techniques and methods can be used in fundamental educational programs without elaborate buildings or equipment; exhibitions should be based on valid presumptions and should take into account the "common sense" beliefs of people for whom the exhibit is designed. They can be used profitably in the economic development of local cultural…

  15. Fundamentals of Welding. Teacher Edition.

    ERIC Educational Resources Information Center

    Fortney, Clarence; And Others

    These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…

  16. Status of Fundamental Physics Program

    NASA Technical Reports Server (NTRS)

    Lee, Mark C.

    2003-01-01

    Update of the Fundamental Physics Program. JEM/EF slip: 2-year delay and reduced budget. Community support and advocacy led by Professor Nick Bigelow. Reprogramming led by the Fred O'Callaghan/JPL team. LTMPF M1 mission (DYNAMX and SUMO). PARCS. Carrier re-baselined on JEM/EF.

  17. Light as a Fundamental Particle

    ERIC Educational Resources Information Center

    Weinberg, Steven

    1975-01-01

    Presents two arguments concerning the role of the photon. One states that the photon is just another particle distinguished by a particular value of charge, spin, mass, lifetime, and interaction properties. The second states that the photon plays a fundamental role with a deep relation to ultimate formulas of physics. (GS)

  18. Cosmologies with variable gravitational constant

    NASA Astrophysics Data System (ADS)

    Narlikar, J. V.

    1983-03-01

    In 1937 Dirac presented an argument, based on the so-called large dimensionless numbers, which led him to the conclusion that the Newtonian gravitational constant G changes with epoch. Towards the end of the last century, Ernst Mach had given plausible arguments linking the property of inertia of matter to the large-scale structure of the universe. Mach's principle also leads to cosmological models with a variable gravitational constant. Three cosmologies which predict a variable G are discussed in this paper from both theoretical and observational points of view.
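
    Dirac's "large dimensionless numbers" can be reproduced with a few lines of arithmetic. The sketch below compares the electric-to-gravitational force ratio for a proton-electron pair with the age of the Universe expressed in a characteristic atomic time; both come out near 10^39-10^40, which is the coincidence that motivated a varying G. The choice of r_e/c as the atomic timescale and the 13.8 Gyr age are illustrative conventions, not values taken from Narlikar's paper.

    ```python
    import math

    # Fundamental constants (SI, CODATA values rounded)
    e    = 1.602176634e-19      # elementary charge, C
    eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
    G    = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
    m_e  = 9.1093837015e-31     # electron mass, kg
    m_p  = 1.67262192369e-27    # proton mass, kg
    c    = 2.99792458e8         # speed of light, m/s
    r_e  = 2.8179403262e-15     # classical electron radius, m

    # Ratio of electrostatic to gravitational attraction (distance cancels out)
    force_ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)

    # Age of the Universe in units of a characteristic "atomic" time r_e/c
    age_universe = 13.8e9 * 365.25 * 24 * 3600   # seconds (illustrative)
    time_ratio = age_universe / (r_e / c)

    print(f"electric/gravitational force ratio ~ {force_ratio:.2e}")  # ~2.3e39
    print(f"cosmic age / atomic time           ~ {time_ratio:.2e}")   # ~4.6e40
    ```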

  19. On Determination of the Geometric Cosmological Constant from the Opera Experiment of Superluminal Neutrinos

    NASA Astrophysics Data System (ADS)

    Yan, Mu-Lin; Hu, Sen; Huang, Wei; Xiao, Neng-Chao

    The recent OPERA experiment of superluminal neutrinos has deep consequences in cosmology. In cosmology a fundamental constant is the cosmological constant. From observations one can estimate the effective cosmological constant Λ_eff, which is the sum of the quantum zero-point energy Λ_dark energy and the geometric cosmological constant Λ. The OPERA experiment can be applied to determine the geometric cosmological constant Λ. It is the first study to distinguish the contributions of Λ and Λ_dark energy from each other by experiment. The determination is based on an explanation of the OPERA experiment in the framework of Special Relativity with de Sitter spacetime symmetry.

  20. Fundamentals of Managing Reference Collections

    ERIC Educational Resources Information Center

    Singer, Carol A.

    2012-01-01

    Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…

  1. Determination of the Vibrational Constants of Some Diatomic Molecules: A Combined Infrared Spectroscopic and Quantum Chemical Third Year Chemistry Project.

    ERIC Educational Resources Information Center

    Ford, T. A.

    1979-01-01

    In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…

  2. Constant-amplitude RC oscillator

    NASA Technical Reports Server (NTRS)

    Kerwin, W. J.; Westbrook, R. M.

    1970-01-01

    Sinusoidal oscillator has a frequency determined by resistance-capacitance /RC/ values of two charge control devices and a constant-amplitude voltage independent of frequency and RC values. RC elements provide either voltage-control, resistance-control, or capacitance-control of the frequency.

  3. Fundamental limits in heat-assisted magnetic recording and methods to overcome it with exchange spring structures

    NASA Astrophysics Data System (ADS)

    Suess, D.; Vogler, C.; Abert, C.; Bruckner, F.; Windl, R.; Breth, L.; Fidler, J.

    2015-04-01

    The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the thermal jitter in conventional recording, which is about 0.5-0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10^-4 if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As with single FePt grains, in the composite structure an increase in switching probability is paid for by an increase in thermal jitter. Structures utilizing first-order phase transitions to break the thermal jitter and writability dilemma are discussed.
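
    A crude feel for the "thermally written-in error" figure quoted above can be obtained from a generic two-state Boltzmann picture of a grain freezing into the wrong orientation as it cools through its blocking point. This is not the micromagnetic model used in the paper; the ratio of Zeeman energy to kT at freeze-out is an illustrative assumption.

    ```python
    import math

    def written_in_error(zeeman_over_kT):
        """Crude two-state Boltzmann picture: probability that a grain freezes into
        the orientation opposing the write field when it cools through blocking."""
        return 1.0 / (1.0 + math.exp(zeeman_over_kT))

    # Sweep the ratio (Zeeman energy difference)/(kT at freeze-out); illustrative only.
    for x in (1.0, 2.0, 4.0, 8.0):
        print(f"dE/kT = {x:>4.1f} -> thermally written-in error ~ {written_in_error(x):.3f}")
    # A ratio of about 2 reproduces an error near 0.12, the order quoted in the abstract.
    ```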

  4. DOE Fundamentals Handbook: Classical Physics

    SciTech Connect

    Not Available

    1992-06-01

    The Classical Physics Fundamentals Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of physical forces and their properties. The handbook includes information on the units used to measure physical properties; vectors, and how they are used to show the net effect of various forces; Newton's Laws of motion, and how to use these laws in force and motion applications; and the concepts of energy, work, and power, and how to measure and calculate the energy involved in various applications. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility systems and equipment.

  5. Fundamental neutron physics at LANSCE

    SciTech Connect

    Greene, G.

    1995-10-01

    Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously by the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.

  6. The 1% concordance Hubble constant

    SciTech Connect

    Bennett, C. L.; Larson, D.; Weiland, J. L.; Hinshaw, G.

    2014-10-20

    The determination of the Hubble constant has been a central goal in observational astrophysics for nearly a hundred years. Extraordinary progress has occurred in recent years on two fronts: the cosmic distance ladder measurements at low redshift and cosmic microwave background (CMB) measurements at high redshift. The CMB is used to predict the current expansion rate through a best-fit cosmological model. Complementary progress has been made with baryon acoustic oscillation (BAO) measurements at relatively low redshifts. While BAO data do not independently determine a Hubble constant, they are important for constraints on possible solutions and checks on cosmic consistency. A precise determination of the Hubble constant is of great value, but it is more important to compare the high and low redshift measurements to test our cosmological model. Significant tension would suggest either uncertainties not accounted for in the experimental estimates or the discovery of new physics beyond the standard model of cosmology. In this paper we examine in detail the tension between the CMB, BAO, and cosmic distance ladder data sets. We find that these measurements are consistent within reasonable statistical expectations and we combine them to determine a best-fit Hubble constant of 69.6 ± 0.7 km s^-1 Mpc^-1. This value is based upon WMAP9+SPT+ACT+6dFGS+BOSS/DR11+H_0/Riess; we explore alternate data combinations in the text. The combined data constrain the Hubble constant to 1%, with no compelling evidence for new physics.
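
    The flavor of combining independent Hubble-constant estimates (not the authors' full likelihood analysis) can be illustrated with an inverse-variance weighted mean. The input numbers below are placeholders, not the WMAP9/SPT/ACT/BAO/H_0 likelihoods actually used.

    ```python
    # Minimal sketch: inverse-variance weighted combination of independent
    # Hubble-constant estimates (illustrative placeholder values, km/s/Mpc).
    measurements = [
        (69.3, 0.8),   # hypothetical CMB-based estimate
        (70.0, 2.2),   # hypothetical BAO-anchored estimate
        (73.0, 2.4),   # hypothetical distance-ladder estimate
    ]

    weights = [1.0 / sigma**2 for _, sigma in measurements]
    h0 = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    sigma_h0 = (1.0 / sum(weights)) ** 0.5

    print(f"combined H0 = {h0:.1f} +/- {sigma_h0:.1f} km/s/Mpc")
    ```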

  7. Dielectric Constant of Suspensions of Blood Cells

    NASA Astrophysics Data System (ADS)

    Mendelson, Kenneth; Ackmann, James

    1996-03-01

    Measurements of the complex dielectric constant of suspensions of blood cells have recently been reported by Ackmann et al. (J. J. Ackmann et al., Ann. Biomed. Eng. 24, 58 (1996)). At frequencies below 100 kHz, the real part of the dielectric constant (ɛ') goes through a maximum at a blood cell volume fraction of about 70%. Effective medium approximations do not agree well with this behavior. As a more realistic model, we are studying the grain consolidation model of Roberts and Schwartz (J. N. Roberts and L. M. Schwartz, Phys. Rev. B 31, 5990 (1985)). We have used a finite element method to calculate the dielectric constant of this model for a cubic array of spheres. The simulations agree remarkably well with experiment. They suggest, however, that ɛ' may be showing oscillations rather than a simple maximum. Comparison of the simulated and experimental points suggests that this is not an artifact of the periodic array used in the model. Furthermore the simulations indicate that the maximum (or oscillations) disappears at low conductivities of the suspending fluid.
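
    For context on why the effective-medium route struggles at high cell volume fraction, here is a minimal Maxwell Garnett sketch for spherical inclusions. It is a generic textbook approximation, not the grain-consolidation finite-element model of the abstract, and the complex permittivity values are illustrative.

    ```python
    def maxwell_garnett(eps_host, eps_incl, f):
        """Maxwell Garnett effective permittivity for a volume fraction f of
        spherical inclusions (complex permittivities allowed)."""
        num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
        den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
        return eps_host * num / den

    # Illustrative low-frequency values: conductive saline host, poorly
    # conducting cell interior (complex eps = eps' - j*sigma/(omega*eps0)).
    eps_host = 80 - 500j
    eps_cell = 60 - 5j
    for f in (0.1, 0.3, 0.5, 0.7):
        eps_eff = maxwell_garnett(eps_host, eps_cell, f)
        print(f"f = {f:.1f}: eps' = {eps_eff.real:7.1f}, eps'' = {-eps_eff.imag:7.1f}")
    ```

    The smooth trend this produces contrasts with the maximum (or oscillations) near 70% volume fraction reported in the measurements.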

  8. Fundamentals of gas measurement I

    SciTech Connect

    Dodds, D.E.

    1995-12-01

    To truly understand gas measurement, a person must understand gas measurement fundamentals. This includes the units of measurement, the behavior of the gas molecule, the property of gases, the gas laws, and the methods and means of measuring gas. Since the quality of gas is often the responsibility of the gas measurement technician, it is important that he or she have a knowledge of natural gas chemistry.

  9. The spectroscopic constants and anharmonic force field of AgSH: An ab initio study.

    PubMed

    Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang

    2016-07-01

    The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91 and MP2 methods employing two basis sets, TZP and QZP. The calculated geometries, ground state rotational constants, harmonic vibrational wave numbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP results are in good agreement with experimental observations and that MP2/TZP is an advisable choice for studying the anharmonic force field of AgSH. PMID:27085293

  10. The spectroscopic constants and anharmonic force field of AgSH: An ab initio study

    NASA Astrophysics Data System (ADS)

    Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang

    2016-07-01

    The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91 and MP2 methods employing two basis sets, TZP and QZP. The calculated geometries, ground state rotational constants, harmonic vibrational wave numbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP results are in good agreement with experimental observations and that MP2/TZP is an advisable choice for studying the anharmonic force field of AgSH.

  11. The Not so Constant Gravitational "Constant" G as a Function of Quantum Vacuum

    NASA Astrophysics Data System (ADS)

    Maxmilian Caligiuri, Luigi

    Gravitation is still the least understood among the fundamental forces of Nature. The ultimate physical origin of its ruling constant G could give key insights into this understanding. According to Einstein's Theory of General Relativity, a massive body determines a gravitational potential that alters the speed of light, the clock rate and the particle size as a function of the distance from its own center. On the other hand, it has been shown that the presence of mass determines a modification of the Zero-Point Field (ZPF) energy density within its volume and in the space surrounding it. All these considerations strongly suggest that the constant G could also be expressed as a function of the quantum vacuum energy density, somehow depending on the distance from the mass whose presence modifies the ZPF energy structure. In this paper, starting from a constitutive medium-based picture of space, a model of the gravitational constant G is formulated as a function of Planck's time and of the quantum vacuum energy density, which in turn depends on the radial distance from the center of the mass originating the gravitational field, assumed to be spherically symmetric. According to this model, in which gravity arises from the unbalanced physical vacuum pressure, the gravitational "constant" G is not truly unchanging but varies slightly as a function of the distance from the mass source of the gravitational potential itself. An approximate analytical form of this dependence is discussed. The proposed model, apart from potentially having deep theoretical consequences for the commonly accepted picture of physical reality (from cosmology to matter stability), could also give the theoretical basis for unthinkable applications related, for example, to the field of gravity control and space propulsion.

  12. Electrochemical metallization memories—fundamentals, applications, prospects

    NASA Astrophysics Data System (ADS)

    Valov, Ilia; Waser, Rainer; Jameson, John R.; Kozicki, Michael N.

    2011-06-01

    This review focuses on electrochemical metallization memory cells (ECM), highlighting their advantages as next-generation memories. In a brief introduction, the basic switching mechanism of ECM cells is described and the historical development is sketched. In a second part, the full spectrum of materials and material combinations used for memory device prototypes and for dedicated studies is presented. In a third part, the specific thermodynamics and kinetics of nanosized electrochemical cells are described. The overlapping of the space charge layers is found to be most relevant for the cell properties at rest. The major factors determining the functionality of ECM cells are the electrode reaction and the transport kinetics. Depending on the electrode and/or electrolyte material, electron transfer, electro-crystallization or slow diffusion under strong electric fields can be rate determining. In the fourth part, the major device characteristics of ECM cells are explained. Emphasis is placed on switching speed, forming and SET/RESET voltages, the R_ON to R_OFF ratio, endurance and retention, and scaling potentials. In the last part, circuit design aspects of ECM arrays are discussed, including the pros and cons of active and passive arrays. In the case of passive arrays, the fundamental sneak-path problem is described, as well as a possible solution by two anti-serially (complementary) interconnected resistive switches per cell. Furthermore, the prospects of ECM with regard to further scalability and the ability for multi-bit data storage are addressed.

  13. Molecular structure, spectral constants, and fermi resonances in chlorine nitrate

    NASA Astrophysics Data System (ADS)

    Petkie, Douglas T.; Butler, Rebecca A. H.; Helminger, Paul; De Lucia, Frank C.

    2004-06-01

    Chlorine nitrate has two low-lying vibrational modes that lead to a series of Fermi resonances in the 9^v9 7^v7 family of levels, which includes the 9^2⇔7^1 and 9^3⇔7^1 9^1 dyads and the 9^4⇔9^2 7^1⇔7^2 and 9^5⇔9^3 7^1⇔9^1 7^2 triads. These states, along with the ground and 9^1 vibrational states, have been previously analyzed with millimeter and submillimeter wave spectroscopy and provide a substantial body of data for the investigation of these resonances and their impact on calculated spectroscopic constants and structural parameters. Due to fitting indeterminacies, these previous analyses did not include the main Fermi resonance interaction term. Consequently, the fitted rotational constants are linear combinations of the unmixed rotational constants of the basis vibrational states. In this paper, we have calculated the contributions of the Fermi resonances to the observed rotational constants in a model that determines the vibrational-rotational constants, the Fermi term and the mixing between interacting vibrational states, the cubic potential constant (φ_997) that connects interacting levels through a Fermi resonance, and the inertial defects. These results agree with predictions from ab initio and harmonic force field calculations and provide further experimental information for the determination of the fundamental molecular properties of chlorine nitrate.
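
    The statement that the fitted rotational constants are linear combinations of the unmixed constants can be made concrete with a two-level (dyad) example: diagonalizing a 2x2 matrix with a Fermi coupling W mixes the basis states, and the observed constants are weighted by the squared mixing coefficients. The energies, coupling and rotational constants below are illustrative, not the fitted ClONO2 values.

    ```python
    import numpy as np

    # Illustrative dyad: unperturbed vibrational energies (cm^-1), Fermi coupling W,
    # and unmixed rotational constants B for the two interacting basis states.
    E1, E2 = 560.0, 540.0        # hypothetical band origins
    W      = 15.0                # hypothetical Fermi interaction term
    B1, B2 = 0.1200, 0.1230      # hypothetical unmixed rotational constants

    H = np.array([[E1, W],
                  [W, E2]])
    evals, evecs = np.linalg.eigh(H)

    for k in range(2):
        c1, c2 = evecs[:, k]               # mixing coefficients of the k-th eigenstate
        B_obs = c1**2 * B1 + c2**2 * B2    # observed (mixed) rotational constant
        print(f"state {k}: E = {evals[k]:7.2f} cm-1, "
              f"mixing = {c1**2:.2f}/{c2**2:.2f}, B_obs = {B_obs:.5f} cm-1")
    ```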

  14. Low uncertainty Boltzmann constant determinations and the kelvin redefinition.

    PubMed

    Fischer, J

    2016-03-28

    At its 25th meeting, the General Conference on Weights and Measures (CGPM) approved Resolution 1 'On the future revision of the International System of Units, the SI', which sets the path towards redefinition of four base units at the next CGPM in 2018. This constitutes a decisive advance towards the formal adoption of the new SI and its implementation. Kilogram, ampere, kelvin and mole will be defined in terms of fixed numerical values of the Planck constant, elementary charge, Boltzmann constant and Avogadro constant, respectively. The effect of the new definition of the kelvin referenced to the value of the Boltzmann constant k is that the kelvin is equal to the change of thermodynamic temperature T that results in a change of thermal energy kT by 1.380 65×10^-23 J. A value of the Boltzmann constant suitable for defining the kelvin is determined by fundamentally different primary thermometers such as acoustic gas thermometers, dielectric constant gas thermometers, noise thermometers and the Doppler broadening technique. Progress to date of the measurements and further perspectives are reported. Necessary conditions to be met before proceeding with changing the definition are given. The consequences of the new definition of the kelvin on temperature measurement are briefly outlined. PMID:26903108
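
    The operational content of the new definition can be shown in two lines: once k is fixed, a temperature is just a thermal energy divided by k. The sketch below uses the exact value fixed in the 2019 SI revision, slightly refined relative to the figure quoted in the abstract.

    ```python
    k_B = 1.380649e-23          # J/K, exact by definition since the 2019 SI revision

    def temperature_from_thermal_energy(energy_joules):
        """T such that k*T equals the given thermal energy."""
        return energy_joules / k_B

    # Example: the thermal energy kT at the triple point of water (273.16 K)
    E = k_B * 273.16
    print(f"kT at 273.16 K = {E:.4e} J")
    print(f"back-converted temperature = {temperature_from_thermal_energy(E):.2f} K")
    ```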

  15. How does Planck’s constant influence the macroscopic world?

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2016-09-01

    In physics, Planck’s constant is a fundamental physical constant accounting for the energy-quantization phenomenon in the microscopic world. The value of Planck’s constant also determines at which length scale quantum phenomena become conspicuous. Some students think that if Planck’s constant were to have a larger value than it has now, the quantum effect would only become observable in a world of larger size, whereas the macroscopic world might remain almost unchanged. After reasoning from some basic physical principles and theories, we found that doubling Planck’s constant might result in a radical change in the geometric sizes and apparent colors of macroscopic objects, the solar spectrum and luminosity, the climate and gravity on Earth, as well as energy conversion between light and materials such as the efficiency of solar cells and light-emitting diodes. From the discussions in this paper, students can appreciate how Planck’s constant affects various aspects of the world in which we are living now.
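
    One of the scalings behind the abstract's conclusion can be checked directly: the Bohr radius goes as h^2, so doubling Planck's constant (with the other constants held fixed, an assumption of this thought experiment) would quadruple atomic sizes, while the Rydberg energy, proportional to 1/h^2, would drop by a factor of four.

    ```python
    import math

    # CODATA-rounded constants (SI)
    h    = 6.62607015e-34      # Planck constant, J s
    m_e  = 9.1093837015e-31    # electron mass, kg
    e    = 1.602176634e-19     # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

    def bohr_radius(h_val):
        hbar = h_val / (2 * math.pi)
        return 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)

    def rydberg_energy(h_val):
        hbar = h_val / (2 * math.pi)
        return m_e * e**4 / (2 * (4 * math.pi * eps0)**2 * hbar**2)

    a0_now, a0_doubled = bohr_radius(h), bohr_radius(2 * h)
    Ry_now, Ry_doubled = rydberg_energy(h), rydberg_energy(2 * h)
    print(f"Bohr radius: {a0_now:.3e} m -> {a0_doubled:.3e} m (x{a0_doubled/a0_now:.0f})")
    print(f"Rydberg energy: {Ry_now/e:.2f} eV -> {Ry_doubled/e:.2f} eV (x{Ry_doubled/Ry_now:.2f})")
    ```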

  16. Quaternions as astrometric plate constants

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.

    1987-01-01

    A new method for solving problems in relative astrometry is proposed. In it, the relationship between the measured quantities and the components of the position vector of a star is modeled using quaternions, in effect replacing the plate constants of a standard four-plate-constant solution with the four components of a quaternion. The method allows a direct solution for the position vectors of the stars, and hence for the equatorial coordinates. Distortions, magnitude, and color effects are readily incorporated into the formalism, and the method is directly applicable to overlapping-plate problems. The advantages of the method include the simplicity of the resulting equations, their freedom from singularities, and the fact that trigonometric functions and tangential point transformations are not needed to model the plate material. A global solution over the entire sky is possible.
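
    The core idea, using the four components of a quaternion in place of the plate constants to rotate measured directions into catalog directions, can be sketched with a minimal rotation routine. The quaternion and test vector below are arbitrary illustrations, not Jefferys' actual solution procedure.

    ```python
    import numpy as np

    def quat_multiply(q, r):
        """Hamilton product of quaternions q = (w, x, y, z)."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(vec, q):
        """Rotate a 3-vector by the unit quaternion q via q v q*."""
        q = q / np.linalg.norm(q)           # a real fit would constrain or normalize this
        v = np.concatenate(([0.0], vec))
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return quat_multiply(quat_multiply(q, v), q_conj)[1:]

    # Arbitrary small rotation standing in for a plate's orientation
    q_plate = np.array([0.9999, 0.004, -0.003, 0.008])
    star_direction = np.array([0.3, 0.4, np.sqrt(1 - 0.25)])  # unit position vector
    print(rotate(star_direction, q_plate))
    ```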

  17. Confinement from constant field condensates

    NASA Astrophysics Data System (ADS)

    Gaete, Patricio; Guendelman, Eduardo; Spallucci, Euro

    2007-01-01

    For (2 + 1)- and (3 + 1)-dimensional reformulated SU(2) Yang-Mills theory, we compute the interaction potential within the framework of the gauge-invariant but path-dependent variables formalism. This reformulation is due to the presence of a constant gauge field condensate. Our results show that the interaction energy contains a linear term leading to the confinement of static probe charges. This result is equivalent to that of the massive Schwinger model.

  18. Why Do We Need to Study the Fundamentals of Care?

    PubMed

    Kitson, Alison

    2016-01-01

    This paper makes the case for revisiting our understanding and valuing of basic or fundamental nursing care. Despite the interest in movements such as the person-centred or patient-centred care agenda, there continues to be concern about patient safety, quality of experience and getting the simple things right. Part of this debate is around whether meeting patients' fundamental care needs (such as personal hygiene, elimination and eating and drinking) within acute care settings constitutes legitimate nursing responsibilities or whether these needs ought to become part of "hotel services" executed by care assistants with elementary training or, as in many lower-income health systems, undertaken by relatives. PMID:27309637

  19. Fundamental Limits to Cellular Sensing

    NASA Astrophysics Data System (ADS)

    ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew

    2016-03-01

    In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
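
    The "fundamental limit" referred to above is often summarized by a Berg-Purcell-type scaling, in which the relative concentration error set by diffusive ligand arrival falls with the diffusion constant D, the sensor size a, the concentration c and the integration time T. The sketch below only evaluates that scaling, up to an order-unity prefactor, for illustrative numbers; it is not the full receptor-plus-network analysis of the review.

    ```python
    import math

    def berg_purcell_error(D, a, c, T):
        """Relative concentration error from diffusive counting,
        (delta c / c) ~ 1 / sqrt(pi * D * a * c * T), up to an O(1) prefactor."""
        return 1.0 / math.sqrt(math.pi * D * a * c * T)

    # Illustrative numbers for a small ligand sensed by a ~1 micron cell:
    D = 100e-12                  # diffusion constant, m^2/s (100 um^2/s)
    a = 1e-6                     # linear sensor size, m
    c = 1e-9 * 6.022e23 * 1e3    # 1 nM converted to molecules per m^3
    for T in (0.1, 1.0, 10.0):   # integration time, s
        print(f"T = {T:5.1f} s -> delta c / c ~ {berg_purcell_error(D, a, c, T):.3f}")
    ```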

  20. Solid Lubrication Fundamentals and Applications

    NASA Technical Reports Server (NTRS)

    Miyoshi, Kazuhisa

    2001-01-01

    Solid Lubrication Fundamentals and Applications provides a description of the adhesion, friction, abrasion, and wear behavior of solid film lubricants and related tribological materials, including diamond and diamond-like solid films. The book details the properties of solid surfaces, clean surfaces, and contaminated surfaces, as well as discussing the structures and mechanical properties of natural and synthetic diamonds; chemical-vapor-deposited diamond film; and surface design and engineering toward wear-resistant, self-lubricating diamond films and coatings. The author provides selection and design criteria as well as applications for synthetic and natural coatings in the commercial, industrial and aerospace industries.

  1. Reconstruction of fundamental SUSY parameters

    SciTech Connect

    P. M. Zerwas et al.

    2003-09-25

    We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e{sup +}e{sup -} linear colliders in the TeV energy range, combined with results from LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.

  2. Are the Truly Constant Constants of Nature? How is the Real Material Space and its Structure?

    SciTech Connect

    Luz Montero Garcia, Jose de la; Novoa Blanco, Jesus Francisco

    2007-04-28

    In a concise and simplified way, this paper presents some matters of the authors' theories (the Unified Theory of the Physical and Mathematical Universal Constants and the Quantum Cellular Structural Geometry), which form a single theoretical main body, MN2. The objective of this investigation is the search for the last cells that underlie the existence, unicity and harmony of matter, as well as its structural-formal and dynamic-functional diversity. The quantitative hypothesis is demonstrated that 'World is one, is one; but it is one Arithmetic-Geometric-Topological-Dimensional and Structural-Cellular-Dynamic one, simultaneously'. Within the Frontiers of Fundamental Physics, such last cells are the cells of the Real Material Space itself, from whose whole, interactive and staggered accretion everything that exists at all hierarchic levels arises; these are cells below which it makes no sense to speak of structure and, therefore, of existence. The cells of the Real Material Space are its 'Atoms'. Also discussed is a Law of Planetary Systems, or '4th Kepler's Law'.

  3. Reflectance Spectra and Optical Constants of Iron Sulfates For Mars

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Noe Dobrea, E. Z.; Jamieson, C. S.; Dalton, J. B.; Abbey, W. J.

    2012-12-01

    In this work, we present visible and near-infrared (VNIR, λ=0.35 - 5 μm) laboratory reflectance spectra obtained at Mars-relevant temperatures and corresponding optical constants (real and imaginary refractive indices) for iron sulfates that have been observed on Mars, e.g., via the Mars Reconnaissance Orbiter CRISM and Mars Express OMEGA spectrometers. Fe-sulfates have also been found by the MER rovers in a variety of forms in Meridiani Planum and Gusev Crater, suggesting acidic aqueous, evaporation, and desiccation processes were at work in these locations. We focus first on the Fe-sulfates szomolnokite and natural samples of jarosite, which have been found as distinct layers within polyhydrated non-Fe sulfate material at Columbus Crater on Mars and as outcrops at Mawrth Vallis. We also present data on five of the following Fe-sulfates in our library: butlerite, copiapite, coquimbite, ferricopiapite, melanterite, parabutlerite, rozenite, and rhomboclase. Determining the exact type of Mars sulfates (Fe- vs. Mg-rich) may lead to more information on the epoch of formation or humidity conditions on Mars during their formation. Therefore, these data will help to fully distinguish between and constrain the abundance and distribution of sulfates on the martian surface, which will lead to improvements in understanding the pressure, temperature, and humidity conditions and how active frost, groundwater, and atmospheric processes once were on Mars. This work was supported by NASA's Mars Fundamental Research Program (NNX10AP78G: PI Pitman) and partly performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.

  4. Constants of motion of the four-particle Calogero model

    SciTech Connect

    Saghatelian, A.

    2012-10-15

    We present the explicit expressions of the complete set of constants of motion of the four-particle Calogero model with excluded center of mass, i.e. of the A_3 rational Calogero model. Then we find the constants of motion of its spherical part, defining a two-dimensional 12-center spherical oscillator with the force centers located at the vertices of a cuboctahedron.

  5. Highly precise clocks to test fundamental physics

    NASA Astrophysics Data System (ADS)

    Bize, S.; Wolf, P.

    2012-12-01

    Highly precise atomic clocks and precision oscillators are excellent tools to test founding principles, such as the Equivalence Principle, which are the basis of modern physics. A large variety of tests are possible, including tests of Local Lorentz Invariance, of Local Position Invariance like, for example, tests of the variability of natural constants with time and with gravitation potential, tests of isotropy of space, etc. Over several decades, SYRTE has developed an ensemble of highly accurate atomic clocks and oscillators using a large diversity of atomic species and methods. The SYRTE clock ensemble comprises hydrogen masers, Cs and Rb atomic fountain clocks, Sr and Hg optical lattice clocks, as well as ultra stable oscillators both in the microwave domain (cryogenic sapphire oscillator) and in the optical domain (Fabry-Perot cavity stabilized ultra stable lasers) and means to compare these clocks locally or remotely (fiber links in the RF and the optical domain, femtosecond optical frequency combs, satellite time and frequency transfer methods). In this paper, we list the fundamental physics tests that have been performed over the years with the SYRTE clock ensemble. Several of these tests are done thanks to the collaboration with partner institutes including the University of Western Australia, the Max Planck Institut für Quantenoptik in Germany, and others.

  6. Dielectric constant of liquid alkanes and hydrocarbon mixtures

    NASA Technical Reports Server (NTRS)

    Sen, A. D.; Anicich, V. G.; Arakelian, T.

    1992-01-01

    The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mosotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.
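
    The Clausius-Mossotti treatment mentioned for the pentane-octane data amounts to a volume-fraction-weighted mixing of the (ε-1)/(ε+2) factors. The sketch below applies that generic rule with illustrative pure-component permittivities; it is not a fit to the measurements reported in the record.

    ```python
    def clausius_mossotti_mixture(eps_components, volume_fractions):
        """Estimate the static dielectric constant of a nonpolar liquid mixture by
        volume-fraction mixing of the Clausius-Mossotti factors (eps-1)/(eps+2)."""
        s = sum(f * (eps - 1.0) / (eps + 2.0)
                for eps, f in zip(eps_components, volume_fractions))
        return (1.0 + 2.0 * s) / (1.0 - s)

    # Illustrative room-temperature values for n-pentane and n-octane:
    eps_pentane, eps_octane = 1.84, 1.95
    for f_pentane in (0.0, 0.25, 0.5, 0.75, 1.0):
        eps_mix = clausius_mossotti_mixture(
            (eps_pentane, eps_octane), (f_pentane, 1.0 - f_pentane))
        print(f"pentane fraction {f_pentane:.2f}: eps_mix ~ {eps_mix:.3f}")
    ```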

  7. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P_1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  8. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P_1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
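
    The chain from a measured Doppler width to a Boltzmann-constant value in the two records above can be sketched in a few lines: for a thermal vapour the Gaussian (Doppler) 1/e half-width obeys Δν_D/ν_0 = sqrt(2 k T / (m c^2)), so k follows from the fitted width, the known temperature and the Cs mass. The numbers below are illustrative, not the experiment's fitted values.

    ```python
    import math

    def boltzmann_from_doppler(nu0_hz, doppler_halfwidth_hz, mass_kg, temperature_k):
        """Invert the Doppler (1/e half-) width relation
        delta_nu = (nu0/c) * sqrt(2 k T / m) for k."""
        c = 2.99792458e8
        return mass_kg * c**2 * (doppler_halfwidth_hz / nu0_hz)**2 / (2 * temperature_k)

    # Illustrative Cs D1-line numbers (not the paper's fitted values):
    nu0  = 3.351e14                    # Cs D1 transition frequency, Hz (~894.6 nm)
    m_cs = 132.905 * 1.66053907e-27    # Cs atomic mass, kg
    T    = 296.0                       # vapour-cell temperature, K

    # Forward-compute the width a thermal vapour would show, then invert it:
    k_ref = 1.380649e-23
    dnu = (nu0 / 2.99792458e8) * math.sqrt(2 * k_ref * T / m_cs)
    print(f"Doppler 1/e half-width ~ {dnu/1e6:.1f} MHz")
    print(f"recovered k = {boltzmann_from_doppler(nu0, dnu, m_cs, T):.6e} J/K")
    ```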

  9. An Alcohol Test for Drifting Constants

    NASA Astrophysics Data System (ADS)

    Jansen, P.; Bagdonaite, J.; Ubachs, W.; Bethlem, H. L.; Kleiner, I.; Xu, L.-H.

    2013-06-01

    The Standard Model of physics is built on the fundamental constants of nature, however without providing an explanation for their values, nor requiring their constancy over space and time. Molecular spectroscopy can address this issue. Recently, we found that microwave transitions in methanol are extremely sensitive to a variation of the proton-to-electron mass ratio μ, due to a fortuitous interplay between classically forbidden internal rotation and rotation of the molecule as a whole. In this talk, we will explain the origin of this effect and how the sensitivity coefficients in methanol are calculated. In addition, we set a limit on a possible cosmological variation of μ by comparing transitions in methanol observed in the early Universe with those measured in the laboratory. Based on radio-astronomical observations of PKS1830-211, we deduce a constraint of Δμ/μ = (0.0 ± 1.0) × 10^-7 at redshift z = 0.89, corresponding to a look-back time of 7 billion years. While this limit is more constraining and systematically more robust than previous ones, the methanol method opens a new search territory for probing μ-variation on cosmological timescales. P. Jansen, L.-H. Xu, I. Kleiner, W. Ubachs, and H. L. Bethlem, Phys. Rev. Lett. 106, 100801 (2011). J. Bagdonaite, P. Jansen, C. Henkel, H. L. Bethlem, K. M. Menten, and W. Ubachs, Science 339, 46 (2013).
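
    The way a sensitivity coefficient turns a tiny apparent velocity offset between two absorption lines into a bound on Δμ/μ can be shown with a two-line estimate. The relation ΔV/c = (K_2 - K_1) Δμ/μ and the illustrative K values and velocity offset below follow the general methanol method but are not the actual PKS1830-211 fit.

    ```python
    c_kms = 299792.458   # speed of light, km/s

    def delta_mu_over_mu(velocity_offset_kms, K1, K2):
        """Apparent velocity offset between two lines with different sensitivity
        coefficients K_i (where dnu/nu = K_i * dmu/mu) gives
        dV/c = (K2 - K1) * dmu/mu."""
        return (velocity_offset_kms / c_kms) / (K2 - K1)

    # Illustrative values: one nearly 'normal' rotational-type line and one highly
    # sensitive torsion-rotation methanol line, with a null measured offset.
    K1, K2 = -1.0, -33.0
    dV, dV_err = 0.0, 0.3          # km/s, hypothetical measured offset and its error
    print(f"dmu/mu = {delta_mu_over_mu(dV, K1, K2):.1e} "
          f"+/- {abs(delta_mu_over_mu(dV_err, K1, K2)):.1e}")
    ```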

  10. Two Fundamental Principles of Nature's Interactions

    NASA Astrophysics Data System (ADS)

    Ma, Tian; Wang, Shouhong

    2014-03-01

    In this talk, we present two fundamental principles of nature's interactions, the principle of interaction dynamics (PID) and the principle of representation invariance (PRI). Intuitively, PID takes the variation of the action functional under energy-momentum conservation constraint. PID offers a completely different and natural way of introducing Higgs fields. PRI requires that physical laws be independent of representations of the gauge groups. These two principles give rise to a unified field model for four interactions, which can be naturally decoupled to study individual interactions. With these two principles, we are able to derive 1) a unified theory for dark matter and dark energy, 2) layered strong and weak interaction potentials, and 3) the energy levels of subatomic particles. Supported in part by NSF, ONR and Chinese NSF.

  11. Henry's law constants of polyols

    NASA Astrophysics Data System (ADS)

    Compernolle, S.; Müller, J.-F.

    2014-05-01

    Henry's law constants (HLC) are derived for several polyols bearing between 2 and 6 hydroxyl groups, based on literature data for water activity, vapour pressure and/or solubility. Depending on the case, infinite dilution activity coefficients (IDACs), solid state pressures or activity coefficient ratios are obtained as intermediary results. For most compounds, these are the first values reported, while others compare favourably with literature data in most cases. Using these values and those from a previous work (Compernolle and Müller, 2014), an assessment is made on the partitioning of polyols, diacids and hydroxy acids to droplet and aqueous aerosol.
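
    The route from an infinite-dilution activity coefficient and a pure-compound vapour pressure to a Henry's law constant, which the abstract describes, can be written in a couple of lines: in the dilute limit p = x γ∞ p_sat, so on a concentration basis H ≈ 55.5/(γ∞ p_sat) mol L^-1 atm^-1. The inputs below are placeholders, not values from the paper.

    ```python
    MOLARITY_OF_WATER = 55.5   # mol of water per litre of dilute aqueous solution

    def henry_constant_cp(gamma_inf, p_sat_atm):
        """Concentration/pressure Henry's law constant (mol L^-1 atm^-1) from the
        infinite-dilution activity coefficient and the pure-compound vapour
        pressure, via the dilute-limit modified Raoult's law p = x * gamma * p_sat."""
        return MOLARITY_OF_WATER / (gamma_inf * p_sat_atm)

    # Placeholder inputs for a generic polyol-like solute:
    gamma_inf = 0.6        # hypothetical infinite-dilution activity coefficient
    p_sat_atm = 1.0e-7     # hypothetical pure-compound vapour pressure, atm
    H = henry_constant_cp(gamma_inf, p_sat_atm)
    print(f"H ~ {H:.2e} mol L^-1 atm^-1")   # very large H -> strong partitioning to water
    ```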

  12. Markov constant and quantum instabilities

    NASA Astrophysics Data System (ADS)

    Pelantová, Edita; Starosta, Štěpán; Znojil, Miloslav

    2016-04-01

    For a qualitative analysis of spectra of certain two-dimensional rectangular-well quantum systems several rigorous methods of number theory are shown productive and useful. These methods (and, in particular, a generalization of the concept of Markov constant known in Diophantine approximation theory) are shown to provide a new mathematical insight in the phenomenologically relevant occurrence of anomalies in the spectra. Our results may inspire methodical innovations ranging from the description of the stability properties of metamaterials and of certain hiddenly unitary quantum evolution models up to the clarification of the mechanisms of occurrence of ghosts in quantum cosmology.

  13. The constant-V vortex

    NASA Astrophysics Data System (ADS)

    Faller, Alan J.

    2001-05-01

    It has been found that the generation of swirl by a continuous rotary oscillation of a right-circular cylinder partially filled with water can leave a vortex with a radially constant tangential velocity, V, i.e. ∂V/∂r = 0, excepting a small central core and the sidewall boundary layer. This vortex maintains ∂V/∂r = 0 during viscous decay by the turbulent bottom boundary layer, a fact that suggests that ∂V/∂r = 0 is a stable condition for a decaying vortex.

  14. Fundamental base closure environmental principles

    SciTech Connect

    Yim, R.A.

    1994-12-31

    Military base closures present a paradox. The rate, scale and timing of military base closures are historically unique. However, each base itself typically does not present unique problems. Thus, the challenge is to design innovative solutions to base redevelopment and remediation issues, while simultaneously adopting common, streamlined or pre-approved strategies to shared problems. The author presents six environmental principles that are fundamental to base closure. They are: remediation, not cleanup; remediation will impact reuse; reuse will impact remediation; remediation and reuse must be coordinated; environmental contamination must be evaluated as any other initial physical constraint on development, not as an overlay after plans are created; and remediation will impact development, financing and marketability.

  15. Fundamentals of air quality systems

    SciTech Connect

    Noll, K.E.

    1999-08-01

    The book uses numerous examples to demonstrate how basic design concepts can be applied to the control of air emissions from industrial sources. It focuses on the design of air pollution control devices for the removal of gases and particles from industrial sources, and provides detailed, specific design methods for each major air pollution control system. Individual chapters provide design methods that include both theory and practice with emphasis on the practical aspect by providing numerous examples that demonstrate how air pollution control devices are designed. Contents include air pollution laws, air pollution control devices; physical properties of air, gas laws, energy concepts, pressure; motion of airborne particles, filter and water drop collection efficiency; fundamentals of particulate emission control; cyclones; fabric filters; wet scrubbers; electrostatic precipitators; control of volatile organic compounds; adsorption; incineration; absorption; control of gaseous emissions from motor vehicles; practice problems (with solutions) for the P.E. examination in environmental engineering. Design applications are featured throughout.

  16. Understand vacuum-system fundamentals

    SciTech Connect

    Martin, G.R. ); Lines, J.R. ); Golden, S.W. )

    1994-10-01

    Crude vacuum unit heavy vacuum gas-oil (HVGO) yield is significantly impacted by ejector-system performance, especially at conditions below 20 mmHg absolute pressure. To reliably meet target yields, a deepcut vacuum unit calls for proper design of all the major pieces of equipment. Ejector-system performance at deepcut vacuum column pressures may be independently or concurrently affected by: atmospheric column overflash, stripper performance or cutpoint; vacuum column top temperature and heat balance; light vacuum gas-oil (LVGO) pumparound entrainment to the ejector system; cooling-water temperature; motive steam pressure; non-condensible loading, either air leakage or cracked light-end hydrocarbons; condensible hydrocarbons; intercondenser or aftercondenser fouling; ejector internal erosion or product build-up; and system vent back pressure. The paper discusses gas-oil yields; ejector-system fundamentals; condensers; vacuum-system troubleshooting; process operations; and a case study of deepcut operations.

  17. Fundamental reaction pathways during coprocessing

    SciTech Connect

    Stock, L.M.; Gatsis, J.G.

    1992-12-01

    The objective of this research was to investigate the fundamental reaction pathways in coal petroleum residuum coprocessing. Once the reaction pathways are defined, further efforts can be directed at improving those aspects of the chemistry of coprocessing that are responsible for the desired results such as high oil yields, low dihydrogen consumption, and mild reaction conditions. We decided to carry out this investigation by looking at four basic aspects of coprocessing: (1) the effect of fossil fuel materials on promoting reactions essential to coprocessing such as hydrogen atom transfer, carbon-carbon bond scission, and hydrodemethylation; (2) the effect of varied mild conditions on the coprocessing reactions; (3) determination of dihydrogen uptake and utilization under severe conditions as a function of the coal or petroleum residuum employed; and (4) the effect of varied dihydrogen pressure, temperature, and residence time on the uptake and utilization of dihydrogen and on the distribution of the coprocessed products. Accomplishments are described.

  18. [INFORMATION, A FUNDAMENTAL PATIENT RIGHT?].

    PubMed

    Mémeteau, Gérard

    2015-03-01

    Although expressed before the "Lambert" case, which has led us to reflect on refusal and consent in the context of domestic and convention law--and at the patient's bedside!--these brief remarks present the patient's right to medical information as a so-called fundamental right. Yet it can only be understood in view of a treatment or other medical act; otherwise it has no reason to exist and remains a purely academic exercise, however stimulating, of little use on its own. What if we reversed the terms of the problem: the right of the doctor to information? (See the fine thesis of Ph. Gaston, Paris 8, 2 December 2014.) PMID:26606765

  19. Fundamental Travel Demand Model Example

    NASA Technical Reports Server (NTRS)

    Hanssen, Joel

    2010-01-01

    Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.
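
    As a concrete illustration of the kind of calculation such a model performs once the input data are assembled, the sketch below distributes zone-level trip productions with a simple gravity model. The zones, productions, attractions, costs and the impedance parameter are invented for the example and are not taken from the paper.

        import numpy as np

        # Illustrative gravity-model trip distribution for a 3-zone example.
        productions = np.array([500.0, 300.0, 200.0])   # trips produced per zone
        attractions = np.array([400.0, 400.0, 200.0])   # trips attracted per zone
        cost = np.array([[1.0, 3.0, 5.0],
                         [3.0, 1.0, 4.0],
                         [5.0, 4.0, 1.0]])              # travel cost between zones
        beta = 0.5                                      # assumed impedance parameter

        friction = np.exp(-beta * cost)
        weights = attractions * friction                # A_j * f(c_ij)
        trips = productions[:, None] * weights / weights.sum(axis=1, keepdims=True)
        print(np.round(trips, 1))                       # origin-destination trip table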

  20. Holographic viscosity of fundamental matter.

    PubMed

    Mateos, David; Myers, Robert C; Thomson, Rowan M

    2007-03-01

    A holographic dual of a finite-temperature SU(Nc) gauge theory with a small number of flavors Nf is considered; the ratio of shear viscosity to entropy density obeys the bound η/s ≥ 1/4π. Given the known results for the entropy density, the contribution of the fundamental matter η_fund is therefore enhanced at strong 't Hooft coupling λ; for example, η_fund ∼ λ Nc Nf T^3 in four dimensions. Other transport coefficients are analogously enhanced. These results hold with or without a baryon number chemical potential. PMID:17358523

  1. Cognition is … Fundamentally Cultural

    PubMed Central

    Bender, Andrea; Beller, Sieghard

    2013-01-01

    A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition—on all of Marr’s levels of description—we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding. PMID:25379225

  2. Fundamental enabling issues in nanotechnology :

    SciTech Connect

    Floro, Jerrold Anthony; Foiles, Stephen Martin; Hearne, Sean Joseph; Hoyt, Jeffrey John; Seel, Steven Craig; Webb, Edmund Blackburn,; Morales, Alfredo Martin; Zimmerman, Jonathan A.

    2007-10-01

    To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in this close a fashion. Experiments predicted an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also supports the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in

  3. Stability constant estimator user`s guide

    SciTech Connect

    Hay, B.P.; Castleton, K.J.; Rustad, J.R.

    1996-12-01

    The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
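
    The sketch below shows the general idea of estimating a missing stability constant from a trend in existing data, here a linear correlation between the log K values of two metal ions across a ligand series. It is not the SCE algorithm itself, and all log K values are invented.

        import numpy as np

        # Known 1:1 stability constants (log K) across a shared ligand series.
        logK_metal_A = np.array([2.1, 3.4, 4.0, 5.2, 6.1])   # reference metal
        logK_metal_B = np.array([2.8, 4.1, 4.9, 6.3, 7.4])   # target metal

        slope, intercept = np.polyfit(logK_metal_A, logK_metal_B, 1)

        # Estimate log K of metal B with a new ligand whose metal-A constant is known
        logK_A_new = 4.6
        logK_B_estimate = slope * logK_A_new + intercept
        print(f"estimated log K = {logK_B_estimate:.2f}")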

  4. Lighting fundamentals handbook: Lighting fundamentals and principles for utility personnel

    SciTech Connect

    Eley, C.; Tolen, T. Associates, San Francisco, CA ); Benya, J.R. )

    1992-12-01

    Lighting accounts for approximately 30% of overall electricity use and demand in commercial buildings. This handbook for utility personnel provides a source of basic information on lighting principles, lighting equipment, and other considerations related to lighting design. The handbook is divided into three parts. Part One, Physics of Light, has chapters on light, vision, optics, and photometry. Part Two, Lighting Equipment and Technology, focuses on lamps, luminaires, and lighting controls. Part Three, Lighting Design Decisions, deals with the manner in which lighting design decisions are made and reviews relevant methods and issues. These include the quantity and quality of light needed for visual tasks, calculation methods for verifying that lighting needs are satisfied, lighting economics and methods for evaluating investments in efficient lighting systems, and miscellaneous design issues including energy codes, power quality, photobiology, and disposal of lighting equipment. The handbook contains a discussion of the role of the utility in promoting the use of energy-efficient lighting. The handbook also includes a lighting glossary and a list of references for additional information. This convenient and comprehensive handbook is designed to enable utility lighting personnel to assist their customers in developing high-quality, energy-efficient lighting systems. The handbook is not intended to be an up-to-date reference on lighting products and equipment.

  5. Fundamental plant biology enabled by the space shuttle.

    PubMed

    Paul, Anna-Lisa; Wheeler, Ray M; Levine, Howard G; Ferl, Robert J

    2013-01-01

    The relationship between fundamental plant biology and space biology was especially synergistic in the era of the Space Shuttle. While all terrestrial organisms are influenced by gravity, the impact of gravity as a tropic stimulus in plants has been a topic of formal study for more than a century. And while plants were parts of early space biology payloads, it was not until the advent of the Space Shuttle that the science of plant space biology enjoyed expansion that truly enabled controlled, fundamental experiments that removed gravity from the equation. The Space Shuttle presented a science platform that provided regular science flights with dedicated plant growth hardware and crew trained in inflight plant manipulations. Part of the impetus for plant biology experiments in space was the realization that plants could be important parts of bioregenerative life support on long missions, recycling water, air, and nutrients for the human crew. However, a large part of the impetus was that the Space Shuttle enabled fundamental plant science essentially in a microgravity environment. Experiments during the Space Shuttle era produced key science insights on biological adaptation to spaceflight and especially plant growth and tropisms. In this review, we present an overview of plant science in the Space Shuttle era with an emphasis on experiments dealing with fundamental plant growth in microgravity. This review discusses general conclusions from the study of plant spaceflight biology enabled by the Space Shuttle by providing historical context and reviews of select experiments that exemplify plant space biology science. PMID:23281389

  6. Holographic dark energy with cosmological constant

    NASA Astrophysics Data System (ADS)

    Hu, Yazhou; Li, Miao; Li, Nan; Zhang, Zhenhui

    2015-08-01

    Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ωhde are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find that the model yields χ²min = 426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordance ΛCDM model (431.35). At 68.3% CL, we obtain -0.07<ΩΛ0<0.68 and correspondingly 0.04<Ωhde0<0.79, implying that at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.

  7. BOOK REVIEWS: Quantum Mechanics: Fundamentals

    NASA Astrophysics Data System (ADS)

    Whitaker, A.

    2004-02-01

    This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a

  8. Gravitational collapse and the cosmological constant

    SciTech Connect

    Deshingkar, S. S.; Jhingan, S.; Chamorro, A.; Joshi, P. S.

    2001-06-15

    We consider here the effects of a nonvanishing cosmological term on the final fate of a spherical inhomogeneous collapsing dust cloud. It is shown that, depending on the nature of the initial data from which the collapse evolves, and for a positive value of the cosmological constant, we can have a globally regular evolution where a bounce develops within the cloud. We characterize precisely the initial data causing such a bounce in terms of the initial density and velocity profiles for the collapsing cloud. Otherwise, the end state of collapse is either a black hole or a naked singularity. We also show here that a positive cosmological term can cover a part of the singularity spectrum which is visible in the corresponding dust collapse models for the same initial data.

  9. Henry's law constants of polyols

    NASA Astrophysics Data System (ADS)

    Compernolle, S.; Müller, J.-F.

    2014-12-01

    Henry's law constants (HLC) are derived for several polyols bearing between 2 and 6 hydroxyl groups, based on literature data for water activity, vapour pressure and/or solubility. While deriving HLC and depending on the case, also infinite dilution activity coefficients (IDACs), solid state vapour pressures or activity coefficient ratios are obtained as intermediate results. An error analysis on the intermediate quantities and the obtained HLC is included. For most compounds, these are the first values reported, while others compare favourably with literature data in most cases. Using these values and those from a previous work (Compernolle and Müller, 2014), an assessment is made on the partitioning of polyols, diacids and hydroxy acids to droplet and aqueous aerosol.

  10. Philicities, Fugalities, and Equilibrium Constants.

    PubMed

    Mayr, Herbert; Ofial, Armin R

    2016-05-17

    The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known, and the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is just a prominent example of exceptions to the general rule. In a reaction series, the Leffler-Hammond parameter α = δΔG⧧/δΔG° describes the fraction of the change in the Gibbs energy of reaction which is reflected in the change of the Gibbs energy of activation. It has long been considered a measure of the position of the transition state; thus, an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered to be indicative of a late transition state. Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because complementary kinetic and thermodynamic data have been available for only a few other reaction series. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics.

  11. Constant magnification optical tracking system

    NASA Technical Reports Server (NTRS)

    Frazer, R. E. (Inventor)

    1982-01-01

    A constant magnification optical tracking system for continuously tracking a moving object is described. In the tracking system, a traveling objective lens maintains a fixed relationship with the object to be optically tracked. The objective lens is chosen to provide a collimated light beam oriented in the direction of travel of the moving object. A reflective surface is attached to the traveling objective lens for reflecting an image of the moving object. The object to be tracked is a free-falling object which is located at the focal point of the objective lens for at least a portion of its free-fall path. A motor and control means is provided for maintaining the traveling objective lens in a fixed relationship relative to the free-falling object, thereby keeping the free-falling object at the focal point and centered on the axis of the traveling objective lens throughout its entire free-fall path.

  12. Generalized methods and solvers for noise removal from piecewise constant signals. I. Background theory

    PubMed Central

    Little, Max A.; Jones, Nick S.

    2011-01-01

    Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play. PMID:22003312
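
    As a minimal illustration of one of the solver families listed above, the sketch below applies iterated running medians to a synthetic piecewise constant signal; the window size, iteration count and test signal are arbitrary choices for the example.

        import numpy as np

        def running_median(x, half_width):
            """Median filter with a symmetric window, truncated at the ends."""
            return np.array([np.median(x[max(0, i - half_width): i + half_width + 1])
                             for i in range(len(x))])

        def iterated_running_median(x, half_width=5, n_iter=10):
            y = x.copy()
            for _ in range(n_iter):          # iterate toward a (near) fixed point
                y = running_median(y, half_width)
            return y

        rng = np.random.default_rng(0)
        clean = np.repeat([0.0, 2.0, 1.0], 100)          # a PWC test signal
        noisy = clean + 0.3 * rng.standard_normal(clean.size)
        denoised = iterated_running_median(noisy)
        print(np.abs(denoised - clean).mean())           # well below the noise level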

  13. Fundamental principles of robot vision

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    1993-08-01

    Robot vision is a specialty of intelligent machines which describes the interaction between robotic manipulators and machine vision. Early robot vision systems were built to demonstrate that a robot with vision could adapt to changes in its environment. More recently attention is being directed toward machines with expanded adaptation and learning capabilities. The use of robot vision for automatic inspection and recognition of objects for manipulation by an industrial robot or for guidance of a mobile robot are two primary applications. Adaptation and learning characteristics are often lacking in industrial automation and if they can be added successfully, result in a more robust system. Due to a real time requirement, the robot vision methods that have proven most successful have been ones which could be reduced to a simple, fast computation. The purpose of this paper is to discuss some of the fundamental concepts in sufficient detail to provide a starting point for the interested engineer or scientist. A detailed example of a camera system viewing an object and for a simple, two dimensional robot vision system is presented. Finally, conclusions and recommendations for further study are presented.

  14. Gas cell neutralizers (Fundamental principles)

    SciTech Connect

    Fuehrer, B.

    1985-06-01

    Neutralizing an ion-beam of the size and energy levels involved in the neutral-particle-beam program represents a considerable extension of the state-of-the-art of neutralizer technology. Many different media (e.g., solid, liquid, gas, plasma, photons) can be used to strip the hydrogen ion of its extra electron. A large, multidisciplinary R and D effort will no doubt be required to sort out all of the ''pros and cons'' of these various techniques. The purpose of this particular presentation is to discuss some basic configurations and fundamental principles of the gas type of neutralizer cell. Particular emphasis is placed on the ''Gasdynamic Free-Jet'' neutralizer since this configuration has the potential of being much shorter than other types of gas cells (in the beam direction) and it could operate in a nearly continuous mode (CW) if necessary. These were important considerations in the ATSU design which is discussed in some detail in the second presentation entitled ''ATSU Point Design''.

  15. Fundamentals of the DIGES code

    SciTech Connect

    Simos, N.; Philippacopoulos, A.J.

    1994-08-01

    Recently the authors have completed the development of the DIGES code (Direct GEneration of Spectra) for the US Nuclear Regulatory Commission. This paper presents the fundamental theoretical aspects of the code. The basic modeling involves a representation of typical building-foundation configurations as multi-degree-of-freedom dynamic systems which are subjected to dynamic inputs in the form of applied forces or pressure at the superstructure or in the form of ground motions. Both the deterministic as well as the probabilistic aspects of DIGES are described. Alternate ways of defining the seismic input for the estimation of in-structure spectra, and their consequences in terms of realistically appraising the variability of the structural response, are discussed in detail. These include definitions of the seismic input by ground acceleration time histories, ground response spectra, Fourier amplitude spectra or power spectral densities. Conversions of one of these forms to another, due to requirements imposed by certain analysis techniques, have been shown to lead, in certain cases, to controversial results. Further considerations include the definition of the seismic input as the excitation which is directly applied at the foundation of a structure or as the ground motion of the site of interest at a given point. In the latter case issues related to the transferring of this motion to the foundation through convolution/deconvolution and generally through kinematic interaction approaches are considered.
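
    To illustrate one of the conversions discussed above, the sketch below computes a pseudo-acceleration response spectrum directly from a ground-acceleration time history, using a textbook Newmark average-acceleration integrator for a damped single-degree-of-freedom oscillator. This is a generic scheme, not the DIGES implementation, and the input accelerogram is synthetic.

        import numpy as np

        def sdof_peak_disp(ag, dt, omega, zeta=0.05, gamma=0.5, beta=0.25):
            """Peak relative displacement of a damped SDOF oscillator (unit mass)."""
            c, k = 2.0 * zeta * omega, omega**2
            k_hat = k + gamma * c / (beta * dt) + 1.0 / (beta * dt**2)
            a_coef = 1.0 / (beta * dt) + gamma * c / beta
            b_coef = 1.0 / (2.0 * beta) + dt * c * (gamma / (2.0 * beta) - 1.0)
            p = -ag                          # effective force per unit mass
            u = v = 0.0
            a = p[0]                         # initial acceleration from equilibrium
            peak = 0.0
            for i in range(len(ag) - 1):
                dp_hat = (p[i + 1] - p[i]) + a_coef * v + b_coef * a
                du = dp_hat / k_hat
                dv = gamma / (beta * dt) * du - gamma / beta * v \
                     + dt * (1.0 - gamma / (2.0 * beta)) * a
                da = du / (beta * dt**2) - v / (beta * dt) - a / (2.0 * beta)
                u, v, a = u + du, v + dv, a + da
                peak = max(peak, abs(u))
            return peak

        # Synthetic ground motion (placeholder for a real accelerogram)
        dt = 0.01
        t = np.arange(0.0, 20.0, dt)
        ag = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
        periods = np.linspace(0.05, 3.0, 30)
        Sa = [(2 * np.pi / T)**2 * sdof_peak_disp(ag, dt, 2 * np.pi / T) for T in periods]
        print(np.round(Sa, 3))               # pseudo-acceleration spectrum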

  16. Fundamental studies of fusion plasmas

    SciTech Connect

    Aamodt, R.E.; Catto, P.J.; D'Ippolito, D.A.; Myra, J.R.; Russell, D.A.

    1992-05-26

    The major portion of this program is devoted to critical ICH phenomena. The topics include edge physics, fast wave propagation, ICH induced high frequency instabilities, and a preliminary antenna design for Ignitor. This research was strongly coordinated with the world's experimental and design teams at JET, Culham, ORNL, and Ignitor. The results have been widely publicized at both general scientific meetings and topical workshops, including the speciality workshop on ICRF design and physics sponsored by Lodestar in April 1992. The combination of theory, empirical modeling, and engineering design in this program makes this research particularly important for the design of future devices and for the understanding and performance projections of present tokamak devices. Additionally, the development of a diagnostic of runaway electrons on TEXT has proven particularly useful for the fundamental understanding of energetic electron confinement. This work has led to a better quantitative basis for quasilinear theory and the role of magnetic vs. electrostatic field fluctuations on electron transport. An APS invited talk was given on this subject and collaboration with PPPL personnel was also initiated. Ongoing research on these topics will continue for the remainder of the contract period and the strong collaborations are expected to continue, enhancing both the relevance of the work and its immediate impact on areas needing critical understanding.

  17. Technological fundamentals of endoscopic haemostasis.

    PubMed

    Reidenbach, H D

    1992-01-01

    In order to perform endoscopic haemostasis there exist several different mechanical, biochemical and thermal methods, which may be applied together with rigid or fully flexible endoscopes in different situations. The technological fundamentals of convective, conductive and radiative heat transfer, the irradiation with coherent electromagnetic waves like microwaves and laser radiation and the resistive heating by RF-current are described. A review of the state of the art of haemostatic coagulation by laser radiation (photocoagulation) and radio-frequency currents (surgical diathermy, high-frequency coagulation) is given. The wavelength-dependent interactions of coherent light waves are compared especially for the three mainly different laser types, i.e., carbon-dioxide-, neodymium-YAG- and argon-ion-laser. The well-known disadvantages of the conventional RF-coagulation are overcome by the so-called electrohydrothermosation (EHT), i.e. the liquid-assisted application of resistive heating of biological tissues to perform haemostasis. Different technological solutions for bipolar RF-coagulation probes including ball-tips and forceps are shown and the first experimental results are discussed in comparison. PMID:1595405

  18. Review of receptor model fundamentals

    NASA Astrophysics Data System (ADS)

    Henry, Ronald C.; Lewis, Charles W.; Hopke, Philip K.; Williamson, Hugh J.

    There are several broad classes of mathematical models used to apportion the aerosol measured at a receptor site to its likely sources. This paper surveys the two types applied in exercises for the Mathematical and Empirical Receptor Models Workshop (Quail Roost II): chemical mass balance models and multivariate models. The fundamental principles of each are reviewed. Also considered are the specific models available within each class. These include: tracer element, linear programming, ordinary linear least-squares, effective variance least-squares and ridge regression (all solutions to the chemical mass balance equation), and factor analysis, target transformation factor analysis, multiple linear regression and extended Q-mode factor analysis (all multivariate models). In practical application of chemical mass balance models, a frequent problem is the presence of two or more emission sources whose signatures are very similar. Several techniques to reduce the effects of such multicollinearity are discussed. The propagation of errors for source contribution estimates, another practical concern, also is given special attention.
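
    As a minimal example of the chemical mass balance class, the sketch below solves c = F s for the source contributions s by ordinary linear least squares; the effective variance and ridge variants mentioned above would additionally weight or regularize this system. The source profiles and ambient sample are invented numbers, purely for illustration.

        import numpy as np

        profiles = np.array([          # rows: chemical species, columns: sources
            [0.30, 0.02],              # species fraction in source A, source B
            [0.05, 0.40],
            [0.10, 0.10],
            [0.01, 0.20],
        ])
        ambient = np.array([3.2, 4.6, 1.6, 2.0])   # measured concentrations (ug/m3)

        # Least-squares estimate of the source contributions (ug/m3)
        contributions, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
        print(np.round(contributions, 2))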

  19. Do goldfish miss the fundamental?

    NASA Astrophysics Data System (ADS)

    Fay, Richard R.

    2003-10-01

    The perception of harmonic complexes was studied in goldfish using classical respiratory conditioning and a stimulus generalization paradigm. Groups of animals were initially conditioned to several harmonic complexes with a fundamental frequency (f0) of 100 Hz. In some cases the f0 component was present, and in other cases, the f0 component was absent. After conditioning, animals were tested for generalization to novel harmonic complexes having different f0's, some with f0 present and some with f0 absent. Generalization gradients always peaked at 100 Hz, indicating that the pitch value of the conditioning complexes was consistent with the f0, whether or not f0 was present in the conditioning or test complexes. Thus, goldfish do not miss the fundamental with respect to a pitch-like perceptual dimension. However, generalization gradients tended to have different skirt slopes for the f0-present and f0-absent conditioning and test stimuli. This suggests that goldfish distinguish between f0-present and f0-absent stimuli, probably on the basis of a timbre-like perceptual dimension. These and other results demonstrate that goldfish respond to complex sounds as if they possessed perceptual dimensions similar to pitch and timbre as defined for human and other vertebrate listeners. [Work supported by NIH/NIDCD.]

  20. Simulating Supercapacitors: Can We Model Electrodes As Constant Charge Surfaces?

    PubMed

    Merlet, Céline; Péan, Clarisse; Rotenberg, Benjamin; Madden, Paul A; Simon, Patrice; Salanne, Mathieu

    2013-01-17

    Supercapacitors based on an ionic liquid electrolyte and graphite or nanoporous carbon electrodes are simulated using molecular dynamics. We compare a simplified electrode model in which a constant, uniform charge is assigned to each carbon atom with a realistic model in which a constant potential is applied between the electrodes (the carbon charges are allowed to fluctuate). We show that the simulations performed with the simplified model do not provide a correct description of the properties of the system. First, the structure of the adsorbed electrolyte is partly modified. Second, dramatic differences are observed for the dynamics of the system during transient regimes. In particular, upon application of a constant applied potential difference, the increase in the temperature, due to the Joule effect, associated with the creation of an electric current across the cell follows Ohm's law, while unphysically high temperatures are rapidly observed when constant charges are assigned to each carbon atom. PMID:26283432

  1. On the fundamental role of dynamics in quantum physics

    NASA Astrophysics Data System (ADS)

    Hofmann, Holger F.

    2016-05-01

    Quantum theory expresses the observable relations between physical properties in terms of probabilities that depend on the specific context described by the "state" of a system. However, the laws of physics that emerge at the macroscopic level are fully deterministic. Here, it is shown that the relation between quantum statistics and deterministic dynamics can be explained in terms of ergodic averages over complex valued probabilities, where the fundamental causality of motion is expressed by an action that appears as the phase of the complex probability multiplied with the fundamental constant ħ. Importantly, classical physics emerges as an approximation of this more fundamental theory of motion, indicating that the assumption of a classical reality described by differential geometry is merely an artefact of an extrapolation from the observation of macroscopic dynamics to a fictitious level of precision that does not exist within our actual experience of the world around us. It is therefore possible to completely replace the classical concepts of trajectories with the more fundamental concept of action phase probabilities as a universally valid description of the deterministic causality of motion that is observed in the physical world.

  2. Searching for space-time variation of the fine structure constant using QSO spectra: overview and future prospects

    NASA Astrophysics Data System (ADS)

    Berengut, J. C.; Dzuba, V. A.; Flambaum, V. V.; King, J. A.; Kozlov, M. G.; Murphy, M. T.; Webb, J. K.

    2010-11-01

    Current theories that seek to unify gravity with the other fundamental interactions suggest that spatial and temporal variation of fundamental constants is a possibility, or even a necessity, in an expanding Universe. Several studies have tried to probe the values of constants at earlier stages in the evolution of the Universe, using tools such as big-bang nucleosynthesis, the Oklo natural nuclear reactor, quasar absorption spectra, and atomic clocks (see, e.g. Flambaum & Berengut (2009)).

  3. Is There a Cosmological Constant?

    NASA Technical Reports Server (NTRS)

    Kochanek, Christopher; Oliversen, Ronald J. (Technical Monitor)

    2002-01-01

    The grant contributed to the publication of 18 refereed papers and 5 conference proceedings. The primary uses of the funding have been for page charges, travel for invited talks related to the grant research, and the support of a graduate student, Charles Keeton. The refereed papers address four of the primary goals of the proposal: (1) the statistics of radio lenses as a probe of the cosmological model (#1), (2) the role of spiral galaxies as lenses (#3), (3) the effects of dust on statistics of lenses (#7, #8), and (4) the role of groups and clusters as lenses (#2, #6, #10, #13, #15, #16). Four papers (#4, #5, #11, #12) address general issues of lens models, calibrations, and the relationship between lens galaxies and nearby galaxies. One considered cosmological effects in lensing X-ray sources (#9), and two addressed issues related to the overall power spectrum and theories of gravity (#17, #18). Our theoretical studies, combined with the explosion in the number of lenses and the quality of the data obtained for them, are greatly increasing our ability to characterize and understand the lens population. We can now firmly conclude, both from our study of the statistics of radio lenses and our survey of extinctions in individual lenses, that the statistics of optically selected quasars were significantly affected by extinction. However, the limits on the cosmological constant remain at lambda < 0.65 at a 2-sigma confidence level, which is in mild conflict with the results of the Type Ia supernova surveys. We continue to find that neither spiral galaxies nor groups and clusters contribute significantly to the production of gravitational lenses. The lack of group and cluster lenses is strong evidence for the role of baryonic cooling in increasing the efficiency of galaxies as lenses compared to groups and clusters of higher mass but lower central density. Unfortunately for the ultimate objective of the proposal, improved constraints on the cosmological constant, the next

  4. The dependency of timbre on fundamental frequency

    NASA Astrophysics Data System (ADS)

    Marozeau, Jeremy; de Cheveigné, Alain; McAdams, Stephen; Winsberg, Suzanne

    2003-11-01

    The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In experiment I subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In experiment II the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0's, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, suggesting further that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently from differences of pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0's.

  5. Fundamental Principles of Proper Space Kinematics

    NASA Astrophysics Data System (ADS)

    Wade, Sean

    It is desirable to understand the movement of both matter and energy in the universe based upon fundamental principles of space and time. Time dilation and length contraction are features of Special Relativity derived from the observed constancy of the speed of light. Quantum Mechanics asserts that motion in the universe is probabilistic and not deterministic. While the practicality of these dissimilar theories is well established through widespread application, inconsistencies in their marriage persist, marring their utility and preventing their full expression. After identifying an error in perspective, the current theories are tested by modifying logical assumptions to eliminate paradoxical contradictions. Analysis of simultaneous frames of reference leads to a new formulation of space and time that predicts the motion of both kinds of particles. Proper Space is a real, three-dimensional space clocked by proper time that is undergoing a densification at the rate of c. Coordinate transformations to a familiar object space and a mathematical stationary space clarify the counterintuitive aspects of Special Relativity. These symmetries demonstrate that within the local universe stationary observers are a forbidden frame of reference; all is in motion. In lieu of Quantum Mechanics and Uncertainty, the use of the imaginary number i is restricted to the labeling of mass as either material or immaterial. This material phase difference accounts for both the perceived constant velocity of light and its apparent statistical nature. The application of Proper Space Kinematics will advance more accurate representations of microscopic, macroscopic, and cosmological processes and serve as a foundation for further study and reflection, thereafter leading to greater insight.

  6. Fundamental structures of dynamic social networks.

    PubMed

    Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune

    2016-09-01

    Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision. PMID:27555584

  7. Fundamental Mechanisms of Interface Roughness

    SciTech Connect

    Randall L. Headrick

    2009-01-06

    Publication quality results were obtained for several experiments and materials systems including: (i) Patterning and smoothening of sapphire surfaces by energetic Ar+ ions. Grazing Incidence Small Angle X-ray Scattering (GISAXS) experiments were performed in the system at the National Synchrotron Light Source (NSLS) X21 beamline. Ar+ ions in the energy range from 300 eV to 1000 eV were used to produce ripples on the surfaces of single-crystal sapphire. It was found that the ripple wavelength varies strongly with the angle of incidence of the ions, increasing significantly as the angle from normal is varied from 55° to 35°. A smooth region was found for ion incidence less than 35° away from normal incidence. In this region a strong smoothening mechanism with strength proportional to the second derivative of the height of the surface was found to be responsible for the effect. The discovery of this phase transition between stable and unstable regimes as the angle of incidence is varied has also stimulated new work by other groups in the field. (ii) Growth of Ge quantum dots on Si(100) and (111). We discovered the formation of quantum wires on 4° misoriented Si(111) using real-time GISAXS during the deposition of Ge. The results represent the first time-resolved GISAXS study of Ge quantum dot formation. (iii) Sputter deposition of amorphous thin films and multilayers composed of WSi2 and Si. Our in-situ GISAXS experiments reveal fundamental roughening and smoothing phenomena on surfaces during film deposition. The main result of this work is that the WSi2 layers actually become smoother during deposition due to the smoothening effect of energetic particles in the sputter deposition process.

  8. Multidimensional Cosmology, Constants and Transition to New SI Units

    NASA Astrophysics Data System (ADS)

    Melnikov, V. N.

    2011-06-01

    Main current problems of physics, gravitation and cosmology in particular are analyzed. Special attention is paid to results of the theory with extra dimensions and variations of fundamental physical constants. As an example the family of spherically symmetric solutions with horizon with multi-component anisotropic fluid is presented. The metrics of solutions are defined on a manifold that contains a product of n - 1 Ricci-flat "internal" spaces. A simulation of black brane solutions is considered. For the solution with the fluid matter the post-Newtonian parameters β and γ corresponding to the 4-dimensional section of the metric are found.

  9. Multidimensional Cosmology, Constants and Transition to New SI Units

    NASA Astrophysics Data System (ADS)

    Melnikov, V. N.

    Main current problems of physics, gravitation and cosmology in particular are analyzed. Special attention is paid to results of the theory with extra dimensions and variations of fundamental physical constants. As an example the family of spherically symmetric solutions with horizon with multi-component anisotropic fluid is presented. The metrics of solutions are defined on a manifold that contains a product of n-1 Ricci-flat "internal" spaces. A simulation of black brane solutions is considered. For the solution with the fluid matter the post-Newtonian parameters β and γ corresponding to the 4-dimensional section of the metric are found.

  10. Ground-state rotational constants of 12CH3D

    NASA Astrophysics Data System (ADS)

    Chackerian, C.; Guelachvili, G.

    1980-12-01

    An analysis of ground-state combination differences in the ν2(A1) fundamental band of 12CH3D (ν0 = 2200.03896 cm-1) has been made to yield values for the rotational constants B0, D0J, D0JK, H0JJJ, H0JJK, H0JKK, L0JJJJ, L0JJJK, and order-of-magnitude values for L0JJKK and L0JKKK. These constants should be useful in assisting radio searches for this molecule in astrophysical sources. In addition, splittings of A1A2 levels (J ≥ 17, K = 3) have been measured in both the ground and excited vibrational states of this band.
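
    For ΔK = 0 transitions the A-dependent terms cancel, and line positions follow from standard symmetric-top term values built from constants of the kind listed above. The sketch below evaluates that textbook expression with placeholder, order-of-magnitude constants rather than the published values.

        def term_value(J, K, B, DJ, DJK, HJJJ=0.0, HJJK=0.0, HJKK=0.0):
            """Ground-state term value (cm^-1), K-dependent A terms omitted."""
            x = J * (J + 1)
            return (B * x - DJ * x**2 - DJK * x * K**2
                    + HJJJ * x**3 + HJJK * x**2 * K**2 + HJKK * x * K**4)

        def r_branch_freq(J, K, **constants):
            """Frequency of the (J+1, K) <- (J, K) rotational transition."""
            return term_value(J + 1, K, **constants) - term_value(J, K, **constants)

        # Placeholder constants (cm^-1), order-of-magnitude only
        print(r_branch_freq(3, 1, B=3.88, DJ=5e-5, DJK=1e-4))   # ~2B(J+1)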

  11. The model of the variable speed constant frequency closed-loop system operating in generating state

    NASA Astrophysics Data System (ADS)

    Ding, Daohong

    1986-10-01

    The variable speed constant frequency (VSCF) electrical power system is a new type of aircraft power supply, which contains an AC generator and a cycloconverter. This paper sums up the operation of the cycloconverter and obtains four fundamental classes of circuit construction for the closed-loop system, which have twelve operating modes. A mathematical model for each fundamental class of circuit construction is introduced. These mathematical models can be used in digital simulation.

  12. The Fine-Structure Constant and Wavelength Calibration

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan

    The fine-structure constant is a fundamental constant of the universe--and widely thought to have an unchanging value. However, the past decade has witnessed a controversy unfold over the claimed detection that the fine-structure constant had a different value in the distant past. These astrophysical measurements were made with spectrographs at the world's largest optical telescopes. The spectrographs make precise measurements of the wavelength spacing of absorption lines in the metals in the gas between the quasar background source and our telescopes on Earth. The wavelength spacing gives a snapshot of the atomic physics at the time of the interaction. Whether the fine-structure constant has changed is determined by comparing the atomic physics in the distant past with the atomic physics of today. We present our contribution to the discussion by analyzing three nights of data taken with the HIRES instrument (High Resolution Echelle Spectrograph) on the Keck telescope. We provide an independent measurement of the fine-structure constant from the damped Lyman alpha system at a redshift of z = 2.309 (10.8 billion years ago) toward the quasar PHL957. We developed a new method for calibrating the wavelength scale of a quasar exposure to a much higher precision than previously achieved. In our subsequent analysis, we discovered unexpected wavelength calibration errors that had not been taken into account in the previously reported measurements. After characterizing the wavelength miscalibrations on the Keck-HIRES instrument, we obtained several nights of data from the main competing instrument, the VLT (Very Large Telescope) with UVES (Ultraviolet and Visual Echelle Spectrograph). We applied our new wavelength calibration method and uncovered systematic errors similar in nature to those found on Keck-HIRES. Finally, we make a detailed Monte Carlo exploration of the effects that these miscalibrations have on making precision fine-structure constant measurements.

  13. High voltage compliance constant current ballast

    NASA Technical Reports Server (NTRS)

    Rosenthal, L. A.

    1976-01-01

    A ballast circuit employing a constant-current diode and a vacuum tube can provide a constant current over a voltage range of 1000 volts. This simple circuit can prove useful in studying voltage breakdown characteristics.

  14. ESR melting under constant voltage conditions

    SciTech Connect

    Schlienger, M.E.

    1997-02-01

    Typical industrial ESR melting practice includes operation at a constant current. This constant current operation is achieved through the use of a power supply whose output provides this constant current characteristic. Analysis of this melting mode indicates that the ESR process under conditions of constant current is inherently unstable. Analysis also indicates that ESR melting under the condition of a constant applied voltage yields a process which is inherently stable. This paper reviews the process stability arguments for both constant current and constant voltage operation. Explanations are given as to why there is a difference between the two modes of operation. Finally, constant voltage process considerations such as melt rate control, response to electrode anomalies and impact on solidification will be discussed.

  15. Capacitive Cells for Dielectric Constant Measurement

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía; Maldonado, Rigoberto Franco

    2015-01-01

    A simple capacitive cell for dielectric constant measurement in liquids is presented. As an illustrative application, the cell is used for measuring the degradation of overheated edible oil through the evaluation of its dielectric constant.
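
    The measurement principle reduces to a capacitance ratio: the relative permittivity is the cell capacitance when filled with the liquid divided by its capacitance in air, after removing any stray capacitance. A minimal sketch with placeholder capacitance values:

        def relative_permittivity(c_filled_pf, c_air_pf, c_stray_pf=0.0):
            """Relative dielectric constant from filled and empty cell capacitances."""
            return (c_filled_pf - c_stray_pf) / (c_air_pf - c_stray_pf)

        # Placeholder readings in picofarads
        print(relative_permittivity(c_filled_pf=74.0, c_air_pf=24.0, c_stray_pf=4.0))  # ~3.5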

  16. Astronomia Motivadora no Ensino Fundamental

    NASA Astrophysics Data System (ADS)

    Melo, J.; Voelzke, M. R.

    2008-09-01

    The main objective of this work is to develop students' interest in the sciences through Astronomy. A survey with questions about Astronomy was carried out among 161 elementary school (Ensino Fundamental) students in order to discover their prior knowledge of the subject. It was found, for example, that 29.3% of 6th-grade students correctly answered what an eclipse is, 30.0% of 8th-grade students knew what Astronomy studies, while 42.3% of 5th-grade students could define the Sun. The intention is to expand the participating classes and to work mainly in a practical way with: dimensions and scales in the Solar System, the construction of a small telescope, and questions such as day and night, the seasons of the year, and eclipses. The aim is also to address other Physics content, such as optics in the construction of the telescope, mechanics in the work with scales and measurements and in the use of a lamp to represent the Sun in the eclipse activity, and content from other disciplines, such as Mathematics in unit conversions and the rule of three; Arts in modelling or drawing the planets; History itself, with regard to the search for the origin of the universe; and Informatics, which enables faster searches for information as well as simulations and visualizations of important images. Astronomy is believed to be important in the teaching-learning process, since it allows the discussion of intriguing topics such as the origin of the universe, space travel, and the existence or not of life on other planets, in addition to current topics such as new technologies.

  17. Pure gravities via color-kinematics duality for fundamental matter

    NASA Astrophysics Data System (ADS)

    Johansson, Henrik; Ochirov, Alexander

    2015-11-01

    We give a prescription for the computation of loop-level scattering amplitudes in pure Einstein gravity, and four-dimensional pure supergravities, using the color-kinematics duality. Amplitudes are constructed using double copies of pure (super-)Yang-Mills parts and additional contributions from double copies of fundamental matter, which are treated as ghosts. The opposite-statistics states cancel the unwanted dilaton and axion in the bosonic theory, as well as the extra matter supermultiplets in the supergravity theories. As a spinoff, we obtain a prescription for obtaining amplitudes in supergravities with arbitrary non-self-interacting matter. As a prerequisite, we extend the color-kinematics duality from the adjoint to the fundamental representation of the gauge group. We explain the numerator relations that the fundamental kinematic Lie algebra should satisfy. We give nontrivial evidence supporting our construction using explicit tree and loop amplitudes, as well as more general arguments.

  18. Derivation of midinfrared (5-25 microns) optical constants of some silicates and palagonite

    NASA Technical Reports Server (NTRS)

    Roush, T.; Pollack, J.; Orenberg, J.

    1991-01-01

    The 5-25 micron real and imaginary refraction indices are presented for palagonite and the silicates pyrophyllite, kaolinite, serpentine, montmorillonite, saponite, and orthopyroxene. Optical constants in the region of the H2O-bending fundamental near 6 microns are obtained for saponite, montmorillonite, and palagonite. It is established that, if a pellet of pure material can be polished to a mirror finish, the optical constants of such noncohesive materials as clays are easily derivable.

  19. Elastic constants of Ultrasonic Additive Manufactured Al 3003-H18.

    PubMed

    Foster, D R; Dapino, M J; Babu, S S

    2013-01-01

    Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a layered manufacturing process in which thin metal foils are ultrasonically bonded to a previously bonded foil substrate to create a net part. Optimization of process variables (amplitude, normal load and velocity) is done to minimize voids along the bonded interfaces. This work pertains to the evaluation of bonds in UAM builds through ultrasonic testing of a build's elastic constants. Results from ultrasonic testing on UAM parts indicate orthotropic material symmetry and a reduction of up to 48% in elastic constant values compared to a control sample. The reduction in elastic constant values is attributed to interfacial voids. In addition, the elastic constants in the plane of the Al foils have nearly the same value, while the constants normal to the foil direction have much different values. In contrast, measurements from builds made with Very High Power Ultrasonic Additive Manufacturing (VHP UAM) show a drastic improvement in elastic properties, approaching values similar to that of bulk aluminum. PMID:22939821
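
    For orientation (a standard acoustics relation, not a result of this paper): when a pulse propagates along a principal material axis, each measured wave speed together with the density gives one stiffness directly,

      C_{\mathrm{long}} = \rho\,v_L^{2}, \qquad C_{\mathrm{shear}} = \rho\,v_T^{2}

    which is why longitudinal and shear velocities measured along the foil and build directions yield the diagonal stiffnesses of the orthotropic matrix and make interfacial voids visible as reduced constants.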

  20. Empirical Examination of Fundamental Indexation in the German Market

    NASA Astrophysics Data System (ADS)

    Mihm, Max; Locarek-Junge, Hermann

    Index Funds, Exchange Traded Funds and Derivatives give investors easy access to well diversified index portfolios. These index-based investment products exhibit low fees, which make them an attractive alternative to actively managed funds. Against this background, a new class of stock indices has been established based on the concept of “Fundamental Indexation”. The selection and weighting of index constituents is conducted by means of fundamental criteria like total assets, book value or number of employees. This paper examines the performance of fundamental indices in the German equity market. For this purpose, a backtest of five fundamental indices is conducted over the last 20 years. Furthermore the index returns are analysed under the assumption of an efficient as well as an inefficient market. Index returns in efficient markets are explained by applying the three factor model for stock returns of Fama and French (J Financ Econ 33(1):3-56, 1993). The results show that the outperformance of fundamental indices is partly due to a higher risk exposure, particularly to companies with a low price to book ratio. By relaxing the assumption of market efficiency, a return drag of capitalisation weighted indices can be deduced. Given a mean-reverting movement of prices, a direct connection between market capitalisation and index weighting leads to inferior returns.
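
    A minimal sketch of the kind of three-factor regression used in such an analysis, with hypothetical return and factor series standing in for the German market data (none of the numbers below come from the paper):

      import numpy as np

      # Hypothetical monthly data: excess returns of a fundamental index and the
      # three Fama-French factors (market excess return, SMB, HML), in percent.
      rng = np.random.default_rng(0)
      n = 240
      mkt = rng.normal(0.5, 4.0, n)          # market excess return
      smb = rng.normal(0.2, 2.5, n)          # size factor
      hml = rng.normal(0.3, 2.5, n)          # value factor
      r_excess = 0.1 + 1.0 * mkt + 0.1 * smb + 0.4 * hml + rng.normal(0, 1.0, n)

      # OLS estimate of alpha and the three factor loadings:
      # r_excess = alpha + b*MKT + s*SMB + h*HML + eps
      X = np.column_stack([np.ones(n), mkt, smb, hml])
      coef, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
      alpha, b, s, h = coef
      print(f"alpha={alpha:.3f}  beta={b:.3f}  s={s:.3f}  h={h:.3f}")

    A positive loading h on HML corresponds to a tilt toward low price-to-book companies, which is the risk exposure the paper identifies as part of the fundamental indices' outperformance.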

  1. Relaxation creep rupture of heterogeneous material under constant strain.

    PubMed

    Hao, Sheng-Wang; Zhang, Bao-Ju; Tian, Ji-Feng

    2012-01-01

    We focus on a system consisting of an elastic part and a damageable part in series, to study the relaxation creep rupture of a heterogeneous system subjected to a uniaxial constant strain applied instantaneously. The viscoelastic behavior of the damageable part is modeled by a fiber bundle model consisting of Kelvin-Voigt elements and global load sharing is assumed for the redistribution of load following fiber breaking in the damageable part. Analytical and numerical calculations show that the global relaxation creep rupture appears if the elastic energy stored in the elastic part exceeded the fracture energy of the damageable part. The lifetime of the system strongly depends on the values of the applied external strain and the initial stiffness ratio k between the elastic part and the damageable part. We show that a higher stiffness ratio implies a more brittle system. Prior to complete failure, relaxation creep rupture exhibits a sequence of three stages, similar to creep rupture under constant stress, and the nominal force rate presents a power law singularity with a power index -1/2 near the global rupture time. PMID:22400604
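
    Schematically, and in our own notation rather than the authors': each intact Kelvin-Voigt fiber obeys a viscoelastic stress-strain relation, the series arrangement fixes the total strain, and global load sharing divides the force over the surviving fraction φ of fibers,

      \sigma_f = E\,\varepsilon_d + \eta\,\dot{\varepsilon}_d, \qquad
      \varepsilon_e + \varepsilon_d = \varepsilon_0 = \text{const}, \qquad
      k\,\varepsilon_e = \varphi\,\sigma_f

    so as fibers fail the load on the remainder grows, which is the feedback behind the three-stage relaxation creep rupture described above.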

  2. The Torsional Fundamental Band of Methylformate

    NASA Astrophysics Data System (ADS)

    Tudorie, M.; Ilyushin, V.; Vander Auwera, J.; Pirali, O.; Roy, P.; Huet, T. R.

    2011-06-01

    Methylformate (HCOOCH_3) is one of the most important molecules in astrophysics, first observed in 1975. The rotational structure of its ground and first excited torsional states is well known from millimeter wave measurements. However, some of the torsional parameters are still not precisely determined because information on the torsional vibrational frequency v_t = 1-0 is missing. To overcome that problem, the far infrared spectrum of HCOOCH_3 was recorded with a 150 m optical path in a White cell and a Bruker IFS 125 HR Fourier transform spectrometer at the AILES beamline of the synchrotron SOLEIL facility. The analysis of the very weak fundamental torsional band v_t = 1-0 observed around 130 cm-1 was carried out. It led to the first precise determination of the torsional barrier height and the dipole moment induced by the torsional motion. This work is partly supported by the "Programme National de Physico-Chimie du Milieu Interstellaire" (PCMI-CNRS) and by the contract ANR-BLAN-08-0054. R.D. Brown, J.G. Crofts, P.D. Godfrey, F.F. Gardner, B.J. Robinson, J.B. Whiteoak, Astrophys. J. 197 (1975) L29-L31. See V. Ilyushin, A. Kryvda, E. Alekseev, J. Mol. Spectrosc. 255 (2009) 32-38, and references therein.

  3. Fundamentals of materials accounting for nuclear safeguards

    SciTech Connect

    Pillay, K.K.S.

    1989-04-01

    Materials accounting is essential to providing the necessary assurance for verifying the effectiveness of a safeguards system. The use of measurements, analyses, records, and reports to maintain knowledge of the quantities of nuclear material present in a defined area of a facility and the use of physical inventories and materials balances to verify the presence of special nuclear materials are collectively known as materials accounting for nuclear safeguards. This manual, prepared as part of the resource materials for the Safeguards Technology Training Program of the US Department of Energy, addresses fundamental aspects of materials accounting, enriching and complementing them with the first-hand experiences of authors from varied disciplines. The topics range from highly technical subjects to site-specific system designs and policy discussions. This collection of papers is prepared by more than 25 professionals from the nuclear safeguards field. Representing research institutions, industries, and regulatory agencies, the authors create a unique resource for the annual course titled ''Materials Accounting for Nuclear Safeguards,'' which is offered at the Los Alamos National Laboratory.

  4. Ultralight porous metals: From fundamentals to applications

    NASA Astrophysics Data System (ADS)

    Tianjian, Lu

    2002-10-01

    Over the past few years a number of low cost metallic foams have been produced and used as the core of sandwich panels and net shaped parts. The main aim is to develop lightweight structures which are stiff, strong, able to absorb large amounts of energy and cheap for application in the transport and construction industries. For example, the firewall between the engine and passenger compartment of an automobile must have adequate mechanical strength, good energy and sound absorbing properties, and adequate fire retardance. Metal foams provide all of these features, and are under serious consideration for this application by a number of automobile manufacturers (e.g., BMW and Audi). Additional specialized applications for foam-cored sandwich panels range from heat sinks for electronic devices to crash barriers for automobiles, from the construction panels in lifts on aircraft carriers to the luggage containers of aircraft, from sound proofing walls along railway tracks and highways to acoustic absorbers in lean premixed combustion chambers. But there is a problem. Before metallic foams can find widespread application, their basic properties must be measured, and ideally modeled as a function of microstructural details, in order to be included in a design. This work aims at reviewing the recent progress and presenting some new results on fundamental research regarding the micromechanical origins of the mechanical, thermal, and acoustic properties of metallic foams.

  5. Measurements of the dielectric constants for planetary volatiles

    NASA Technical Reports Server (NTRS)

    Anicich, Vincent G.; Huntress, Wesley T., Jr.

    1987-01-01

    Current models of Titan give a surface temperature, pressure, and composition such that a binary ethane-methane ocean is possible. Proposed experiments for future Titan flybys include microwave mappers, yet very little has been measured of the dielectric properties of the small hydrocarbons at these radar frequencies. An experiment was conducted utilizing a slotted line to measure the dielectric properties of the hydrocarbons, methane to heptane, from room temperature to -180 C. Measurements of the real part of the dielectric constants are accurate to ±0.006, and the imaginary part (the loss tangent) of the liquids studied is less than or equal to 0.001. In order to verify this low loss tangent, the real part of the dielectric constant of hexane at 25 C was studied as a function of frequency over the range of the slotted line system used. The dielectric constant of hexane at room temperature, between 500 MHz and 3 MHz, is constant within experimental error.

  6. Fundamentals of metal oxide catalysis

    NASA Astrophysics Data System (ADS)

    Nair, Hari

    The properties of metal oxide catalysts and hence, catalytic activity are highly dependent on the composition and structure of these oxides. This dissertation has 3 parts -- all directed towards understanding relationships between structure, composition and activity in metal oxide catalysts. The first part of this dissertation focuses on supported metal oxide catalysts of tungsten, vanadium and molybdenum. Mechanisms are proposed for ethanol oxidative dehydrogenation which is used to probe the acidity and reducibility of these oxide catalysts. These studies are then used to develop a novel method to quantify active redox sites and determine the nature of the active site on these catalysts -- our results show that the intrinsic redox turn-over frequency is independent of the nature of the metal oxide and its loading and that the actual rate obtained over an oxide is only a function of the number of removable oxygen atoms linking the metal to the support. The extension of Ultraviolet-visible Diffuse Reflectance Spectroscopy (UV-vis DRS) to the study of active oxide domains in binary oxide catalysts is demonstrated for distinguishing between interacting and non-interacting domains in binary MoO x-WOx catalysts on alumina. We show also how the rigorous analysis of pre-edge features, absorption white-line intensity and the full width at half maximum of the white-line in X-ray Absorption Spectra provide determinants for metal atom coordination and domain size in supported metal oxide catalysts. The second part of this work looks at effects of structure variations on the activity of polyoxometalate catalysts that are promising for the production of Methacrylic Acid from Isobutane. The use of these catalysts is limited by structural changes that impact their performance -- an "activation" period is required before the catalysts become active for methacrylic acid production and structural changes also lead to degradation of the catalyst, which are also seen during thermal

  7. Moving-Gradient Furnace With Constant-Temperature Cold Zone

    NASA Technical Reports Server (NTRS)

    Gernert, Nelson J.; Shaubach, Robert M.

    1993-01-01

    An outer heat pipe helps control the temperature of the cold zone of the furnace. In the proposed design, the cold zone of a heat-pipe furnace is surrounded by another heat pipe equipped with a heater at one end and a water cooling coil at the other. The temperature of this heat pipe is maintained at the desired constant value by controlling the water cooling, so it serves as a constant-temperature heat source or heat sink, as needed, while the temperature-gradient region is moved along the furnace. The proposed moving-gradient heat-pipe furnace could be used in terrestrial or spaceborne experiments on directional solidification for crystal growth.

  8. Solar Constant (SOLCON) Experiment: Ground Support Equipment (GSE) software development

    NASA Technical Reports Server (NTRS)

    Gibson, M. Alan; Thomas, Susan; Wilson, Robert

    1991-01-01

    The Solar Constant (SOLCON) Experiment, the objective of which is to determine the solar constant value and its variability, is scheduled for launch as part of the Space Shuttle/Atmospheric Laboratory for Applications and Science (ATLAS) Spacelab mission. The Ground Support Equipment (GSE) software was developed to monitor and analyze the SOLCON telemetry data during flight and to test the instrument on the ground. The design and development of the GSE software are discussed. The SOLCON instrument was tested during the Davos International Solar Intercomparison in 1989, and the SOLCON data collected during those tests are analyzed to study the behavior of the instrument.

  9. Remote Sensing of Salinity: The Dielectric Constant of Sea Water

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.

    2011-01-01

    Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.

  10. Fundamental Investigations of Airframe Noise

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    2004-01-01

    An extensive numerical and experimental study of airframe noise mechanisms associated with a subsonic high-lift system has been performed at NASA Langley Research Center (LaRC). Investigations involving both steady and unsteady computations and experiments on a small-scale, part-span flap model are presented. Both surface measurements (steady and unsteady pressures, hot films, oil flows, pressure sensitive paint) and off-surface measurements (5-hole probe, particle image velocimetry, laser velocimetry, laser light sheet) were taken in the LaRC Quiet Flow Facility (QFF) and several hard-wall tunnels up to flight Reynolds number. Successful microphone array measurements were also taken, providing both acoustic source maps on the model and quantitative spectra. Critical directivity measurements were obtained in the QFF. NASA Langley unstructured and structured Reynolds-Averaged Navier-Stokes codes modeled the flap geometries; excellent comparisons with surface and off-surface experimental data were obtained. Subsequently, these mean-flow calculations were utilized in both linear stability and direct numerical simulations of the flap-edge flow field to calculate unsteady surface pressures and far-field acoustic spectra. Accurate calculations were critical in obtaining not only noise source characteristics but shear layer correction data as well. Techniques utilized in these investigations as well as brief overviews of results will be given.

  11. Fundamental limits in heat-assisted magnetic recording and methods to overcome it with exchange spring structures

    SciTech Connect

    Suess, D.; Abert, C.; Bruckner, F.; Windl, R.; Vogler, C.; Breth, L.; Fidler, J.

    2015-04-28

    The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the fundamental thermal jitter in conventional recording, which is about 0.5–0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10^-4 if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As for single FePt grains, in composite structures an increase in switching probability is paid for by an increase in thermal jitter. Structures utilizing first-order phase transitions that break the thermal jitter and writability dilemma are discussed.

  12. The relationship between the problems of cosmological constant and CP violation

    NASA Astrophysics Data System (ADS)

    Mok, Ho-Ming

    2012-06-01

    A solution of the cosmological constant problem, which is based on discrete spacetime at electroweak scale, agrees excellently with the cosmological observations. The GEO600 gravitational waves experiment may provide important experimental support to such proposed solution. It has been further shown that the phase factor associated with the particle field in discrete space-time could generate CP violation in quark mixing system. Such results reveal that the nature of cosmological constant and CP violation would be fundamentally related to the quantum nature of space-time and thus the development of quantum gravity theory plays an important role for the quest of cosmological constant and CP violation.

  13. Dimensionless constants, cosmology, and other dark matters

    SciTech Connect

    Tegmark, Max; Aguirre, Anthony; Rees, Martin J.; Wilczek, Frank

    2006-01-15

    We identify 31 dimensionless physical constants required by particle physics and cosmology, and emphasize that both microphysical constraints and selection effects might help elucidate their origin. Axion cosmology provides an instructive example, in which these two kinds of arguments must both be taken into account, and work well together. If a Peccei-Quinn phase transition occurred before or during inflation, then the axion dark matter density will vary from place to place with a probability distribution. By calculating the net dark matter halo formation rate as a function of all four relevant cosmological parameters and assessing other constraints, we find that this probability distribution, computed at stable solar systems, is arguably peaked near the observed dark matter density. If cosmologically relevant weakly interacting massive particle (WIMP) dark matter is discovered, then one naturally expects comparable densities of WIMPs and axions, making it important to follow up with precision measurements to determine whether WIMPs account for all of the dark matter or merely part of it.

  14. Fundamentals of fossil simulator instructor training

    SciTech Connect

    Not Available

    1984-01-01

    This single-volume, looseleaf text introduces the beginning instructor to fundamental instructor training principles, and then shows how to apply those principles to fossil simulator training. Topics include the fundamentals of classroom instruction, the learning process, course development, and the specifics of simulator training program development.

  15. Individual differences in fundamental social motives.

    PubMed

    Neel, Rebecca; Kenrick, Douglas T; White, Andrew Edward; Neuberg, Steven L

    2016-06-01

    Motivation has long been recognized as an important component of how people both differ from, and are similar to, each other. The current research applies the biologically grounded fundamental social motives framework, which assumes that human motivational systems are functionally shaped to manage the major costs and benefits of social life, to understand individual differences in social motives. Using the Fundamental Social Motives Inventory, we explore the relations among the different fundamental social motives of Self-Protection, Disease Avoidance, Affiliation, Status, Mate Seeking, Mate Retention, and Kin Care; the relationships of the fundamental social motives to other individual difference and personality measures including the Big Five personality traits; the extent to which fundamental social motives are linked to recent life experiences; and the extent to which life history variables (e.g., age, sex, childhood environment) predict individual differences in the fundamental social motives. Results suggest that the fundamental social motives are a powerful lens through which to examine individual differences: They are grounded in theory, have explanatory value beyond that of the Big Five personality traits, and vary meaningfully with a number of life history variables. A fundamental social motives approach provides a generative framework for considering the meaning and implications of individual differences in social motivation. (PsycINFO Database Record) PMID:26371400

  16. Investigating the Fundamental Theorem of Calculus

    ERIC Educational Resources Information Center

    Johnson, Heather L.

    2010-01-01

    The fundamental theorem of calculus, in its simplified complexity, connects differential and integral calculus. The power of the theorem comes not merely from recognizing it as a mathematical fact but from using it as a systematic tool. As a high school calculus teacher, the author developed and taught lessons on this fundamental theorem that were…

  17. Fundamental Frequency Variation with an Electrolarynx Improves Speech Understanding: A Case Study

    ERIC Educational Resources Information Center

    Watson, Peter J.; Schlauch, Robert S.

    2009-01-01

    Purpose: This study examined the effect of fundamental frequency (F0) variation on the intelligibility of speech in an alaryngeal talker who used an electrolarynx (EL). Method: One experienced alaryngeal talker produced variable F0 and a constant F0 with his EL as he read sentences aloud. As a control, a group of sentences with variable F0 was…

  18. Impulsive Stimulated Light Scattering at High Pressure - Precise Determination of Elastic Constants of Opaque Materials

    SciTech Connect

    Crowhurst, J C; Zaug, J M; Abramson, E H; Brown, J M; Ahre, D W

    2002-08-22

    Impulsive stimulated light scattering has been used to measure interfacial wave propagation speeds and elastic constants under conditions of high pressure. Data obtained from single-crystal Ge and Fe, and from polycrystalline Ta is presented. The method is complementary to other techniques for obtaining this type of information. There appears no fundamental reason why it cannot be extended to the 1 Mbar regime.

  19. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
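
    For reference, the usual definitions of these three statistics for n responses x_i aimed at a target T (a standard summary, not quoted from the article) are

      CE = \frac{1}{n}\sum_i (x_i - T), \qquad
      AE = \frac{1}{n}\sum_i \lvert x_i - T\rvert, \qquad
      VE = \sqrt{\frac{1}{n}\sum_i (x_i - \bar{x})^{2}}

    capturing, respectively, bias, overall accuracy, and response consistency.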

  20. The gaseous explosive reaction at constant pressure : the reaction order and reaction rate

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1931-01-01

    The data given in this report covers the explosive limits of hydrocarbon fuels. Incidental to the purpose of the investigation here reported, the explosive limits will be found to be expressed for the condition of constant pressure, in the fundamental terms of concentrations (partial pressures) of fuel and oxygen.

  1. Emergent cosmological constant from colliding electromagnetic waves

    SciTech Connect

    Halilsoy, M.; Mazharimousavi, S. Habib; Gurtug, O. E-mail: habib.mazhari@emu.edu.tr

    2014-11-01

    In this study we advocate the view that the cosmological constant is of electromagnetic (em) origin, which can be generated from the collision of em shock waves coupled with gravitational shock waves. The wave profiles that participate in the collision have different amplitudes. It is shown that circular polarization with equal-amplitude waves does not generate a cosmological constant. We also prove that the generation of the cosmological constant is related to linear polarization; the addition of cross polarization generates no cosmological constant. Depending on the value of the wave amplitudes, the generated cosmological constant can be positive or negative. We show additionally that the collision of nonlinear em waves in a particular class of Born-Infeld theory also yields a cosmological constant.

  2. Constant voltage electro-slag remelting control

    DOEpatents

    Schlienger, Max E.

    1996-01-01

    A system for controlling electrode gap in an electro-slag remelt furnace has a constant regulated voltage and an electrode which is fed into the slag pool at a constant rate. The impedance of the circuit through the slag pool is directly proportional to the gap distance. Because of the constant voltage, the system current changes are inversely proportional to changes in gap. This negative feedback causes the gap to remain stable.

  3. Constant voltage electro-slag remelting control

    DOEpatents

    Schlienger, M.E.

    1996-10-22

    A system for controlling electrode gap in an electro-slag remelt furnace has a constant regulated voltage and an electrode which is fed into the slag pool at a constant rate. The impedance of the circuit through the slag pool is directly proportional to the gap distance. Because of the constant voltage, the system current changes are inversely proportional to changes in gap. This negative feedback causes the gap to remain stable. 1 fig.
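
    A compact way to see the negative feedback described in these two records (our sketch, using only the proportionality stated in the abstracts): with the slag-circuit impedance proportional to the gap, Z = c g, a regulated voltage V gives

      I = \frac{V}{Z} = \frac{V}{c\,g} \quad\Rightarrow\quad \frac{dI}{dg} = -\frac{V}{c\,g^{2}} < 0

    so a widening gap lowers the current and hence the melt-off rate, and the constantly fed electrode closes the gap again; the reverse holds for a shrinking gap.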

  4. Cosmological Constant and Axions in String Theory

    SciTech Connect

    Svrcek, Peter; /Stanford U., Phys. Dept. /SLAC

    2006-08-18

    String theory axions appear to be promising candidates for explaining the cosmological constant via quintessence. In this paper, we study conditions on the string compactifications under which axion quintessence can happen. For a sufficiently large number of axions, the cosmological constant can be accounted for as the potential energy of axions that have not yet relaxed to their minima. In compactifications that incorporate unified models of particle physics, the height of the axion potential can naturally fall close to the observed value of the cosmological constant.

  5. Dielectric constant microscopy for biological materials

    NASA Astrophysics Data System (ADS)

    Valavade, A. V.; Kothari, D. C.; Löbbe, C.

    2013-02-01

    This paper describes work on the development of Dielectric Constant Microscopy for biological materials using the double-pass amplitude modulation method. Dielectric constant information can be obtained at nanometer scales using this technique. Electrostatic force microscopy (EFM) images of biological materials are presented. The images obtained in EFM mode clearly show inversion of contrast and give the spatial variation of the tip-sample capacitance. The EFM images are further processed to obtain dielectric constant information at nanometer scales.

  6. Determination of the Avogadro constant by counting the atoms in a 28Si crystal.

    PubMed

    Andreas, B; Azuma, Y; Bartl, G; Becker, P; Bettin, H; Borys, M; Busch, I; Gray, M; Fuchs, P; Fujii, K; Fujimoto, H; Kessler, E; Krumrey, M; Kuetgens, U; Kuramoto, N; Mana, G; Manson, P; Massa, E; Mizushima, S; Nicolaus, A; Picard, A; Pramann, A; Rienitz, O; Schiel, D; Valkiers, S; Waseda, A

    2011-01-21

    The Avogadro constant links the atomic and the macroscopic properties of matter. Since the molar Planck constant is well known via the measurement of the Rydberg constant, it is also closely related to the Planck constant. In addition, its accurate determination is of paramount importance for a definition of the kilogram in terms of a fundamental constant. We describe a new approach for its determination by counting the atoms in 1 kg single-crystal spheres, which are highly enriched with the 28Si isotope. It enabled isotope dilution mass spectroscopy to determine the molar mass of the silicon crystal with unprecedented accuracy. The value obtained, N_A = 6.02214078(18) × 10^23 mol^-1, is the most accurate input datum for a new definition of the kilogram. PMID:21405263
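
    The link to the Planck constant mentioned above runs through the molar Planck constant N_A h; the commonly quoted relation (standard CODATA usage, not taken from this paper) is

      N_A h = \frac{c\,\alpha^{2} A_r(\mathrm{e})\,M_u}{2 R_\infty}

    where A_r(e) is the relative atomic mass of the electron, M_u the molar mass constant, α the fine-structure constant and R_∞ the Rydberg constant; because the right-hand side is known very precisely, an accurate N_A fixes h, which is what makes the silicon-sphere atom count relevant to redefining the kilogram.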

  7. Synchrotron infrared spectroscopy of the ν4, ν8, ν10, ν11 and ν14 fundamental bands of thiirane

    NASA Astrophysics Data System (ADS)

    Evans, Corey J.; Carter, Jason P.; Appadoo, Dominique R. T.; Wong, Andy; McNaughton, Don

    2015-10-01

    The high-resolution spectrum of thiirane has been recorded using the far-infrared beamline at the Australian synchrotron facility. Spectra have been recorded between 700 cm-1 and 1200 cm-1 and ro-vibrational transitions associated with four fundamental bands of thiirane have been observed and assigned. The effects of Coriolis coupling were observed in the upper energy levels associated with the ν4 (1024 cm-1) and the ν14 (1050 cm-1) fundamental bands as well as in the ν11 (825 cm-1) and the ν8 (895 cm-1) fundamental bands. The ν10 (945 cm-1) fundamental band was also observed and was found to have no significant perturbations associated with it. For each of the observed bands rotational and centrifugal distortion constants have been evaluated, while for all but the ν10 fundamental band, Coriolis interaction parameters have been determined for the upper states. The ground state constants have also been further refined.

  8. Fundamental physical theories: Mathematical structures grounded on a primitive ontology

    NASA Astrophysics Data System (ADS)

    Allori, Valia

    In my dissertation I analyze the structure of fundamental physical theories. I start with an analysis of what an adequate primitive ontology is, discussing the measurement problem in quantum mechanics and its solutions. It is commonly said that these theories have little in common. I argue instead that the moral of the measurement problem is that the wave function cannot represent physical objects and a common structure between these solutions can be recognized: each of them is about a clear three-dimensional primitive ontology that evolves according to a law determined by the wave function. The primitive ontology is what matter is made of while the wave function tells the matter how to move. One might think that what is important in the notion of primitive ontology is its three-dimensionality. If so, in a theory like classical electrodynamics electromagnetic fields would be part of the primitive ontology. I argue that, reflecting on what the purpose of a fundamental physical theory is, namely to explain the behavior of objects in three-dimensional space, one can recognize that a fundamental physical theory has a particular architecture. If so, electromagnetic fields play a different role in the theory than the particles and therefore should be considered, like the wave function, as part of the law. Therefore, we can characterize the general structure of a fundamental physical theory as a mathematical structure grounded on a primitive ontology. I explore this idea to better understand theories like classical mechanics and relativity, emphasizing that primitive ontology is crucial in the process of building new theories, being fundamental in identifying the symmetries. Finally, I analyze what it means to explain the world around us in terms of the notion of primitive ontology in the case of regularities of statistical character. Here is where the notion of typicality comes into play: we have explained a phenomenon if the typical histories of the primitive

  9. Modification of the characteristic gravitational constants

    NASA Astrophysics Data System (ADS)

    Vujičić, V. A.

    2006-08-01

    In the educational and scientific literature the numerical values of gravitational constants are treated as only approximately correct. The numerical values differ from one study to another, as do the formulae and definitions of the constants employed. In this paper, on the basis of Newton’s laws and Kepler’s laws, we prove that it is necessary to modify the characteristic gravitational constants and their definitions. The formulae for the geocentric gravitational constant are calculated for the Kosmos N satellites and for the Moon.

  10. A natural cosmological constant from chameleons

    NASA Astrophysics Data System (ADS)

    Nastase, Horatiu; Weltman, Amanda

    2015-07-01

    We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru-Kallosh-Linde-Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now).

  11. How universe evolves with cosmological and gravitational constants

    NASA Astrophysics Data System (ADS)

    Xue, She-Sheng

    2015-08-01

    With a basic varying space-time cutoff ℓ̃, we study a regularized and quantized Einstein-Cartan gravitational field theory and its domains of the ultraviolet-unstable fixed point g_ir ≳ 0 and the ultraviolet-stable fixed point g_uv ≈ 4/3 of the gravitational gauge coupling g = (4/3) G/G_Newton. Because the fundamental operators of the quantum gravitational field theory are dimension-2 area operators, the cosmological constant is inversely proportional to the squared correlation length, Λ ∝ ξ^-2. The correlation length ξ characterizes the infrared size of a causally correlated patch of the universe. The cosmological constant Λ and the gravitational constant G are related by a generalized Bianchi identity. As the basic space-time cutoff ℓ̃ decreases and approaches the Planck length ℓ_pl, the universe undergoes inflation in the domain of the ultraviolet-unstable fixed point g_ir, then evolves to the low-redshift universe in the domain of the ultraviolet-stable fixed point g_uv. We give a quantitative description of the low-redshift universe in the scaling-invariant domain of the ultraviolet-stable fixed point g_uv, and its deviation from ΛCDM can be examined by low-redshift (z ≲ 1) cosmological observations, such as Type Ia supernovae.

  12. Constant cross section of loops in the solar corona

    NASA Astrophysics Data System (ADS)

    Peter, H.; Bingert, S.

    2012-12-01

    Context. The corona of the Sun is dominated by emission from loop-like structures. When observed in X-ray or extreme ultraviolet emission, these million K hot coronal loops show a more or less constant cross section. Aims: In this study we show how the interplay of heating, radiative cooling, and heat conduction in an expanding magnetic structure can explain the observed constant cross section. Methods: We employ a three-dimensional magnetohydrodynamics (3D MHD) model of the corona. The heating of the coronal plasma is the result of braiding of the magnetic field lines through footpoint motions and subsequent dissipation of the induced currents. From the model we synthesize the coronal emission, which is directly comparable to observations from, e.g., the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (AIA/SDO). Results: We find that the synthesized observation of a coronal loop seen in the 3D data cube does match actually observed loops in count rate and that the cross section is roughly constant, as observed. The magnetic field in the loop is expanding and the plasma density is concentrated in this expanding loop; however, the temperature is not constant perpendicular to the plasma loop. The higher temperature in the upper outer parts of the loop is so high that this part of the loop is outside the contribution function of the respective emission line(s). In effect, the upper part of the plasma loop is not bright and thus the loop actually seen in coronal emission appears to have a constant width. Conclusions: From this we can conclude that the underlying field-line-braiding heating mechanism provides the proper spatial and temporal distribution of the energy input into the corona - at least on the observable scales. Movies associated to Figs. 1 and 2 are available in electronic form at http://www.aanda.org

  13. BOOK REVIEWS: Quantum Mechanics: Fundamentals

    NASA Astrophysics Data System (ADS)

    Whitaker, A.

    2004-02-01

    This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a

  14. Variación temporal de las constantes fundamentales

    NASA Astrophysics Data System (ADS)

    Landau, S. J.; Vucetich, H.

    The time variation of the fundamental constants is a problem that has motivated numerous theoretical and experimental studies since Dirac's large numbers hypothesis of 1937. Among the experimental and observational methods used to constrain the variation of the fundamental constants, it is important to mention: comparison between atomic clocks [1], geophysical methods [2][3], analysis of absorption systems in quasars [4][5][6], and bounds from primordial nucleosynthesis [7]. In a recent work [5], a significant variation of the fine-structure constant was reported. Attempts to unify the four fundamental interactions have led to theories with extra dimensions, such as Kaluza-Klein and superstring theories. These theories provide a natural theoretical framework for studying the time variation of the fundamental constants. In turn, a simple model for studying the variation of the fine-structure constant was proposed in [8], starting from very general premises such as covariance, gauge invariance, causality, and time-reversal invariance of electromagnetism. Different versions of the aforementioned theories agree in predicting time variations of the fundamental constants but differ in the form of this variation [9][10]. In this way, the experimentally established constraints on the variation of the fundamental constants can be an important tool for testing these different theories. In this work, we use the bounds obtained from various experimental techniques to test whether they are consistent with any of the aforementioned theories. In particular, we set bounds on the variation of the free parameters of the different theories, such as the radius of the extra dimensions in Kaluza-Klein-type theories.

  15. Tuning the Spring Constant of Cantilever-free Probe Arrays

    NASA Astrophysics Data System (ADS)

    Eichelsdoerfer, Daniel J.; Brown, Keith A.; Boya, Radha; Shim, Wooyoung; Mirkin, Chad A.

    2013-03-01

    The versatility of atomic force microscope (AFM) based techniques such as scanning probe lithography is due in part to the utilization of a cantilever that can be fabricated to match a desired application. In contrast, cantilever-free scanning probe lithography utilizes a low cost array of probes on a compliant backing layer that allows for high throughput nanofabrication but lacks the tailorability afforded by the cantilever in traditional AFM. Here, we present a method to measure and tune the spring constant of probes in a cantilever-free array by adjusting the mechanical properties of the underlying elastomeric layer. Using this technique, we are able to fabricate large-area silicon probe arrays with spring constants that can be tuned in the range from 7 to 150 N/m. This technique offers an advantage in that the spring constant depends linearly on the geometry of the probe, which is in contrast to traditional cantilever-based lithography where the spring constant varies as the cube of the beam width and thickness. To illustrate the benefit of utilizing a probe array with a lower spring constant, we pattern a block copolymer on a delicate 50 nm thick silicon nitride window.
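
    For comparison with the cantilever case mentioned above, the textbook Euler-Bernoulli spring constant of a rectangular cantilever of Young's modulus E, width w, thickness t and length L is (a standard formula, not taken from the paper)

      k = \frac{E\,w\,t^{3}}{4 L^{3}}

    so thickness and length enter cubically, whereas the cantilever-free probes described here are tuned through the mechanical properties of the elastomeric backing layer and depend only linearly on probe geometry.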

  16. Modeling of fundamental phenomena in welds

    SciTech Connect

    Zacharia, T.; Vitek, J.M.; Goldak, J.A.; DebRoy, T.A.; Rappaz, M.; Bhadeshia, H.K.D.H.

    1993-12-31

    Recent advances in the mathematical modeling of fundamental phenomena in welds are summarized. State-of-the-art mathematical models, advances in computational techniques, emerging high-performance computers, and experimental validation techniques have provided significant insight into the fundamental factors that control the development of the weldment. The current status and scientific issues in the areas of heat and fluid flow in welds, heat source metal interaction, solidification microstructure, and phase transformations are assessed. Future research areas of major importance for understanding the fundamental phenomena in weld behavior are identified.

  17. Fundamentals of preparative and nonlinear chromatography

    SciTech Connect

    Guiochon, Georges A; Felinger, Attila; Katti, Anita; Shirazi, Dean G

    2006-02-01

    The second edition of Fundamentals of Preparative and Nonlinear Chromatography is devoted to the fundamentals of a new process of purification or extraction of chemicals or proteins widely used in the pharmaceutical industry and in preparative chromatography. This process permits the preparation of extremely pure compounds satisfying the requests of the US Food and Drug Administration. The book describes the fundamentals of thermodynamics, mass transfer kinetics, and flow through porous media that are relevant to chromatography. It presents the models used in chromatography and their solutions, discusses the applications made, describes the different processes used, their numerous applications, and the methods of optimization of the experimental conditions of this process.

  18. Vacuum energy and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bass, Steven D.

    2015-06-01

    The accelerating expansion of the Universe points to a small positive value for the cosmological constant or vacuum energy density. We discuss recent ideas that the cosmological constant plus Large Hadron Collider (LHC) results might hint at critical phenomena near the Planck scale.

  19. Cosmological constant from the emergent gravity perspective

    NASA Astrophysics Data System (ADS)

    Padmanabhan, T.; Padmanabhan, Hamsa

    2014-05-01

    Observations indicate that our universe is characterized by a late-time accelerating phase, possibly driven by a cosmological constant Λ, with the dimensionless parameter Λ L_P^2 ≈ 10^-122, where L_P = (Għ/c^3)^(1/2) is the Planck length. In this review, we describe how the emergent gravity paradigm provides a new insight and a possible solution to the cosmological constant problem. After reviewing the necessary background material, we identify the necessary and sufficient conditions for solving the cosmological constant problem. We show that these conditions are naturally satisfied in the emergent gravity paradigm in which (i) the field equations of gravity are invariant under the addition of a constant to the matter Lagrangian and (ii) the cosmological constant appears as an integration constant in the solution. The numerical value of this integration constant can be related to another dimensionless number (called CosMIn) that counts the number of modes inside a Hubble volume that cross the Hubble radius during the radiation and the matter-dominated epochs of the universe. The emergent gravity paradigm suggests that CosMIn has the numerical value 4π, which, in turn, leads to the correct, observed value of the cosmological constant. Further, the emergent gravity paradigm provides an alternative perspective on cosmology and interprets the expansion of the universe itself as a quest towards holographic equipartition. We discuss the implications of this novel and alternate description of cosmology.

  20. Performance of a constant torque pedal device.

    PubMed Central

    Sherwin, K.

    1979-01-01

    A constant-torque oscillatory pedal-crank device using vertical movement of the feet is described and its performance compared to a conventional rotational cycle. Using a generator to measure the power output the constant-torque device produced 33% less power and thus has no practical value as an alternative to the conventional pedal-crank system. PMID:526783

  1. Regularizing cosmological singularities by varying physical constants

    SciTech Connect

    Dąbrowski, Mariusz P.; Marosek, Konrad E-mail: k.marosek@wmf.univ.szczecin.pl

    2013-02-01

    Varying physical constant cosmologies were claimed to solve standard cosmological problems such as the horizon, the flatness and the Λ-problem. In this paper, we suggest yet another possible application of these theories: solving the singularity problem. By specifying some examples we show that various cosmological singularities may be regularized provided the physical constants evolve in time in an appropriate way.

  2. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.

  3. Theoretical Analysis of One-Dimensional Pressure Diffusion from a Constant Upstream Pressure to a Constant Downstream Storage

    NASA Astrophysics Data System (ADS)

    Song, Insun

    2016-05-01

    The one-dimensional diffusion equation was solved to understand the pressure and flow behaviors along a cylindrical rock specimen for experimental boundary conditions of constant upstream pressure and constant downstream storage. The solution consists of a time-constant asymptotic part and a transient part that is a negative exponential function of time. This means that the transient flow exponentially decays with time and is eventually followed by a steady-state condition. For a given rock sample, the transient stage is shortest when the downstream storage is minimized. For this boundary condition, a simple equation was derived from the analytic solution to determine the hydraulic permeability from the initial flow rate during the transient stage. The specific storage of a rock sample can be obtained simply from the total flow into the sample during the entire transient stage if there is no downstream storage. In theory, both of these hydraulic properties could be obtained simultaneously from transient-flow stage measurements without a complicated curve fitting or inversion process. Sensitivity analysis showed that the derived permeability is more reliable for lower-permeability rock samples. In conclusion, the constant head method with no downstream storage might be more applicable to extremely low-permeability rocks if the upstream flow rate is measured precisely upstream.
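
    Schematically (our paraphrase of the solution structure described above, with D the hydraulic diffusivity and L the specimen length), the pressure field has the form

      p(x,t) = p_{\infty}(x) + \sum_{n} a_n\,\phi_n(x)\,e^{-\lambda_n D t / L^{2}}

    so the transient terms decay exponentially and the pressure approaches the time-constant asymptote p_∞(x); the slowest rate λ_1 D/L^2 sets the duration of the transient stage, which, as noted above, is shortest when the downstream storage is minimized.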

  4. Air kerma rate constants for radionuclides.

    PubMed

    Wasserman, H; Groenewald, W

    1988-01-01

    Conversion to SI units requires that the exposure rate constant, which was usually quoted in R·h^-1·mCi^-1·cm^2, be replaced by the air kerma rate constant with units m^2·Gy·Bq^-1·s^-1. The conversion factor is derived and air kerma rate constants for 30 radionuclides used in nuclear medicine and brachytherapy are listed. A table for calculation of air kerma rates for other radionuclides is also given. To calculate absorbed dose to tissue, the air kerma rate has to be multiplied by approximately 1.1. A dose equivalent rate constant is thus listed which allows direct calculation of dose equivalent rate to soft tissue without resorting to exposure rate constants tabulated in the special units R·m^2·mCi^-1·h^-1, which should no longer be used. PMID:3208786
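
    As a worked illustration of the conversion (our arithmetic, using 1 R = 2.58×10^-4 C/kg, W/e ≈ 33.97 J/C, 1 mCi = 3.7×10^7 Bq, 1 h = 3600 s and 1 cm^2 = 10^-4 m^2):

      \Gamma_K\,[\mathrm{m^2\,Gy\,Bq^{-1}\,s^{-1}}]
      \approx \frac{(2.58\times10^{-4})(33.97)(10^{-4})}{(3.7\times10^{7})(3600)}\;
      \Gamma_X\,[\mathrm{R\,cm^2\,mCi^{-1}\,h^{-1}}]
      \approx 6.6\times10^{-18}\,\Gamma_X

    and the factor of about 1.1 quoted for soft tissue reflects, to first order, the ratio of tissue-to-air mass energy-absorption coefficients.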

  5. Elastic constants of layers in isotropic laminates.

    PubMed

    Heyliger, Paul R; Ledbetter, Hassel; Kim, Sudook; Reimanis, Ivar

    2003-11-01

    The individual laminae elastic constants in multilayer laminates composed of dissimilar isotropic layers were determined using ultrasonic-resonance spectroscopy and the linear theory of elasticity. Ultrasonic resonance allows one to measure the free-vibration response spectrum of a traction-free solid under periodic vibration. These frequencies depend on pointwise density, laminate dimensions, layer thickness, and layer elastic constants. Given a material with known mass but unknown constitution, this method allows one to extract the elastic constants and density of the constituent layers. This is accomplished by measuring the frequencies and then minimizing the differences between these and those calculated using the theory of elasticity for layered media to select the constants that best replicate the frequency-response spectrum. This approach is applied to a three-layer, unsymmetric laminate of WpCu, and very good agreement is found with the elastic constants of the two constituent materials. PMID:14649998
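
    A minimal sketch of the inversion described here, with a hypothetical predicted_frequencies() standing in for the layered-elasticity eigenvalue calculation (the paper's actual forward model and data are not reproduced):

      import numpy as np
      from scipy.optimize import least_squares

      # Toy forward model: maps layer elastic constants (GPa) to resonance
      # frequencies (kHz). In the real problem this is a layered-elasticity
      # eigenvalue calculation; here it is a hypothetical stand-in.
      def predicted_frequencies(constants):
          c11, c44 = constants
          return np.array([0.8 * np.sqrt(c11), 1.3 * np.sqrt(c44),
                           0.5 * np.sqrt(c11 + 2.0 * c44)])

      # "Measured" spectrum generated from known constants plus small noise.
      true_constants = np.array([420.0, 160.0])
      rng = np.random.default_rng(1)
      measured = predicted_frequencies(true_constants) * (1 + 0.002 * rng.standard_normal(3))

      # Least-squares fit: adjust the constants until the predicted frequencies
      # best replicate the measured frequency-response spectrum.
      def residuals(constants):
          return predicted_frequencies(constants) - measured

      fit = least_squares(residuals, x0=np.array([300.0, 100.0]))
      print("recovered constants (GPa):", fit.x)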

  6. Reflectance Spectra and Optical Constants of Kieserite and Sulfate Mixtures For Mars

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Noe Dobrea, E. Z.; Dalton, J. B.; Jamieson, C. S.; Abbey, W. J.

    2011-12-01

    In this work, we present visible and near-infrared (VNIR, λ=0.35 - 5 μm) laboratory reflectance spectra obtained at Mars-relevant temperatures and corresponding optical constants (real and imaginary refractive indices) for hydrated sulfates that have been observed on Mars, e.g., via Mars Reconnaissance Orbiter CRISM and Mars Express OMEGA spectrometers. In the laboratory, we have successfully synthesized 5 hydration states of Mg-sulfates posited to exist on Mars: kieserite, sanderite, starkeyite, hexahydrite, and epsomite. Epsomite and hexahydrite salts are expected to form on Mars when MgSO4/(SO4+Cl)-rich solutions are concentrated. Amorphous derivatives of epsomite and hexahydrite or lower-order Mg-sulfate hydrates (starkeyite, sanderite, kieserite) are believed to be stable on Mars. Kieserite in particular has been positively identified on Mars in several locations (e.g., Meridiani Planum, Eastern Candor & Capri Chasma, Iani and Aram Chaos) with several mission datasets. Therefore, we present VNIR reflectance spectra and optical constants on kieserite in both its low and high humidity polymorphs, and similar data on mixtures of our hydrated sulfates with each other and with different Mars simulants: JSC Mars-1 (glassy, altered volcanic ash that is a good spectral analog for Mars dust and Mars bright regions) and Mars Mojave Simulant (a crystalline material analogous to Mars rocks). These data will help to fully distinguish between and constrain the abundance and distribution of hydrated sulfates on the martian surface, which will lead to improvements in understanding the pressure, temperature, and humidity conditions and how active frost, groundwater, and atmospheric processes once were on Mars. This work is supported by NASA's Mars Fundamental Research Program (NNX10AP78G: PI Pitman) and partly performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration. CSJ acknowledges

  7. Instructor Special Report: RIF (Reading Is FUNdamental)

    ERIC Educational Resources Information Center

    Instructor, 1976

    1976-01-01

    At a time when innovative programs of the sixties are quickly falling out of the picture, Reading Is FUNdamental, after ten years and five million free paperbacks, continues to expand and show results. (Editor)

  8. Fundamental Insights into Combustion Instability Predictions in Aerospace Propulsion

    NASA Astrophysics Data System (ADS)

    Huang, Cheng

    in conjunction with a Galerkin procedure to reduce the governing partial differential equation to an ordinary differential equation, which constitutes the ROM. Once the ROM is established, it can then be used as a lower-order test-bed to predict detailed results within certain parametric ranges at a fraction of the cost of solving the full governing equations. A detailed assessment is performed on the method in two parts. In part one, a one-dimensional scalar reaction-advection model equation is used for fundamental investigations, which include verification of the POD eigen-basis calculation and of the ROM development procedure. Moreover, certain criteria during ROM development are established: 1. a necessary number of POD modes that should be included to guarantee a stable ROM; 2. the need for the numerical discretization scheme to be consistent between the original CFD and the developed ROM. Furthermore, the predictive capabilities of the resulting ROM are evaluated to test its limits and to validate the values of applying broadband forcing in improving the ROM performance. In part two, the exploration is extended to a vector system of equations. Using the one-dimensional Euler equation is used as a model equation. A numerical stability issue is identified during the ROM development, the cause of which is further studied and attributed to the normalization methods implemented to generate coupled POD eigen-bases for vector variables. (Abstract shortened by UMI.).
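
    A minimal sketch of the POD/Galerkin procedure described above, applied to a 1D linear advection equation as a stand-in for the scalar reaction-advection model (illustrative only; not the author's code or test case):

      import numpy as np

      # Full-order model: u_t + c u_x = 0 on a periodic domain,
      # marched with an upwind (backward) difference and forward Euler.
      nx, nt, c = 200, 400, 1.0
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx, dt = x[1] - x[0], 0.002

      # Backward-difference (upwind for c > 0) operator with periodic wrap.
      D = (np.eye(nx) - np.roll(np.eye(nx), -1, axis=1)) / dx

      # Collect snapshots of an advecting Gaussian pulse.
      u = np.exp(-200 * (x - 0.3) ** 2)
      snapshots = np.zeros((nx, nt))
      for k in range(nt):
          snapshots[:, k] = u
          u = u - dt * c * (D @ u)

      # POD basis from the SVD of the snapshot matrix; keep r modes.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      r = 20
      Phi = U[:, :r]

      # Galerkin projection: da/dt = -c Phi^T D Phi a, a(0) = Phi^T u(0),
      # integrated with the same forward-Euler march as the full model.
      A = -c * (Phi.T @ D @ Phi)
      a = Phi.T @ snapshots[:, 0]
      for k in range(nt):
          a = a + dt * (A @ a)

      u_rom = Phi @ a
      err = np.linalg.norm(u_rom - u) / np.linalg.norm(u)
      print(f"relative ROM error after {nt} steps with r={r} modes: {err:.3e}")

    Keeping too few modes, or using a discretization in the ROM that is inconsistent with the full-order march, are exactly the failure modes flagged in the criteria above.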

  9. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    ERIC Educational Resources Information Center

    Vargas, Francisco M.

    2014-01-01

    The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
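
    For reference, the Gibbs-Helmholtz equation, d(ΔG°/T)/dT = −ΔH°/T², integrates (for ΔH° approximately constant over the interval) to the familiar van't Hoff form used in the sketch below; all numerical inputs are assumed for illustration only and are not taken from the article.

      # Temperature dependence of an equilibrium constant via the integrated
      # Gibbs-Helmholtz (van't Hoff) relation, assuming constant ΔH° over the
      # interval; every numerical value below is an illustrative assumption.
      import math

      R = 8.314                      # gas constant, J mol^-1 K^-1
      dH = -50.0e3                   # standard reaction enthalpy, J mol^-1 (assumed)
      T1, K1 = 298.15, 1.0e3         # reference temperature and equilibrium constant (assumed)
      T2 = 350.0

      # ln(K2/K1) = -(ΔH°/R) * (1/T2 - 1/T1)
      K2 = K1 * math.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

      # The corresponding Gibbs energies follow from ΔG° = -RT ln K.
      dG1 = -R * T1 * math.log(K1)
      dG2 = -R * T2 * math.log(K2)
      print(f"K({T2:.1f} K) = {K2:.3e}")
      print(f"dG: {dG1 / 1000:.1f} kJ/mol at {T1:.2f} K -> {dG2 / 1000:.1f} kJ/mol at {T2:.1f} K")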

  10. Precision spectroscopy of 2S-nP transitions in atomic hydrogen for a new determination of the Rydberg constant and the proton charge radius

    NASA Astrophysics Data System (ADS)

    Beyer, Axel; Maisenbacher, Lothar; Khabarova, Ksenia; Matveev, Arthur; Pohl, Randolf; Udem, Thomas; Hänsch, Theodor W.; Kolachevsky, Nikolai

    2015-10-01

    Precision measurements of transition frequencies in atomic hydrogen provide important input for a number of fundamental applications, such as stringent tests of QED and the extraction of fundamental constants. Here we report on precision spectroscopy of the 2S-4P transition in atomic hydrogen with a reproducibility of a few parts in 10¹². Utilizing a cryogenic beam of hydrogen atoms in the metastable 2S state reduces leading order systematic effects of previous experiments of this kind. A number of different systematic effects, especially line shape modifications due to quantum interference in spontaneous emission, are currently under investigation. Once fully characterized, our measurement procedure can be applied to higher lying 2S-nP transitions (n=6,8,9,10) and we hope to contribute to an improved determination of the Rydberg constant and the proton root mean square charge radius by this series of experiments. Ultimately, this improved determination will give deeper insight into ‘the proton size puzzle’ from the electronic hydrogen side.

  11. Fundamental theories of waves and particles formulated without classical mass

    SciTech Connect

    Fry, J.L.; Musielak, Z.E.

    2010-12-15

    Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they do use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, physical consequences of using the classical mass by both theories are explored, and a novel approach that allows formulating fundamental (Galilean invariant) theories of waves and particles without formally introducing the classical mass is presented. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy of performing calculations can be attained by such theory. Natural units in connection with the presented approach are also discussed and justification beyond dimensional analysis is given for the particular choice of such units.

  12. Fundamental elements of vector enhanced multivariance product representation

    NASA Astrophysics Data System (ADS)

    Kalay, Berfin; Demiralp, Metin

    2012-09-01

    A new version of High Dimensional Model Representation (HDMR) is presented in this work. Vector HDMR has been quite recently developed to deal with the decomposition of vector valued multivariate functions. It was an extension from scalars to vectors by possibly using matrix weights. However, that expansion is based on an ascending multivariance starting from a constant term via a set of appropriately imposed conditions which can be related to orthogonality in a conveniently chosen Hilbert space. This work adds more flexibility by introducing certain matrix valued univariate support functions. We assume weight matrices proportional to unit matrices. This work covers only the basic issues related to the fundamental elements of the new approach.
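
    For orientation, the "ascending multivariance" referred to above is the structure of the standard HDMR expansion; a sketch of that generic form (the vector-valued version in the paper adds matrix weights and univariate support functions on top of it) is

      f(x_1,\dots,x_N) \;=\; f_0 \;+\; \sum_{i} f_i(x_i) \;+\; \sum_{i<j} f_{ij}(x_i,x_j) \;+\; \cdots \;+\; f_{1\cdots N}(x_1,\dots,x_N),

    where each component is made unique by vanishing-mean (orthogonality) conditions under the chosen weight, so the constant term is the weighted average of f and the higher terms add multivariance one level at a time.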

  13. Challenging fundamental limits in the fabrication of vector vortex waveplates

    NASA Astrophysics Data System (ADS)

    Hakobyan, R. S.; Tabiryan, N. V.; Serabyn, E.

    Vector vortex waveplates (VVWs) are at the heart of vortex coronagraphs aimed at exoplanet detection close to bright stars. VVWs made of liquid crystal polymers (LCPs) provide structural continuity, the opportunity of high-order singularities, large area, and inexpensive manufacturing technology. However, to date, the performance of such devices is compromised by imperfections in the singularity area that allow some residual starlight leakage. Reducing the singularity to subwavelength sizes increases the energy of elastic deformations of the LC. As a result, the azimuthally symmetric orientation pattern gives way to 3D deformations that reduce the elastic energy of the LC. The stability of radial orientation is determined by the elastic constants of the LC, the thickness of the layer and the boundary conditions. In the current paper, we examine the role of those factors to determine the fundamental limits the singularity area could be reduced to for LCP VVWs.

  14. Direct expressions for magnetic anisotropy constants

    NASA Astrophysics Data System (ADS)

    Miura, Daisuke; Sasaki, Ryo; Sakuma, Akimasa

    2015-11-01

    Direct expressions for the magnetic anisotropy constants are given at a finite temperature from a microscopic viewpoint. The present derivation assumes that the Hamiltonian is a linear function with respect to the magnetization direction. We discuss in detail the first-order anisotropy constant K1 and show that our present results reproduce previous results. We applied our method to Nd2Fe14B compounds and confirmed that the present method can reproduce the temperature dependence of the magnetocrystalline anisotropy constants K1, K2, and K3 well.
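
    For reference, these constants are conventionally defined through the phenomenological expansion of the anisotropy energy density of a uniaxial magnet (this is the standard form in which K1, K2, and K3 are usually quoted, not the microscopic expressions derived in the paper),

      E_a(\theta) \;\simeq\; K_1 \sin^2\theta \;+\; K_2 \sin^4\theta \;+\; K_3 \sin^6\theta ,

    where θ is the angle between the magnetization and the easy axis; the temperature dependence of these coefficients is what the microscopic expressions are designed to reproduce.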

  15. Latest rocket measurements of the solar constant

    NASA Technical Reports Server (NTRS)

    Duncan, C. H.; Willson, R. C.; Kendall, J. M.; Harrison, R. G.; Hickey, J. R.

    1982-01-01

    Three rocket flights which carried a payload of absolute radiometers to measure the solar constant with an accuracy of plus or minus 0.5 per cent have been accomplished. Several of the rocket radiometers were duplicates of those aboard the Solar Maximum Mission and Nimbus spacecraft. The values for the solar constant obtained by the rocket sensors for the three flight dates indicate an increase between the first and latter two flights approximately equivalent to the uncertainty of the measurements. The values for the solar constant for the three flights are 1367, 1372 and 1374 W/sq m.

  16. Emergent Physics on Vacuum Energy and Cosmological Constant

    SciTech Connect

    Volovik, G. E.

    2006-09-07

    The phenomenon of emergent physics in condensed-matter many-body systems has become the paradigm of modern physics, and can probably also be applied to high-energy physics and cosmology. This encouraging fact comes from the universal properties of the ground state (the analog of the quantum vacuum) in fermionic many-body systems, described in terms of the momentum-space topology. In one of the two generic universality classes of fermionic quantum vacua the gauge fields, chiral fermions, Lorentz invariance, gravity, relativistic spin, and other features of the Standard Model gradually emerge at low energy. The condensed-matter experience provides us with some criteria for selecting the proper theories in particle physics and gravity, and even suggests specific solutions to different fundamental problems. In particular, it provides us with a plausible mechanism for the solution of the cosmological constant problem, which I will discuss in some detail.

  17. Fundamental experiments on hydride reorientation in zircaloy

    NASA Astrophysics Data System (ADS)

    Colas, Kimberly B.

    In the current study, an in-situ X-ray diffraction technique using synchrotron radiation was used to follow directly the kinetics of hydride dissolution and precipitation during thermomechanical cycles. This technique was combined with conventional microscopy (optical, SEM and TEM) to gain an overall understanding of the process of hydride reorientation. Thus this part of the study emphasized the time-dependent nature of the process, studying a large volume of hydrides in the material. In addition, a micro-diffraction technique was also used to study the spatial distribution of hydrides near stress concentrations. This part of the study emphasized the spatial variation of hydride characteristics such as strain and morphology. Hydrided samples in the shape of tensile dog-bones were used in the time-dependent part of the study. Compact tension specimens were used during the spatial dependence part of the study. The hydride elastic strains from peak shift and size and strain broadening were studied as a function of time for precipitating hydrides. The hydrides precipitate in a very compressed state of stress, as measured by the shift in lattice spacing. As precipitation proceeds the average shift decreases, indicating average stress is reduced, likely due to plastic deformation and morphology changes. When nucleation ends the hydrides follow the zirconium matrix thermal contraction. When stress is applied below the threshold stress for reorientation, hydrides first nucleate in a very compressed state similar to that of unstressed hydrides. After reducing the average strain similarly to unstressed hydrides, the average hydride strain reaches a constant value during cool-down to room temperature. This could be due to a greater ease of deforming the matrix under the applied far-field strain, which would compensate for the strains due to thermal contraction. Finally when hydrides reorient, the average hydride strains become tensile during the first precipitation regime and

  18. Marshak waves: Constant flux vs constant T-a (slight) paradigm shift

    SciTech Connect

    Rosen, M.D.

    1994-12-22

    We review the basic scaling laws for Marshak waves and point out the differences in results for wall loss, albedo, and Marshak depth when a constant absorbed flux is considered as opposed to a constant absorbed temperature. Comparisons with LASNEX simulations and with data are presented that imply that a constant absorbed flux is a more appropriate boundary condition.

  19. Chemical shifts and coupling constants of C8H10N4O2

    NASA Astrophysics Data System (ADS)

    Jain, M.

    This document is part of Subvolume D3 'Chemical Shifts and Coupling Constants for Carbon-13: Heterocycles' of Volume 35 'Nuclear Magnetic Resonance (NMR) Data' of Landolt-Börnstein Group III 'Condensed Matter'.

  20. Faculty beliefs on fundamental dimensions of scholarship

    NASA Astrophysics Data System (ADS)

    Finnegan, Brian

    scholarship, the policies, activities, and rewards of institutions must reflect a similar belief on the part of faculty. By understanding faculty beliefs on the fundamental dimensions of scholarship, an important step in building this new culture can be taken.

  1. The Cosmological Constant in Quantum Cosmology

    SciTech Connect

    Wu Zhongchao

    2008-10-10

    Hawking proposed that the cosmological constant is probably zero in quantum cosmology in 1984. By using the right configuration for the wave function of the universe, a complete proof is found very recently.

  2. The Solar Constant: A Take Home Lab

    ERIC Educational Resources Information Center

    Eaton, B. G.; And Others

    1977-01-01

    Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)
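
    The record does not reproduce the working equation, but the lab reduces to an energy balance: the solar power absorbed by the blackened disc during the exposure equals the latent heat needed to melt the measured mass of ice. A minimal sketch, in which every numerical input (disc size, melted mass, exposure time, absorptance, atmospheric correction) is an assumption for illustration:

      # Estimate of the solar constant from an ice-melting disc experiment.
      # All inputs are hypothetical values, not data from the article.
      import math

      L_f = 334.0e3            # latent heat of fusion of ice, J/kg
      m_melted = 3.0e-3        # mass of ice melted, kg (e.g., 3.0 mL read from the syringe)
      t_exposure = 600.0       # exposure time, s
      d_disc = 0.05            # diameter of the blackened aluminum disc, m
      absorptance = 0.95       # assumed absorptance of the blackened disc
      atm_transmission = 0.75  # assumed clear-sky atmospheric transmission

      area = math.pi * (d_disc / 2.0) ** 2
      flux_at_ground = m_melted * L_f / (absorptance * area * t_exposure)
      solar_constant = flux_at_ground / atm_transmission   # crude extrapolation above the atmosphere

      print(f"ground-level flux ~ {flux_at_ground:.0f} W/m^2")
      print(f"estimated solar constant ~ {solar_constant:.0f} W/m^2")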

  3. How the cosmological constant affects gravastar formation

    SciTech Connect

    Chan, R.; Silva, M.F.A. da; Rocha, P. E-mail: mfasnic@gmail.com

    2009-12-01

    Here we generalized a previous model of a gravastar consisting of an internal de Sitter spacetime and a dynamical infinitely thin shell with an equation of state, but now we consider an external de Sitter-Schwarzschild spacetime. We have shown explicitly that the final output can be a black hole, a "bounded excursion" stable gravastar, a stable gravastar, or a de Sitter spacetime, depending on the total mass of the system, the cosmological constants, the equation of state of the thin shell and the initial position of the dynamical shell. We have found that the exterior cosmological constant imposes a limit on gravastar formation, i.e., the exterior cosmological constant must be smaller than the interior cosmological constant. Besides, we have also shown that, in the particular case where the Schwarzschild mass vanishes, no stable gravastar can be formed, but we still have formation of a black hole.

  4. Constant-amplitude, frequency- independent phase shifter

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.

    1971-01-01

    An electronic circuit using operational amplifiers provides an output of constant amplitude and constant phase shift, with respect to a sinusoidal input, over a wide range of frequencies. The circuit includes a field-effect transistor, Q, operational amplifiers, A1 and A2, and a phase detector.

  5. The Rate Constant for Fluorescence Quenching

    ERIC Educational Resources Information Center

    Legenza, Michael W.; Marzzacco, Charles J.

    1977-01-01

    Describes an experiment that utilizes fluorescence intensity measurements from a Spectronic 20 to determine the rate constant for the fluorescence quenching of various aromatic hydrocarbons by carbon tetrachloride in an ethanol solvent. (MLH)
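
    The record does not spell out the data analysis; a common treatment of such intensity data (assumed here for illustration, not necessarily the authors' exact procedure) is a Stern-Volmer plot, I0/I = 1 + kq·τ0·[Q], whose slope, combined with the unquenched fluorescence lifetime, yields the quenching rate constant:

      # Hypothetical Stern-Volmer analysis of fluorescence-quenching data.
      # The concentrations, intensities, and lifetime below are assumed values.
      import numpy as np

      conc = np.array([0.0, 0.05, 0.10, 0.15, 0.20])          # [CCl4] in mol/L (assumed)
      intensity = np.array([100.0, 71.0, 55.0, 45.0, 38.0])   # relative fluorescence intensity (assumed)

      ratio = intensity[0] / intensity                         # I0 / I
      K_sv, intercept = np.polyfit(conc, ratio, 1)             # slope = Stern-Volmer constant, L/mol

      tau0 = 30e-9                                             # unquenched lifetime, s (assumed)
      k_q = K_sv / tau0                                        # quenching rate constant, L mol^-1 s^-1
      print(f"K_SV = {K_sv:.2f} L/mol, k_q = {k_q:.2e} L mol^-1 s^-1")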

  6. Dielectric constant of water in the interface

    NASA Astrophysics Data System (ADS)

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V.

    2016-07-01

    We define the dielectric constant (susceptibility) that should enter the Maxwell boundary value problem when applied to microscopic dielectric interfaces polarized by external fields. The dielectric constant (susceptibility) of the interface is defined by exact linear-response equations involving correlations of statistically fluctuating interface polarization and the Coulomb interaction energy of external charges with the dielectric. The theory is applied to the interface between water and spherical solutes of altering size studied by molecular dynamics (MD) simulations. The effective dielectric constant of interfacial water is found to be significantly lower than its bulk value, and it also depends on the solute size. For TIP3P water used in MD simulations, the interface dielectric constant changes from 9 to 4 when the solute radius is increased from ˜5 to 18 Å.

  7. Dielectric constant of water in the interface.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2016-07-01

    We define the dielectric constant (susceptibility) that should enter the Maxwell boundary value problem when applied to microscopic dielectric interfaces polarized by external fields. The dielectric constant (susceptibility) of the interface is defined by exact linear-response equations involving correlations of statistically fluctuating interface polarization and the Coulomb interaction energy of external charges with the dielectric. The theory is applied to the interface between water and spherical solutes of altering size studied by molecular dynamics (MD) simulations. The effective dielectric constant of interfacial water is found to be significantly lower than its bulk value, and it also depends on the solute size. For TIP3P water used in MD simulations, the interface dielectric constant changes from 9 to 4 when the solute radius is increased from ∼5 to 18 Å. PMID:27394114

  8. Measurements of the gravitational constant - why we need new ideas

    NASA Astrophysics Data System (ADS)

    Schlamminger, Stephan

    2016-03-01

    In this presentation, I will summarize measurements of the Newtonian constant of gravitation, big G, that have been carried out in the last 30 years. I will describe key techniques that were used by researchers around the world to determine G. Unfortunately, the data set is inconsistent with itself under the assumption that the gravitational constant does not vary in space or time, an assumption that has been tested by other experiments. Currently, several research groups have reported measurements with relative uncertainties below 2×10⁻⁵; however, the relative difference between the smallest and largest reported numbers exceeds 5×10⁻⁴. It is embarrassing that after over 200 years of measuring the gravitational constant, we do not have a better understanding of the numerical value of this constant. Clearly, we need new ideas to tackle this problem and now is the time to come forward with new ideas. The National Science Foundation is currently soliciting proposals for an Ideas Lab on measuring big G. In the second part of the presentation, I will introduce the Ideas Lab on big G and I am hoping to motivate the audience to think about new ideas to measure G and encourage them to apply to participate in the Ideas Lab.

  9. Holographic dark energy with varying gravitational constant

    NASA Astrophysics Data System (ADS)

    Jamil, Mubasher; Saridakis, Emmanuel N.; Setare, M. R.

    2009-08-01

    We investigate the holographic dark energy scenario with a varying gravitational constant, in flat and non-flat background geometry. We extract the exact differential equations determining the evolution of the dark energy density-parameter, which include G-variation correction terms. Performing a low-redshift expansion of the dark energy equation of state, we provide the involved parameters as functions of the current density parameters, of the holographic dark energy constant and of the G-variation.

  10. Simple constant-current-regulated power supply

    NASA Technical Reports Server (NTRS)

    Priebe, D. H. E.; Sturman, J. C.

    1977-01-01

    Supply incorporates soft-start circuit that slowly ramps current up to set point at turn-on. Supply consists of full-wave rectifier, regulating pass transistor, current feedback circuit, and quad single-supply operational-amplifier circuit providing control. Technique is applicable to any system requiring constant dc current, such as vacuum tube equipment, heaters, or battery chargers; it has been used to supply constant current for instrument calibration.

  11. A model for solar constant secular changes

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.

    1988-01-01

    In this paper, contrast models for solar active region and global photospheric features are used to reproduce the observed Active Cavity Radiometer and Earth Radiation Budget secular trends in reasonably good fashion. A prediction for the next decade of solar constant variations is made using the model. Secular trends in the solar constant obtained from the present model support the view that the Maunder Minimum may be related to the Little Ice Age of the 17th century.

  12. Optical constants of concentrated aqueous ammonium sulfate.

    NASA Technical Reports Server (NTRS)

    Remsberg, E. E.

    1973-01-01

    Using experimental data obtained from applying spectroscopy to a 39-wt-% aqueous ammonium sulfate solution, it is shown that, even though specific aerosol optical constants appear quite accurate, spectral variations may exist as functions of material composition or concentration or both. Prudent users of optical constant data must then include liberal data error estimates when performing calculations or in interpreting spectroscopic surveys of collected aerosol material.

  13. Divergences and involution-dependent constants

    SciTech Connect

    Nagao, G.

    1989-01-01

    The authors show that the cancellation of the dilaton divergence in the 1-loop open bosonic string vacuum and N-tachyon scattering amplitudes depends upon a set of involution-dependent constants. Such a set of constants exists at each loop level and thus provides a means with which to study the connection between the cancellation of divergences and anomalies for the gauge group SO(2^(D/2)).

  14. Controlled Crystallinity and Fundamental Coupling Interactions in Nanocrystals

    NASA Astrophysics Data System (ADS)

    Ouyang, Min

    2009-03-01

    Metal and semiconductor nanocrystals show many unusual properties and functionalities, and can serve as a model system to explore fundamental quantum and classical coupling interactions as well as building blocks of many practical applications. However, because of their small size, these nanoparticles typically exhibit different crystalline properties as compared with their bulk counterparts, and controlling crystallinity (and structural defects) within nanoparticles has posed significant technical challenges. In this talk, I will firstly apply silver metal nanoparticles as an example and present a novel chemical synthetic technique to achieve unprecedented crystallinity control at the nanoscale. This engineering of nanocrystallinity enables manipulation of intrinsic chemical functionalities, physical properties as well as nano-device performance [1]. For example, I will highlight that the electron-phonon coupling constant can be significantly reduced by about four times and the elastic modulus is increased ˜40% in perfect single crystalline silver nanoparticles as compared with those in disordered twinned nanoparticles. One important application of metal nanoparticles is nanoscale sensors. I will thus demonstrate that the performance of nanoparticle-based molecular sensing devices can be optimized with a three-fold improvement in figure-of-merit if perfect single crystalline nanoparticles are applied. Lastly, I will present our related studies on semiconductor nanocrystals as well as their hybrid heterostructures. These discussions should offer important implications for our understanding of the fundamental properties at the nanoscale and potential applications of metal nanoparticles. [1] Yun Tang and Min Ouyang, Nature Materials, 6, 754, 2007.

  15. Traffic dynamics: Its impact on the Macroscopic Fundamental Diagram

    NASA Astrophysics Data System (ADS)

    Knoop, Victor L.; van Lint, Hans; Hoogendoorn, Serge P.

    2015-11-01

    Literature shows that, under specific conditions, the Macroscopic Fundamental Diagram (MFD) describes a crisp relationship between the average flow (production) and the average density in an entire network. The limiting condition is that traffic conditions must be homogeneous over the whole network. Recent works describe hysteresis effects: systematic deviations from the MFD as a result of loading and unloading. This article proposes a two-dimensional generalization of the MFD, the so-called Generalized Macroscopic Fundamental Diagram (GMFD), which relates the average flow to both the average density and the (spatial) inhomogeneity of density. The most important contribution is that we show this is a continuous function, of which the MFD is a projection. Using the GMFD, we can describe the mentioned hysteresis patterns in the MFD. The underlying traffic phenomenon explaining the two-dimensional surface described by the GMFD is that congestion concentrates (and subsequently spreads out) around the bottlenecks that oversaturate first. We call this the nucleation effect. Due to this effect, the network flow is not constant for a fixed number of vehicles as predicted by the MFD, but decreases due to local queueing and spillback processes around the congestion "nuclei". During this build-up of congestion, the production hence decreases, which gives the hysteresis effects.

  16. Effect of speed matching on fundamental diagram of pedestrian flow

    NASA Astrophysics Data System (ADS)

    Fu, Zhijian; Luo, Lin; Yang, Yue; Zhuang, Yifan; Zhang, Peitong; Yang, Lizhong; Yang, Hongtai; Ma, Jian; Zhu, Kongjin; Li, Yanlai

    2016-09-01

    Properties of pedestrians may change along their moving path, for example as a result of fatigue or injury, which has never been properly investigated in past research. The paper attempts to study the speed matching effect (a pedestrian adjusts his velocity constantly to the average velocity of his neighbors) and its influence on the density-velocity relationship (a pedestrian adjusts his velocity to the surrounding density), known as the fundamental diagram of the pedestrian flow. By means of a cellular automaton, the simulation results fit well with the empirical data, indicating the great advance of the discrete model for pedestrian dynamics. The results suggest that the system velocity and flow rate increase noticeably under a big noise, i.e., a diverse composition of the pedestrian crowd, especially in the region of middle or high density. Because of its temporary effect, speed matching has little influence on the fundamental diagram. Over the entire density range, the relationship between the step length and the average pedestrian velocity is a piecewise function combining two linear functions. The number of conflicts reaches its maximum at a pedestrian density of 2.5 m⁻², while it decreases by 5.1% with speed matching.

  17. Effective optical constants of anisotropic materials

    NASA Technical Reports Server (NTRS)

    Aronson, J. R.; Emslie, A. G.

    1980-01-01

    The applicability of a technique for determining the optical constants of soil or aerosol components on the basis of measurements of the reflectance or transmittance of inhomogeneous samples of component material is investigated. Optical constants for a sample of very pure quartzite were obtained by a specular reflection technique and line parameters were calculated by classical dispersion theory. Predictions of the reflectance of powdered quartz were then derived from optical constants measured for the anisotropic quartz and for pure quartz crystals, and compared with experimental measurements. The calculated spectra are found to resemble each other moderately well in shape; however, the reflectance level calculated from the pseudo-optical constants (quartzite) is consistently below that calculated from the quartz values. The spectrum calculated from the quartz optical constants is also shown to represent the experimental non-reststrahlen features more accurately. It is thus concluded that although optical constants derived from inhomogeneous materials may represent the spectral features of a powdered sample qualitatively, a quantitative fit to observed data is not likely.

  18. RNA structure and scalar coupling constants

    SciTech Connect

    Tinoco, I. Jr.; Cai, Z.; Hines, J.V.; Landry, S.M.; SantaLucia, J. Jr.; Shen, L.X.; Varani, G.

    1994-12-01

    Signs and magnitudes of scalar coupling constants (spin-spin splittings) comprise a very large amount of data that can be used to establish the conformations of RNA molecules. Proton-proton and proton-phosphorus splittings have been used the most, but the availability of ¹³C- and ¹⁵N-labeled molecules allows many more coupling constants to be used for determining conformation. We will systematically consider the torsion angles that characterize a nucleotide unit and the coupling constants that depend on the values of these torsion angles. Karplus-type equations have been established relating many three-bond coupling constants to torsion angles. However, one- and two-bond coupling constants can also depend on conformation. Serianni and coworkers measured carbon-proton coupling constants in ribonucleosides and have calculated their values as a function of conformation. The signs of two-bond couplings can be very useful because it is easier to measure a sign than an accurate magnitude.

  19. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll defined by φ̈/(Hφ̇) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.
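
    In the notation above, the constant-roll condition and its limiting cases read (the labels attached to the limits are standard usage, added here for orientation rather than taken from the paper):

      \frac{\ddot\phi}{H\dot\phi} \;=\; \beta \;=\; \text{const}, \qquad
      \beta \to 0 \ \text{(slow roll)}, \qquad
      \beta = -3 \ \text{(ultra-slow roll, flat potential: } \ddot\phi + 3H\dot\phi = 0\text{)}.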

  20. A precise calculation of the nondimentional factor of the Wien constant

    NASA Astrophysics Data System (ADS)

    Geru, Ion I.; Geru, Vitalie I.

    It has been established that the precision of the calculation of the Wien fundamental constant b = hc/(kX1) (h is the Planck constant, c the speed of light, k the Boltzmann constant, and X1 the solution of the equation Xe^X - 5(e^X - 1) = 0) is limited by the errors of the measurements of the constants k, h and c. The numerical values of X1, obtained here by two different methods of calculation (the method of iterations and the method of division of the interval), give the same result. Using an IBM PC/AT it has been established that by the 10th iteration X1 becomes constant, with the numerical value X1 = 4.9651142317442763, determined with a precision about 10⁸ times greater than that of the speed of light.
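
    Both procedures mentioned above (iteration and division of the interval) are easy to reproduce; a minimal sketch, which recovers X1 = 4.965114231744276... to machine precision:

      # Two ways to compute X1, the nonzero root of x*exp(x) - 5*(exp(x) - 1) = 0,
      # which fixes the Wien displacement constant b = h*c/(k*X1).
      import math

      def f(x):
          return x * math.exp(x) - 5.0 * (math.exp(x) - 1.0)

      # Method 1: fixed-point iteration x <- 5*(1 - exp(-x)), a rearrangement of f(x) = 0.
      x = 5.0
      for _ in range(50):
          x = 5.0 * (1.0 - math.exp(-x))
      x_iter = x

      # Method 2: bisection ("division of the interval") on [4, 5], where f changes sign.
      lo, hi = 4.0, 5.0
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          if f(lo) * f(mid) <= 0.0:
              hi = mid
          else:
              lo = mid
      x_bis = 0.5 * (lo + hi)

      print(f"fixed-point iteration: {x_iter:.16f}")
      print(f"interval bisection:    {x_bis:.16f}")

    With CODATA values of h, c and k, the resulting Wien constant is b = hc/(kX1) ≈ 2.898×10⁻³ m·K.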

  1. A Postulation of a Concept in Fundamental Physics

    NASA Astrophysics Data System (ADS)

    Goradia, Shantilal

    2006-10-01

    I am postulating that all fermions have a quantum mouth (Planck size) that radiates a flux density of gravitons as a function of the mass of the particle. Nucleons are not hard balls like light bulbs radiating photons challenging Newtonian concepts of centers and surfaces. The hardball analogy is implicit in coupling constants that compare the strong force relative to gravity. The radiating mouth is not localized at the center like a hypothetical point-size filament of a light bulb with a hard surface. A point invokes mass of zero volume. It is too precise, inconsistent and illogical. Nothing can be localized with more accuracy than the Planck length. Substituting the hard glass bulb surface with a flexible plastic surface would clearly make the interacting mouths of particles approach each other as close as possible, but no less than the quantum limit of the Planck length. Therefore, surface distance in Newtonian gravity would be a close approximation at the particle scale and fits Feynman's road map [1]. My postulation reflected by Fig. 2 of gr-qc/0507130 explains observations of increasing values of coupling constants resulting from decreasing values of Planck length (See physics/0210040 v1). Since Planck length is the fundamental unit of length of nature, its variation can impact our observation of the universe and the evolutionary process.

  2. Sensors, Volume 1, Fundamentals and General Aspects

    NASA Astrophysics Data System (ADS)

    Grandke, Thomas; Ko, Wen H.

    1996-12-01

    'Sensors' is the first self-contained series to deal with the whole area of sensors. It describes general aspects, technical and physical fundamentals, construction, function, applications and developments of the various types of sensors. This volume deals with the fundamentals and common principles of sensors and covers the wide areas of principles, technologies, signal processing, and applications. Contents include: Sensor Fundamentals, e.g. Sensor Parameters, Modeling, Design and Packaging; Basic Sensor Technologies, e.g. Thin and Thick Films, Integrated Magnetic Sensors, Optical Fibres and Integrated Optics, Ceramics and Oxides; Sensor Interfaces, e.g. Signal Processing, Multisensor Signal Processing, Smart Sensors, Interface Systems; Sensor Applications, e.g. Automotive: On-board Sensors, Traffic Surveillance and Control, Home Appliances, Environmental Monitoring, etc. This volume is an indispensable reference work and textbook for both specialists and newcomers, researchers and developers.

  3. Fundamental understanding of matter: an engineering viewpoint

    SciTech Connect

    Cullingford, H.S.; Cort, G.E.

    1980-01-01

    Fundamental understanding of matter is a continuous process that should produce physical data for use by engineers and scientists in their work. Lack of fundamental property data in any engineering endeavor cannot be mitigated by theoretical work that is not confirmed by physical experiments. An engineering viewpoint will be presented to justify the need for understanding of matter. Examples will be given in the energy engineering field to outline the importance of further understanding of material and fluid properties and behavior. Cases will be cited to show the effects of various data bases in energy, mass, and momentum transfer. The status of fundamental data sources will be discussed in terms of data centers, new areas of engineering, and the progress in measurement techniques. Conclusions and recommendations will be outlined to improve the current situation faced by engineers in carrying out their work. 4 figures.

  4. The Fundamental Neutron Physics Facilities at NIST

    PubMed Central

    Nico, J. S.; Arif, M.; Dewey, M. S.; Gentile, T. R.; Gilliam, D. M.; Huffman, P. R.; Jacobson, D. L.; Thompson, A. K.

    2005-01-01

    The program in fundamental neutron physics at the National Institute of Standards and Technology (NIST) began nearly two decades ago. The Neutron Interactions and Dosimetry Group currently maintains four neutron beam lines dedicated to studies of fundamental neutron interactions. The neutrons are provided by the NIST Center for Neutron Research, a national user facility for studies that include condensed matter physics, materials science, nuclear chemistry, and biological science. The beam lines for fundamental physics experiments include a high-intensity polychromatic beam, a 0.496 nm monochromatic beam, a 0.89 nm monochromatic beam, and a neutron interferometer and optics facility. This paper discusses some of the parameters of the beam lines along with brief presentations of some of the experiments performed at the facilities. PMID:27308110

  5. Fundamental Vocabulary Selection Based on Word Familiarity

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki

    This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile sufficiently balanced corpora. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.

  6. Variations of a Constant -- On the History of Precession

    NASA Astrophysics Data System (ADS)

    Kokott, W.

    The precession of the equinoxes, the phenomenon which defines one of the fundamental constants of astronomy, has been with us for more than two millennia. Discovered by Hipparchos who did notice a systematic difference of his star positions as compared with older observations, subsequently adopted by Ptolemaios, its correct value became the object of prolonged controversy. The apparent variability of the precession led to the superimposition of a so-called "trepidation", an oscillation of typically +/- 9 deg amplitude and 7000 years period, over a linear precession of only 26 arcsec per annum. This construction, finalized in the Alfonsine Tables (ca. 1280), did work for less than two centuries. The motion of the vernal equinox, at 39 arcsec p.a. too small from the outset, decreases according to this theory to 34 arcsec in the year 1475, the first year covered by the printed version of Johannes Regiomontanus' Ephemerides. Regiomontanus had to re-adjust his longitudes to the real situation, but the difficulties caused by the apparent nonlinearity did persist, leading to a prolonged debate which was finally put to rest by Tycho Brahe. Subsequent to Edmond Halley's successful derivation of a modern value of the precessional constant, again by comparing contemporary star positions with the Almagest catalogue, and Bradley's discovery of the nutation, the last long-term comparison of modern with Ptolemaic coordinates was published by Bode (1795). Shortly after, the analytical theory of precession was established by Bessel in his Fundamenta Astronomiae (1818).

  7. The efficiency of combustion turbines with constant-pressure combustion

    NASA Technical Reports Server (NTRS)

    Piening, Werner

    1941-01-01

    Of the two fundamental cycles employed in combustion turbines, namely, the explosion (or constant-volume) cycle and the constant-pressure cycle, the latter is considered more in detail and its efficiency is derived with the aid of the cycle diagrams for the several cases with adiabatic and isothermal compression and expansion strokes and with and without utilization of the exhaust heat. Account is also taken of the separate efficiencies of the turbine and compressor and of the pressure losses and heat transfer in the piping. The results show that, without utilization of the exhaust heat, the efficiencies for the two cases of adiabatic and isothermal compression differ little, since the reduction in compression work gained with isothermal compression is offset by the increase in the heat supplied. It may be seen from the curves that it is necessary to attain separate efficiencies of at least 80 percent in order for useful results to be obtained. The considerable effect of pressure losses in piping or heat exchangers on the efficiency is also shown.

  8. Experimental rovibrational constants and equilibrium structure of phosphorus trifluoride

    NASA Astrophysics Data System (ADS)

    Najib, Hamid

    2014-11-01

    Thanks to recent high-resolution Fourier transform infrared (FTIR) and pure rotational (RF/CM/MMW) measurements, several experimental values of the rotation-vibration parameters of the oblate molecule PF3 have been extracted, thus contributing to the knowledge of the molecular potential of phosphorus trifluoride. The data used are those of the fundamental, overtone and combination bands studied in the 300-1500 cm⁻¹ range. The new values are in good agreement with those determined at low resolution, but are significantly more accurate. The agreement is excellent with the available values determined by ab initio HF-SCF calculations employing the TZP/TZ2P triple-zeta basis. From the recent experimental rovibrational interaction constants αC and αB, new accurate equilibrium rotational constants Ce and Be have been derived for the symmetric top molecule PF3, which were used to derive the equilibrium geometry of this molecule: re(F-P) = 1.560986 (43) Å; θe(FPF) = 97.566657 (64)°.

  9. Vibrational levels and anharmonicity in SF 6—II. Anharmonic and potential constants

    NASA Astrophysics Data System (ADS)

    McDowell, Robin S.; Krohn, Burton J.

    Expressions for the vibrational energy levels of spherical-top molecules are reviewed. We show that even without a full analysis of the anharmonic splitting of higher vibrational states, the positions of many observed transitions can be adequately described with "effective" anharmonicity constants that combine some of the (unknown) splitting terms together with the "true" anharmonicity constants Xij. A notation is developed that distinguishes between the effective constants obtained from binary (X′ij) and various types of ternary (X″ij, …) combinations. On this basis we analyze the data set of the previous paper on the vibrational levels of SF6, based largely on FTIR spectra obtained at a resolution of 0.05 cm⁻¹. Many of the anharmonicity constants determine the frequencies of several bands, and can be fitted to the data by least-squares regression with standard deviations of 0.001-0.05 cm⁻¹. Of the 21 constants Xij, we obtain values for six, plus 12 X′ij's obtained from binary combinations and one (X‴22) from ternary combinations. Of the two constants undetermined in this paper, a precise value of X33 is available from high-resolution spectroscopy of 3ν3, and only X35 remains unknown. The fit also yields band origins for the i.r.-inactive bending fundamentals ν5 and ν6. Using these constants, we estimate the harmonic fundamental frequencies ωi. The general quadratic force field of SF6 has been redetermined, using as constraints in the F1u block the central-atom isotopic frequency shifts and the Coriolis constants ζ3 and ζ4.
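
    For reference, the ωi and Xij discussed above enter the conventional anharmonic term-value expansion for a molecule with degenerate modes (mode degeneracies di; the vibrational angular momentum and tensor splitting terms that the "effective" constants absorb are omitted here):

      G(v_1,\dots,v_6) \;=\; \sum_i \omega_i\left(v_i + \tfrac{d_i}{2}\right)
        \;+\; \sum_{i \le j} X_{ij}\left(v_i + \tfrac{d_i}{2}\right)\left(v_j + \tfrac{d_j}{2}\right) \;+\; \cdots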

  10. DOE Fundamentals Handbook: Electrical Science, Volume 4

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  11. DOE Fundamentals Handbook: Mathematics, Volume 2

    SciTech Connect

    Not Available

    1992-06-01

    The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

  12. Fundamental monogamy relation between contextuality and nonlocality.

    PubMed

    Kurzyński, Paweł; Cabello, Adán; Kaszlikowski, Dagomir

    2014-03-14

    We show that the no-disturbance principle imposes a tradeoff between locally contextual correlations violating the Klyachko-Can-Binicioǧlu-Shumovski inequality and spatially separated correlations violating the Clauser-Horne-Shimony-Holt inequality. The violation of one inequality forbids the violation of the other. We also obtain the corresponding monogamy relation imposed by quantum theory for a qutrit-qubit system. Our results show the existence of fundamental monogamy relations between contextuality and nonlocality that suggest that entanglement might be a particular form of a more fundamental resource. PMID:24679270

  13. Fundamental principals of battery design: Porous electrodes

    NASA Astrophysics Data System (ADS)

    Qu, Deyang

    2014-06-01

    The fundamental aspects of a porous electrode from electrochemistry and material chemistry standpoints are discussed in the light of battery engineering designs. For example, the ionic diffusion, the electrode-electrolyte interface, interfacial charge transfer and electrode catalytic processes are discussed. The discussion of such fundamental electrochemical aspects is in conjunction with the design of batteries, e.g. the electrochemically accessible surface area of a porous electrode and electrode catalytic reactions. Porous electrodes used as gas diffusion electrodes and as supercapacitor electrodes are discussed to demonstrate the application of electrochemical principles in battery design.

  14. Dark Energy: A Crisis for Fundamental Physics

    ScienceCinema

    Stubbs, Christopher [Harvard University, Cambridge, Massachusetts, USA]

    2010-09-01

    Astrophysical observations provide robust evidence that our current picture of fundamental physics is incomplete. The discovery in 1998 that the expansion of the Universe is accelerating (apparently due to gravitational repulsion between regions of empty space!) presents us with a profound challenge, at the interface between gravity and quantum mechanics. This "Dark Energy" problem is arguably the most pressing open question in modern fundamental physics. The first talk will describe why the Dark Energy problem constitutes a crisis, with wide-reaching ramifications. One consequence is that we should probe our understanding of gravity at all accessible scales, and the second talk will present experiments and observations that are exploring this issue.

  15. DOE Fundamentals Handbook: Electrical Science, Volume 1

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  16. DOE Fundamentals Handbook: Electrical Science, Volume 3

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  17. Laser Wakefield Acceleration and Fundamental Physics

    SciTech Connect

    Tajima, Toshiki

    2011-06-20

    The laser wakefield acceleration (LWFA) along with the now available laser technology allows us to look at TeV physics both in leptons and hadrons. Near future proof-of-principle experiments for a collider as well as high energy frontier experiments without a collider paradigm are suggested. The intense laser can also contribute to other fundamental physics explorations such as those of dark matter and dark energy candidates. Finally the combination of intense laser and laser-accelerated particles (electrons, hadrons, gammas) provides a further avenue of fundamental research.

  18. Fundamentals of Pharmacogenetics in Personalized, Precision Medicine.

    PubMed

    Valdes, Roland; Yin, DeLu Tyler

    2016-09-01

    This article introduces fundamental principles of pharmacogenetics as applied to personalized and precision medicine. Pharmacogenetics establishes relationships between pharmacology and genetics by connecting phenotypes and genotypes in predicting the response of therapeutics in individual patients. We describe differences between precision and personalized medicine and relate principles of pharmacokinetics and pharmacodynamics to applications in laboratory medicine. We also review basic principles of pharmacogenetics, including its evolution, how it enables the practice of personalized therapeutics, and the role of the clinical laboratory. These fundamentals are a segue for understanding specific clinical applications of pharmacogenetics described in subsequent articles in this issue. PMID:27514461

  19. DOE Fundamentals Handbook: Electrical Science, Volume 2

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  20. Fundamental Monogamy Relation between Contextuality and Nonlocality

    NASA Astrophysics Data System (ADS)

    Kurzyński, Paweł; Cabello, Adán; Kaszlikowski, Dagomir

    2014-03-01

    We show that the no-disturbance principle imposes a tradeoff between locally contextual correlations violating the Klyachko-Can-Binicioǧlu-Shumovski inequality and spatially separated correlations violating the Clauser-Horne-Shimony-Holt inequality. The violation of one inequality forbids the violation of the other. We also obtain the corresponding monogamy relation imposed by quantum theory for a qutrit-qubit system. Our results show the existence of fundamental monogamy relations between contextuality and nonlocality that suggest that entanglement might be a particular form of a more fundamental resource.

  1. Radiation scales on which standard values of the solar constant and solar spectral irradiance are based

    NASA Technical Reports Server (NTRS)

    Thekaekara, M. P.

    1972-01-01

    The question of radiation scales is critically examined. There are two radiation scales which are of fundamental validity and there are several calibration standards and radiation scales which have been set up for practical convenience. The interrelation between these scales is investigated. It is shown that, within the limits of accuracy of irradiance measurements in general and solar irradiance measurements in particular, the proposed standard values of the solar constant and solar spectrum should be considered to be on radiation scales of fundamental validity: those based on absolute electrical units and on the thermodynamic Kelvin temperature scale.

  2. Intrinsic fundamental frequency of vowels is moderated by regional dialect.

    PubMed

    Jacewicz, Ewa; Fox, Robert Allen

    2015-10-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  3. Intrinsic fundamental frequency of vowels is moderated by regional dialect

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  4. Exponential growth of bacteria: Constant multiplication through division

    NASA Astrophysics Data System (ADS)

    Hagen, Stephen J.

    2010-12-01

    The growth of a bacterial culture is one of the most familiar examples of exponential growth, with important consequences in biology and medicine. Bacterial growth involves more than just a rate constant. To sustain exponential growth, the cell must carefully coordinate the accumulation of mass, constant replication of the chromosome, and physical division. Hence, the growth rate is centrally important in any physical and chemical description of a bacterial cell. These aspects of bacterial growth can be described by empirical laws that suggest simple and intuitive models. Therefore, a quantitative discussion of bacterial growth could be a part of any undergraduate biophysics course. We present a general overview of some classic experimental studies and mathematical models of bacterial growth from a mostly physical perspective.
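
    The exponential law itself is compact enough to check numerically; a minimal sketch (all numbers assumed for illustration) relating the growth-rate constant k to the doubling time T_d = ln 2 / k:

      # Exponential growth of a bacterial culture: N(t) = N0 * exp(k * t),
      # with doubling time T_d = ln(2) / k.  The numbers below are illustrative.
      import math

      N0 = 1.0e6               # initial cell count (assumed)
      T_d = 20.0 * 60.0        # doubling time of 20 minutes, in seconds (assumed)
      k = math.log(2.0) / T_d  # growth-rate constant, 1/s

      for minutes in (0, 20, 40, 60, 120):
          N = N0 * math.exp(k * minutes * 60.0)
          print(f"t = {minutes:3d} min: N ~ {N:.3e}")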

  5. Quintessence Field as a Perfect Cosmic Fluid of Constant Pressure

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Zhong; Ouyang, Jun; Yang, Huan-Xiong

    2015-03-01

    We study the cosmology of a quintessence scalar field which is equivalent to a non-barotropic perfect fluid of constant pressure. The coincidence problem is alleviated by a quintessence equation of state that interpolates between a plateau of zero at large redshifts and a plateau of minus one as the redshift approaches zero. The quintessence field is neither a unified dark matter nor a mixture of cosmological constant and cold dark matter, relying on the facts that the quintessence density contrasts of sub-horizon modes would undergo a period of late-time decline and that the squared sound speeds of quintessence perturbations do not vanish. The role the quintessence plays is that of dynamic dark energy; its clustering could remarkably reduce the growth rate of the density perturbations of non-relativistic matter. Supported in part by National Natural Science Foundation of China under Grant No. 11235010
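
    One way to see the interpolating equation of state described above (a sketch that follows from the standard continuity equation alone, not the authors' full scalar-field treatment): for a perfect fluid of constant pressure p < 0,

      \dot\rho + 3H(\rho + p) = 0, \quad p = \text{const}
      \;\Rightarrow\; \rho(a) = -p + C\,a^{-3}, \qquad
      w(a) \equiv \frac{p}{\rho(a)} = \frac{p}{-p + C\,a^{-3}},

    so w → 0 at large redshift, where the a⁻³ term dominates and the fluid tracks matter, and w → −1 at late times as ρ approaches −p.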

  6. Initial conditions of inhomogeneous universe and the cosmological constant problem

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori

    2016-06-01

    Deriving the Einstein field equations (EFE) with matter fluid from the action principle is not straightforward, because mass conservation must be added as an additional constraint to make rest-frame mass density variable in reaction to metric variation. This can be avoided by introducing a constraint δ(√−g) = 0 to metric variations δgμν, and then the cosmological constant Λ emerges as an integration constant. This is a removal of one of the four constraints on initial conditions forced by EFE at the birth of the universe, and it may imply that EFE are unnecessarily restrictive about initial conditions. I then adopt a principle that the theory of gravity should be able to solve time evolution starting from arbitrary inhomogeneous initial conditions about spacetime and matter. The equations of gravitational fields satisfying this principle are obtained, by setting four auxiliary constraints on δgμν to extract six degrees of freedom for gravity. The cost of achieving this is a loss of general covariance, but these equations constitute a consistent theory if they hold in the special coordinate systems that can be uniquely specified with respect to the initial space-like hypersurface when the universe was born. This theory predicts that gravity is described by EFE with non-zero Λ in a homogeneous patch of the universe created by inflation, but Λ changes continuously across different patches. Then both the smallness and coincidence problems of the cosmological constant are solved by the anthropic argument. This is just a result of inhomogeneous initial conditions, not requiring any change of the fundamental physical laws in different patches.
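
    The mechanism invoked in the first part of the abstract is of the familiar unimodular type; a hedged sketch of that standard argument (not the paper's full construction): restricting the metric variations by δ(√−g) = 0 yields only the trace-free part of the field equations,

      R_{\mu\nu} - \tfrac{1}{4} g_{\mu\nu} R \;=\; 8\pi G \left( T_{\mu\nu} - \tfrac{1}{4} g_{\mu\nu} T \right),
      \qquad
      \nabla^{\mu} T_{\mu\nu} = 0 \;\Rightarrow\; \partial_{\nu}\left( R + 8\pi G\, T \right) = 0,

    so R + 8πG T = 4Λ with Λ an integration constant; substituting back reproduces the full Einstein equations with a cosmological term, now fixed by initial data rather than by the action.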

  7. Athermal nonlinear elastic constants of amorphous solids.

    PubMed

    Karmakar, Smarajit; Lerner, Edan; Procaccia, Itamar

    2010-08-01

    We derive expressions for the lowest nonlinear elastic constants of amorphous solids in athermal conditions (up to third order), in terms of the interaction potential between the constituent particles. The effect of these constants cannot be disregarded when amorphous solids undergo instabilities such as plastic flow or fracture in the athermal limit; in such situations the elastic response increases enormously, bringing the system much beyond the linear regime. We demonstrate that the existing theory of thermal nonlinear elastic constants converges to our expressions in the limit of zero temperature. We motivate the calculation by discussing two examples in which these nonlinear elastic constants play a crucial role in the context of elastoplasticity of amorphous solids. The first example is the plasticity-induced memory that is typical to amorphous solids (giving rise to the Bauschinger effect). The second example is how to predict the next plastic event from knowledge of the nonlinear elastic constants. Using the results of our calculations we derive a simple differential equation for the lowest eigenvalue of the Hessian matrix in the external strain near mechanical instabilities; this equation predicts how the eigenvalue vanishes at the mechanical instability and the value of the strain where the mechanical instability takes place. PMID:20866874
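
    Schematically, and under the usual athermal definition rather than the authors' full microscopic expressions, the constants in question are strain derivatives of the potential energy per unit volume at zero strain:

      \frac{U(\gamma)}{V} = \frac{U_0}{V} + B_1\,\gamma + \frac{1}{2!}\,B_2\,\gamma^2 + \frac{1}{3!}\,B_3\,\gamma^3 + \cdots,
      \qquad
      B_n = \frac{1}{V}\left.\frac{d^n U}{d\gamma^n}\right|_{\gamma=0},

    with B_2 the linear shear modulus and B_3 the lowest nonlinear constant; near a plastic instability the higher-order terms can no longer be neglected, which is the regime the record addresses.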

  8. Athermal nonlinear elastic constants of amorphous solids

    NASA Astrophysics Data System (ADS)

    Karmakar, Smarajit; Lerner, Edan; Procaccia, Itamar

    2010-08-01

    We derive expressions for the lowest nonlinear elastic constants of amorphous solids in athermal conditions (up to third order), in terms of the interaction potential between the constituent particles. The effect of these constants cannot be disregarded when amorphous solids undergo instabilities such as plastic flow or fracture in the athermal limit; in such situations the elastic response increases enormously, bringing the system much beyond the linear regime. We demonstrate that the existing theory of thermal nonlinear elastic constants converges to our expressions in the limit of zero temperature. We motivate the calculation by discussing two examples in which these nonlinear elastic constants play a crucial role in the context of elastoplasticity of amorphous solids. The first example is the plasticity-induced memory that is typical to amorphous solids (giving rise to the Bauschinger effect). The second example is how to predict the next plastic event from knowledge of the nonlinear elastic constants. Using the results of our calculations we derive a simple differential equation for the lowest eigenvalue of the Hessian matrix in the external strain near mechanical instabilities; this equation predicts how the eigenvalue vanishes at the mechanical instability and the value of the strain where the mechanical instability takes place.

  9. [Reading Is Fundamental: Pamphlets and Newsletters].

    ERIC Educational Resources Information Center

    Smithsonian Institution, Washington, DC.

    These pamphlets and newsletters are products of the Reading Is Fundamental (RIF) program, which provides free and inexpensive books to children through a variety of community organizations throughout the country. The newsletter appears monthly and contains reports on specific programs, trends in the national program, RIF involvement with other…

  10. Fundamental studies on passivity and passivity breakdown

    SciTech Connect

    Macdonald, D.D.; Urquidi-Macdonald, M.

    1993-06-01

    Using photoelectrochemical impedance and admittance spectroscopies, a fundamental and quantitative understanding of the mechanisms for the growth and breakdown of passive films on metal and alloy surfaces in contact with aqueous environments is being developed. A point defect model has been extended to explain the breakdown of passive films, leading to pitting and crack growth and thus development of damage due to localized corrosion.

  11. Drafting Fundamentals. Drafting Module 1. Instructor's Guide.

    ERIC Educational Resources Information Center

    Missouri Univ., Columbia. Instructional Materials Lab.

    This Missouri Vocational Instruction Management System instructor's drafting guide has been keyed to the drafting competency profile developed by state industry and education professionals. The guide contains a cross-reference table of instructional materials. Ten units cover drafting fundamentals: (1) introduction to drafting; (2) general safety;…

  12. A Fundamental Theorem on Particle Acceleration

    SciTech Connect

    Xie, Ming

    2003-05-01

    A fundamental theorem on particle acceleration is derived from the reciprocity principle of electromagnetism and a rigorous proof of the theorem is presented. The theorem establishes a relation between acceleration and radiation, which is particularly useful for insightful understanding of and practical calculation about the first order acceleration in which energy gain of the accelerated particle is linearly proportional to the accelerating field.

  13. Fundamental Limitations in Advanced LC Schemes

    SciTech Connect

    Mikhailichenko, A. A.

    2010-11-04

    Fundamental limitations in acceleration gradient, emittance, alignment and polarization in acceleration schemes are considered in application for novel schemes of acceleration, including laser-plasma and structure-based schemes. Problems for each method are underlined whenever it is possible. Main attention is paid to the scheme with a tilted laser bunch.

  14. Fundamental issues on electromagnetic fields (EMF).

    PubMed

    Novini, A

    1993-01-01

    This paper will examine the fundamental principles of electromagnetic field radiation. The discussion will include: the basic physical characteristics of magnetic and electric fields, the numerous sources of EMF in our everyday lives, ways to detect and measure EMF accurately, what to look for in EMF instruments, and the issues and misconceptions on shielding and exposure reduction. PMID:8098895

  15. Getting a Better Grasp on Flu Fundamentals

    MedlinePlus

    Getting a Better Grasp on Flu Fundamentals is an Inside Life Science article that also appears on LiveScience; related topics include seasonal flu patterns and flu forecasting.

  16. Areas and the Fundamental Theorem of Calculus

    ERIC Educational Resources Information Center

    Vajiac, A.; Vajiac, B.

    2008-01-01

    We present a concise, yet self-contained module for teaching the notion of area and the Fundamental Theorem of Calculus for different groups of students. This module contains two different levels of rigour, depending on the class it is used for. It also incorporates a technological component. (Contains 6 figures.)

  17. Measurement and Fundamental Processes in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Jaeger, Gregg

    2015-07-01

    In the standard mathematical formulation of quantum mechanics, measurement is an additional, exceptional fundamental process rather than an often complex, but ordinary process which happens also to serve a particular epistemic function: during a measurement of one of its properties which is not already determined by a preceding measurement, a measured system, even if closed, is taken to change its state discontinuously rather than continuously as is usual. Many, including Bell, have been concerned about the fundamental role thus given to measurement in the foundation of the theory. Others, including the early Bohr and Schwinger, have suggested that quantum mechanics naturally incorporates the unavoidable uncontrollable disturbance of physical state that accompanies any local measurement without the need for an exceptional fundamental process or a special measurement theory. Disturbance is unanalyzable for Bohr, but for Schwinger it is due to physical interactions' being borne by fundamental particles having discrete properties and behavior which is beyond physical control. Here, Schwinger's approach is distinguished from more well known treatments of measurement, with the conclusion that, unlike most, it does not suffer under Bell's critique of quantum measurement. Finally, Schwinger's critique of measurement theory is explicated as a call for a deeper investigation of measurement processes that requires the use of a theory of quantum fields.

  18. Fundamental Concepts Bridging Education and the Brain

    ERIC Educational Resources Information Center

    Masson, Steve; Foisy, Lorie-Marlène Brault

    2014-01-01

    Although a number of papers have already discussed the relevance of brain research for education, the fundamental concepts and discoveries connecting education and the brain have not been systematically reviewed yet. In this paper, four of these concepts are presented and evidence concerning each one is reviewed. First, the concept of…

  19. Fundamentals of Library Science. Library Science 424.

    ERIC Educational Resources Information Center

    Foster, Donald L.

    An introductory letter, a list of general instructions on how to proceed with a correspondence course, a syllabus, and an examination request form are presented for a correspondence course in the fundamentals of library science offered by the University of New Mexico's Division of Continuing Education and Community Services. The course is a survey…

  20. Prequantum Classical Statistical Field Theory: Fundamentals

    SciTech Connect

    Khrennikov, Andrei

    2011-03-28

    We present fundamentals of a prequantum model with hidden variables of the classical field type. In some sense this is the comeback of classical wave mechanics. Our approach also can be considered as incorporation of quantum mechanics into classical signal theory. All quantum averages (including correlations of entangled systems) can be represented as classical signal averages and correlations.

  1. Mathematical Literacy--It's Become Fundamental

    ERIC Educational Resources Information Center

    McCrone, Sharon Soucy; Dossey, John A.

    2007-01-01

    The rising tide of numbers and statistics in daily life signals a need for a fundamental broadening of the concept of literacy: mathematical literacy assuming a coequal role in the curriculum alongside language-based literacy. Mathematical literacy is not about studying higher levels of formal mathematics, but about making math relevant and…

  2. Retention of Electronic Fundamentals: Differences Among Topics.

    ERIC Educational Resources Information Center

    Johnson, Kirk A.

    Criterion-referenced tests were used to measure the learning and retention of a sample of material taught by means of programed instruction in the Avionics Fundamentals Course, Class A. It was found that the students knew about 30 percent of the material before reading the programs, that mastery rose to a very high level on the immediate posttest,…

  3. Fundamental Theorems of Algebra for the Perplexes

    ERIC Educational Resources Information Center

    Poodiak, Robert; LeClair, Kevin

    2009-01-01

    The fundamental theorem of algebra for the complex numbers states that a polynomial of degree n has n roots, counting multiplicity. This paper explores the "perplex number system" (also called the "hyperbolic number system" and the "spacetime number system"). In this system (which has extra roots of +1 besides the usual ±1 of the…

  4. Course Objectives: Electronic Fundamentals, EL16.

    ERIC Educational Resources Information Center

    Wilson, David H.

    The general objective, recommended text, and specific objectives of a course titled "Electronic Fundamentals," as offered at St. Lawrence College of Applied Arts and Technology, are provided. The general objective of the course is "to acquire an understanding of diodes, transistors, and tubes, and so be able to analyze the operation of single…

  5. Uncovering Racial Bias in Nursing Fundamentals Textbooks.

    ERIC Educational Resources Information Center

    Byrne, Michelle M.

    2001-01-01

    The portrayal of African Americans in nursing fundamentals textbooks was analyzed, resulting in 11 themes in the areas of history, culture, and physical assessment. Few African American leaders were included, and racial bias and stereotyping were apparent. Differences were often discussed using Eurocentric norms, and language tended to minimize…

  6. Radio and Television Repairer Fundamentals. Student's Manual.

    ERIC Educational Resources Information Center

    Maul, Chuck

    This self-contained student manual on fundamentals of radio and television repair is designed to help trade and industrial students relate work experience on the job to information studied at school. Designed for individualized instruction under the supervision of a coordinator or instructor, the manual has 9 sections, each containing 2 to 10…

  7. Reversing: A Fundamental Idea in Computer Science

    ERIC Educational Resources Information Center

    Armoni, Michal; Ginat, David

    2008-01-01

    Reversing is the notion of thinking or working in reverse. Computer science textbooks and tutors recognize it primarily in the form of recursion. However, recursion is only one form of reversing. Reversing appears in the computer science curriculum in many other forms, at various intellectual levels, in a variety of fundamental courses. As such,…

  8. RLIN Product Batch: Fundamental Design Concepts.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1984-01-01

    Considers fundamental decisions that shaped the output products of Research Libraries Information Network. Product Batch was designed using single data definition (RMARC) combined with standard PL/I, modular programing techniques, program documentation. Choice of software and programing languages, other design aspects (accountability, count…

  9. Fundamental and Gradient Differences in Language Development

    ERIC Educational Resources Information Center

    Herschensohn, Julia

    2009-01-01

    This article reexamines Bley-Vroman's original (1990) and evolved fundamental difference hypothesis that argues that differences in path and endstate of first language acquisition and adult foreign language learning result from differences in the acquisition procedure (i.e., language faculty and cognitive strategies, respectively). The evolved…

  10. Solar Energy: Solar System Design Fundamentals.

    ERIC Educational Resources Information Center

    Knapp, Henry H., III

    This module on solar system design fundamentals is one of six in a series intended for use as supplements to currently available materials on solar energy and energy conservation. Together with the recommended texts and references (sources are identified), these modules provide an effective introduction to energy conservation and solar energy…

  11. Applied and fundamental aspects of fusion science

    NASA Astrophysics Data System (ADS)

    Melnikov, Alexander V.

    2016-05-01

    Fusion research is driven by the applied goal of energy production from fusion reactions. There is, however, a wealth of fundamental physics to be discovered and studied along the way. This Commentary discusses selected developments in diagnostics and present-day research topics in high-temperature plasma physics.

  12. Absolute radiometry and the solar constant

    NASA Technical Reports Server (NTRS)

    Willson, R. C.

    1974-01-01

    A series of active cavity radiometers (ACRs) are described which have been developed as standard detectors for the accurate measurement of irradiance in absolute units. It is noted that the ACR is an electrical substitution calorimeter, is designed for automatic remote operation in any environment, and can make irradiance measurements in the range from low-level IR fluxes up to 30 solar constants with small absolute uncertainty. The instrument operates in a differential mode by chopping the radiant flux to be measured at a slow rate, and irradiance is determined from two electrical power measurements together with the instrumental constant. Results are reported for measurements of the solar constant with two types of ACRs. The more accurate measurement yielded a value of 136.6 plus or minus 0.7 mW/sq cm (1.958 plus or minus 0.010 cal/sq cm per min).
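
    A hedged numerical illustration of the electrical-substitution principle described above: the irradiance follows from the difference between the electrical powers needed to hold the cavity at its set point with the shutter closed and open, divided by the aperture area and cavity absorptance. All numbers below are hypothetical, chosen only so that the result lands near the solar-constant value quoted in the record.

      # Electrical-substitution radiometry (illustrative, made-up numbers).
      p_shutter_closed = 0.1000      # heater power with shutter closed, W
      p_shutter_open   = 0.0317      # heater power with solar flux entering, W
      aperture_area    = 5.0e-5      # precision aperture area, m^2 (hypothetical)
      absorptance      = 0.999       # effective cavity absorptance (hypothetical)

      irradiance = (p_shutter_closed - p_shutter_open) / (aperture_area * absorptance)
      print(f"irradiance ~ {irradiance:.0f} W/m^2")   # ~1367 W/m^2, i.e. ~136.7 mW/cm^2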

  13. Clusters of Galaxies and the Hubble Constant

    NASA Astrophysics Data System (ADS)

    Falcon, N.

    2008-09-01

    The expansion rate of the Universe at large scales is given by the value of the Hubble constant (H0). Several methods have been used to determine the Hubble constant: CMB anisotropies, supernova observations, and AGN at high redshift. In this work, we used the method of Grainge et al. (3) to estimate the Hubble constant through the Sunyaev-Zel'dovich effect, the results of the VSA interferometer (Teide Observatory), and X-ray data from ROSAT. We obtain h ≈ 0.78, in accord with other reports based on clusters of galaxies (Mason et al., 2001) and higher than the standard value h = 0.71 obtained by other methods. We discuss the systematic sources of error and possible discrepancies arising from the assumptions of spherical, isothermal clusters and from the kinetic Sunyaev-Zel'dovich effect.
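
    The logic of the SZ-plus-X-ray distance estimate can be sketched as follows (a standard argument, stated under the same isothermal, spherical assumptions the abstract flags as a source of systematic error):

      \Delta T_{\rm SZ} \propto \int n_e T_e\, dl \;\sim\; n_e T_e L ,
      \qquad
      S_X \propto \int n_e^2\, \Lambda_X(T_e)\, dl \;\sim\; n_e^2\, \Lambda_X L ,

    so the two observables together eliminate the electron density n_e and yield the physical size L of the cluster; comparing L with the measured angular size θ gives the angular-diameter distance D_A ≈ L/θ and, through the cluster redshift, the Hubble constant.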

  14. Binary Solid Propellants for Constant Momentum Missions

    SciTech Connect

    Pakhomov, Andrew V.; Mahaffy, Kevin E.

    2008-04-28

    A constant momentum mission is achieved when the speed of the vehicle in the inertial frame of reference is equal to the speed of exhaust relative to the vehicle. Due to 100% propulsive efficiency such missions are superior to traditional constant specific impulse missions. A new class of solid binary propellants for constant momentum missions is under development. A typical propellant column is prepared as a solid solution of two components, with composition gradually changing from 100% of a propellant of high coupling coefficient (C{sub m}) to one which has high specific impulse (I{sub sp}). The high coupling component is ablated first, gradually giving way to the high I{sub sp} component, as the vehicle accelerates. This study opens new opportunities for further design of complex propellants for laser propulsion, providing variable C{sub m} and I{sub sp} during missions.
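
    The 100% propulsive-efficiency claim follows from the standard expression for the propulsive efficiency of a rocket (a worked aside, not quoted from the record):

      \eta_p = \frac{2\,(v/v_e)}{1 + (v/v_e)^2},

    which reaches η_p = 1 exactly when the vehicle speed v equals the exhaust speed v_e relative to the vehicle: the exhaust is then left at rest in the inertial frame and carries away no kinetic energy, which is why a propellant whose C_m/I_sp blend tracks the vehicle speed is attractive.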

  15. Optimizing constant wavelength neutron powder diffractometers

    NASA Astrophysics Data System (ADS)

    Cussen, Leo D.

    2016-06-01

    This article describes an analytic method to optimize constant wavelength neutron powder diffractometers. It recasts the accepted mathematical description of resolution and intensity in terms of new variables and includes terms for vertical divergence, wavelength and some sample scattering effects. An undetermined multiplier method is applied to the revised equations to minimize the RMS value of resolution width at constant intensity and fixed wavelength. A new understanding of primary spectrometer transmission (presented elsewhere) can then be applied to choose beam elements to deliver an optimum instrument. Numerical methods can then be applied to choose the best wavelength.

  16. Environmental dependence of masses and coupling constants

    SciTech Connect

    Olive, Keith A.; Pospelov, Maxim

    2008-02-15

    We construct a class of scalar field models coupled to matter that lead to the dependence of masses and coupling constants on the ambient matter density. Such models predict a deviation of couplings measured on the Earth from values determined in low-density astrophysical environments, but do not necessarily require the evolution of coupling constants with the redshift in the recent cosmological past. Additional laboratory and astrophysical tests of {delta}{alpha} and {delta}(m{sub p}/m{sub e}) as functions of the ambient matter density are warranted.

  17. Dielectric constants of soils at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Geiger, F. E.; Williams, D.

    1972-01-01

    A knowledge of the complex dielectric constant of soils is essential in the interpretation of microwave airborne radiometer data of the earth's surface. Measurements were made at 37 GHz on various soils from the Phoenix, Ariz., area. Extensive data have been obtained for dry soil and soil with water content in the range from 0.6 to 35 percent by dry weight. Measurements were made in a two arm microwave bridge and results were corrected for reflections at the sample interfaces by solution of the parallel dielectric plate problem. The maximum dielectric constants are about a factor of 3 lower than those reported for similar soils at X-band frequencies.

  18. TOPICAL REVIEW The cosmological constant puzzle

    NASA Astrophysics Data System (ADS)

    Bass, Steven D.

    2011-04-01

    The accelerating expansion of the Universe points to a small positive vacuum energy density and negative vacuum pressure. A strong candidate is the cosmological constant in Einstein's equations of general relativity. Possible contributions are zero-point energies and the condensates associated with spontaneous symmetry breaking. The vacuum energy density extracted from astrophysics is 10^56 times smaller than the value expected from quantum fields and standard model particle physics. Is the vacuum energy density time dependent? We give an introduction to the cosmological constant puzzle and ideas how to solve it.
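
    The size of the mismatch can be illustrated with commonly cited round numbers (an order-of-magnitude aside, not figures computed in the review):

      \rho_\Lambda^{\rm obs} \sim (2\times10^{-3}\,{\rm eV})^4 \sim 10^{-47}\,{\rm GeV}^4 ,
      \qquad
      \rho_{\rm vac}^{\rm SM} \sim (200\,{\rm GeV})^4 \sim 10^{9}\,{\rm GeV}^4 ,

    a ratio of roughly 10^56, the factor quoted above.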

  19. Coulomb field in a constant electromagnetic background

    NASA Astrophysics Data System (ADS)

    Adorno, T. C.; Gitman, D. M.; Shabad, A. E.

    2016-06-01

    Nonlinear Maxwell equations are written up to the third-power deviations from a constant-field background, valid within any local nonlinear electrodynamics including QED with a Euler-Heisenberg (EH) effective Lagrangian. The linear electric response to an imposed static finite-sized charge is found in the vacuum filled by an arbitrary combination of constant and homogeneous electric and magnetic fields. The modified Coulomb field and corrections to the total charge and to the charge density are given in terms of derivatives of the effective Lagrangian with respect to the field invariants. These are specialized for the EH Lagrangian.

  20. Image segmentation via piecewise constant regression

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Bovik, Alan C.

    1994-09-01

    We introduce a novel unsupervised image segmentation technique that is based on piecewise constant (PICO) regression. Given an input image, a PICO output image for a specified feature size (scale) is computed via nonlinear regression. The regression effectively provides the constant region segmentation of the input image that has a minimum deviation from the input image. PICO regression-based segmentation avoids the problems of region merging, poor localization, region boundary ambiguity, and region fragmentation. Additionally, our segmentation method is particularly well-suited for corrupted (noisy) input data. An application to segmentation and classification of remotely sensed imagery is provided.
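
    The general idea of approximating data by the piecewise-constant signal closest to it can be illustrated in one dimension with a least-squares segment fit (a generic sketch, not the PICO regression algorithm of the record):

      import numpy as np

      def piecewise_constant_fit(y, n_segments):
          """Least-squares optimal piecewise-constant fit of a 1-D signal with a
          fixed number of segments, via dynamic programming (generic illustration)."""
          y = np.asarray(y, dtype=float)
          n = len(y)
          s1 = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums for O(1)
          s2 = np.concatenate(([0.0], np.cumsum(y * y)))    # segment costs

          def seg_cost(i, j):        # squared deviation of y[i:j] from its mean
              m = j - i
              tot, tot2 = s1[j] - s1[i], s2[j] - s2[i]
              return tot2 - tot * tot / m

          cost = np.full((n_segments + 1, n + 1), np.inf)
          cut = np.zeros((n_segments + 1, n + 1), dtype=int)
          cost[0, 0] = 0.0
          for k in range(1, n_segments + 1):
              for j in range(k, n + 1):
                  for i in range(k - 1, j):
                      c = cost[k - 1, i] + seg_cost(i, j)
                      if c < cost[k, j]:
                          cost[k, j], cut[k, j] = c, i
          # Backtrack the segment boundaries and build the fitted signal.
          fit, j = np.empty(n), n
          for k in range(n_segments, 0, -1):
              i = cut[k, j]
              fit[i:j] = y[i:j].mean()
              j = i
          return fit

      # Noisy step signal: the fit recovers the three constant regions.
      rng = np.random.default_rng(0)
      signal = np.r_[np.full(30, 1.0), np.full(40, 4.0), np.full(30, 2.0)]
      signal = signal + 0.3 * rng.standard_normal(100)
      print(np.round(piecewise_constant_fit(signal, 3)[::10], 2))

    Scale enters this sketch only through the number of segments; the PICO regression of the record instead works on two-dimensional images with a specified feature size.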

  1. Atomic weights: no longer constants of nature

    USGS Publications Warehouse

    Coplen, Tyler B.; Holden, Norman E.

    2011-01-01

    Many of us were taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.

  2. Atomic Weights No Longer Constants of Nature

    SciTech Connect

    Coplen, T.B.; Holden, N.

    2011-03-01

    Many of us grew up being taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.

  3. Microfabricated microengine with constant rotation rate

    DOEpatents

    Romero, Louis A.; Dickey, Fred M.

    1999-01-01

    A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.

  4. Cosmological constant in scale-invariant theories

    SciTech Connect

    Foot, Robert; Kobakhidze, Archil; Volkas, Raymond R.

    2011-10-01

    The incorporation of a small cosmological constant within radiatively broken scale-invariant models is discussed. We show that phenomenologically consistent scale-invariant models can be constructed which allow a small positive cosmological constant, provided a certain relation between the particle masses is satisfied. As a result, the mass of the dilaton is generated at two-loop level. Another interesting consequence is that the electroweak symmetry-breaking vacuum in such models is necessarily a metastable "false" vacuum which, fortunately, is not expected to decay on cosmological time scales.

  5. Our Universe from the cosmological constant

    SciTech Connect

    Barrau, Aurélien; Linsefors, Linda

    2014-12-01

    The issue of the origin of the Universe and of its contents is addressed in the framework of bouncing cosmologies, as described for example by loop quantum gravity. If the current acceleration is due to a true cosmological constant, this constant is naturally conserved through the bounce and the Universe should also be in a (contracting) de Sitter phase in the remote past. We investigate here the possibility that the de Sitter temperature in the contracting branch fills the Universe with radiation that causes the bounce and the subsequent inflation and reheating. We also consider the possibility that this gives rise to a cyclic model of the Universe and suggest some possible tests.

  6. Relation of the diffuse reflectance remission function to the fundamental optical parameters.

    NASA Technical Reports Server (NTRS)

    Simmons, E. L.

    1972-01-01

    The Kubelka-Munk equations describing the diffuse reflectance of a powdered sample were compared to equations obtained using a uniformly sized, rough-surfaced spherical particle model. The comparison resulted in equations relating the remission function and the Kubelka-Munk constants to the index of refraction, the absorption coefficient, and the average particle diameter of a powdered sample. Published experimental results were used to test the equation relating the remission function to the fundamental optical parameters.
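
    For reference, the Kubelka-Munk remission function mentioned here has the standard form (quoted from common usage rather than from the paper):

      F(R_\infty) = \frac{(1 - R_\infty)^2}{2\,R_\infty} = \frac{K}{S},

    where R_∞ is the diffuse reflectance of an optically thick sample and K and S are the Kubelka-Munk absorption and scattering constants; the record's contribution is to express K and S, and hence F, in terms of the refractive index, absorption coefficient, and average particle diameter.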

  7. Constant slip control of induction motor at light load

    SciTech Connect

    Feng Xiaogang; Chen Boshi

    1996-12-31

    The most widely used AC motor drives adopt the Rated Flux Control (RFC) method. However, under light-load conditions RFC causes excessive iron loss, impairing the conversion efficiency of the drive system. This paper introduces a new control approach, Constant Slip Control (CSC), which minimizes the stator current at light load so that the iron loss and reactive power consumption of the motor are decreased. Simulation results compare the power consumption of CSC with that of RFC in order to validate the theoretical development. In the last part, the realization of CSC is discussed.
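
    Why a fixed slip minimizes the stator current can be sketched in rotor-field-oriented variables, ignoring saturation and iron loss (a textbook-style aside, not the derivation given in the paper):

      T_e \propto i_{ds}\, i_{qs}, \qquad \omega_{sl} = \frac{i_{qs}}{\tau_r\, i_{ds}}, \qquad \tau_r = \frac{L_r}{R_r};

    for a given torque, the squared stator current i_ds² + i_qs² is smallest when i_ds = i_qs, which corresponds to the constant slip frequency ω_sl = 1/τ_r regardless of the load level.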

  8. Damping constant estimation in magnetoresistive readers

    SciTech Connect

    Stankiewicz, Andrzej Hernandez, Stephanie

    2015-05-07

    The damping constant is a key design parameter in magnetic reader design. Its value can be derived from bulk or sheet film ferromagnetic resonance (FMR) line width. However, the dynamics of nanodevices is usually defined by the presence of non-uniform modes. This triggers new damping mechanisms and produces stronger damping than expected from traditional FMR. This work proposes a device-level technique for damping evaluation, based on time-domain analysis of thermally excited stochastic oscillations. The signal is collected using a high bandwidth oscilloscope, by direct probing of a biased reader. Recorded waveforms may contain different noise signals, but free layer FMR is usually the dominating one. The autocorrelation function is a reflection of the damped oscillation curve, averaging out stochastic contributions. The damped oscillator formula is fitted to the autocorrelation data, producing resonance frequency and damping constant values. Restricting the lag range allows for mitigation of the impact of other phenomena (e.g., reader instability) on the damping constant. For a micromagnetically modeled reader, the technique proves to be much more accurate than the stochastic FMR line width approach. Application to actual reader waveforms yields a damping constant of ∼0.03.
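
    A hedged sketch of this kind of analysis on synthetic data (a generic noise-driven oscillator and a damped-cosine fit, not the authors' code): the autocorrelation is fitted with A e^(−t/τ) cos(2πft + φ), and the damping constant is then estimated from the small-damping relation α ≈ 1/(2πfτ). The frequency, damping, and sampling rate below are assumed values.

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic stand-in for thermally excited reader noise (assumed numbers).
      fs, f0, alpha_true = 200e9, 5e9, 0.03
      dt, w0 = 1.0 / fs, 2.0 * np.pi * f0
      gamma = alpha_true * w0                  # envelope decay rate ~ alpha * omega
      n = 200_000
      rng = np.random.default_rng(1)
      drive = rng.standard_normal(n) * np.sqrt(dt)
      x, v = np.zeros(n), np.zeros(n)
      for i in range(1, n):                    # noise-driven damped oscillator
          v[i] = v[i - 1] + (-w0**2 * x[i - 1] - 2.0 * gamma * v[i - 1]) * dt + drive[i]
          x[i] = x[i - 1] + v[i] * dt          # semi-implicit Euler step

      # Autocorrelation over a restricted lag range (mitigates other phenomena).
      lags = 400
      xc = x - x.mean()
      ac = np.array([np.dot(xc[: n - k], xc[k:]) / (n - k) for k in range(lags)])
      ac /= ac[0]
      t_ns = np.arange(lags) * dt * 1e9        # lag time in nanoseconds

      # Fit a damped cosine; convert the decay time to a damping constant.
      model = lambda t, A, tau, f, phi: A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
      popt, _ = curve_fit(model, t_ns, ac, p0=(1.0, 1e9 / gamma, f0 / 1e9, 0.0))
      _, tau_ns, f_ghz, _ = popt
      print(f"f ~ {f_ghz:.2f} GHz, alpha ~ {1.0 / (2 * np.pi * f_ghz * tau_ns):.3f}")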

  9. Teaching Nanochemistry: Madelung Constants of Nanocrystals

    ERIC Educational Resources Information Center

    Baker, Mark D.; Baker, A. David

    2010-01-01

    The Madelung constants for binary ionic nanoparticles are determined. The computational method described here sums the Coulombic interactions of each ion in the particle without the use of partial charges commonly used for bulk materials. The results show size-dependent lattice energies. This is a useful concept in teaching how properties such as…
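
    A minimal sketch of the kind of direct Coulomb summation described (full ±1 charges, no fractional boundary charges), here evaluated only for the central ion of a cubic rock-salt nanocrystal; the bulk NaCl value of about 1.7476 is quoted for comparison:

      import numpy as np

      def madelung_center(n_shells):
          """Madelung sum for the central ion of a rock-salt (NaCl-type) nanocrystal
          extending n_shells lattice spacings in each direction; distances are in
          units of the nearest-neighbour separation."""
          s = 0.0
          span = range(-n_shells, n_shells + 1)
          for i in span:
              for j in span:
                  for k in span:
                      if i == j == k == 0:
                          continue
                      sign = (-1) ** (abs(i) + abs(j) + abs(k))  # alternating charges
                      s += sign / np.sqrt(i * i + j * j + k * k)
          return -s   # conventionally quoted as a positive number

      # The result depends on crystallite size and differs from the bulk NaCl
      # Madelung constant (~1.7476), the size dependence the record describes.
      for n in (1, 2, 4, 8):
          print(n, round(madelung_center(n), 4))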

  10. CONSTANT VOLUME SAMPLING SYSTEM WATER CONDENSATION

    EPA Science Inventory

    Combustion of organic motor vehicle fuels produces carbon dioxide and water (H2O) vapor (and also products of incomplete combustion, e.g. hydrocarbons and carbon monoxide, at lower concentrations). The Constant Volume Sampling (CVS) system, commonly used to condition auto exhaust ...

  11. Unified Technical Concepts. Module 12: Time Constants.

    ERIC Educational Resources Information Center

    Technical Education Research Center, Waco, TX.

    This concept module on time constants is one of thirteen modules that provide a flexible, laboratory-based physics instructional package designed to meet the specialized needs of students in two-year, postsecondary technical schools. Each of the thirteen concept modules discusses a single physics concept and how it is applied to each energy…

  12. Double well isomerization rate constants in solution

    NASA Astrophysics Data System (ADS)

    Zawadzki, Anthony G.; Hynes, James T.

    1985-02-01

    The rate constant k for a double well isomerization in solution is calculated over the entire friction range. The importance of frequency-dependent friction for both the vibrational energy transfer (VET) and barrier passage components of k is described. Rapid suppression of the VET component with increasing degrees of freedom is discussed.

  13. Stokes constants for a singular wave equation

    SciTech Connect

    Linnaeus, Staffan

    2005-05-01

    The Stokes constants for arbitrary-order phase-integral approximations are calculated when the square of the wave number has either two simple zeros close to a second-order pole or one simple zero close to a first-order pole. The treatment is based on uniform approximations. All parameters may assume general complex values.

  14. Variations of the Solar Constant. [conference

    NASA Technical Reports Server (NTRS)

    Sofia, S. (Editor)

    1981-01-01

    The variations in data received from rocket-borne and balloon-borne instruments are discussed. Indirect techniques to measure and monitor the solar constant are presented. Emphasis is placed on the correlation of data from the Solar Maximum Mission and the Nimbus 7 satellites.

  15. The ideal Kolmogorov inertial range and constant

    NASA Technical Reports Server (NTRS)

    Zhou, YE

    1993-01-01

    The energy transfer statistics measured in numerically simulated flows are found to be nearly self-similar for wavenumbers in the inertial range. Using the measured self-similar form, an 'ideal' energy transfer function and the corresponding energy flux rate were deduced. From this flux rate, the Kolmogorov constant was calculated to be 1.5, in excellent agreement with experiments.
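
    For context, the Kolmogorov constant referred to here is the prefactor of the inertial-range energy spectrum (standard definition, not restated in the record):

      E(k) = C_K\, \varepsilon^{2/3}\, k^{-5/3},

    where ε is the mean energy flux (dissipation) rate; the self-similar transfer function deduced in the record fixes C_K ≈ 1.5.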

  16. Mars Pathfinder Project: Planetary Constants and Models

    NASA Technical Reports Server (NTRS)

    Lyons, D.; Vaughn, R.

    1999-01-01

    This document provides a common set of astrodynamic constants and planetary models for use by the Mars Pathfinder Project. It attempts to collect in a single reference all the quantities and models in use across the project during development and for mission operations.

  17. Bouncing models with a cosmological constant

    NASA Astrophysics Data System (ADS)

    Maier, Rodrigo; Pereira, Stella; Pinto-Neto, Nelson; Siffert, Beatriz B.

    2012-01-01

    Bouncing models have been proposed by many authors as a completion of, or even as an alternative to, inflation for the description of the very early and dense Universe. However, most bouncing models contain a contracting phase from a very large and rarefied state, where dark energy might have had an important role as it has today in accelerating our large Universe. In that case, its presence can modify the initial conditions and evolution of cosmological perturbations, changing the known results already obtained in the literature concerning their amplitude and spectrum. In this paper, we assume the simplest and most appealing candidate for dark energy, the cosmological constant, and evaluate its influence on the evolution of cosmological perturbations during the contracting phase of a bouncing model, which also contains a scalar field with a potential allowing background solutions with pressure and energy density satisfying p=wɛ, w being a constant. An initial adiabatic vacuum state can be set at the end of domination by the cosmological constant, and an almost scale-invariant spectrum of perturbations is obtained for w≈0, which is the usual result for bouncing models. However, the presence of the cosmological constant induces oscillations and a running towards a tiny red-tilted spectrum for long-wavelength perturbations.

  18. Damping constant estimation in magnetoresistive readers

    NASA Astrophysics Data System (ADS)

    Stankiewicz, Andrzej; Hernandez, Stephanie

    2015-05-01

    The damping constant is a key design parameter in magnetic reader design. Its value can be derived from bulk or sheet film ferromagnetic resonance (FMR) line width. However, the dynamics of nanodevices is usually defined by the presence of non-uniform modes. This triggers new damping mechanisms and produces stronger damping than expected from traditional FMR. This work proposes a device-level technique for damping evaluation, based on time-domain analysis of thermally excited stochastic oscillations. The signal is collected using a high bandwidth oscilloscope, by direct probing of a biased reader. Recorded waveforms may contain different noise signals, but free layer FMR is usually the dominating one. The autocorrelation function is a reflection of the damped oscillation curve, averaging out stochastic contributions. The damped oscillator formula is fitted to the autocorrelation data, producing resonance frequency and damping constant values. Restricting the lag range allows for mitigation of the impact of other phenomena (e.g., reader instability) on the damping constant. For a micromagnetically modeled reader, the technique proves to be much more accurate than the stochastic FMR line width approach. Application to actual reader waveforms yields a damping constant of ∼0.03.

  19. Spray Gun With Constant Mixing Ratio

    NASA Technical Reports Server (NTRS)

    Simpson, William G.

    1987-01-01

    Conceptual mechanism mounted in handle of spray gun maintains constant ratio between volumetric flow rates in two channels leading to spray head. With mechanism, possible to keep flow ratio near 1:1 (or another desired ratio) over range of temperatures, orifice or channel sizes, or clogging conditions.

  20. FATE, THE ENVIRONMENTAL FATE CONSTANTS INFORMATION DATABASE

    EPA Science Inventory

    An online database, FATE, has been developed for the interactive retrieval of kinetic and equilibrium constants that are needed for assessing the fate of chemicals in the environment. The database contains values for up to 12 parameters for each chemical. As of December 1991, FATE ...