Science.gov

Sample records for fundamental constants part

  1. Fundamental Physical Constants

    National Institute of Standards and Technology Data Gateway

    SRD 121 CODATA Fundamental Physical Constants (Web, free access)   This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants, the International System of Units (SI), which is the modern metric system, and expressing the uncertainty of measurement results.
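
    The same CODATA values served by this gateway ship with common scientific libraries, so they can be pulled straight into a calculation. Below is a minimal Python sketch, assuming SciPy is installed (SciPy bundles a recent CODATA adjustment; the exact vintage depends on the release):

        import scipy.constants as const

        # Each physical_constants entry maps a CODATA name to a tuple:
        # (value, unit, one-standard-deviation uncertainty).
        value, unit, uncertainty = const.physical_constants["Planck constant"]
        print(f"h = {value} {unit} (u = {uncertainty})")

        # Frequently used constants are also plain module attributes.
        print(const.c, const.fine_structure)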

  2. Variation of Fundamental Constants

    NASA Astrophysics Data System (ADS)

    Flambaum, V. V.

    2006-11-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear: we appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We present a review of recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses. There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in the comparison of different atomic clocks. A huge enhancement of the variation effects happens in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on the ultraviolet transition between a very low-lying excited state and the ground state in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! A huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance.

  3. Are Fundamental Constants Really Constant?

    ERIC Educational Resources Information Center

    Swetman, T. P.

    1972-01-01

    Dirac's classical conclusion that the values of e², M and m are constants while the quantity G decreases with time evoked considerable interest among researchers. The article traces the historical development by which further experimental evidence indicates that both e and G are constant values. (PS)

  4. Wall of fundamental constants

    SciTech Connect

    Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe

    2011-02-15

    We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilatonlike scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not exhibit a variation in α relative to the terrestrial value, depending on our position relative to the wall. This wall could resolve the contradiction between claims of a variation of α based on Keck/HIRES data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants.

  5. Variation of fundamental constants: theory

    NASA Astrophysics Data System (ADS)

    Flambaum, Victor

    2008-05-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. There are some hints for the variation of different fundamental constants in quasar absorption spectra and Big Bang nucleosynthesis data. A large number of publications (including atomic clock experiments) report limits on the variations. We want to study the variation of the main dimensionless parameters of the Standard Model: (1) the fine structure constant alpha (a combination of the speed of light, the electron charge and the Planck constant); (2) the ratio of the strong interaction scale (ΛQCD) to a fundamental mass such as the electron mass or quark mass, which are proportional to the Higgs vacuum expectation value. The proton mass is proportional to ΛQCD; therefore, the proton-to-electron mass ratio falls into this second category. We performed the atomic, nuclear and QCD calculations needed to study the variation of the fundamental constants using Big Bang nucleosynthesis, quasar spectra, Oklo natural nuclear reactor and atomic clock data. The relative effects of the variation may be enhanced in transitions between narrow, close levels in atoms, molecules and nuclei. If one studies an enhanced effect, the relative size of systematic effects (which are not enhanced) may be much smaller. Note also that the absolute magnitude of the variation effects in nuclei (e.g. in the very narrow 7 eV transition in 229Th) may be 5 orders of magnitude larger than in atoms. A different possibility of enhancement comes from inversion transitions in molecules, where the splitting between the levels is due to a quantum tunneling amplitude that has a strong, exponential dependence on the electron-to-proton mass ratio. Our study of NH3 quasar spectra has already given the best limit on the variation of the electron-to-proton mass ratio.
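
    The enhancement argument above can be made concrete: for a transition of frequency nu whose alpha-dependence is described by a relativistic sensitivity coefficient q, the fractional shift is delta_nu/nu = (2q/nu)(delta_alpha/alpha), so a small splitting nu with an ordinary-sized q gives a large enhancement factor 2q/nu. A hedged Python sketch (the q, nu and delta_alpha values are round illustrative numbers, not results from this record):

        def fractional_shift(q_hz, nu_hz, dalpha_over_alpha):
            """Return delta_nu/nu for sensitivity q and transition frequency nu."""
            enhancement = 2.0 * q_hz / nu_hz
            return enhancement * dalpha_over_alpha

        da = 1e-17  # assumed fractional change in alpha (illustrative)
        # Ordinary optical transition: q comparable to nu, enhancement ~ 1.
        print(fractional_shift(1e15, 1e15, da))  # ~2e-17
        # Accidentally close levels: same q, tiny splitting, enhancement ~ 1e6.
        print(fractional_shift(1e15, 1e9, da))   # ~2e-11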

  6. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities, Part I: Fundamental analysis and diagnostics.

    SciTech Connect

    Sankaran, Ramanan; Mason, Scott D.; Chen, Jacqueline H.; Hawkes, Evatt R.; Im, Hong G.

    2005-01-01

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.

  7. Time-Varying Fundamental Constants

    NASA Astrophysics Data System (ADS)

    Olive, Keith

    2003-04-01

    Recent data from quasar absorption systems can be interpreted as arising from a time variation in the fine-structure constant. However, there are numerous cosmological, astrophysical, and terrestrial bounds on any such variation. These include bounds from Big Bang Nucleosynthesis (from the ^4He abundance), the Oklo reactor (from the resonant neutron capture cross-section of Sm), and from meteoritic lifetimes of heavy radioactive isotopes. The bounds on the variation of the fine-structure constant are significantly strengthened in models where all gauge and Yukawa couplings vary in a dependent manner, as would be expected in unified theories. Models which are consistent with all data are severely challenged when Equivalence Principle constraints are imposed.

  8. New Quasar Studies Keep Fundamental Physical Constant Constant

    NASA Astrophysics Data System (ADS)

    2004-03-01

    Very Large Telescope sets stringent limit on possible variation of the fine-structure constant over cosmological time Summary Detecting or constraining the possible time variations of fundamental physical constants is an important step toward a complete understanding of basic physics and hence the world in which we live, a step in which astrophysics proves most useful. Previous astronomical measurements of the fine structure constant, the dimensionless number that determines the strength of interactions between charged particles and electromagnetic fields, suggested that this particular constant is increasing very slightly with time. If confirmed, this would have very profound implications for our understanding of fundamental physics. New studies, conducted using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at Paranal (Chile), secured new data of unprecedented quality. These data, combined with a very careful analysis, have provided the strongest astronomical constraints to date on the possible variation of the fine structure constant. They show that, contrary to previous claims, no evidence exists for a time variation of this fundamental constant. PR Photo 07/04: Relative Changes with Redshift of the Fine Structure Constant (VLT/UVES) A fine constant To explain the Universe and to represent it mathematically, scientists rely on so-called fundamental constants or fixed numbers. The fundamental laws of physics, as we presently understand them, depend on about 25 such constants. Well-known examples are the gravitational constant, which defines the strength of the force acting between two bodies, such as the Earth and the Moon, and the speed of light. One of these constants is the so-called "fine structure constant", alpha = 1/137.03599958, a combination of the electrical charge of the electron, the Planck constant and the speed of light. The fine structure constant describes how electromagnetic forces hold
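
    As a cross-check on the quoted number, the defining combination alpha = e^2/(4*pi*epsilon_0*hbar*c) can be evaluated from tabulated constants; a sketch using SciPy's bundled CODATA values (not part of the ESO release):

        from scipy.constants import e, epsilon_0, hbar, c, pi

        alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
        print(alpha)      # ~7.2973525e-3
        print(1 / alpha)  # ~137.036, matching the value quoted above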

  9. New Quasar Studies Keep Fundamental Physical Constant Constant

    NASA Astrophysics Data System (ADS)

    2004-03-01

    fundamental constant at play here, alpha. However, the observed distribution of the elements is consistent with calculations assuming that the value of alpha at that time was precisely the same as the value today. Over those 2 billion years, the change of alpha therefore has to be smaller than about 2 parts per 100 million. If present at all, this is a rather small change indeed. But what about changes much earlier in the history of the Universe? To measure this we must find means to probe still further into the past. And this is where astronomy can help. Because, even though astronomers can't generally do experiments, the Universe itself is a huge atomic physics laboratory. By studying very remote objects, astronomers can look back over a long time span. In this way it becomes possible to test the values of the physical constants when the Universe had only 25% of its present age, that is, about 10,000 million years ago. Very far beacons To do so, astronomers rely on spectroscopy - the measurement of the properties of light emitted or absorbed by matter. When the light from a flame is observed through a prism, a rainbow is visible. When sprinkling salt on the flame, distinct yellow lines are superimposed on the usual colours of the rainbow, so-called emission lines. Putting a gas cell between the flame and the prism, one sees instead dark lines superimposed on the rainbow: these are absorption lines. The wavelengths of these emission and absorption lines are directly related to the energy levels of the atoms in the salt or in the gas. Spectroscopy thus allows us to study atomic structure. The fine structure of atoms can be observed spectroscopically as the splitting of certain energy levels in those atoms. So if alpha were to change over time, the emission and absorption spectra of these atoms would change as well. One way to look for any changes in the value of alpha over the history of the Universe is therefore to measure the spectra of distant quasars, and compare the wavelengths of

  10. Fundamental Constants and Tests with Simple Atoms

    NASA Astrophysics Data System (ADS)

    Tan, Joseph

    2015-05-01

    Precise measurements with simple atoms provide stringent tests of physical laws, improving the accuracy of fundamental constants--a set of which will be selected to fully define the proposed New International System of Units. This talk focuses on the atomic constants (namely, the Rydberg constant, the fine-structure constant, and the proton charge radius), discussing the impact of the proton radius obtained from the Lamb-shift measurements in muonic hydrogen. Significant discrepancies persist despite years of careful examination: the slightly smaller proton radius obtained from muonic hydrogen requires the Rydberg constant and the fine-structure constant to have values that disagree significantly with the CODATA recommendations. After giving a general overview, I will discuss our effort to produce one-electron ions in Rydberg states, to enable a different test of theory and measurement of the Rydberg constant.

  11. Man's Size in Terms of Fundamental Constants.

    ERIC Educational Resources Information Center

    Press, William H.

    1980-01-01

    Reviews calculations that derive an order of magnitude expression for the size of man in terms of fundamental constants, assuming that man satisfies these three properties: he is made of complicated molecules; he requires an atmosphere which is not hydrogen and helium; and he is as large as possible. (CS)

  12. Fundamental constants: The teamwork of precision

    NASA Astrophysics Data System (ADS)

    Myers, Edmund G.

    2014-02-01

    A new value for the atomic mass of the electron is a link in a chain of measurements that will enable a test of the standard model of particle physics with better than part-per-trillion precision. See Letter p.467

  13. Differential Mobility Spectrometry: Preliminary Findings on Determination of Fundamental Constants

    NASA Technical Reports Server (NTRS)

    Limero, Thomas; Cheng, Patti; Boyd, John

    2007-01-01

    The electron capture detector (ECD) has been used for 40+ years (1) to derive fundamental constants such as a compound's electron affinity. Given this historical perspective, it is not surprising that differential mobility spectrometry (DMS) might be used in a like manner. This paper will present data from a gas chromatography (GC)-DMS instrument that illustrates the potential capability of this device to derive fundamental constants for electron-capturing compounds. Potential energy curves will be used to provide possible explanation of the data.

  14. Early universe constraints on time variation of fundamental constants

    SciTech Connect

    Landau, Susana J.; Mosquera, Mercedes E.; Scoccola, Claudia G.; Vucetich, Hector

    2008-10-15

    We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, the cosmic microwave background, and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant α and the Higgs vacuum expectation value without assuming any theoretical framework. A variation in the Higgs vacuum expectation value leads to a variation in the electron mass, among other effects. Along the same line, we study the variation of α and the electron mass m_e. In a purely phenomenological fashion, we derive a relationship between both variations.

  15. Testing Theories That Predict Time Variation of Fundamental Constants

    NASA Astrophysics Data System (ADS)

    Landau, Susana J.; Vucetich, Hector

    2002-05-01

    We consider astronomical and local bounds on the time variation of fundamental constants to test some generic Kaluza-Klein-like models and some particular cases of Bekenstein theory. Bounds on the free parameters of the different theories are obtained. Furthermore, we find that none of the proposed models is able to explain recent results (as from Webb and coworkers in 1999 and 2001) claiming an observed variation of the fine-structure constant from quasar absorption systems at redshifts 0.5

  16. Machine Shop Fundamentals: Part I.

    ERIC Educational Resources Information Center

    Kelly, Michael G.; And Others

    These instructional materials were developed and designed for secondary and adult limited English proficient students enrolled in machine tool technology courses. Part 1 includes 24 lessons covering introduction, safety and shop rules, basic machine tools, basic machine operations, measurement, basic blueprint reading, layout, and bench tools.…

  17. Constraining fundamental constant evolution with H I and OH lines

    SciTech Connect

    Kanekar, N.; Langston, G. I.; Stocke, J. T.; Carilli, C. L.; Menten, K. M.

    2012-02-20

    We report deep Green Bank Telescope spectroscopy in the redshifted H I 21 cm and OH 18 cm lines from the z = 0.765 absorption system toward PMN J0134-0931. A comparison between the 'satellite' OH 18 cm line redshifts, or between the redshifts of the H I 21 cm and 'main' OH 18 cm lines, is sensitive to changes in different combinations of three fundamental constants: the fine structure constant α, the proton-electron mass ratio μ ≡ m_p/m_e, and the proton g-factor g_p. We find that the satellite OH 18 cm lines are not perfectly conjugate, with both different line shapes and stronger 1612 MHz absorption than 1720 MHz emission. This implies that the satellite lines of this absorber are not suitable to probe fundamental constant evolution. A comparison between the redshifts of the H I 21 cm and OH 18 cm lines, via a multi-Gaussian fit, yields the strong constraint [ΔF/F] = (−5.2 ± 4.3) × 10⁻⁶, where F ≡ g_p[μα²]^1.57 and the error budget includes contributions from both statistical and systematic errors. We thus find no evidence for a change in the constants between z = 0.765 and the present epoch. Incorporating the constraint [Δμ/μ] < 3.6 × 10⁻⁷ from another absorber at a similar redshift and assuming that fractional changes in g_p are much smaller than those in α, we obtain [Δα/α] = (−1.7 ± 1.4) × 10⁻⁶ over a look-back time of 6.7 Gyr.
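
    The final step in the abstract is simple first-order error propagation: since F ≡ g_p[μα²]^1.57 = g_p μ^1.57 α^3.14, we have ΔF/F ≈ Δg_p/g_p + 1.57 Δμ/μ + 3.14 Δα/α, and with the g_p and μ terms negligible the quoted Δα/α follows. A sketch of the arithmetic in Python:

        dF, sigma_F = -5.2e-6, 4.3e-6  # measured [Delta F / F] and uncertainty

        # Neglect the g_p and mu terms, as the abstract assumes.
        d_alpha = dF / 3.14
        sigma_alpha = sigma_F / 3.14
        print(f"[d_alpha/alpha] = ({d_alpha:.2e} +/- {sigma_alpha:.2e})")
        # -> (-1.66e-06 +/- 1.37e-06), i.e. the quoted (-1.7 +/- 1.4) x 10^-6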

  18. Stars in other universes: stellar structure with different fundamental constants

    NASA Astrophysics Data System (ADS)

    Adams, Fred C.

    2008-08-01

    Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model that allows for physical understanding of these stars with unconventional parameters, as well as a means to survey the relevant parameter space. In this work, the most important quantities that determine stellar properties—and are allowed to vary—are the gravitational constant G, the fine structure constant α and a composite parameter C that determines nuclear reaction rates. Working within this model, we delineate the portion of parameter space that allows for the existence of stars. Our main finding is that a sizable fraction of the parameter space (roughly one-fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare. In addition, we briefly consider the possibility that unconventional stars (e.g. black holes, dark matter stars) play the role filled by stars in our universe and constrain the allowed parameter space.

  19. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
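
    For concreteness, the weighted-least-squares procedure with the Birge ratio mentioned above amounts to an inverse-variance weighted mean whose uncertainty is inflated when the data scatter more than their quoted uncertainties allow. A minimal Python sketch (the input values are hypothetical, chosen only to exercise the function):

        import numpy as np

        def weighted_mean_birge(x, u):
            """Inverse-variance weighted mean with Birge-ratio inflation.

            x: measured values; u: their standard uncertainties.
            Returns (mean, adjusted uncertainty, Birge ratio).
            """
            x, u = np.asarray(x, float), np.asarray(u, float)
            w = 1.0 / u**2
            mean = np.sum(w * x) / np.sum(w)
            u_mean = np.sqrt(1.0 / np.sum(w))
            birge = np.sqrt(np.sum(w * (x - mean) ** 2) / (len(x) - 1))
            # Inflate the uncertainty when the data are over-dispersed.
            return mean, u_mean * max(birge, 1.0), birge

        print(weighted_mean_birge([6.6743e-11, 6.6757e-11, 6.6712e-11],
                                  [1.5e-15, 2.5e-15, 2.0e-15]))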

  20. Violation of fundamental symmetries and variation of fundamental constants in atomic phenomena

    SciTech Connect

    Flambaum, V. V.

    2007-06-13

    We present a review of recent works on the variation of fundamental constants and the violation of parity in atoms and nuclei. Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental 'constants' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We describe recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses (Higgs vacuum). There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation consists in the comparison of different atomic clocks. A huge enhancement of the variation effects happens in transitions between very close atomic and molecular energy levels. A new idea is to build a 'nuclear' clock based on a UV transition in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude. Measurements of the violation of the fundamental symmetries parity (P) and time reversal (T) in atoms allow one to test unification theories in atomic experiments. We have developed an accurate method of many-body calculations: all-orders summation of the dominating diagrams in the residual e-e interaction. To calculate QED radiative corrections to energy levels and electromagnetic amplitudes in many-electron atoms and molecules we derived the 'radiative potential' and the low-energy theorem. This method is simple and can be easily incorporated into any many-body theory approach. Using the radiative correction and many-body calculations we obtained the PNC amplitude E_PNC = −0.898(1 ± 0.5%) × 10⁻¹¹ ie a_B(−Q_W/N). From the measurements of the PNC amplitude we extracted the Cs weak charge Q_W = −72.66(29)_exp(36)_theor.

  21. Violation of fundamental symmetries and variation of fundamental constants in atomic phenomena

    NASA Astrophysics Data System (ADS)

    Flambaum, V. V.

    2007-06-01

    We present a review of recent works on the variation of fundamental constants and the violation of parity in atoms and nuclei. Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We describe recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses (Higgs vacuum). There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation consists in the comparison of different atomic clocks. A huge enhancement of the variation effects happens in transitions between very close atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on a UV transition in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Measurements of the violation of the fundamental symmetries parity (P) and time reversal (T) in atoms allow one to test unification theories in atomic experiments. We have developed an accurate method of many-body calculations: all-orders summation of the dominating diagrams in the residual e-e interaction. To calculate QED radiative corrections to energy levels and electromagnetic amplitudes in many-electron atoms and molecules we derived the ``radiative potential'' and the low-energy theorem. This method is simple and can be easily incorporated into any many-body theory approach. Using the radiative correction and many-body calculations we obtained the PNC amplitude E_PNC = −0.898(1 ± 0.5%) × 10⁻¹¹ ie a_B(−Q_W/N). From the measurements of the PNC amplitude we extracted the Cs weak charge Q_W = −72.66(29)_exp(36)_theor.

  22. Base units of the SI, fundamental constants and modern quantum physics.

    PubMed

    Bordé, Christian J

    2005-09-15

    Over the past 40 years, a number of discoveries in quantum physics have completely transformed our vision of fundamental metrology. This revolution starts with the frequency stabilization of lasers using saturation spectroscopy and the redefinition of the metre by fixing the velocity of light c. Today, the trend is to redefine all SI base units from fundamental constants and we discuss strategies to achieve this goal. We first consider a kinematical frame, in which fundamental constants with a dimension, such as the speed of light c, the Planck constant h, the Boltzmann constant k_B or the electron mass m_e, can be used to connect and redefine base units. The various interaction forces of nature are then introduced in a dynamical frame, where they are completely characterized by dimensionless coupling constants such as the fine structure constant α or its gravitational analogue α_G. This point is discussed by rewriting the Maxwell and Dirac equations with new force fields and these coupling constants. We describe and stress the importance of various quantum effects leading to the advent of this new quantum metrology. In the second part of the paper, we present the status of the seven base units and the prospects of their possible redefinitions from fundamental constants in an experimental perspective. The two parts can be read independently and they point to the same conclusions concerning the redefinitions of base units. The concept of rest mass is directly related to the Compton frequency of a body, which is precisely what is measured by the watt balance. The conversion factor between mass and frequency is the Planck constant, which could therefore be fixed in a realistic and consistent new definition of the kilogram based on its Compton frequency. We also discuss how the Boltzmann constant could be better determined and fixed to replace the present definition of the kelvin.
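
    The mass-frequency link invoked at the end of this abstract is nu_C = m*c^2/h. A sketch of the conversion for a 1 kg body, using SciPy's CODATA values:

        from scipy.constants import c, h

        m = 1.0  # kg
        nu_C = m * c**2 / h
        print(f"Compton frequency of 1 kg: {nu_C:.4e} Hz")  # ~1.3564e+50 Hz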

  23. Variation of the Fundamental Constants: Theory and Observations

    NASA Astrophysics Data System (ADS)

    Flambaum, V. V.

    2007-10-01

    A review of recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses (Higgs vacuum) is presented. The results from Big Bang nucleosynthesis, quasar absorption spectra, and Oklo natural nuclear reactor data give us the space-time variation on the Universe lifetime scale. Comparison of different atomic clocks gives us the present time variation. Assuming linear variation with time we can compare different results. The best limit on the variation of the electron-to-proton mass ratio μ = me/Mp and Xe = me/ΛQCD follows from the quasar absorption spectra:1 μ̇/μ = Ẋe/Xe = (1 ± 3) × 10⁻¹⁶ yr⁻¹. A combination of this result and the atomic clock results2,3 gives the best limit on the variation of α: α̇/α = (−0.8 ± 0.8) × 10⁻¹⁶ yr⁻¹. The Oklo natural reactor gives the best limit on the variation of Xs = ms/ΛQCD, where ms is the strange quark mass:4,5 |Ẋs/Xs| < 10⁻¹⁸ yr⁻¹. Note that the Oklo data cannot give us any limit on the variation of α since the effect of α there is much smaller than the effect of Xs and should be neglected. A huge enhancement of the relative variation effects happens in transitions between close atomic, molecular and nuclear energy levels. We suggest several new cases where the levels are very narrow. Large enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance. How may changing physical constants and violation of local position invariance occur? Light scalar fields very naturally appear in modern cosmological models, affecting parameters of the Standard Model (e.g. α). Cosmological variations of these scalar fields should occur because of drastic changes of the matter composition in the Universe: the latest such event is rather recent (about 5 billion years ago), from matter to dark energy domination. Massive bodies (stars or galaxies) can also affect physical constants. They have large scalar charge S

  24. A Fundamental Breakdown. Part II: Manipulative Skills

    ERIC Educational Resources Information Center

    Townsend, J. Scott; Mohr, Derek J.

    2005-01-01

    In the May, 2005, issue of "TEPE," the "Research to Practice" section initiated a two-part series focused on assessing fundamental locomotor and manipulative skills. The series was generated in response to research by Pappa, Evanggelinou, & Karabourniotis (2005), recommending that curricular programming in physical education at the elementary…

  25. Search for variation of fundamental constants and violations of fundamental symmetries using isotope comparisons

    SciTech Connect

    Berengut, J. C.; Flambaum, V. V.; Kava, E. M.

    2011-10-15

    Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper, we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark-mass variation, in a number of isotopes of experimental interest including 201,199Hg and 87,85Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions on variation in the nuclear radius, which in turn is proportional to any change in quark mass. Our calculations of expectation values of proton and neutron spin in nuclei are also needed to interpret measurements of violations of fundamental symmetries.

  26. Fundamental Insight on Developing Low Dielectric Constant Polyimides

    NASA Technical Reports Server (NTRS)

    Simpson, J. O.; SaintClair, A. K.

    1997-01-01

    Thermally stable, durable, insulative polyimides are in great demand for the fabrication of microelectronic devices. In this investigation dielectric and optical properties have been studied for several series of aromatic polyimides. The effect of polarizability, fluorine content, and free volume on dielectric constant was examined. In general, minimizing polarizability, maximizing free volume and fluorination all lowered dielectric constants in the polyimides studied.

  27. The fundamental constants of orthotropic affine plate/slab equations

    NASA Technical Reports Server (NTRS)

    Brunelle, E. J.

    1984-01-01

    The global constants associated with orthotropic slab/plate equations are discussed, and the rotational behavior of the modulus/compliance components associated with orthotropic slabs/plates are addressed. It is concluded that one cluster constant is less than or equal to unity for all physically possible materials. Rotationally anomalous behavior is found in two materials, and a simple inequality which can be used to identify regular or anomalous behavior is presented and discussed in detail.

  28. CODATA recommended values of the fundamental physical constants: 2002

    NASA Astrophysics Data System (ADS)

    Mohr, Peter J.; Taylor, Barry N.

    2005-01-01

    This paper gives the 2002 self-consistent set of values of the basic constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA) for international use. Further, it describes in detail the adjustment of the values of the subset of constants on which the complete 2002 set of recommended values is based. Two noteworthy additions in the 2002 adjustment are recommended values for the bound-state rms charge radii of the proton and deuteron and tests of the exactness of the Josephson and quantum-Hall-effect relations K_J = 2e/h and R_K = h/e², where K_J and R_K are the Josephson and von Klitzing constants, respectively, e is the elementary charge, and h is the Planck constant. The 2002 set replaces the previously recommended 1998 CODATA set. The 2002 adjustment takes into account the data considered in the 1998 adjustment as well as the data that became available between 31 December 1998, the closing date of that adjustment, and 31 December 2002, the closing date of the new adjustment. The differences between the 2002 and 1998 recommended values compared to the uncertainties of the latter are generally not unreasonable. The new CODATA set of recommended values may also be found on the World Wide Web at physics.nist.gov/constants.

  29. CODATA recommended values of the fundamental physical constants: 2006

    SciTech Connect

    Mohr, Peter J.; Taylor, Barry N.; Newell, David B.

    2008-09-15

    This paper gives the 2006 self-consistent set of values of the basic constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA) for international use. Further, it describes in detail the adjustment of the values of the constants, including the selection of the final set of input data based on the results of least-squares analyses. The 2006 adjustment takes into account the data considered in the 2002 adjustment as well as the data that became available between 31 December 2002, the closing date of that adjustment, and 31 December 2006, the closing date of the new adjustment. The new data have led to a significant reduction in the uncertainties of many recommended values. The 2006 set replaces the previously recommended 2002 CODATA set and may also be found on the World Wide Web at physics.nist.gov/constants.

  30. Current Status of the Problem of Cosmological Variability of Fundamental Physical Constants

    NASA Astrophysics Data System (ADS)

    Varshalovich, D.A.; Ivanchik, A.V.; Orlov, A.V.; Potekhin, A.Y.; Petitjean, P.

    We review the current status of the problem of cosmological variability of fundamental physical constants, provided by modern laboratory experiments, Oklo phenomena analysis, and especially astronomical observations.

  31. Big Bang nucleosynthesis as a probe of varying fundamental ``constants''

    NASA Astrophysics Data System (ADS)

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2007-11-01

    We analyze the effect of variation of fundamental couplings and mass scales on primordial nucleosynthesis in a systematic way. The first step establishes the response of primordial element abundances to the variation of a large number of nuclear physics parameters, including nuclear binding energies. We find a strong influence of the n-p mass difference, of the nucleon mass and of A = 3,4,7 binding energies. A second step relates the nuclear parameters to the parameters of the Standard Model of particle physics. The deuterium, and, above all, 7Li abundances depend strongly on the average light quark mass. We calculate the behaviour of abundances when variations of fundamental parameters obey relations arising from grand unification. We also discuss the possibility of a substantial shift in the lithium abundance while the deuterium and 4He abundances are only weakly affected.

  32. Fundamental Constants as Monitors of Particle Physics and Dark Energy

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger

    2016-03-01

    This contribution considers the constraints on particle physics and dark energy parameter space imposed by the astronomical observational constraints on the variation of the proton-to-electron mass ratio μ and the fine structure constant α. These constraints impose limits on the temporal variation of these parameters on a time scale greater than half the age of the universe, a time scale inaccessible to laboratory facilities such as the Large Hadron Collider. The limits on the variation of μ and α constrain combinations of the QCD scale, the Higgs VEV and the Yukawa coupling on the particle physics side, and a combination of the temporal variation of a rolling scalar field and its coupling to the constants on the dark energy side.

  33. Precision Measurement of Fundamental Constants Using GAMS4

    PubMed Central

    Dewey, M. S.; Kessler, E. G.

    2000-01-01

    We discuss the connection of high-energy gamma-ray measurements with precision atomic mass determinations. These rather different technologies, properly combined, are shown to lead to new values for the neutron mass and the molar Planck constant. We then proceed to describe the gamma-ray measurement process using the GAMS4 facility at the Institut Laue-Langevin and its application to a recent measurement of the 2.2 MeV deuteron binding energy and the neutron mass. Our paper concludes by describing the first crystal diffraction measurement of the 8.6 MeV 36Cl binding energy. PMID:27551583

  34. Can we test dark energy with running fundamental constants?

    NASA Astrophysics Data System (ADS)

    Doran, Michael

    2005-04-01

    We investigate a link between the running of the fine structure constant α and a time evolving scalar dark energy field. Employing a versatile parametrization for the equation of state, we exhaustively cover the space of dark energy models. Under the assumption that the change in α is to first order given by the evolution of the quintessence field, we show that current Oklo, quasi-stellar object and equivalence principle observations restrict the model parameters considerably more strongly than observations of the cosmic microwave background, large scale structure and supernovae Ia combined.

  35. Search for variations of fundamental constants using atomic fountain clocks.

    PubMed

    Marion, H; Pereira Dos Santos, F; Abgrall, M; Zhang, S; Sortais, Y; Bize, S; Maksimovic, I; Calonico, D; Grünert, J; Mandache, C; Lemonde, P; Santarelli, G; Laurent, Ph; Clairon, A; Salomon, C

    2003-04-18

    Over five years, we have compared the hyperfine frequencies of 133Cs and 87Rb atoms in their electronic ground state using several laser-cooled 133Cs and 87Rb atomic fountains with an accuracy of approximately 10⁻¹⁵. These measurements set a stringent upper bound to a possible fractional time variation of the ratio between the two frequencies: d/dt ln(ν_Rb/ν_Cs) = (0.2 ± 7.0) × 10⁻¹⁶ yr⁻¹ (1σ uncertainty). The same limit applies to a possible variation of the quantity (μ_Rb/μ_Cs)α^−0.44, which involves the ratio of nuclear magnetic moments and the fine structure constant.
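
    To first order the log-derivatives add, d/dt ln(ν_Rb/ν_Cs) = d/dt ln(μ_Rb/μ_Cs) − 0.44 d/dt ln α, so the measured drift bounds that combination as a whole. The Python sketch below shows how the same numbers would read as a bound on α alone, under the purely illustrative assumption (not made by the paper) that the magnetic-moment ratio is constant:

        drift, sigma = 0.2e-16, 7.0e-16  # d/dt ln(nu_Rb/nu_Cs), in yr^-1

        # Attribute the whole drift to alpha (illustrative assumption).
        alpha_rate = -drift / 0.44
        alpha_sigma = sigma / 0.44
        print(f"d(ln alpha)/dt = ({alpha_rate:.1e} +/- {alpha_sigma:.1e}) / yr")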

  36. Constraints on alternate universes: stars and habitable planets with different fundamental constants

    NASA Astrophysics Data System (ADS)

    Adams, Fred C.

    2016-02-01

    This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. We focus on the fine structure constant α and the gravitational structure constant αG, and find the region in the α-αG plane that supports working stars and habitable planets. This work is motivated, in part, by the possibility that different versions of the laws of physics could be realized within other universes. The following constraints are enforced: [A] long-lived stable nuclear burning stars exist, [B] planetary surface temperatures are hot enough to support chemical reactions, [C] stellar lifetimes are long enough to allow biological evolution, [D] planets are massive enough to maintain atmospheres, [E] planets are small enough in mass to remain non-degenerate, [F] planets are massive enough to support sufficiently complex biospheres, [G] planets are smaller in mass than their host stars, and [H] stars are smaller in mass than their host galaxies. This paper delineates the portion of the α-αG plane that satisfies all of these constraints. The results indicate that viable universes—with working stars and habitable planets—can exist within a parameter space where the structure constants α and αG vary by several orders of magnitude. These constraints also provide upper bounds on the structure constants (α, αG) and their ratio. We find the limit αG/α ≲ 10⁻³⁴, which shows that habitable universes must have a large hierarchy between the strengths of the gravitational force and the electromagnetic force.
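
    For orientation, both structure constants can be evaluated for our own universe. Taking the conventional definition αG = G m_p²/(ħc) (standard usage, though not spelled out in the abstract), the observed ratio sits comfortably below the quoted bound; a sketch:

        from scipy.constants import G, hbar, c, proton_mass, fine_structure

        alpha_G = G * proton_mass**2 / (hbar * c)
        print(alpha_G)                   # ~5.9e-39
        print(alpha_G / fine_structure)  # ~8.1e-37, below the 1e-34 bound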

  37. Protonated Nitrous Oxide, NNOH(+): Fundamental Vibrational Frequencies and Spectroscopic Constants from Quartic Force Fields

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.

    2013-01-01

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B0 and C0 rotational constants are within 6 MHz of their experimental values and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH(+) is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH(+) and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.

  38. Protonated nitrous oxide, NNOH+: fundamental vibrational frequencies and spectroscopic constants from quartic force fields.

    PubMed

    Huang, Xinchuan; Fortenberry, Ryan C; Lee, Timothy J

    2013-08-28

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally-averaged B0 and C0 rotational constants are within 6 MHz of their experimental values and the D_J quartic distortion constants agree with experiment to within 3%. The known gas phase O-H stretch of NNOH(+) is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH(+) and its deuterated isotopologue. This high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the interstellar medium and the laboratory. PMID:24007003

  39. Effects of variation of fundamental constants from Big Bang to atomic clocks

    NASA Astrophysics Data System (ADS)

    Flambaum, Victor

    2004-05-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental "constants" in the expanding Universe. I discuss effects of the variation of the fine structure constant, the strong interaction, the quark mass and the gravitational constant. The measurements of these variations cover the lifespan of the Universe from a few minutes after the Big Bang to the present time and give controversial results. There are some hints for the variations in Big Bang nucleosynthesis, quasar absorption spectra and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in the comparison of different atomic clocks. A billion-fold enhancement of the variation effects happens in transitions between accidentally degenerate atomic energy levels.

  40. Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+

    PubMed Central

    Biesheuvel, J.; Karr, J.-Ph.; Hilico, L.; Eikema, K. S. E.; Ubachs, W.; Koelemeij, J. C. J.

    2016-01-01

    The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886

  41. Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD(+).

    PubMed

    Biesheuvel, J; Karr, J-Ph; Hilico, L; Eikema, K S E; Ubachs, W; Koelemeij, J C J

    2016-01-01

    The simplest molecules in nature, molecular hydrogen ions in the form of H2(+) and HD(+), provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD(+) by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physical constants and laws. PMID:26815886

  42. Constraining the variation of fundamental constants using 18 cm OH lines.

    PubMed

    Chengalur, Jayaram N; Kanekar, Nissim

    2003-12-12

    We describe a new technique to estimate variations in the fundamental constants using 18 cm OH absorption lines, with the advantage that all lines arise in the same species, allowing a clean comparison between the measured redshifts. In conjunction with one additional transition, it is possible to simultaneously measure changes in α, g_p, and y ≡ m_e/m_p. We use the 1665 and 1667 MHz line redshifts in conjunction with those of HI 21 cm and mm-wave molecular absorption in a gravitational lens at z ≈ 0.68 to constrain changes in the three parameters over the redshift range 0 < z ≲ 0.68. While the constraints are relatively weak (of order 1 part in 10³), this is the first simultaneous constraint on the variation of all three parameters. Either one (or more) of α, g_p, and y must vary with cosmological time or there must be systematic velocity offsets between the OH, HCO+, and HI absorbing clouds.

  43. Exploring variations in the fundamental constants with ELTs: the CODEX spectrograph on OWL

    NASA Astrophysics Data System (ADS)

    Molaro, Paolo; Murphy, Michael T.; Levshakov, Sergei A.

    Cosmological variations in the fine structure constant, α, can be probed through precise velocity measurements of metallic absorption lines from intervening gas clouds seen in spectra of distant quasars. Data from the Keck/HIRES instrument support a variation in α of 6 parts per million. Such a variation would have profound implications, possibly providing a window into the extra spatial dimensions required by unified theories such as string/M-theory. However, recent results from VLT/UVES suggest no variation in α. The COsmic Dynamics EXperiment (CODEX) spectrograph currently being designed for the ESO OWL telescope (Pasquini et al. 2005), with a resolution high enough to properly resolve even the narrowest of metallic absorption lines, R > 150000, will achieve a 2-to-3 order-of-magnitude precision increase in Δα/α. This will rival the precision available from the Oklo natural fission reactor and upcoming satellite-borne atomic clock experiments. Given the vital constraints on fundamental physics possible, the ELT community must consider a high-resolution optical spectrograph such as CODEX.

  44. Gyromagnetic factors and atomic clock constraints on the variation of fundamental constants

    SciTech Connect

    Luo Feng; Olive, Keith A.; Uzan, Jean-Philippe

    2011-11-01

    We consider the effect of the coupled variations of fundamental constants on the nucleon magnetic moment. The nucleon g-factor enters into the interpretation of the measurements of variations in the fine-structure constant, α, in both the laboratory (through atomic clock measurements) and in astrophysical systems (e.g. through measurements of the 21 cm transitions). A null result can be translated into a limit on the variation of a set of fundamental constants, that is usually reduced to α. However, in specific models, particularly unification models, changes in α are always accompanied by corresponding changes in other fundamental quantities such as the QCD scale, ΛQCD. This work tracks the changes in the nucleon g-factors induced by changes in ΛQCD and the light quark masses. In principle, these coupled variations can improve the bounds on the variation of α by an order of magnitude from existing atomic clock and astrophysical measurements. Unfortunately, the calculation of the dependence of g-factors on fundamental parameters is notoriously model-dependent.

  45. Measuring Variations in the Fundamental Constants with the Square Kilometre Array

    NASA Astrophysics Data System (ADS)

    Curran, S.

    Recent theories of the fundamental interactions naturally predict space-time variations of the fundamental constants. In these theories (e.g. superstring and M-theory), the constants naturally emerge as functions of the scale-lengths of the extra dimensions (e.g., [1,2]). At present, no mechanism has been found for keeping the compactified scale-lengths fixed and so, if extra dimensions exist and their sizes undergo any cosmological evolution, our 3-D coupling constants may vary in time. Several other modern theories also provide strong motivation for an experimental search for variation in the fine structure constant, α ≡ e²/ħc. Interestingly, varying constants can provide alternative solutions to the "cosmological problems", e.g. flatness, horizon, etc. The most effective and well understood method of measuring variations in α is by observing absorption lines due to gas clouds along the line-of-sight to distant quasars. Recent detailed studies of the relative positions of heavy element optical transitions and comparison with present day (laboratory) wavelengths may indeed suggest that α has evolved with time [3,4], although this consensus is by no means universal [5]. It is therefore clear that an independent check is required, which can refute or confirm the optical results, thus providing a sound experimental test of possible unified theories. The study of redshifted radio absorption lines offers the best test of cosmological changes in the fundamental constants, although presently the paucity of systems exhibiting HI 21-cm and molecular absorption severely limits our ability to carry out statistically sound comparisons.

  46. Variation of fundamental constants in space and time: Theory and observations

    NASA Astrophysics Data System (ADS)

    Flambaum, V. V.

    2008-10-01

    A review of recent works devoted to the temporal and spatial variation of the fundamental constants and the dependence of the fundamental constants on the gravitational potential (violation of local position invariance) is presented. We discuss the variation of the fine structure constant α = e²/ħc, the strong interaction and fundamental masses (Higgs vacuum), e.g. the electron-to-proton mass ratio μ = me/Mp or Xe = me/ΛQCD and Xq = mq/ΛQCD. We also present new results from Big Bang nucleosynthesis and Oklo natural nuclear reactor data and propose new measurements of enhanced effects in atoms, nuclei and molecules, both in quasar and laboratory spectra.

  47. Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?

    NASA Astrophysics Data System (ADS)

    Stadnik, Y. V.; Flambaum, V. V.

    2015-11-01

    We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ , which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ , produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial 4He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial 4He abundance.

  9. Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?

    PubMed

    Stadnik, Y V; Flambaum, V V

    2015-11-13

    We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ, which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ, produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial ⁴He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial ⁴He abundance.

  10. Searching for dark matter and variation of fundamental constants with laser and maser interferometry.

    PubMed

    Stadnik, Y V; Flambaum, V V

    2015-04-24

    Any slight variations in the fundamental constants of nature, which may be induced by dark matter or some yet-to-be-discovered cosmic field, would characteristically alter the phase of a light beam inside an interferometer, which can be measured extremely precisely. Laser and maser interferometry may be applied to searches for the linear-in-time drift of the fundamental constants, detection of topological defect dark matter through transient-in-time effects, and for a relic, coherently oscillating condensate, which consists of scalar dark matter fields, through oscillating effects. Our proposed experiments require either minor or no modifications of existing apparatus, and offer extensive reach into important and unconstrained spaces of physical parameters.

  12. Dependence of macrophysical phenomena on the values of the fundamental constants

    NASA Astrophysics Data System (ADS)

    Press, W. H.; Lightman, A. P.

    1983-12-01

    Using simple arguments, it is considered how the fundamental constants determine the scales of various macroscopic phenomena, including the properties of solid matter; the distinction between rocks, asteroids, planets, and stars; the conditions on habitable planets; the length of the day and year; and the size and athletic ability of human beings. Most of the results, where testable, are accurate to within a couple of orders of magnitude.

  13. A search for varying fundamental constants using hertz-level frequency measurements of cold CH molecules

    PubMed Central

    Truppe, S.; Hendricks, R.J.; Tokunaga, S.K.; Lewandowski, H.J.; Kozlov, M.G.; Henkel, Christian; Hinds, E.A.; Tarbutt, M.R.

    2013-01-01

    Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α=(0.3 ± 1.1) × 10⁻⁷, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ=(−0.7 ± 2.2) × 10⁻⁷. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439
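
    Constraints of this kind are extracted by combining transitions whose fractional shifts respond differently to the constants, Δf/f = K_α Δα/α + K_μ Δμ/μ, and solving the resulting linear system. The sketch below uses invented sensitivity coefficients and offsets, not the actual CH values:

```python
import numpy as np

# Hypothetical sensitivity coefficients (K_alpha, K_mu) for two transitions;
# actual values require molecular structure calculations.
K = np.array([[1.7, -1.0],    # Lambda-doublet-type transition
              [0.6,  2.7]])   # rotational-type transition

# Hypothetical measured fractional offsets (astronomical vs laboratory).
df_over_f = np.array([2.0e-7, -1.5e-7])

# Solve K @ [dalpha/alpha, dmu/mu] = df/f for the two fractional variations.
dalpha, dmu = np.linalg.solve(K, df_over_f)
print(f"dalpha/alpha = {dalpha:.2e}, dmu/mu = {dmu:.2e}")
```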

  14. A search for varying fundamental constants using hertz-level frequency measurements of cold CH molecules.

    PubMed

    Truppe, S; Hendricks, R J; Tokunaga, S K; Lewandowski, H J; Kozlov, M G; Henkel, Christian; Hinds, E A; Tarbutt, M R

    2013-01-01

    Many modern theories predict that the fundamental constants depend on time, position or the local density of matter. Here we develop a spectroscopic method for pulsed beams of cold molecules, and use it to measure the frequencies of microwave transitions in CH with accuracy down to 3 Hz. By comparing these frequencies with those measured from sources of CH in the Milky Way, we test the hypothesis that fundamental constants may differ between the high- and low-density environments of the Earth and the interstellar medium. For the fine structure constant we find Δα/α=(0.3 ± 1.1) × 10⁻⁷, the strongest limit to date on such a variation of α. For the electron-to-proton mass ratio we find Δμ/μ=(-0.7 ± 2.2) × 10⁻⁷. We suggest how dedicated astrophysical measurements can improve these constraints further and can also constrain temporal variation of the constants. PMID:24129439

  15. Sensitivity of rotational transitions in CH and CD to a possible variation of fundamental constants

    NASA Astrophysics Data System (ADS)

    de Nijs, Adrian J.; Ubachs, Wim; Bethlem, Hendrick L.

    2012-09-01

    The sensitivity of rotational transitions in CH and CD to a possible variation of fundamental constants has been investigated. Largely enhanced sensitivity coefficients are found for specific transitions which are due to accidental degeneracies between the different fine-structure manifolds. These degeneracies occur when the spin-orbit coupling constant is close to four times the rotational constant. CH and particularly CD match this condition closely. Unfortunately, an analysis of the transition strengths shows that the same condition that leads to an enhanced sensitivity suppresses the transition strength, making these transitions too weak to be of relevance for testing the variation of fundamental constants over cosmological time scales. We propose a test in CH based on the comparison between the rotational transitions between the e and f components of the Ω'=1/2,J=1/2 and Ω'=3/2,J=3/2 levels at 532 and 536 GHz and other rotational or Λ-doublet transitions in CH involving the same absorbing ground levels. Such a test, to be performed by radioastronomy of highly redshifted objects, is robust against systematic effects.

  16. Competing bounds on the present-day time variation of fundamental constants

    SciTech Connect

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2009-04-15

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The 'winner in sensitivity' depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  17. Competing bounds on the present-day time variation of fundamental constants

    NASA Astrophysics Data System (ADS)

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2009-04-01

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The “winner in sensitivity” depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  18. A Different Look at Dark Energy and the Time Variation of Fundamental Constants

    SciTech Connect

    Weinstein, Marvin; /SLAC

    2011-02-07

    This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lemaitre-Robertson-Walker (FRW) cosmology implies very different things than for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to a Planck volume) and the relationship of the vacuum-energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints knowing the difference between w and minus one allows one to predict ẇ. It is argued that this is a robust result and if this prediction fails the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-moving cutoff implies a predictable time variation of fundamental constants. This is accompanied by a general discussion of why this is so, what are the strongest phenomenological limits upon this predicted variation, and which limits are in tension with the idea of a co-moving cutoff. It is pointed out, however, that a careful comparison of the predicted time variation of fundamental constants is not possible without restricting to a particular model field-theory and that is not done in this paper.

  19. Spectroscopy of antiprotonic helium atoms and its contribution to the fundamental physical constants

    PubMed Central

    Hayano, Ryugo S.

    2010-01-01

    The antiprotonic helium atom, a metastable neutral system consisting of an antiproton, an electron and a helium nucleus, was serendipitously discovered, and has been studied at CERN's antiproton decelerator facility. Its transition frequencies have recently been measured to nine digits of precision by laser spectroscopy. By comparing these experimental results with three-body QED calculations, the antiproton-to-electron mass ratio was determined as 1836.152674(5). This result contributed to the CODATA recommended values of the fundamental physical constants. PMID:20075605

  20. Manifestations of Dark matter and variation of the fundamental constants in atomic and astrophysical phenomena

    NASA Astrophysics Data System (ADS)

    Flambaum, Victor

    2016-05-01

    Low-mass boson dark matter particles produced after the Big Bang form a classical field and/or topological defects. In contrast to traditional dark matter searches, effects produced by the interaction of ordinary matter with this field and defects may be first power in the underlying interaction strength rather than the second or fourth power (which appears in a traditional search for dark matter). This may give a huge advantage since the dark matter interaction constant is extremely small. Interaction between the density of the dark matter particles and ordinary matter produces both "slow" cosmological evolution and oscillating variations of the fundamental constants, including the fine structure constant α and particle masses. Recent atomic dysprosium spectroscopy measurements and the primordial helium abundance data allowed us to improve on existing constraints on the quadratic interactions of the scalar dark matter with the photon, electron and light quarks by up to 15 orders of magnitude. Limits on the linear and quadratic interactions of the dark matter with W and Z bosons have been obtained for the first time. In addition to traditional methods to search for the variation of the fundamental constants (atomic clocks, quasar spectra, Big Bang nucleosynthesis, etc.) we discuss variations in phase shifts produced in laser/maser interferometers (such as giant LIGO, Virgo, GEO600 and TAMA300, and the table-top silicon cavity and sapphire interferometers), changes in pulsar rotational frequencies (which may have been observed already in pulsar glitches), non-gravitational lensing of cosmic radiation and the time-delay of pulsar signals. Other effects of dark matter and dark energy include apparent violation of the fundamental symmetries: oscillating or transient atomic electric dipole moments, precession of electron and nuclear spins about the direction of Earth's motion through an axion condensate, and axion-mediated spin-gravity couplings, violation of Lorentz invariance.

  1. The fundamentals of fetal magnetic resonance imaging: Part 2.

    PubMed

    Plunk, Matthew R; Chapman, Teresa

    2014-01-01

    Careful assessment of fetal anatomy by a combination of ultrasound and fetal magnetic resonance imaging offers the clinical teams and counselors caring for the patient information that can be critical for the management of both the mother and the fetus. In the second half of this 2-part review, we focus on space-occupying lesions in the fetal body. Because developing fetal tissues are programmed to grow rapidly, mass lesions can have a substantial effect on the formation of normal adjacent organs. Congenital diaphragmatic hernia and lung masses, fetal teratoma, and intra-abdominal masses are discussed, with an emphasis on differential etiologies and on fundamental management considerations. PMID:24974309

  2. Constraints on changes in fundamental constants from a cosmologically distant OH absorber or emitter.

    PubMed

    Kanekar, N; Carilli, C L; Langston, G I; Rocha, G; Combes, F; Subrahmanyan, R; Stocke, J T; Menten, K M; Briggs, F H; Wiklind, T

    2005-12-31

    We have detected the four 18 cm OH lines from the z ≈ 0.765 gravitational lens toward PMN J0134-0931. The 1612 and 1720 MHz lines are in conjugate absorption and emission, providing a laboratory to test the evolution of fundamental constants over a large lookback time. We compare the HI and OH main line absorption redshifts of the different components in the z ≈ 0.765 absorber and the z ≈ 0.685 lens toward B0218+357 to place stringent constraints on changes in F ≡ g_p[α²/μ]^1.57. We obtain [ΔF/F] = (0.44 ± 0.36stat ± 1.0sys) × 10⁻⁵, consistent with no evolution over the redshift range 0 < z ≤ 0.7. The measurements have a 2σ sensitivity of [Δα/α] < 6.7 × 10⁻⁶ or [Δμ/μ] < 1.4 × 10⁻⁵ to fractional changes in α and μ over a period of approximately 6.5 Gyr, half the age of the Universe. These are among the most sensitive constraints on changes in μ.
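
    The quoted 2σ sensitivities follow from the measured ΔF/F once F = g_p[α²/μ]^1.57 is linearized, ΔF/F ≈ Δg_p/g_p + 1.57(2Δα/α − Δμ/μ), varying one constant at a time. A rough check, assuming g_p fixed and combining the statistical and systematic errors in quadrature:

```python
import math

# Measured [dF/F] = (0.44 +/- 0.36_stat +/- 1.0_sys) x 10^-5 for
# F = g_p [alpha^2 / mu]^1.57.
stat, sys = 0.36e-5, 1.0e-5
two_sigma_F = 2 * math.hypot(stat, sys)   # errors combined in quadrature

# dF/F = 1.57 * (2*dalpha/alpha - dmu/mu); vary one constant at a time.
print(f"2-sigma on dalpha/alpha: {two_sigma_F / (2 * 1.57):.1e}")  # ~6.7e-6
print(f"2-sigma on dmu/mu     : {two_sigma_F / 1.57:.1e}")         # ~1.4e-5
```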

  3. Fundamentals of Physics, Part 1 (Chapters 1-11)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    . 10-8 Torque. 10-9 Newton's Second Law for Rotation. 10-10 Work and Rotational Kinetic Energy. Review & Summary. Questions. Problems. Chapter 11.Rolling, Torque, and Angular Momentum. When a jet-powered car became supersonic in setting the land-speed record, what was the danger to the wheels? 11-1 What Is Physics? 11-2 Rolling as Translation and Rotation Combined. 11-3 The Kinetic Energy of Rolling. 11-4 The Forces of Rolling. 11-5 The Yo-Yo. 11-6 Torque Revisited. 11-7 Angular Momentum. 11-8 Newton's Second Law in Angular Form. 11-9 The Angular Momentum of a System of Particles. 11-10 The Angular Momentum of a Rigid Body Rotating About a Fixed Axis. 11-11 Conservation of Angular Momentum. 11-12 Precession of a Gyroscope. Review & Summary. Questions. Problems. Appendix A: The International System of Units (SI). Appendix B: Some Fundamental Constants of Physics. Appendix C: Some Astronomical Data. Appendix D: Conversion Factors. Appendix E: Mathematical Formulas. Appendix F: Properties of the Elements. Appendix G: Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  4. Data stewardship - a fundamental part of the scientific method (Invited)

    NASA Astrophysics Data System (ADS)

    Foster, C.; Ross, J.; Wyborn, L. A.

    2013-12-01

    This paper emphasises the importance of data stewardship as a fundamental part of the scientific method, and the need to effect cultural change to ensure engagement by earth scientists. It is differentiated from the science of data stewardship per se. Earth System science generates vast quantities of data, and in the past, data analysis has been constrained by compute power, such that sub-sampling of data often provided the only way to reach an outcome. This is analogous to Kahneman's System 1 heuristic, with its simplistic and often erroneous outcomes. The development of HPC has liberated earth sciences such that the complexity and heterogeneity of natural systems can be utilised in modelling at any scale, global, or regional, or local; for example, movement of crustal fluids. Paradoxically, now that compute power is available, it is the stewardship of the data that is presenting the main challenges. There is a wide spectrum of issues: from effectively handling and accessing acquired data volumes [e.g. satellite feeds per day/hour]; through agreed taxonomy to effect machine to machine analyses; to idiosyncratic approaches by individual scientists. Except for the latter, most agree that data stewardship is essential. Indeed it is an essential part of the science workflow. As science struggles to engage and inform on issues of community importance, such as shale gas and fraccing, all parties must have equal access to data used for decision making; without that, there will be no social licence to operate or indeed access to additional science funding (Heidorn, 2008). The stewardship of scientific data is an essential part of the science process; but often it is regarded, wrongly, as entirely in the domain of data custodians or stewards. Geoscience Australia has developed a set of six principles that apply to all science activities within the agency: relevance to Government; collaborative science; quality science; transparent science; communicated science; sustained

  5. Towards an Increased Accuracy of Fundamental Properties of Stars: Proposing a Set of Nominal Astrophysical Parameters and Constants

    NASA Astrophysics Data System (ADS)

    Prša, A.; Harmanec, P.

    2012-04-01

    With the precision of space-borne photometers better than 100 ppm (i.e. MOST, CoRoT and Kepler), the derived stellar properties often suffer from systematic offsets due to the values used for solar mass, radius and luminosity, and to fundamental astrophysical constants. Stellar parameters are often expressed in terms of L⊙, M⊙ and R⊙, but the actual values used vary from study to study. Here, we propose to adopt a nominal set of fundamental solar parameters that will impose consistency across published works and eliminate systematics that stem from inconsistent values. We further implore the community to rigorously use the official values of fundamental astrophysical constants set forth by the Committee on Data for Science and Technology (CODATA).

  6. Fundamentals of Trapped Ion Mobility Spectrometry Part II: Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Silveira, Joshua A.; Michelmann, Karsten; Ridgeway, Mark E.; Park, Melvin A.

    2016-04-01

    Trapped ion mobility spectrometry (TIMS) is a new high resolution (R up to ~300) separation technique that utilizes an electric field to hold ions stationary against a moving gas. Recently, an analytical model for TIMS was derived and, in part, experimentally verified. A central, but not yet fully explored, component of the model involves the fluid dynamics at work. The present study characterizes the fluid dynamics in TIMS using simulations and ion mobility experiments. Results indicate that subsonic laminar flow develops in the analyzer, with pressure-dependent gas velocities between ~120 and 170 m/s measured at the position of ion elution. One of the key philosophical questions addressed is: how can mobility be measured in a dynamic system wherein the gas is expanding and its velocity is changing? We noted previously that the analytically useful work is primarily done on ions as they traverse the electric field gradient plateau in the analyzer. In the present work, we show that the position-dependent change in gas velocity on the plateau is balanced by a change in pressure and temperature, ultimately resulting in near position-independent drag force. That the drag force, and related variables, are nearly constant allows for the use of relatively simple equations to describe TIMS behavior. Nonetheless, we derive a more comprehensive model, which accounts for the spatial dependence of the flow variables. Experimental resolving power trends were found to be in close agreement with the theoretical dependence of the drag force, thus validating another principal component of TIMS theory.

  7. Fundamentals of Physics, Part 4 (Chapters 34-38)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-04-01

    of Time. 37-6 The Relativity of Length. 37-7 The Lorentz Transformation. 37-8 Some Consequences of the Lorentz Equations. 37-9 The Relativity of Velocities. 37-10 Doppler Effect for Light. 37-11 A New Look at Momentum. 37-12 A New Look at Energy. Review & Summary. Questions. Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  8. Kinetic performance limits of constant pressure versus constant flow rate gradient elution separations. Part I: theory.

    PubMed

    Broeckhoven, K; Verstraeten, M; Choikhet, K; Dittmann, M; Witt, K; Desmet, G

    2011-02-25

    We report on a general theoretical assessment of the potential kinetic advantages of running LC gradient elution separations in the constant-pressure mode instead of in the customarily used constant-flow rate mode. Analytical calculations as well as numerical simulation results are presented. It is shown that, provided both modes are run with the same volume-based gradient program, the constant-pressure mode can potentially offer an identical separation selectivity (except from some small differences induced by the difference in pressure and viscous heating trajectory), but in a significantly shorter time. For a gradient running between 5 and 95% of organic modifier, the decrease in analysis time can be expected to be of the order of some 20% for both water-methanol and water-acetonitrile gradients, and only weakly depending on the value of V_G/V₀ (or equivalently t_G/t₀). Obviously, the gain will be smaller when the start and end composition lie closer to the viscosity maximum of the considered water-organic modifier system. The assumptions underlying the obtained results (no effects of pressure and temperature on the viscosity or retention coefficient) are critically reviewed, and can be inferred to only have a small effect on the general conclusions. It is also shown that, under the adopted assumptions, the kinetic plot theory also holds for operations where the flow rate varies with the time, as is the case for constant-pressure operation. Comparing both operation modes in a kinetic plot representing the maximal peak capacity versus time, it is theoretically predicted here that both modes can be expected to perform equally well in the fully C-term dominated regime (where H varies linearly with the flow rate), while the constant pressure mode is advantageous for all lower flow rates. Near the optimal flow rate, and for linear gradients running from 5 to 95% organic modifier, time gains of the order of some 20% can be expected (or 25-30% when accounting for

  9. Fundamentals of Physics, Part 2 (Chapters 12-20)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    Engines. 20-8 A Statistical View of Entropy. Review & Summary Questions Problems. Appendices. A The International System of Units (SI). B Some Fundamental Constants of Physics. C Some Astronomical Data. D Conversion Factors. E Mathematical Formulas. F Properties of the Elements. G Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  10. Rovibrational Spectroscopic Constants and Fundamental Vibrational Frequencies for Isotopologues of Cyclic and Bent Singlet HC2N isomers

    NASA Technical Reports Server (NTRS)

    Inostroza, Natalia; Fortenberry, Ryan C.; Huang, Xinchuan; Lee, Timothy J.

    2013-01-01

    Through established, highly-accurate ab initio quartic force fields (QFFs), a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly-dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A′ HCCN.

  11. Rovibrational spectroscopic constants and fundamental vibrational frequencies for isotopologues of cyclic and bent singlet HC₂N isomers

    SciTech Connect

    Inostroza, Natalia; Fortenberry, Ryan C.; Lee, Timothy J.; Huang, Xinchuan

    2013-12-01

    Through established, highly accurate ab initio quartic force fields, a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 ¹A′ and bent 2 ¹A′ DCCN, H¹³CCN, HC¹³CN, and HCC¹⁵N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm⁻¹ range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X ³A′ HCCN.

  12. Fundamentals of Physics, Part 3 (Chapters 22-33)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-03-01

    magnetic .eld used in an MRI scan cause a patient to be burned? 30-1 What Is Physics? 30-2 Two Experiments. 30-3 Faraday's Law of Induction. 30-4 Lenz's Law. 30-5 Induction and Energy Transfers. 30-6 Induced Electric Fields. 30-7 Inductors and Inductance. 30-8 Self-Induction. 30-9 RL Circuits. 30-10 Energy Stored in a Magnetic Field. 30-11 Energy Density of a Magnetic Field. 30-12 Mutual Induction. Review & Summary. Questions. Problems. Chapter 31. Electromagnetic Oscillations and Alternating Current. How did a solar eruption knock out the power-grid system of Quebec? 31-1 What Is Physics? 31-2 LC Oscillations, Qualitatively. 31-3 The Electrical-Mechanical Analogy. 31-4 LC Oscillations, Quantitatively. 31-5 Damped Oscillations in an RLC Circuit. 31-6 Alternating Current. 31-7 Forced Oscillations. 31-8 Three Simple Circuits. 31-9 The Series RLC Circuit. 31-10 Power in Alternating-Current Circuits. 31-11 Transformers. Review & Summary. Questions. Problems. Chapter 32. Maxwell's Equations; Magnetism of Matter. How can a mural painting record the direction of Earth's magnetic field? 32-1 What Is Physics? 32-2 Gauss' Law for Magnetic Fields. 32-3 Induced Magnetic Fields. 32-4 Displacement Current. 32-5 Maxwell's Equations. 32-6 Magnets. 32-7 Magnetism and Electrons. 32-8 Magnetic Materials. 32-9 Diamagnetism. 32-10 Paramagnetism. 32-11 Ferromagnetism. Review & Summary. Questions. Problems. Appendices. A. The International System of Units (SI). B. Some Fundamental Constants of Physics. C. Some Astronomical Data. D. Conversion Factors. E. Mathematical Formulas. F. Properties of the Elements. G. Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  13. On uniform constants of strong uniqueness in Chebyshev approximations and fundamental results of N. G. Chebotarev

    NASA Astrophysics Data System (ADS)

    Marinov, Anatolii V.

    2011-06-01

    In the problem of the best uniform approximation of a continuous real-valued function f ∈ C(Q) in a finite-dimensional Chebyshev subspace M ⊂ C(Q), where Q is a compactum, one studies the positivity of the uniform strong uniqueness constant γ(N) = inf{γ(f) : f ∈ N}. Here γ(f) stands for the strong uniqueness constant of an element f_M ∈ M of best approximation of f, that is, the largest constant γ > 0 such that the strong uniqueness inequality ‖f − φ‖ ≥ ‖f − f_M‖ + γ‖f_M − φ‖ holds for any φ ∈ M. We obtain a characterization of the subsets N ⊂ C(Q) for which there is a neighbourhood O(N) of N satisfying the condition γ(O(N)) > 0. The pioneering results of N. G. Chebotarev were published in 1943 and concerned the sharpness of the minimum in minimax problems and the strong uniqueness of algebraic polynomials of best approximation. They seem to have been neglected by the specialists, and we discuss them in detail.

  14. New limits on coupling of fundamental constants to gravity using 87Sr optical lattice clocks.

    PubMed

    Blatt, S; Ludlow, A D; Campbell, G K; Thomsen, J W; Zelevinsky, T; Boyd, M M; Ye, J; Baillard, X; Fouché, M; Le Targat, R; Brusch, A; Lemonde, P; Takamoto, M; Hong, F-L; Katori, H; Flambaum, V V

    2008-04-11

    The ¹S₀-³P₀ clock transition frequency νSr in neutral ⁸⁷Sr has been measured relative to the Cs standard by three independent laboratories in Boulder, Paris, and Tokyo over the last three years. The agreement on the 1 × 10⁻¹⁵ level makes νSr the best agreed-upon optical atomic frequency. We combine periodic variations in the ⁸⁷Sr clock frequency with ¹⁹⁹Hg⁺ and H-maser data to test local position invariance by obtaining the strongest limits to date on gravitational-coupling coefficients for the fine-structure constant α, electron-proton mass ratio μ, and light quark mass. Furthermore, after ¹⁹⁹Hg⁺, ¹⁷¹Yb⁺, and H, we add ⁸⁷Sr as the fourth optical atomic clock species to enhance constraints on yearly drifts of α and μ.
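
    The local-position-invariance test rests on the annual modulation of the solar gravitational potential at Earth caused by the orbital eccentricity. A back-of-the-envelope sketch of that modulation and of how a measured clock-frequency modulation amplitude (the value A below is hypothetical) converts into a coupling bound:

```python
# Peak-to-peak annual modulation of the solar potential at Earth, against
# which the clock-comparison data are fitted (standard SI values).
GM_sun = 1.32712e20    # m^3 s^-2
a = 1.49598e11         # m, semi-major axis of Earth's orbit
e = 0.0167             # orbital eccentricity
c = 2.99792458e8       # m s^-1

dU_over_c2 = 2 * e * GM_sun / (a * c**2)   # ~3.3e-10, peak to peak
print(f"Delta(U/c^2) ~ {dU_over_c2:.2e}")

# A measured annual modulation amplitude A in a frequency ratio then bounds
# the coupling k in dnu/nu = k * dU/c^2 (A is a hypothetical placeholder).
A = 1.0e-16
print(f"coupling bound |k| < {A / dU_over_c2:.1e}")
```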

  15. New Limits on Coupling of Fundamental Constants to Gravity Using ⁸⁷Sr Optical Lattice Clocks

    SciTech Connect

    Blatt, S.; Ludlow, A. D.; Campbell, G. K.; Thomsen, J. W.; Zelevinsky, T.; Boyd, M. M.; Ye, J.; Baillard, X.; Fouche, M.; Le Targat, R.; Brusch, A.; Lemonde, P.; Takamoto, M.; Hong, F.-L.; Katori, H.; Flambaum, V. V.

    2008-04-11

    The ¹S₀-³P₀ clock transition frequency νSr in neutral ⁸⁷Sr has been measured relative to the Cs standard by three independent laboratories in Boulder, Paris, and Tokyo over the last three years. The agreement on the 1 × 10⁻¹⁵ level makes νSr the best agreed-upon optical atomic frequency. We combine periodic variations in the ⁸⁷Sr clock frequency with ¹⁹⁹Hg⁺ and H-maser data to test local position invariance by obtaining the strongest limits to date on gravitational-coupling coefficients for the fine-structure constant α, electron-proton mass ratio μ, and light quark mass. Furthermore, after ¹⁹⁹Hg⁺, ¹⁷¹Yb⁺, and H, we add ⁸⁷Sr as the fourth optical atomic clock species to enhance constraints on yearly drifts of α and μ.

  16. Investigation of the Fundamental Constants Stability Based on the Reactor Oklo Burn-Up Analysis

    NASA Astrophysics Data System (ADS)

    Onegin, M. S.; Yudkevich, M. S.; Gomin, E. A.

    2012-12-01

    The burn-up of a few samples of the natural Oklo reactor zones 3 and 5 was calculated using a modern Monte Carlo code. We reconstructed the neutron spectrum in the core by means of the isotope ratios ¹⁴⁷Sm/¹⁴⁸Sm and ¹⁷⁶Lu/¹⁷⁵Lu. These ratios unambiguously determine the water content and core temperature. The isotope ratio of ¹⁴⁹Sm in the sample calculated using this spectrum was compared with the experimental one. The disagreement between these two values allows one to limit a possible shift of the low-lying resonance of ¹⁴⁹Sm. These limits were then converted into limits on the change of the fine structure constant α. We have found that the rate of change of α satisfies |α̇/α| ≤ 5 × 10⁻¹⁸ yr⁻¹, an order of magnitude more stringent than our previous limit.

  17. Identification of Parts Failures. FOS: Fundamentals of Service.

    ERIC Educational Resources Information Center

    John Deere Co., Moline, IL.

    This parts failures identification manual is one of a series of power mechanics texts and visual aids covering theory of operation, diagnosis of trouble problems, and repair of automotive and off-the-road construction and agricultural equipment. Materials provide basic information with many illustrations for use by vocational students and teachers…

  18. Limits on variations in fundamental constants from 21-cm and ultraviolet Quasar absorption lines.

    PubMed

    Tzanavaris, P; Webb, J K; Murphy, M T; Flambaum, V V; Curran, S J

    2005-07-22

    Quasar absorption spectra at 21-cm and UV rest wavelengths are used to estimate the time variation of x ≡ α²g_p μ, where α is the fine structure constant, g_p the proton g factor, and μ ≡ m_e/m_p the electron/proton mass ratio. Over a redshift range 0.24 ≤ z_abs ≤ 2.04, the weighted total gives Δx/x = (1.17 ± 1.01) × 10⁻⁵. A linear fit gives ẋ/x = (−1.43 ± 1.27) × 10⁻¹⁵ yr⁻¹. Two previous results on varying α yield the strong limits Δμ/μ = (2.31 ± 1.03) × 10⁻⁵ and Δμ/μ = (1.29 ± 1.01) × 10⁻⁵. Our sample, eight times larger than any previous, provides the first direct estimate of the intrinsic 21-cm and UV velocity differences, ~6 km s⁻¹.

  19. Fundamental and overtone vibrational spectroscopy, enthalpy of hydrogen bond formation and equilibrium constant determination of the methanol-dimethylamine complex.

    PubMed

    Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G

    2013-07-01

    We have measured gas phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm⁻¹. In addition to the strong fundamental OH-stretching transition we have also detected the weak second overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be −35.8 ± 3.9 and −38.2 ± 3.3 kJ mol⁻¹ for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (Kp) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second overtone transition. The transition intensities were calculated using an anharmonic oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm⁻¹, corresponding to a ΔG value of about 4.0 kJ mol⁻¹.
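
    As a consistency check on the reported numbers, ΔG follows from Kp via ΔG = −RT ln Kp, and the van 't Hoff relation propagates Kp across the measured temperature range using the reported ΔH (the 358 K value below is an illustrative extrapolation):

```python
import math

R = 8.314       # J mol^-1 K^-1
T = 298.15      # K
Kp = 0.2        # atm^-1, measured equilibrium constant (1 atm standard state)

# Standard Gibbs energy of complex formation: dG = -RT ln(Kp).
dG = -R * T * math.log(Kp)
print(f"dG ~ {dG / 1000:.1f} kJ/mol")    # ~ +4.0 kJ/mol, as quoted

# Van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), using the reported dH.
dH = -35.8e3    # J mol^-1
T2 = 358.0      # K
K2 = Kp * math.exp(-(dH / R) * (1 / T2 - 1 / T))
print(f"Kp({T2:.0f} K) ~ {K2:.3f} atm^-1")
```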

  20. Frequency ratio of two optical clock transitions in 171Yb+ and constraints on the time variation of fundamental constants.

    PubMed

    Godun, R M; Nisbet-Jones, P B R; Jones, J M; King, S A; Johnson, L A M; Margolis, H S; Szymaniec, K; Lea, S N; Bongs, K; Gill, P

    2014-11-21

    Singly ionized ytterbium, with ultranarrow optical clock transitions at 467 and 436 nm, is a convenient system for the realization of optical atomic clocks and tests of present-day variation of fundamental constants. We present the first direct measurement of the frequency ratio of these two clock transitions, without reference to a cesium primary standard, and using the same single ion of ¹⁷¹Yb⁺. The absolute frequencies of both transitions are also presented, each with a relative standard uncertainty of 6 × 10⁻¹⁶. Combining our results with those from other experiments, we report a threefold improvement in the constraint on the time variation of the proton-to-electron mass ratio, μ̇/μ = 0.2(1.1) × 10⁻¹⁶ yr⁻¹, along with an improved constraint on time variation of the fine structure constant, α̇/α = −0.7(2.1) × 10⁻¹⁷ yr⁻¹. PMID:25479482

  1. Writing biomedical manuscripts part I: fundamentals and general rules.

    PubMed

    Ohwovoriole, A E

    2011-01-01

    It is a professional obligation for health researchers to investigate and communicate their findings to the medical community. The writing of a publishable scientific manuscript can be a daunting task for the beginner, and even for some established researchers. Many manuscripts fail to get off the ground and/or are rejected. The writing task can be made easier and the quality improved by using and following simple rules and leads that apply to general scientific writing. The manuscript should follow a standard structure, e.g. (Abstract) plus Introduction, Methods, Results, and Discussion/Conclusion: the IMRAD model. The authors must also follow well-established fundamentals of good communication in science and be systematic in approach. The manuscript must move from what is currently known to what was unknown that was investigated using a hypothesis, research question or problem statement. Each section has its own style of structure and language of presentation. The beginning of writing a good manuscript is to do a good study design and to pay attention to details at every stage. Many manuscripts are rejected because of errors that can be avoided if the authors follow simple guidelines and rules. One good way to avoid potential disappointment in manuscript writing is to follow the established general rules along with those of the journal in which the paper is to be published. An important injunction is to make the writing precise, clear, parsimonious, and comprehensible to the intended audience. The purpose of this article is to arm and encourage potential biomedical authors with tools and rules that will enable them to write contemporary manuscripts which can withstand the rigorous peer review process. The expectations of standard journals, common pitfalls, and the major elements of a manuscript are covered.

  2. Fundamentals of Physics, Part 1 (Chapters 1-11)

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    Chapter 1.Measurement. How does the appearance of a new type of cloud signal changes in Earth's atmosphere? 1-1 What Is Physics? 1-2 Measuring Things. 1-3 The International System of Units. 1-4 Changing Units. 1-5 Length. 1-6 Time. 1-7 Mass. Review & Summary. Problems. Chapter 2.Motion Along a Straight Line. What causes whiplash injury in rear-end collisions of cars? 2-1 What Is Physics? 2-2 Motion. 2-3 Position and Displacement. 2-4 Average Velocity and Average Speed. 2-5 Instantaneous Velocity and Speed. 2-6 Acceleration. 2-7 Constant Acceleration: A Special Case. 2-8 Another Look at Constant Acceleration. 2-9 Free-Fall Acceleration. 2-10 Graphical Integration in Motion Analysis. Review & Summary. Questions. Problems. Chapter 3.Vectors. How does an ant know the way home with no guiding clues on the desert plains? 3-2 Vectors and Scalars. 3-3 Adding Vectors Geometrically. 3-4 Components of Vectors. 3-5 Unit Vectors. 3-6 Adding Vectors by Components. 3-7 Vectors and the Laws of Physics. 3-8 Multiplying Vectors. Review & Summary. Questions. Problems. Chapter 4.Motion in Two and Three Dimensions. In a motorcycle jump for record distance, where does the jumper put the second ramp? 4-1 What Is Physics? 4-2 Position and Displacement. 4-3 Average Velocity and Instantaneous Velocity. 4-4 Average Acceleration and Instantaneous Acceleration. 4-5 Projectile Motion. 4-6 Projectile Motion Analyzed. 4-7 Uniform Circular Motion. 4-8 Relative Motion in One Dimension. 4-9 Relative Motion in Two Dimensions. Review & Summary. Questions. Problems. Chapter 5.Force and Motion-I. When a pilot takes off from an aircraft carrier, what causes the compulsion to fly the plane into the ocean? 5-1 What Is Physics? 5-2 Newtonian Mechanics. 5-3 Newton's First Law. 5-4 Force. 5-5 Mass. 5-6 Newton's Second Law. 5-7 Some Particular Forces. 5-8 Newton's Third Law. 5-9 Applying Newton's Laws. Review & Summary. Questions. Problems. Chapter 6.Force and Motion-II. Can a Grand Prix race car be driven

  3. Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core

    SciTech Connect

    Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Petrov, V. Yu.; Sakhnovsky, E. G.

    2006-12-15

    Using modern methods of reactor physics, we performed full-scale calculations of the Oklo natural reactor. For reliability, we used recent versions of two Monte Carlo codes: the Russian code MCU-REA and the well-known international code MCNP. Both codes produced similar results. We constructed a computer model of the Oklo reactor zone RZ2 which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities, and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a significant difference between reactor and Maxwell spectra, which had been used before for averaging cross sections in the Oklo reactor. The averaged cross section of ¹⁴⁹Sm and its dependence on the shift of a resonance position E_r (due to variation of fundamental constants) are significantly different from previous results. Contrary to the results of previous papers, we found no evidence of a change of the samarium cross section: a possible shift of the resonance energy is given by the limits −73 ≤ ΔE_r ≤ 62 meV. Following tradition, we have used formulas of Damour and Dyson to estimate the rate of change of the fine structure constant α. We obtain new, more accurate limits of −4 × 10⁻¹⁷ yr⁻¹ ≤ α̇/α ≤ 3 × 10⁻¹⁷ yr⁻¹. Further improvement of the accuracy of the limits can be achieved by taking account of the core burn-up. These calculations are in progress.

  4. Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core

    NASA Astrophysics Data System (ADS)

    Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Petrov, V. Yu.; Sakhnovsky, E. G.

    2006-12-01

    Using modern methods of reactor physics, we performed full-scale calculations of the Oklo natural reactor. For reliability, we used recent versions of two Monte Carlo codes: the Russian code MCU-REA and the well-known international code MCNP. Both codes produced similar results. We constructed a computer model of the Oklo reactor zone RZ2 which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities, and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a significant difference between reactor and Maxwell spectra, which had been used before for averaging cross sections in the Oklo reactor. The averaged cross section of ¹⁴⁹Sm and its dependence on the shift of a resonance position E_r (due to variation of fundamental constants) are significantly different from previous results. Contrary to the results of previous papers, we found no evidence of a change of the samarium cross section: a possible shift of the resonance energy is given by the limits −73 ≤ ΔE_r ≤ 62 meV. Following tradition, we have used formulas of Damour and Dyson to estimate the rate of change of the fine structure constant α. We obtain new, more accurate limits of −4 × 10⁻¹⁷ yr⁻¹ ≤ α̇/α ≤ 3 × 10⁻¹⁷ yr⁻¹. Further improvement of the accuracy of the limits can be achieved by taking account of the core burn-up. These calculations are in progress.
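
    A rough sketch of the final conversion step, assuming the Damour-Dyson estimate that the ¹⁴⁹Sm resonance energy responds to a fractional change of α at roughly the Coulomb-energy scale of 1.1 MeV, and a reactor age of about 2 Gyr; up to sign conventions and the precise age adopted, this reproduces the order of the quoted limits:

```python
# Translate the allowed resonance shift into a rate of change of alpha,
# assuming delta_alpha/alpha ~ -dE_r / (1.1 MeV) (Damour-Dyson-style
# Coulomb-energy sensitivity; an assumed scale, not derived here).
dEr_limits = (-73e-3, 62e-3)   # eV, allowed shift of the resonance energy
M_coul = 1.1e6                 # eV, assumed sensitivity scale
age = 2.0e9                    # yr, approximate age of the Oklo reactor

for dEr in dEr_limits:
    dalpha = -dEr / M_coul     # fractional change in alpha since Oklo
    print(f"dE_r = {dEr * 1e3:+.0f} meV -> alpha_dot/alpha ~ "
          f"{dalpha / age:+.1e} per yr")
```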

  5. Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts.

    PubMed

    Tamosiunaite, Minija; Sutterlütti, Rahel M; Stein, Simon C; Wörgötter, Florentin

    2015-01-01

    Objects usually consist of parts and the question arises whether there are perceptual features which allow breaking down an object into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex to concave surface transitions. Here we are using machine vision to produce convex segments from 3D-scenes. We assume that a fundamental part is one, which we can easily name while at the same time there is no natural subdivision possible into smaller parts. Hence in this experiment we presented the computer vision generated segments to our participants and asked whether they can identify and name them. Additionally we control against segmentation reliability and we find a clear trend that reliable convex segments have a high degree of name-ability. In addition, we observed that using other image-segmentation methods will not yield nameable entities. This indicates that convex-concave surface transition may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantical link to our everyday language as there are no names for them.

  6. Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts

    PubMed Central

    Tamosiunaite, Minija; Sutterlütti, Rahel M.; Stein, Simon C.; Wörgötter, Florentin

    2015-01-01

    Objects usually consist of parts and the question arises whether there are perceptual features which allow breaking down an object into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex to concave surface transitions. Here we are using machine vision to produce convex segments from 3D-scenes. We assume that a fundamental part is one, which we can easily name while at the same time there is no natural subdivision possible into smaller parts. Hence in this experiment we presented the computer vision generated segments to our participants and asked whether they can identify and name them. Additionally we control against segmentation reliability and we find a clear trend that reliable convex segments have a high degree of name-ability. In addition, we observed that using other image-segmentation methods will not yield nameable entities. This indicates that convex-concave surface transition may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantical link to our everyday language as there are no names for them. PMID:26441797

  7. Enhanced effects of variation of the fundamental constants in laser interferometers and application to dark-matter detection

    NASA Astrophysics Data System (ADS)

    Stadnik, Y. V.; Flambaum, V. V.

    2016-06-01

    We outline laser interferometer measurements to search for variation of the electromagnetic fine-structure constant α and particle masses (including a nonzero photon mass). We propose a strontium optical lattice clock—silicon single-crystal cavity interferometer as a small-scale platform for these measurements. Our proposed laser interferometer measurements, which may also be performed with large-scale gravitational-wave detectors, such as LIGO, Virgo, GEO600, or TAMA300, may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.

  8. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities. I. Fundamental analysis and diagnostics

    SciTech Connect

    Chen, Jacqueline H.; Hawkes, Evatt R.; Sankaran, Ramanan; Mason, Scott D.; Im, Hong G.

    2006-04-15

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry with a view to providing better understanding and modeling of combustion processes in homogeneous charge compression-ignition engines. Numerical diagnostics are developed to analyze the mode of combustion and the dependence of overall ignition progress on initial mixture conditions. The roles of dissipation of heat and mass are divided conceptually into transport within ignition fronts and passive scalar dissipation, which modifies the statistics of the preignition temperature field. Transport within ignition fronts is analyzed by monitoring the propagation speed of ignition fronts using the displacement speed of a scalar that tracks the location of maximum heat release rate. The prevalence of deflagrative versus spontaneous ignition front propagation is found to depend on the local temperature gradient, and may be identified by the ratio of the instantaneous front speed to the laminar deflagration speed. The significance of passive scalar mixing is examined using a mixing timescale based on enthalpy fluctuations. Finally, the predictions of the multizone modeling strategy are compared with the DNS, and the results are explained using the diagnostics developed. (author)
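
    A toy version of the front-tracking diagnostic described above: locate the maximum of the heat release rate in successive 1D snapshots, differentiate its position to obtain a front speed, and compare with a laminar deflagration speed. The data and the assumed S_L are synthetic, for illustration only:

```python
import numpy as np

x = np.linspace(0.0, 0.01, 2000)   # m, 1D domain
dt = 1.0e-6                        # s, time between snapshots
S_L = 2.0                          # m/s, assumed laminar deflagration speed

def heat_release(x, t, u_front=15.0):
    # Synthetic heat-release profile: a Gaussian front moving at u_front.
    return np.exp(-((x - u_front * t - 0.002) / 1e-4) ** 2)

# Track the location of maximum heat release over 50 snapshots.
x_front = [x[np.argmax(heat_release(x, n * dt))] for n in range(50)]
speed = np.gradient(x_front, dt)

ratio = speed.mean() / S_L
print(f"front speed ~ {speed.mean():.1f} m/s, speed/S_L ~ {ratio:.1f}")
print("spontaneous-ignition-like" if ratio > 1.5 else "deflagration-like")
```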

  9. The Effect of Approximating Some Molecular Integrals in Coupled-Cluster Calculations: Fundamental Frequencies and Rovibrational Spectroscopic Constants of Cyclopropenylidene

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.

    2005-01-01

    The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, denoted CCSD(T), has been used, in conjunction with approximate integral techniques, to compute highly accurate rovibrational spectroscopic constants of cyclopropenylidene, C3H2. The approximate integral technique was proposed in 1994 by Rendell and Lee in order to avoid disk storage and input/output bottlenecks, and today it will also significantly aid in the development of algorithms for distributed memory, massively parallel computer architectures. It is shown in this study that use of approximate integrals does not impact the accuracy of CCSD(T) calculations. In addition, the most accurate spectroscopic data yet for C3H2 is presented based on a CCSD(T)/cc-pVQZ quartic force field that is modified to include the effects of core-valence electron correlation. Cyclopropenylidene is of great astronomical and astrobiological interest because it is the smallest aromatic ringed compound to be positively identified in the interstellar medium, and is thus involved in the prebiotic processing of carbon and hydrogen.

  10. Superposition of super-integrable pseudo-Euclidean potentials in N = 2 with a fundamental constant of motion of arbitrary order in the momenta

    SciTech Connect

    Campoamor-Stursberg, R.

    2014-04-15

    It is shown that for any α, β ∈ ℝ and k ∈ ℤ, the Hamiltonian H_k = p₁p₂ − α q₂^(2k+1) q₁^(−2k−3) − (β/2) q₂^k q₁^(−k−2) is super-integrable, possessing fundamental constants of motion of degrees 2 and 2k + 2 in the momenta.

  11. Fundamental two-stage formulation for Bayesian system identification, Part II: Application to ambient vibration data

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-Liang; Au, Siu-Kui

    2016-01-01

    A fundamental theory has been developed for a general two-stage Bayesian system identification problem in the companion paper (Part I). This paper applies the theory to the particular case of structural system identification using ambient vibration data. In Stage I, the modal properties are identified using the Fast Bayesian FFT method. Given the data, their posterior distribution can be well approximated by a Gaussian distribution whose mean and covariance matrix can be computed efficiently. In Stage II, the structural model parameters (e.g., stiffness, mass) are identified incorporating the posterior distribution of the natural frequencies and mode shapes in Stage I and their conditional distribution based on the theoretical structural finite element model. Synthetic and experimental data are used to illustrate the proposed theory and applications. A number of factors commonly relevant to structural system identification are studied, including the number of measured degrees of freedom, the number of identifiable modes and sensor alignment error.
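
    A minimal illustration of the Stage II step for a single identified mode and a single model parameter: the Stage I posterior of the natural frequency is taken as Gaussian, and the stiffness of a hypothetical one-degree-of-freedom model is identified by minimizing the resulting negative log-posterior (all numbers invented):

```python
import numpy as np

# Stage I output (hypothetical): Gaussian posterior of one natural frequency.
f_hat, f_std = 1.52, 0.01          # Hz, posterior mean and standard deviation

# Stage II: 1-DOF theoretical model f(k) = sqrt(k/m)/(2*pi), mass assumed known.
m = 1000.0                          # kg

k_grid = np.linspace(5e4, 2e5, 20001)       # candidate stiffness values, N/m
f_model = np.sqrt(k_grid / m) / (2 * np.pi)

# Negative log-posterior with a flat prior on k: Gaussian misfit from Stage I.
nlp = 0.5 * ((f_model - f_hat) / f_std) ** 2
k_map = k_grid[np.argmin(nlp)]
print(f"MAP stiffness k ~ {k_map:.0f} N/m")  # ~ m*(2*pi*f_hat)^2 ~ 9.1e4
```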

  12. Recent developments in modeling of hot rolling processes: Part I - Fundamentals

    NASA Astrophysics Data System (ADS)

    Hirt, Gerhard; Bambach, Markus; Seuren, Simon; Henke, Thomas; Lohmar, Johannes

    2013-05-01

    The numerical simulation of industrial rolling processes has gained substantial relevance over the past decades. A large variety of models have been put forward to simulate single and multiple rolling passes taking various interactions between the process, the microstructure evolution and the rolling mill into account. On the one hand, these include sophisticated approaches which couple models on all scales from the product's microstructure level up to the elastic behavior of the roll stand. On the other hand, simplified but fast models are used for on-line process control and automatic pass schedule optimization. This publication gives a short overview of the fundamental equations used in modeling of hot rolling of metals. Part II of this paper will present selected applications of hot rolling simulations.
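
    One fundamental ingredient of such models is a thermally activated flow-stress law. A common choice (sketched here with placeholder material constants, and not necessarily the formulation used in the paper) is the Sellars-Tegart hyperbolic-sine law driven by the Zener-Hollomon parameter Z = strain rate x exp(Q/RT):

```python
import numpy as np

# Placeholder material constants for an illustrative hot-working steel.
Q = 312e3      # J/mol, apparent activation energy for deformation
R = 8.314      # J mol^-1 K^-1
A = 1.0e12     # s^-1
alp = 0.012    # MPa^-1
n = 5.0

def flow_stress(strain_rate, T_kelvin):
    # Zener-Hollomon parameter, then the Sellars-Tegart inversion for stress.
    Z = strain_rate * np.exp(Q / (R * T_kelvin))
    return (1.0 / alp) * np.arcsinh((Z / A) ** (1.0 / n))   # MPa

print(f"sigma ~ {flow_stress(10.0, 1273.0):.0f} MPa")  # 10 s^-1 at ~1000 C
```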

  14. Constraining the Evolution of the Fundamental Constants with a Solid-State Optical Frequency Reference Based on the 229Th Nucleus

    SciTech Connect

    Rellergert, Wade G.; Hudson, Eric R.; DeMille, D.; Greco, R. R.; Hehlen, M. P.; Torgerson, J. R.

    2010-05-21

    We describe a novel approach to directly measure the energy of the narrow, low-lying isomeric state in 229Th. Since nuclear transitions are far less sensitive to environmental conditions than atomic transitions, we argue that the 229Th optical nuclear transition may be driven inside a host crystal with a high transition Q. This technique might also allow for the construction of a solid-state optical frequency reference that surpasses the short-term stability of current optical clocks, as well as improved limits on the variability of fundamental constants. Based on analysis of the crystal lattice environment, we argue that a precision (short-term stability) of 3×10^-17 < Δf/f < 1×10^-15 after 1 s of photon collection may be achieved with a systematic-limited accuracy (long-term stability) of Δf/f ≈ 2×10^-16. Improvement by 10^2-10^3 of the constraints on the variability of several important fundamental constants also appears possible.

  15. Finite element modeling of borehole heat exchanger systems. Part 1. Fundamentals

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Bauer, D.; Heidemann, W.; Rühaak, W.; Schätzl, P.

    2011-08-01

    Single borehole heat exchangers (BHE) and arrays of BHE are modeled by using the finite element method. The first part of the paper derives the fundamental equations for BHE systems and their finite element representations, where the thermal exchange between the borehole components is modeled via thermal transfer relations. For this purpose improved relationships for thermal resistances and capacities of BHE are introduced. Pipe-to-grout thermal transfer possesses multiple grout points for double U-shape and single U-shape BHE to attain more accurate modeling. The numerical solution of the final 3D problems is performed via a widely non-sequential (essentially non-iterative) coupling strategy for the BHE and porous medium discretization. Four types of vertical BHE are supported: double U-shape (2U) pipe, single U-shape (1U) pipe, coaxial pipe with annular (CXA) and centred (CXC) inlet. Two computational strategies are used: (1) the analytical BHE method based on Eskilson and Claesson's (1988) solution, and (2) the numerical BHE method based on Al-Khoury et al.'s (2005) solution. The second part of the paper focuses on BHE meshing aspects, the validation of BHE solutions and practical applications for borehole thermal energy store systems.
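
    As a simple illustration of the resistance ingredients such transfer relations are built from, the sketch below assembles a textbook-style per-metre fluid-to-borehole-wall resistance for a single U-pipe BHE from series convection and conduction terms, using an equivalent-radius lumping and assumed parameter values (not the improved relationships derived in the paper):

      # Simplified single U-pipe borehole resistance (all values assumed).
      import math

      r_i, r_o = 0.013, 0.016        # pipe inner/outer radius, m
      r_b = 0.065                    # borehole radius, m
      k_pipe, k_grout = 0.42, 1.8    # thermal conductivities, W/(m K)
      h = 1500.0                     # fluid-side convective coefficient, W/(m^2 K)

      R_conv = 1.0 / (2.0 * math.pi * r_i * h)                 # fluid film
      R_pipe = math.log(r_o / r_i) / (2.0 * math.pi * k_pipe)  # pipe wall
      # lump the two legs into one equivalent pipe of radius sqrt(2) * r_o
      r_eq = math.sqrt(2.0) * r_o
      R_grout = math.log(r_b / r_eq) / (2.0 * math.pi * k_grout)

      R_b = 0.5 * (R_conv + R_pipe) + R_grout  # two legs in parallel, then grout
      print(f'R_b ~ {R_b:.3f} m K/W')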

  16. 40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Appendix VI to 40 CFR Part 265 (Protection of Environment, Environmental Protection Agency) tabulates compounds with a Henry's law constant less than 0.1 Y/X by compound name and CAS number, e.g., Acetaldol (CAS 107-89-1) and Acetamide (CAS 60-35-5)...

  17. Reduction of Iron-Oxide-Carbon Composites: Part I. Estimation of the Rate Constants

    NASA Astrophysics Data System (ADS)

    Halder, S.; Fruehan, R. J.

    2008-12-01

    A new ironmaking concept using iron-oxide-carbon composite pellets has been proposed, which involves the combination of a rotary hearth furnace (RHF) and an iron bath smelter. This part of the research focuses on studying the two primary chemical kinetic steps. Efforts have been made to experimentally measure the kinetics of the carbon gasification by CO2 and wüstite reduction by CO by isolating them from the influence of heat- and mass-transport steps. A combined reaction model was used to interpret the experimental data and determine the rate constants. Results showed that the reduction is likely to be influenced by the chemical kinetics of both carbon oxidation and wüstite reduction at the temperatures of interest. Devolatilized wood-charcoal was observed to be a far more reactive form of carbon in comparison to coal-char. Sintering of the iron-oxide at the high temperatures of interest was found to exert a considerable influence on the reactivity of wüstite by virtue of altering the internal pore surface area available for the reaction. Sintering was found to be predominant for highly porous oxides and less of an influence on the denser ores. It was found using an indirect measurement technique that the rate constants for wüstite reduction were higher for the porous iron-oxide than dense hematite ore at higher temperatures (>1423 K). Such an indirect mode of measurement was used to minimize the influence of sintering of the porous oxide at these temperatures.
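
    Once the rate constants have been isolated at several temperatures, activation parameters follow from a standard Arrhenius fit; the sketch below shows the linearized least-squares version with synthetic placeholder values, not the study's measurements:

      # Arrhenius fit, ln k = ln A - Ea/(R T); data below are hypothetical.
      import numpy as np

      R = 8.314                                          # J/(mol K)
      T = np.array([1323.0, 1373.0, 1423.0, 1473.0])     # K
      k_obs = np.array([2.1e-4, 4.8e-4, 1.0e-3, 2.0e-3]) # 1/s

      slope, intercept = np.polyfit(1.0 / T, np.log(k_obs), 1)
      Ea, A = -slope * R, np.exp(intercept)
      print(f'Ea = {Ea/1e3:.0f} kJ/mol, A = {A:.2e} 1/s')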

  18. Feedback control of wave propagation in a rectangular panel, Part 1: Theoretical investigation of fundamental characteristics

    NASA Astrophysics Data System (ADS)

    Iwamoto, Hiroyuki; Tanaka, Nobuo; Hill, Simon G.

    2013-08-01

    This study presents the feedback control of flexural waves propagating in a rectangular panel. The objective of this paper (part 1) is to theoretically investigate the fundamental properties of the feedback wave control system. First, a transfer matrix method in the Laplace domain is introduced, based on a wave solution of a rectangular panel. This is followed by the derivation of the characteristic equation and the feedback control laws for absorbing the reflected waves. Then, through numerical simulations, the control performance of the proposed method is clarified. It is found that reflected-wave-absorbing control enables inactivation of vibration modes, since the standing waves that cause resonance disappear from the structural vibration. Finally, the stability of the proposed control system is verified using the Nyquist diagram. It is shown that although the controller has unstable poles in some cases, the nominal control system is stable irrespective of whether collocation holds or not. Furthermore, it is clarified that a wave-absorbing control system becomes robust to parameter fluctuations if no uncontrolled region exists.
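
    The kind of stability check mentioned can be illustrated generically: evaluate the open-loop response along the imaginary axis and inspect its relation to the critical point -1, as in a Nyquist diagram. The sketch below uses a hypothetical second-order open loop, not the paper's wave-absorbing controller:

      # Nyquist-style margin check for a placeholder open-loop response L(jw).
      import numpy as np

      w = np.logspace(-2, 3, 20000)            # rad/s
      s = 1j * w
      L = 50.0 / (s**2 + 4.0 * s + 25.0)       # hypothetical open loop

      dist = np.abs(L + 1.0)                   # distance to the -1 point
      i = np.argmin(dist)
      print(f'closest approach to -1: {dist[i]:.3f} at w = {w[i]:.2f} rad/s')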

  19. High resolution infrared synchrotron study of CH2D81Br: ground state constants and analysis of the ν5, ν6 and ν9 fundamentals

    NASA Astrophysics Data System (ADS)

    Baldacci, A.; Stoppa, P.; Visinoni, R.; Wugt Larsen, R.

    2012-09-01

    The high resolution infrared absorption spectrum of CH2D81Br has been recorded by Fourier transform spectroscopy in the range 550-1075 cm^-1, with an unapodized resolution of 0.0025 cm^-1, employing a synchrotron radiation source. This spectral region is characterized by the ν6 (593.872 cm^-1), ν5 (768.710 cm^-1) and ν9 (930.295 cm^-1) fundamental bands. The ground state constants up to sextic centrifugal distortion terms have been obtained for the first time by ground-state combination differences from the three bands and subsequently employed for the evaluation of the excited state parameters. Watson's A-reduced Hamiltonian in the I^r representation has been used in the calculations. The ν6 = 1 level is essentially free from perturbation, whereas the ν5 = 1 and ν9 = 1 states are mutually interacting through a-type Coriolis coupling. Accurate spectroscopic parameters of the three excited vibrational states and a high-order coupling constant which takes into account the interaction between ν5 and ν9 have been determined.

  20. Non-empirical calculations of NMR indirect carbon-carbon coupling constants. Part 6: propellanes.

    PubMed

    Krivdin, Leonid B

    2004-01-01

    A full set of carbon-carbon coupling constants has been calculated at the SOPPA level in a series of the six most representative propellanes. Special attention was focused on spin-spin couplings involving both bridgehead carbons, and these data were rationalized in terms of the multipath coupling mechanism and hybridization effects. Many unknown couplings in the propellane frameworks were predicted with high reliability.

  1. Fundamental two-stage formulation for Bayesian system identification, Part I: General theory

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui; Zhang, Feng-Liang

    2016-01-01

    Structural system identification is concerned with the determination of structural model parameters (e.g., stiffness, mass) based on measured response data collected from the subject structure. For linear structures, one popular strategy is to adopt a 'two-stage' approach. That is, modal identification (e.g., frequency, mode shape) is performed in Stage I, whose information is used for inferring the structural parameters in Stage II. Different variants of Bayesian two-stage formulations have been proposed in the past. A prediction error model is commonly introduced to build a link between Stages I and II, treating the most probable values of the natural frequencies and mode shapes identified in Stage I as 'data' for Stage II. This type of formulation, which casts a prediction error model through descriptive statistics, involves heuristics that distort the fundamental nature of the Bayesian approach, although it has appeared to be inevitable. In this paper, a fundamental theory is developed for the Bayesian two-stage problem. The posterior distribution of structural parameters is derived rigorously in terms of the information available in the problem, namely the prior distribution of structural parameters, the posterior distribution of modal parameters in Stage I and the distribution of modal parameters conditional on the structural parameters that connects Stages I and II. The theory reveals a fundamental principle that ensures no double-counting of prior information in the two-stage identification process. Mathematical statements are also derived that provide insights into the role of the structural modeling error. Beyond the original structural model identification problem that motivated the work, the developed theory can be applied in more general settings. In the companion paper, examples with synthetic and real experimental data are provided to illustrate the proposed theory.

  2. The Incinerator: Section One, Basic Parts and Fundamentals. Part 5, Air Pollution Training Institute Self-Instructional Course SI-466.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Research Triangle Park, NC. Air Pollution Training Inst.

    This workbook is part five of a self-instructional course prepared for the United States Environmental Protection Agency. The student proceeds at his own pace; after answering each question, he either turns to the next page to check his response or refers back to previously covered material. The purpose of this course is to prepare…

  3. Non-empirical calculations of NMR indirect carbon-carbon coupling constants. Part 7--spiroalkanes.

    PubMed

    Krivdin, Leonid B

    2004-06-01

    Carbon-carbon spin-spin coupling constants were calculated at the SOPPA level for a series of seven classical spiroalkanes, spiro[2.2]pentane, spiro[2.3]hexane, spiro[2.4]heptane, spiro[2.5]octane, spiro[3.3]heptane, spiro[4.4]nonane and spiro[5.5]undecane, with special focus upon couplings involving and/or across the spiro carbons. Many interesting structural trends, originating in the specific geometries and unusual bonding environments at the spiro carbons, were investigated.

  4. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

    An overview of parabolic and PNS (parabolized Navier-Stokes) methodology developed to treat highly curved subsonic and supersonic wall jets is presented. The fundamental database to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, which extends the parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.

  5. Estimation of brittleness index using dynamic and static elastic constants in the Haenam Basin, Southwestern Part of Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho; Song, Wonkyoung; Kim, Changryol; Ki, Jungseok

    2014-05-01

    One of the most important physical properties in the evaluation of shale gas is the set of elastic constants of the formation. Normally, the elastic constants from geophysical well logging and laboratory tests are used in the design of hydraulic fracturing. A three-inch-diameter borehole was drilled to a depth of 505 m and fully cored for the evaluation of shale gas in the Haenam Basin, southwestern part of the Korean Peninsula. We performed various laboratory tests and geophysical well logging using a slim-hole logging system. The geophysical well logs include radioactive logs such as natural gamma, density and neutron logs, monopole and dipole sonic logs, and image logs. The laboratory tests comprise axial compression tests, elastic wave velocity and density measurements, and static elastic constant measurements for 21 shale and sandstone cores. We analyzed the relationships between the physical properties from the well logs and the laboratory tests, as well as the static elastic constants from the laboratory tests. In the case of the sonic log using a monopole source with a main frequency of 23 kHz, the P-wave velocity was measured reliably. When the low-frequency dipole excitation was used, the signal-to-noise ratio of the measured shear wave was very low; however, when measuring in time mode at a fixed depth, the signal-to-noise ratio improved enough to discriminate the shear wave. P-wave velocities from the laboratory tests and the sonic log agreed well overall, but S-wave velocities did not. The discrepancy is mainly due to the low signal-to-noise ratio of the sonic log data from the low-frequency dipole source; measuring S-waves in a small-diameter borehole remains a challenge. The relationship between the P-wave velocity and the two dynamic elastic constants, Young's modulus and Poisson's ratio, shows a good correlation, as does the relationship between the static and dynamic elastic constants.
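
    Dynamic elastic constants of the kind compared above follow directly from the measured velocities and bulk density through the standard isotropic relations; the sketch below applies them to illustrative values, not the Haenam Basin logs:

      # Dynamic moduli from P- and S-wave velocities (illustrative inputs).
      vp, vs = 4200.0, 2500.0   # m/s
      rho = 2650.0              # kg/m^3

      nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))  # Poisson's ratio
      E = rho * vs**2 * (3.0 * vp**2 - 4.0 * vs**2) / (vp**2 - vs**2)  # Pa

      print(f'dynamic nu = {nu:.3f}, dynamic E = {E/1e9:.1f} GPa')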

  6. Optimizing drug delivery systems using systematic "design of experiments." Part I: fundamental aspects.

    PubMed

    Singh, Bhupinder; Kumar, Rajiv; Ahuja, Naveen

    2005-01-01

    The DoE approach encompasses postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology. The broad topic of DoE optimization methodology is covered in two parts. Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and judicious selection of designs and models.

  7. Toward a Fundamental Theory of Optimal Feature Selection: Part II-Implementation and Computational Complexity.

    PubMed

    Morgera, S D

    1987-01-01

    Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode, wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance matrix pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements, where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to the O(N^3) operations for a conventional, or general, rotation-based eigensystem solver, and even to the O(2N^2) operations of an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure, since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
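
    The centrosymmetric shortcut can be made concrete in a few lines: if J A J = A with J the exchange matrix, an orthogonal change of basis block-diagonalizes A, so the spectrum comes from two half-size symmetric eigenproblems. The numpy check below illustrates the splitting; it is not the operator-based array algorithm of the paper:

      # Eigenvalues of a symmetric centrosymmetric A from two half-size problems.
      import numpy as np

      m = 4
      rng = np.random.default_rng(0)
      J = np.fliplr(np.eye(m))

      # random symmetric centrosymmetric matrix built from blocks A11, A12
      A11 = rng.standard_normal((m, m)); A11 = (A11 + A11.T) / 2
      A12 = rng.standard_normal((m, m)); A12 = (A12 + J @ A12.T @ J) / 2
      A = np.block([[A11, A12], [J @ A12 @ J, J @ A11 @ J]])

      lam_half = np.r_[np.linalg.eigvalsh(A11 + A12 @ J),
                       np.linalg.eigvalsh(A11 - A12 @ J)]
      assert np.allclose(np.sort(lam_half), np.linalg.eigvalsh(A))
      print('two half-size eigenproblems reproduce the full spectrum')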

  8. On understanding the very different science premises meaningful to CAM versus orthodox medicine: Part II--applications of Part I fundamentals to five different space-time examples.

    PubMed

    Tiller, William A

    2010-04-01

    In Part I of this pair of articles, the fundamental experimental observations and theoretical perspectives were provided for one to understand the key differences between our normal, uncoupled state of physical reality and the human consciousness-induced coupled state of physical reality. Here in Part II, the thermodynamics of complementary and alternative medicine, which deals with the partially coupled state of physical reality, is explored via the use of five different foci of relevance to today's science and medicine: (1) homeopathy; (2) the placebo effect; (3) long-range, room temperature, macroscopic size-scale, information entanglement; (4) an explanation for dark matter/energy plus human levitation possibility; and (5) electrodermal diagnostic devices. The purpose of this pair of articles is to clearly differentiate the use and limitations of uncoupled state physics in both nature and today's orthodox medicine from coupled state physics in tomorrow's complementary and alternative medicine.

  9. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype-phenotype maps.

    PubMed

    Greenbury, S F; Ahnert, S E

    2015-12-01

    Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype-phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into 'constrained' and 'unconstrained' sequences, in the broadest possible sense. As 'constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. 'Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with 'coding' and 'non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps.
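
    The quantities in question (phenotypic frequency, genotypic robustness) can be computed exhaustively for short sequences. The sketch below does this for a stand-in map with a 'constrained' prefix and an ignored 'unconstrained' tail over binary strings; it is a hypothetical toy, not the Fibonacci GP map as the authors define it:

      # Exhaustive robustness/frequency for a toy GP map over binary strings.
      from itertools import product
      from collections import defaultdict

      L = 10

      def phenotype(g):
          # hypothetical map: the prefix before the first '0' is constrained,
          # everything after it is unconstrained (ignored)
          i = g.find('0')
          return g if i < 0 else g[:i]

      genos = [''.join(b) for b in product('01', repeat=L)]
      by_pheno = defaultdict(list)
      for g in genos:
          by_pheno[phenotype(g)].append(g)

      def robustness(g):
          # fraction of one-point mutants sharing g's phenotype
          p = phenotype(g)
          flips = (g[:i] + ('1' if g[i] == '0' else '0') + g[i+1:] for i in range(L))
          return sum(phenotype(x) == p for x in flips) / L

      for p, gs in sorted(by_pheno.items(), key=lambda kv: -len(kv[1]))[:3]:
          freq = len(gs) / len(genos)
          rob = sum(robustness(g) for g in gs) / len(gs)
          print(f'phenotype {p!r}: frequency {freq:.3f}, mean robustness {rob:.3f}')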

  11. Scattering of the fundamental shear horizontal guided wave by a part-thickness crack in an isotropic plate.

    PubMed

    Rajagopal, P; Lowe, M J S

    2008-11-01

    The interaction of the fundamental shear horizontal (SH0) guided mode with part-thickness cracks in an isotropic plate is studied as an extension, within the context and general framework of the authors' previous work with through-cracks ["Short range scattering of the fundamental shear horizontal guided wave mode normally incident at a through thickness crack in an isotropic plate," J. Acoust. Soc. Am. 122, 1527-1538 (2007); "Angular influence on scattering when the fundamental shear horizontal guided wave mode is incident at a through-thickness crack in an isotropic plate," J. Acoust. Soc. Am. 124, 2021-2030 (2008)]. The symmetric incidence case, where the principal direction of the incident beam bisects the crack face at 90 degrees, is studied using finite element simulations validated by experiments and analysis, and conclusions are inferred for general incidence angles using insights obtained from the through-thickness studies. The influence of the crack length and the monitoring distance on the specular reflection is first examined, followed by a study of the angular profile of the reflected field. For each crack length considered, the crack depth and operating frequencies are varied. For all crack depths studied, the trend of the results is identical to that for the corresponding through-thickness case, and the values differ only by a frequency-dependent scale factor. Theoretical analysis is used to understand the physical basis for such behavior, and estimates are suggested for the scale factor: exact for the high-frequency scattering regime and empirical for the medium- and low-frequency regimes.

  13. Characterization of anisotropic elastic constants of silicon-carbide particulate reinforced aluminum metal matrix composites: Part I. Experiment

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Hsu, David K.; Shannon, Robert E.; Liaw, Peter K.

    1994-04-01

    The anisotropic elastic properties of silicon-carbide particulate (SiCp) reinforced Al metal matrix composites were characterized using ultrasonic techniques and microstructural analysis. The composite materials, fabricated by a powder metallurgy extrusion process, included 2124, 6061, and 7091 Al alloys reinforced by 10 to 30 pct of α-SiCp by volume. Results were presented for the assumed orthotropic elastic constants obtained from ultrasonic velocities and for the microstructural data on particulate shape, aspect ratio, and orientation distribution. All of the composite samples exhibited a systematic anisotropy: the stiffness in the extrusion direction was the highest, and the stiffness in the out-of-plane direction was the lowest. Microstructural analysis suggested that the observed anisotropy could be attributed to the preferred orientation of SiCp. The ultrasonic velocity was found to be sensitive to internal defects such as porosity and intermetallic compounds. Ultrasonics may thus be a useful, nondestructive technique for detecting small directional differences in the overall elastic constants of the composites, since a good correlation has been noted among the velocity, the microstructure, and the mechanical tests. By incorporating the observed microstructural characteristics, a theoretical model for predicting the anisotropic stiffnesses of the composites has been developed and is presented in a companion article (Part II).

  14. Spherical Couette flow of Oldroyd 8-constant model - Part I. Solution up to the second-order approximation

    NASA Astrophysics Data System (ADS)

    Abu-El Hassan, A.

    2006-05-01

    The steady flow of an incompressible Oldroyd 8-constant fluid in the annular region between two spheres, the so-called spherical Couette flow, is investigated. The inner sphere rotates with an angular velocity about the z-axis, which passes through the center of the spheres, while the outer sphere is kept at rest. The viscoelasticity of the fluid is assumed to dominate the inertia such that the latter can be neglected in the momentum equation. An analytical solution is obtained through the expansion of the dynamical variables in a power series of the dimensionless retardation time. The leading velocity term denotes the Newtonian rotation about the z-axis. The first-order term results in a secondary flow represented by a stream function that divides the flow region into four symmetric parts. The second-order term is the viscoelastic contribution to the primary viscous flow. The first-order approximation depends on the viscosity and four of the material time-constants of the fluid. The second-order approximation depends on the eight viscometric parameters of the fluid. The torque acting on the outer sphere has an additional term due to viscoelasticity that depends on all the material parameters of the fluid under consideration. For an Oldroyd-B fluid this contributed term enhances the primary torque, but for fluids with higher elasticity the torque components may be enhanced or diminished depending on the values of the viscometric parameters.

  15. Direct numerical simulation of ignition front propagation in a constant volume with temperature inhomogeneities, Part II: Parametric study.

    SciTech Connect

    Sankaran, Ramanan; Chen, Jacqueline H.; Hawkes, Evatt R.; Pebay, Philippe Pierre

    2005-01-01

    The influence of thermal stratification on autoignition at constant volume and high pressure is studied by direct numerical simulation (DNS) with detailed hydrogen/air chemistry. Parametric studies on the effect of the initial amplitude of the temperature fluctuations, the initial length scales of the temperature and velocity fluctuations, and the turbulence intensity are performed. The combustion mode is characterized using the diagnostic measures developed in Part I of this study. Specifically, the ignition front speed and the scalar mixing timescales are used to identify the roles of molecular diffusion and heat conduction in each case. Predictions from a multizone model initialized from the DNS fields are presented and differences are explained using the diagnostic tools developed.

  16. Fifth Fundamental Catalogue (FK5). Part 1: Basic fundamental stars (Fricke, Schwan, and Lederle 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.

  17. Toward the Development of a Fundamentally Based Chemical Model for Cyclopentanone: High-Pressure-Limit Rate Constants for H Atom Abstraction and Fuel Radical Decomposition.

    PubMed

    Zhou, Chong-Wen; Simmie, John M; Pitz, William J; Curran, Henry J

    2016-09-15

    Theoretical aspects of the development of a chemical kinetic model for the pyrolysis and combustion of a cyclic ketone, cyclopentanone, are considered. Calculated thermodynamic and kinetic data are presented for the first time for the principal species including 2- and 3-oxo-cyclopentyl radicals, which are in reasonable agreement with the literature. These radicals can be formed via H atom abstraction reactions by Ḣ and Ö atoms and ȮH, HȮ2, and ĊH3 radicals, the rate constants of which have been calculated. Abstraction from the β-hydrogen atom is the dominant process when ȮH is involved, but the reverse holds true for HȮ2 radicals. The subsequent β-scission of the radicals formed is also determined, and it is shown that recent tunable VUV photoionization mass spectrometry experiments can be interpreted in this light. The bulk of the calculations used the composite model chemistry G4, which was benchmarked in the simplest case with a coupled cluster treatment, CCSD(T), in the complete basis set limit. PMID:27558073

  19. Fundamental ecology is fundamental.

    PubMed

    Courchamp, Franck; Dunne, Jennifer A; Le Maho, Yvon; May, Robert M; Thébaud, Christophe; Hochberg, Michael E

    2015-01-01

    The primary reasons for conducting fundamental research are satisfying curiosity, acquiring knowledge, and achieving understanding. Here we set out why we believe it is essential to promote basic ecological research, despite increased impetus for ecologists to conduct and present their research in the light of potential applications. This includes the understanding of our environment, for intellectual, economic, social, and political reasons, and as a major source of innovation. We contend that we should focus less on short-term, objective-driven research and more on creativity and exploratory analyses, quantitatively estimate the benefits of fundamental research for society, and better explain the nature and importance of fundamental ecology to students, politicians, decision makers, and the general public. Our perspective and underlying arguments should also apply to evolutionary biology and to many of the other biological and physical sciences.

  20. ON THE VARIATIONS OF FUNDAMENTAL CONSTANTS AND ACTIVE GALACTIC NUCLEUS FEEDBACK IN THE QUASI-STELLAR OBJECT HOST GALAXY RXJ0911.4+0551 at z = 2.79

    SciTech Connect

    Weiss, A.; Henkel, C.; Menten, K. M.; Walter, F.; Downes, D.; Cox, P.; Carilli, C. L.

    2012-07-10

    We report on sensitive observations of the CO(J = 7→6) and C I(3P2 → 3P1) transitions in the z = 2.79 QSO host galaxy RXJ0911.4+0551 using the IRAM Plateau de Bure interferometer. Our extremely high signal-to-noise spectra combined with the narrow CO line width of this source (FWHM = 120 km s^-1) allow us to estimate sensitive limits on the spacetime variations of the fundamental constants using two emission lines. Our observations show that the C I and CO line shapes are in good agreement with each other but that the C I line profile is of the order of 10% narrower, presumably due to the lower opacity in the latter line. Both lines show faint wings with velocities up to ±250 km s^-1, indicative of a molecular outflow. As such, the data provide direct evidence for negative feedback in the molecular gas phase at high redshift. Our observations allow us to determine the observed frequencies of both transitions with so far unmatched accuracy at high redshift. The redshift difference between the CO and C I lines is sensitive to variations of ΔF/F with F = α^2/μ, where α is the fine structure constant and μ is the electron-to-proton mass ratio. We find ΔF/F = (6.9 ± 3.7) × 10^-6 at a look-back time of 11.3 Gyr, which, within the uncertainties, is consistent with no variations of the fundamental constants.

  1. Fundamentally updating fundamentals.

    PubMed

    Armstrong, Gail; Barton, Amy

    2013-01-01

    Recent educational research indicates that the six competencies of the Quality and Safety Education for Nurses initiative are best introduced in early prelicensure clinical courses. Content specific to quality and safety has traditionally been covered in senior level courses. This article illustrates an effective approach to using quality and safety as an organizing framework for any prelicensure fundamentals of nursing course. Providing prelicensure students a strong foundation in quality and safety in an introductory clinical course facilitates early adoption of quality and safety competencies as core practice values.

  2. Non-empirical calculations of NMR indirect carbon-carbon coupling constants. Part 9--Bicyclobutane-containing polycycloalkanes.

    PubMed

    Krivdin, Leonid B

    2004-10-01

    13C-13C spin-spin coupling constants characterizing the bicyclobutane moiety of seven well-known bicyclobutane-containing polycycloalkanes were calculated at the SOPPA level. Benchmark calculations on tricyclopentane and octabisvalene revealed an appropriate level of theory and sufficient quality of the basis sets used to perform geometry searches and to calculate spin-spin coupling constants. Several experimental uncertainties were resolved and a number of interesting couplings were predicted. The most interesting trend observed in this series of polycycloalkanes is the marked increase (decrease in absolute value) of J(C,C) between bridgehead carbons with increase in the puckering angle of the bicyclobutane moiety. This predicts an almost zero coupling between the bridgehead carbons of tricyclopentane and explains the positive J(C,C) in tetrahedrane, in contrast to the negative bridgehead-bridgehead J(C,C) in bicyclobutane.

  3. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 3; Constant Stress and Cyclic Stress Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  4. Theory versus experiment for the rotordynamic coefficients of annular gas seals. Part 2: Constant clearance and convergent-tapered geometry

    NASA Technical Reports Server (NTRS)

    Nelson, C. C.; Childs, D. W.; Nicks, C.; Elrod, D.

    1985-01-01

    The leakage and rotordynamic coefficients of constant-clearance and convergent-tapered annular gas seals were measured in an experimental test facility. The results are presented along with the theoretically predicted values. Of particular interest is the prediction that optimally tapered seals have significantly larger direct stiffness than straight seals. The experimental results verify this prediction. Generally the theory does quite well, but it fails to predict the large increase in direct stiffness when the fluid is pre-rotated.

  5. How to increase treatment effectiveness and efficiency in psychiatry: creative psychopharmacotherapy - part 1: definition, fundamental principles and higher effectiveness polypharmacy.

    PubMed

    Jakovljević, Miro

    2013-09-01

    Psychopharmacotherapy is a fascinating field that can be understood in many different ways. It is both a science and an art of communication, with a heavily subjective dimension. The advent of a significant number of effective and well-tolerated mental health medicines during and after the 1990s, the 'decade of the brain', has increased our possibilities for treating major mental disorders more successfully, with much better treatment outcomes including full recovery. However, there is a huge gap between our possibilities for achieving high treatment effectiveness and the unsatisfying results of day-to-day clinical practice. A creative approach to psychopharmacotherapy could advance everyday clinical practice and bridge this gap. Creative psychopharmacotherapy is a concept that incorporates creativity as its fundamental tool. Creativity involves the intention and ability to transcend limiting traditional ideas, rules, patterns and relationships and to create meaningful new ideas, interpretations, contexts and methods in clinical psychopharmacology.

  6. Measurements in the turbulent boundary layer at constant pressure in subsonic and supersonic flow. Part 1: Mean flow

    NASA Technical Reports Server (NTRS)

    Collins, D. J.; Coles, D. E.; Hicks, J. W.

    1978-01-01

    Experiments were carried out to test the accuracy of laser Doppler instrumentation for the measurement of Reynolds stresses in turbulent boundary layers in supersonic flow. Two facilities were used to study flow at constant pressure. In one facility, data were obtained on a flat plate at M_e = 0.1, with Re_θ up to 8,000. In the other, data were obtained on an adiabatic nozzle wall at M_e = 0.6, 0.8, 1.0, 1.3, and 2.2, with Re_θ = 23,000 and 40,000. The mean flow as observed using Pitot tube, Preston tube, and floating-element instrumentation is described. Emphasis is on the use of similarity laws with Van Driest scaling and on the inference of the shearing stress profile and the normal velocity component from the equations of mean motion. The experimental data are tabulated.

  7. Characterization of high-power lithium-ion cells during constant current cycling. Part I. Cycle performance and electrochemical diagnostics

    SciTech Connect

    Shim, Joongpyo; Striebel, Kathryn A.

    2003-01-24

    12-cm² pouch-type lithium-ion cells were assembled with graphite anodes, LiNi0.8Co0.15Al0.05O2 cathodes, and 1 M LiPF6/EC/DEC electrolyte. These pouch cells were cycled at different depths of discharge (100 percent and 70 percent DOD) at room temperature to investigate cycle performance and pulse power capability. The capacity loss and power fade of the cells cycled over 100 percent DOD were significantly faster than for the cell cycled over 70 percent DOD. The overall cell impedance increased with cycling, although the ohmic resistance from the electrolyte was almost constant. From electrochemical analysis of each electrode after cycling, structural and/or impedance changes in the cathode are responsible for most of the capacity and power fade, not the consumption of cycleable Li from side-reactions.

  8. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 2; Constant Stress Rate Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
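
    For context, the conventional power-law analysis used as the point of comparison extracts the SCG parameter n from exactly this strength-versus-log-stress-rate line: under power-law crack velocity, log strength is linear in the log of the stress rate with slope 1/(n + 1). A sketch with synthetic placeholder numbers:

      # Dynamic-fatigue fit for the power-law case: slope = 1/(n + 1).
      import numpy as np

      rates = np.array([0.1, 1.0, 10.0, 100.0])         # MPa/s (hypothetical)
      sigma_f = np.array([310.0, 330.0, 352.0, 375.0])  # MPa (hypothetical)

      slope, _ = np.polyfit(np.log10(rates), np.log10(sigma_f), 1)
      n = 1.0 / slope - 1.0
      print(f'SCG parameter n ~ {n:.1f}')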

  9. Hypotheses and fundamental study design characteristics for evaluating potential reduced-risk tobacco products. Part I: Heuristic.

    PubMed

    Murrelle, Lenn; Coggins, Christopher R E; Gennings, Chris; Carchman, Richard A; Carter, Walter H; Davies, Bruce D; Krauss, Marc R; Lee, Peter N; Schleef, Raymond R; Zedler, Barbara K; Heidbreder, Christian

    2010-06-01

    The risk-reducing effect of a potential reduced-risk tobacco product (PRRP) can be investigated conceptually in a long-term, prospective study of disease risks among cigarette smokers who switch to a PRRP and in appropriate comparison groups. Our objective was to provide guidance for establishing the fundamental design characteristics of a study intended to (1) determine if switching to a PRRP reduces the risk of lung cancer (LC) compared with continued cigarette smoking, and (2) compare, using a non-inferiority approach, the reduction in LC risk among smokers who switched to a PRRP to the reduction in risk among smokers who quit smoking entirely. Using standard statistical methods applied to published data on LC incidence after smoking cessation, we show that the sample size and duration required for a study designed to evaluate the potential for LC risk reduction for an already marketed PRRP, compared with continued smoking, varies depending on the LC risk-reducing effectiveness of the PRRP, from a 5-year study with 8000-30,000 subjects to a 15-year study with <5000 to 10,000 subjects. To assess non-inferiority to quitting, the required sample size tends to be about 10 times greater, again depending on the effectiveness of the PRRP.

  10. Measurement of the positive muon lifetime and determination of the Fermi constant to part-per-million precision.

    PubMed

    Webber, D M; Tishchenko, V; Peng, Q; Battu, S; Carey, R M; Chitwood, D B; Crnkovic, J; Debevec, P T; Dhamija, S; Earle, W; Gafarov, A; Giovanetti, K; Gorringe, T P; Gray, F E; Hartwig, Z; Hertzog, D W; Johnson, B; Kammel, P; Kiburg, B; Kizilgul, S; Kunkle, J; Lauss, B; Logashenko, I; Lynch, K R; McNabb, R; Miller, J P; Mulhauser, F; Onderwater, C J G; Phillips, J; Rath, S; Roberts, B L; Winter, P; Wolfe, B

    2011-01-28

    We report a measurement of the positive muon lifetime to a precision of 1.0 ppm; it is the most precise particle lifetime ever measured. The experiment used a time-structured, low-energy muon beam and a segmented plastic scintillator array to record more than 2×10^12 decays. Two different stopping target configurations were employed in independent data-taking periods. The combined results give τ(μ+)(MuLan) = 2 196 980.3(2.2) ps, more than 15 times as precise as any previous experiment. The muon lifetime gives the most precise value for the Fermi constant: G_F(MuLan) = 1.166 378 8(7)×10^-5 GeV^-2 (0.6 ppm). It is also used to extract the μ-p singlet capture rate, which determines the proton's weak induced pseudoscalar coupling g_P.
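
    The link between the lifetime and the Fermi constant can be cross-checked at tree level, where 1/τ = G_F^2 m_μ^5/(192 π^3) in natural units. The sketch below applies this uncorrected relation to the MuLan lifetime; the few-per-mille electroweak radiative corrections are deliberately omitted, which is why the result lands slightly below the quoted value:

      # Tree-level G_F from the muon lifetime (radiative corrections omitted).
      import math

      hbar = 6.582119569e-25   # GeV s
      m_mu = 0.1056583745      # GeV
      tau = 2.1969803e-6       # s (MuLan)

      gamma = hbar / tau                                   # width in GeV
      G_F = math.sqrt(192.0 * math.pi**3 * gamma / m_mu**5)
      print(f'G_F (tree level) = {G_F:.7e} GeV^-2')        # ~1.1638e-5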

  11. Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors. Part 1; Experimental Investigation

    NASA Technical Reports Server (NTRS)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran

    1996-01-01

    This final report summarizes the major findings on the subject of 'Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors', performed from 1 April 1994 to 30 June 1996. Both experimental results from Task 1 and theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work performed and describes the test facility setup, data reduction techniques employed, and results of the test firings, including effects of operating conditions and fuel additives on solid fuel regression rate and thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes including gas/surface coupling, and radiation effect on regression rate. The numerical solution of the flowfield structure and condensed phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.

  12. Fabric transitions in quartz via viscoplastic self-consistent modeling part I: Axial compression and simple shear under constant strain

    NASA Astrophysics Data System (ADS)

    Morales, Luiz F. G.; Lloyd, Geoffrey E.; Mainprice, David

    2014-12-01

    Quartz is a common crustal mineral that deforms plastically over a wide range of temperatures and pressures, leading to the development of different types of crystallographic preferred orientation (CPO) patterns. In this contribution we present the results of extensive modeling of quartz fabric transitions via a viscoplastic self-consistent (VPSC) approach. For that, we have performed systematic simulations using different sets of relative critical resolved shear stresses of the main quartz slip systems. We have performed these simulations in axial compression and simple shear regimes under a constant Von Mises equivalent strain of 100% (γ = 1.73), assuming that the aggregates deform exclusively by dislocation glide. Some of the predicted CPO patterns are similar to those observed in naturally and experimentally deformed quartz. Nevertheless, some classical CPO patterns usually interpreted as resulting from dislocation glide (e.g. Y-maxima due to prism <a> slip) are clearly not developed under the simulated conditions. In addition, we report new potential preferred orientation patterns that might occur under high-temperature conditions, both in axial compression and simple shear. We have demonstrated that CPOs generated under axial compression are usually stronger than those predicted under simple shear, due to the continuous rotation observed in the latter simulations. The fabric strength depends essentially on the dominant active slip system; normally the strongest CPOs result from dominant basal <a> slip, followed by rhomb <a> and prism [c] slip, whereas prism <a> slip does not produce strong fabrics. The opening angle of the quartz [0001] fabric used as a proxy for temperature seems to be reliable for deformation temperatures of ~400 °C, when the main slip systems have similar behaviors.

  13. Selectivity and delignification kinetics for oxidative short-term lime pretreatment of poplar wood, Part I: Constant-pressure.

    PubMed

    Sierra-Ramírez, Rocío; Garcia, Laura A; Holtzapple, Mark Thomas

    2011-07-01

    Kinetic models applied to oxygen bleaching of paper pulp focus on the degradation of polymers, either lignin or carbohydrates. Traditionally, they separately model different moieties that degrade at three different rates: rapid, medium, and slow. These models were successfully applied to lignin and carbohydrate degradation of poplar wood submitted to oxidative pretreatment with lime at the following conditions: temperature 110-180 °C, total pressure 7.9-21.7 bar, and excess lime loading of 0.5 g Ca(OH)2 per gram dry biomass. These conditions were held constant for 1-6 h. The models properly fit the experimental data and were used to determine pretreatment selectivity in two fashions: differential and integral. By assessing selectivity, the detrimental effect of pretreatment on carbohydrates at high temperatures and at low lignin content was determined. The models can be used to identify pretreatment conditions that selectively remove lignin while preserving carbohydrates. Lignin removal ≥ 50% with glucan preservation ≥ 90% was observed for differential glucan selectivities between ~10 and ~30 g lignin degraded per gram glucan degraded. Pretreatment conditions complying with these reference values were mostly observed at 140 °C, total pressure ≥ 14.7 bar, and pretreatment times between 2 and 6 h depending on the total pressure (the higher the pressure, the less time). They were also observed at 160 °C, total pressures of 14.7 and 21.7 bar, and a pretreatment time of 2 h. Generally, at 110 °C lignin removal is insufficient, and at 180 °C carbohydrates are not well preserved.

  15. Nucleosynthesis and the variation of fundamental couplings

    SciTech Connect

    Mueller, Christian M.; Schaefer, Gregor; Wetterich, Christof

    2004-10-15

    We determine the influence of a variation of the fundamental 'constants' on the predicted helium abundance in Big Bang Nucleosynthesis. The analytic estimate is performed in two parts: the first step determines the dependence of the helium abundance on the nuclear physics parameters, while the second step relates those parameters to the fundamental couplings of particle physics. This procedure can incorporate in a flexible way the time variation of several couplings within a grand unified theory while keeping the nuclear physics computation separate from any GUT model dependence.
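
    The two-step procedure described here is, schematically, a chain-rule decomposition: for the helium abundance Y_p, nuclear physics parameters p_i (binding energies, the neutron lifetime, etc.), and fundamental couplings g_j (a summary of the stated logic, not the paper's own notation):

        \[ \frac{\partial Y_p}{\partial g_j} = \sum_i \frac{\partial Y_p}{\partial p_i}\,\frac{\partial p_i}{\partial g_j} \]

    so the nuclear sensitivities \partial Y_p/\partial p_i are computed once, while only the GUT-dependent map \partial p_i/\partial g_j changes between unification scenarios.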

  16. Tether fundamentals

    NASA Technical Reports Server (NTRS)

    Carroll, J. A.

    1986-01-01

    Some fundamental aspects of tethers are presented and briefly discussed. The effects of gravity gradients, dumbbell libration in circular orbits, tether control strategies, and impact hazards for tethers are among those fundamentals. Also considered are aerodynamic drag, constraints in momentum transfer applications, and constraints with permanently deployed tethers. The theoretical feasibility of these concepts is reviewed.

  17. The Hubble constant

    NASA Technical Reports Server (NTRS)

    Huchra, John P.

    1992-01-01

    The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy radial velocities and distances. Although there has been considerable progress in the development of new techniques for the measurements of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
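
    To make concrete how the constant sets both scale and age (a back-of-envelope illustration; the round value H_0 = 70 km s^-1 Mpc^-1 is assumed here purely for arithmetic):

        \[ v = H_0 d \;\Longrightarrow\; d = \frac{v}{H_0} = \frac{7000\ \mathrm{km\,s^{-1}}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} = 100\ \mathrm{Mpc}, \qquad t_H = \frac{1}{H_0} \approx 14\ \mathrm{Gyr} \]

    The factor-of-2 spread in current determinations mentioned above therefore translates directly into a factor-of-2 spread in the Hubble time 1/H_0.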

  18. Probing fundamental film parameters of immobilized enzymes--towards enhanced biosensor performance. Part II-Electroanalytical estimation of immobilized enzyme performance.

    PubMed

    Fogel, R; Limson, J L

    2011-07-10

    The method of immobilization of a protein has a great influence on the overall conformation, and hence functioning, of the protein. Thus, a greater understanding of the events undergone by the protein during immobilization is key to manipulating the immobilization method to produce a strategy that exploits the advantages of immobilization while minimizing its disadvantages in biosensor design. In this, the second paper of a two-part series, we have assessed the kinetic parameters of thin-film laccase monolayers covalently attached to SAMs differing in spacer-arm length and lateral density of spacer arms. This was achieved using chronoamperometry and an electroactive product (p-benzoquinone), with the data modeled by non-linear regression to extract the relevant parameters. Finally, comparisons were made between the kinetic parameters presented in this paper and the rheological parameters of laccase monolayers immobilized in the same manner (Part I of this two-paper series). Improvements in the maximal enzyme-catalysed current, i(max), the apparent Michaelis-Menten constant, K(m), and the apparent biosensor sensitivity were noted for most of the surfaces with increasing linker length. Decreasing the lateral density of the spacer arms brought about a general improvement in these parameters, which is attributed to the decrease in multiple points of immobilization undergone by functional proteins. Finally, comparisons between rheological and kinetic data showed that the degree of viscosity exhibited by protein films has a negative influence on attached protein layers, while enhanced protein hydration levels (assessed piezoelectrically from data obtained in Part I) have a positive effect on those surfaces comprising rigidly bound protein layers.
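
    In setups like the one described, steady-state catalytic current versus substrate concentration is commonly fit to a Michaelis-Menten form, i = i_max·[S]/(K_m + [S]). A minimal non-linear regression sketch with made-up calibration data (scipy is assumed available; the data and starting guesses are illustrative only):

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(s, i_max, k_m):
            """Steady-state catalytic current vs. substrate concentration."""
            return i_max * s / (k_m + s)

        # Hypothetical calibration data: substrate (mM) and current (uA).
        s = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
        i = np.array([0.9, 1.6, 2.6, 4.2, 5.3, 6.1, 6.7])

        (i_max, k_m), cov = curve_fit(michaelis_menten, s, i, p0=[7.0, 0.5])
        sensitivity = i_max / k_m  # initial slope, a common apparent-sensitivity measure
        print(f"i_max = {i_max:.2f} uA, K_m = {k_m:.2f} mM, "
              f"sensitivity = {sensitivity:.2f} uA/mM")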

  19. Temporal variation of coupling constants and nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Oberhummer, H.; Csótó, A.; Fairbairn, M.; Schlattl, H.; Sharma, M. M.

    2003-05-01

    We investigate the triple-alpha process and the Oklo phenomenon to obtain constraints on possible cosmological time variations of fundamental constants. Specifically we study cosmological temporal constraints for the fine structure constant and nucleon and meson masses.

  20. Marketing fundamentals.

    PubMed

    Redmond, W H

    2001-01-01

    This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined. PMID:11401791

  1. Fundamentals of Library Instruction

    ERIC Educational Resources Information Center

    McAdoo, Monty L.

    2012-01-01

    Being a great teacher is part and parcel of being a great librarian. In this book, veteran instruction services librarian McAdoo lays out the fundamentals of the discipline in easily accessible language. Succinctly covering the topic from top to bottom, he: (1) Offers an overview of the historical context of library instruction, drawing on recent…

  2. Food Service Fundamentals.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    Developed as part of the Marine Corps Institute (MCI) correspondence training program, this course on food service fundamentals is designed to provide a general background in the basic aspects of the food service program in the Marine Corps; it is adaptable for nonmilitary instruction. Introductory materials include specific information for MCI…

  3. The fundamentals of hospice compliance what is it and what are the implications for the future? An overview for hospice clinicians part 2: Hospice risk areas.

    PubMed

    Balfour, Susan

    2012-05-01

    This article, Part 2 of a 2-part series, continues the examination of the Medicare compliance climate and its impact on hospice providers. This 2nd part focuses on hospice-specific compliance risk areas and specific risk-reduction strategies. The case example from Part 1 is continued.

  4. Healthcare fundamentals.

    PubMed

    Kauk, Justin; Hill, Austin D; Althausen, Peter L

    2014-07-01

    In order for a trauma surgeon to have an intelligent discussion with hospital administrators, healthcare plans, policymakers, or any other physicians, a basic understanding of the fundamentals of healthcare is paramount. It is truly shocking how many surgeons are unable to describe the difference between Medicare and Medicaid or describe how hospitals and physicians get paid. These topics may seem burdensome, but they are vital to all business decision making in the healthcare field. The following chapter provides further insight into what we call "the basics" of providing medical care today. Most of the topics presented can be applied to all specialties of medicine. The chapter is broken down into 5 sections. The first section is a brief overview of government programs, their influence on care delivery and reimbursement, and past and future legislation. Section 2 focuses on the compliance, care provision, and privacy statutes that regulate physicians who care for Medicare/Medicaid patient populations. With a better understanding of these obligations, section 3 discusses avenues by which physicians can stay informed of current and pending health policy and provides ways that they can become involved in shaping future legislation. The fourth section changes gears slightly by explaining how the concepts of trade restraint, libel, antitrust legislation, and indemnity relate to physician practice. The fifth, and final, section ties all of the components together by describing how physician-hospital alignment can be mutually beneficial in providing patient care under current healthcare policy legislation.

  5. Development of procedures for calculating stiffness and damping properties of elastomers in engineering applications. Part 2: Elastomer characteristics at constant temperature

    NASA Technical Reports Server (NTRS)

    Gupta, P. K.; Tessarzik, J. M.; Cziglenyi, L.

    1974-01-01

    Dynamic properties of a commercial polybutadiene compound were determined at a constant temperature of 32 °C by a forced-vibration resonant-mass type of apparatus. The constant thermal state of the elastomer was ensured by keeping the ambient temperature constant and by limiting the power dissipation in the specimen. Experiments were performed with both compression and shear specimens at several preloads (nominal strain varying from 0 to 5 percent), and the results are reported in terms of a complex stiffness as a function of frequency. Very weak frequency dependence is observed, and a simple power-law type of correlation is shown to represent the data well. Variations in the complex stiffness as a function of preload are also found to be small for both compression and shear specimens.
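
    The weak frequency dependence and power-law correlation reported above can be summarized in one line; schematically (generic symbols, not the report's own notation), the measured complex stiffness behaves as

        \[ K^{*}(f) = K'(f) + i\,K''(f), \qquad |K^{*}(f)| \approx A f^{\,n}, \quad n \ll 1 \]

    so a small fitted exponent n captures the observed near-flat frequency response.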

  6. Cosmology with varying constants.

    PubMed

    Martins, Carlos J A P

    2002-12-15

    The idea of possible time or space variations of the 'fundamental' constants of nature, although not new, is only now beginning to be actively considered by large numbers of researchers in the particle physics, cosmology and astrophysics communities. This revival is mostly due to the claims of possible detection of such variations, in various different contexts and by several groups. I present the current theoretical motivations and expectations for such variations, review the current observational status and discuss the impact of a possible confirmation of these results in our views of cosmology and physics as a whole.

  7. Measurement fundamentals

    SciTech Connect

    Webb, R.A.

    1995-12-01

    The need to have accurate petroleum measurement is obvious. Petroleum measurement is the basis of commerce between oil producers, royalty owners, oil transporters, refiners, marketers, the Department of Revenue, and the motoring public. Furthermore, petroleum measurements are often used to detect operational problems or unwanted releases in pipelines, tanks, marine vessels, underground storage tanks, etc. Therefore, consistent, accurate petroleum measurement is an essential part of any operation. While there are several methods and different types of equipment used to perform petroleum measurement, the basic process stays the same. The basic measurement process is the act of comparing an unknown quantity to a known quantity in order to establish its magnitude. The process can be seen in a variety of forms, such as measuring for a first down in a football game, weighing meat and produce at the grocery, or the use of an automobile odometer.

  8. On the variability of the Charnock constant and the functional dependence of the drag coefficient on wind speed: Part II-Observations

    NASA Astrophysics Data System (ADS)

    Bye, John A. T.; Wolff, Jörg-Olaf; Lettmann, Karsten A.

    2014-07-01

    An analytical expression for the 10 m drag law, in terms of the 10 m wind speed at the maximum in the 10 m drag coefficient and the Charnock constant, is presented, based on the results obtained from a model of the air-sea interface derived in Bye et al. (2010). This drag law is almost independent of wave age and, over the mid-range of wind speeds (5-17 m s⁻¹), is very similar to the drag law based on observed data presented in Foreman and Emeis (2010). The linear fit of the observed data, which incorporates a constant into the traditional definition of the drag coefficient, is shown to arise to first order as a consequence of the momentum exchange across the air-sea boundary layer brought about by wave generation and spray production, which are explicitly represented in the theoretical model.
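
    For orientation, the Charnock relation links the sea-surface roughness length to the friction velocity, and the neutral 10 m drag coefficient then follows from the log law. A minimal fixed-point sketch using the standard textbook formulas (the Charnock parameter and von Karman constant below are conventional choices, not values from this paper):

        import math

        def drag_coefficient_10m(u10, alpha=0.018, kappa=0.4, g=9.81, nu=1.5e-5):
            """Neutral 10 m drag coefficient from the Charnock relation,
            z0 = alpha * u_star**2 / g (plus a small smooth-flow term),
            iterated to a fixed point with the log law
            u_star = kappa * u10 / ln(10 / z0)."""
            u_star = 0.03 * u10  # crude starting guess
            for _ in range(50):
                z0 = alpha * u_star**2 / g + 0.11 * nu / max(u_star, 1e-6)
                u_star = kappa * u10 / math.log(10.0 / z0)
            return (u_star / u10) ** 2

        for u10 in (5.0, 10.0, 17.0):
            print(f"U10 = {u10:4.1f} m/s  ->  CD = {drag_coefficient_10m(u10):.2e}")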

  9. Modelling the fate of nonylphenolic compounds in the Seine River--part 1: determination of in-situ attenuation rate constants.

    PubMed

    Cladière, Mathieu; Bonhomme, Céline; Vilmin, Lauriane; Gasperi, Johnny; Flipo, Nicolas; Tassin, Bruno

    2014-01-15

    Assessing the fate of endocrine disrupting compounds (EDCs) in the environment is currently a key issue for determining their impacts on aquatic ecosystems. 4-nonylphenol (4-NP) is a well-known EDC and results from the biodegradation of the surfactant nonylphenol ethoxylates (NPnEOs). The fate mechanisms of NPnEOs are well documented, but their rate constants have mainly been determined through laboratory experiments. This study aims at evaluating the in-situ fate of 4-NP, nonylphenol monoethoxylate (NP1EO), and nonylphenolic acetic acid (NP1EC). Two sampling campaigns were carried out on the Seine River in July and September 2011, along a 28 km transect downstream of Paris. The field measurements are used for the calibration of a sub-model of NPnEO fate, included in a hydro-ecological model of the Seine River (ProSe). The timing of the sampling is based on the Seine River velocity in order to follow a single volume of water. Based on our results, in-situ attenuation rate constants of 4-NP, NP1EO, and NP1EC for both campaigns are evaluated. These rate constants vary greatly. Although the attenuation rate constants in July are especially high (above 1 d⁻¹), those obtained in September are lower and consistent with the literature. This is probably due to the biogeochemical conditions in the Seine River: the July sampling campaign took place at the end of an algal bloom, leading to an unusually high bacterial biomass, while the September campaign was carried out under typical biogeochemical conditions. Finally, the uncertainties on measurements and on the calibration parameters are estimated through a sensitivity analysis. This study provides relevant information regarding the fate of biodegradable pollutants in an aquatic environment by coupling field measurements and a biogeochemical model. Such data may be very helpful in the future to better understand the fate of nonylphenolic compounds or any other pollutants at the basin scale. PMID:24100207
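
    Because the sampling follows a water parcel (Lagrangian timing), an in-situ attenuation rate constant can be estimated, to first order, from paired concentrations and the travel time. A minimal sketch with hypothetical numbers, assuming simple first-order decay (this is not the ProSe calibration itself, which also accounts for transport and biogeochemistry):

        import math

        def first_order_k(c_upstream, c_downstream, travel_time_d):
            """First-order attenuation rate constant (1/d) for a followed
            water parcel: C_down = C_up * exp(-k * t)."""
            return math.log(c_upstream / c_downstream) / travel_time_d

        # Hypothetical 4-NP concentrations (ng/L) at the transect ends,
        # with a ~1.5-day travel time over the 28 km reach.
        k = first_order_k(250.0, 90.0, 1.5)
        print(f"apparent attenuation rate constant: {k:.2f} 1/d")  # ~0.68 1/d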

  10. Fundamental tests and measures of the structure of matter at short distances

    SciTech Connect

    Brodsky, S.J.

    1981-07-01

    Recent progress in gauge field theories has led to a new perspective on the structure of matter and basic interactions at short distances. It is clear that at very high energies quantum electrodynamics, together with the weak and strong interactions, are part of a unified theory with new fundamental constants, new symmetries, and new conservation laws. A non-technical introduction to these topics is given, with emphasis on fundamental tests and measurements. 21 references.

  11. The fundamentals of hospice compliance: what is it and what are the implications for the future? An overview for hospice clinicians, part 1.

    PubMed

    Balfour, Susan

    2012-02-01

    This article, Part 1 of a 2-part series, provides an overview of the current Medicare compliance climate and its implications for hospice providers. Content focuses on the 7 elements of a comprehensive compliance framework as defined by the Health and Human Services Office of the Inspector General in its 1999 Compliance Guidance for Hospices. A brief case example is provided and serves to set the stage for Part 2, which will explore hospice-specific risk areas and specific risk-reduction strategies.

  12. Jumping on the Train of Personalized Medicine: A Primer for Non-Geneticist Clinicians: Part 2. Fundamental Concepts in Genetic Epidemiology

    PubMed Central

    Li, Aihua; Meyre, David

    2014-01-01

    With the decrease in sequencing costs, personalized genome sequencing will eventually become common in medical practice. We therefore write this series of three reviews to help non-geneticist clinicians to jump into the fast-moving field of personalized medicine. In the first article of this series, we reviewed the fundamental concepts in molecular genetics. In this second article, we cover the key concepts and methods in genetic epidemiology including the classification of genetic disorders, study designs and their implementation, genetic marker selection, genotyping and sequencing technologies, gene identification strategies, data analyses and data interpretation. This review will help the reader critically appraise a genetic association study. In the next article, we will discuss the clinical applications of genetic epidemiology in the personalized medicine area. PMID:25598767

  13. Extending the Constant Power Speed Range of the Brushless DC Motor through Dual Mode Inverter Control -- Part I: Theory and Simulation

    SciTech Connect

    Lawler, J.S.

    2001-10-29

    An inverter topology and control scheme has been developed that can drive low-inductance, surface-mounted permanent magnet motors over the wide constant power speed range required in electric vehicle applications. This new controller is called the dual-mode inverter control (DMIC) [1]. The DMIC can drive either the Permanent Magnet Synchronous Machine (PMSM) with sinusoidal back emf, or the brushless dc machine (BDCM) with trapezoidal emf, in the motoring and regenerative braking modes. In this paper we concentrate on the BDCM under high-speed motoring conditions. Simulation results show that if all motor and inverter loss mechanisms are neglected, the constant power speed range of the DMIC is infinite. The simulation results are supported by closed-form expressions for peak and rms motor current and average power, derived from an analytical solution of the differential equations governing the DMIC/BDCM drive for the lossless case. The analytical solution shows that the range of motor inductance that can be accommodated by the DMIC spans more than an order of magnitude, such that the DMIC is compatible with both low- and high-inductance BDCMs. Finally, a method is given for integrating the classical hysteresis-band current control, used for motor control below base speed, with the phase advance of the DMIC that is applied above base speed. The power versus speed performance of the DMIC is then simulated across the entire speed range.

  14. Fundamental Study of a Single Point Lean Direct Injector. Part I: Effect of Air Swirler Angle and Injector Tip Location on Spray Characteristics

    NASA Technical Reports Server (NTRS)

    Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.

    2014-01-01

    Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 deg is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.

  15. Fundamental Study of a Single Point Lean Direct Injector. Part I: Effect of Air Swirler Angle and Injector Tip Location on Spray Characteristics

    NASA Technical Reports Server (NTRS)

    Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.

    2015-01-01

    Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.

  16. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part I: fundamental theory.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    This paper provides a systematic rate-distortion (R-D) analysis of the dead-zone plus uniform threshold scalar quantization (DZ+UTSQ) with nearly uniform reconstruction quantization (NURQ) for generalized Gaussian distribution (GGD), which consists of two aspects: R-D performance analysis and R-D modeling. In R-D performance analysis, we first derive the preliminary constraint of optimum entropy-constrained DZ+UTSQ/NURQ for GGD, under which the property of the GGD distortion-rate (D-R) function is elucidated. Then for the GGD source of actual transform coefficients, the refined constraint and precise conditions of optimum DZ+UTSQ/NURQ are rigorously deduced in the real coding bit rate range, and efficient DZ+UTSQ/NURQ design criteria are proposed to reasonably simplify the utilization of effective quantizers in practice. In R-D modeling, inspired by R-D performance analysis, the D-R function is first developed, followed by the novel rate-quantization (R-Q) and distortion-quantization (D-Q) models derived using analytical and heuristic methods. The D-R, R-Q, and D-Q models form the source model describing the relationship between the rate, distortion, and quantization steps. One application of the proposed source model is the effective two-pass VBR coding algorithm design on an encoder of H.264/AVC reference software, which achieves constant video quality and desirable rate control accuracy.
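
    To make the quantizer family concrete: a dead-zone plus uniform threshold scalar quantizer widens the zero bin before uniform bucketing, and a nearly uniform reconstruction places the output a fixed fraction of a step inside each decision interval. A generic sketch (parameter names and values are illustrative; the paper's optimal settings for GGD sources are not reproduced here):

        def dz_utsq_quantize(x, step, dead_zone_ratio=1.0):
            """Dead-zone + uniform threshold scalar quantization.
            The zero bin has half-width dead_zone_ratio * step."""
            mag = abs(x) - dead_zone_ratio * step
            if mag < 0.0:
                return 0
            level = int(mag // step) + 1
            return level if x >= 0 else -level

        def nurq_dequantize(index, step, dead_zone_ratio=1.0, offset=0.5):
            """Nearly uniform reconstruction: place the output a fixed
            fraction (offset) of a step inside the decision interval."""
            if index == 0:
                return 0.0
            mag = dead_zone_ratio * step + (abs(index) - 1 + offset) * step
            return mag if index > 0 else -mag

        q = dz_utsq_quantize(3.7, step=1.0)
        print(q, nurq_dequantize(q, step=1.0))  # 3, 3.5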

  17. Measurements in the Turbulent Boundary Layer at Constant Pressure in Subsonic and Supersonic Flow. Part 2: Laser-Doppler Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Dimotakis, P. E.; Collins, D. J.; Lang, D. B.

    1979-01-01

    A description of both the mean and the fluctuating components of the flow, and of the Reynolds stress, as observed using a dual forward-scattering laser-Doppler velocimeter is presented. A detailed description of the instrument and of the data analysis techniques is included in order to fully document the data. A detailed comparison was made between the laser-Doppler results and those presented in Part 1, and an assessment was made of the ability of the laser-Doppler velocimeter to measure the details of the flows involved.

  18. Varying Constants, Gravitation and Cosmology

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2011-12-01

    Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  19. Beyond the chemiosmotic theory: analysis of key fundamental aspects of energy coupling in oxidative phosphorylation in the light of a torsional mechanism of energy transduction and ATP synthesis--invited review part 1.

    PubMed

    Nath, Sunil

    2010-08-01

    In Part 1 of this invited article, we consider the fundamental aspects of energy coupling in oxidative phosphorylation. The central concepts of the chemiosmotic theory are re-examined and the major problems with its experimental verification are analyzed and reassessed from first principles. Several of its assumptions and interpretations (with regard, for instance, to consideration of the membrane as an inert barrier, the occurrence of energy transduction at thermodynamic equilibrium, the completely delocalized nature of the protonmotive force, and the notion of indirect coupling) are shown to be questionable. Important biological implications of this analysis for molecular mechanisms of biological energy transduction are enumerated. A fresh molecular mechanism of the uncoupling of oxidative phosphorylation by classical weak acid anion uncouplers and an adequate explanation for the existence of uncoupler-resistant mutants (which until now has remained a mystery) has been proposed based on novel insights arising from a new torsional mechanism of energy transduction and ATP synthesis.

  20. Fundamental physics in space: The French contribution

    NASA Astrophysics Data System (ADS)

    Léon-Hirtz, Sylvie

    2003-08-01

    This paper outlines the space Fundamental Physics projects developed under CNES responsibility together with the French scientific community, either in the national French programme or as the French contribution to the ESA programme, mainly: -the MICROSCOPE project, which aims at testing the Equivalence Principle between inertial mass and gravitational mass at a high level of precision, on a microsatellite of the MYRIADE series developed by CNES, -the PHARAO cold-atom clock, which is part of the ACES project of ESA, located on an external pallet of the International Space Station together with a Swiss H-maser and a microwave link for comparison with ground clocks, aimed at relativistic tests and the measurement of universal constants, -the T2L2 optical link allowing comparison of ultra-stable and ultra-precise clocks, -a contribution to the AMS spectrometer, which searches for cosmic antimatter, on the external part of the International Space Station, -a contribution to the LISA mission of ESA for direct detection and measurement of gravitational waves by interferometry, -ground-based studies on cold-atom interferometers which could be part of the HYPER project submitted to ESA.

  1. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with less effort on systems development. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long-term technical needs of the aerospace industry. The main thrusts for this combustion fundamentals program are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.

  2. Exchange Rates and Fundamentals.

    ERIC Educational Resources Information Center

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I(1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  3. Millikan's measurement of Planck's constant

    NASA Astrophysics Data System (ADS)

    Franklin, Allan

    2013-12-01

    Robert Millikan is famous for measuring the charge of the electron. His result was better than any previous measurement, and his method established that there was a fundamental unit of charge, or charge quantization. He is less well known for his measurement of Planck's constant, although, as discussed below, he is often mistakenly given credit for providing significant evidence in support of Einstein's photon theory of light. His Nobel Prize citation was "for his work on the elementary charge of electricity and on the photoelectric effect," an indication of the significance of his work on the photoelectric effect.
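
    Millikan's photoelectric determination rested on Einstein's equation: measuring the stopping potential V_s at several light frequencies ν and taking the slope of the resulting line gives h in terms of the already-known electron charge e (a schematic of the method, not Millikan's actual data):

        \[ eV_s = h\nu - \phi \quad\Longrightarrow\quad h = e\,\frac{\Delta V_s}{\Delta \nu} \]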

  4. About variable constants

    NASA Astrophysics Data System (ADS)

    Blichert-Toft, J.; Albarede, F.

    2011-12-01

    When only modern isotope compositions are concerned, the choice of normalization values is inconsequential provided that their values are universally accepted. No harm is done as long as large amounts of standard reference material with known isotopic differences with respect to the reference value ('anchor point') can be maintained under controlled conditions. For over five decades, the scientific community has been referring to an essentially unavailable SMOW for stable O and H isotopes and to a long-gone belemnite sample for carbon. For radiogenic isotopes, the isotope composition of the daughter element, the parent-daughter ratio, and a particular value of the decay constant are all part of the reference. For the Lu-Hf system, for which the physical measurements of the decay constant have been particularly defective, the reference includes the isotope composition of Hf and the Lu/Hf ratio of an unfortunately heterogeneous chondrite mix that has been successively refined by Patchett and Tatsumoto (1981), Blichert-Toft and Albarede (1997, BTA), and Bouvier et al. (2008, BVP). The εHf(T) difference created by using BTA and BVP is nearly within error (+0.45 epsilon units today and -0.36 at 3 Ga) and therefore of little or no consequence. A more serious issue arises when the chondritic reference is taken to represent the Hf isotope evolution of the Bulk Silicate Earth (BSE): the initial isotope composition of the Solar System, as determined by the indistinguishable intercepts of the external eucrite isochron (Blichert-Toft et al., 2002) and the internal angrite SAH99555 isochron (Thrane et al., 2010), differs from the chondrite value of BTA and BVP extrapolated to 4.56 Ga by ~5 epsilon units. This difference, and the overestimated value of the 176Lu decay constant derived from the slopes of these isochrons, have been interpreted as reflecting irradiation of the solar nebula by either gamma (Albarede et al., 2006) or cosmic rays (Thrane et al., 2010) during

  5. Fundamentals of phosphate transfer.

    PubMed

    Kirby, Anthony J; Nome, Faruk

    2015-07-21

    Historically, the chemistry of phosphate transfer-a class of reactions fundamental to the chemistry of Life-has been discussed almost exclusively in terms of the nucleophile and the leaving group. Reactivity always depends significantly on both factors; but recent results for reactions of phosphate triesters have shown that it can also depend strongly on the nature of the nonleaving or "spectator" groups. The extreme stabilities of fully ionised mono- and dialkyl phosphate esters can be seen as extensions of the same effect, with one or two triester OR groups replaced by O(-). Our chosen lead reaction is hydrolysis, i.e. phosphate transfer to water: because water is the medium in which biological chemistry takes place; because the half-life of a system in water is an accepted basic index of stability; and because the typical mechanisms of hydrolysis, with solvent H2O providing specific molecules to act as nucleophiles and as general acids or bases, are models for reactions involving better nucleophiles and stronger general species catalysts. Not least those available in enzyme active sites. Alkyl monoester dianions compete with alkyl diester monoanions for the slowest estimated rates of spontaneous hydrolysis. High stability at physiological pH is a vital factor in the biological roles of organic phosphates, but a significant limitation for experimental investigations. Almost all kinetic measurements of phosphate transfer reactions involving mono- and diesters have been followed by UV-visible spectroscopy using activated systems, conveniently compounds with good leaving groups. (A "good leaving group" OR* is electron-withdrawing, and can be displaced to generate an anion R*O(-) in water near pH 7.) Reactivities at normal temperatures of P-O-alkyl derivatives-better models for typical biological substrates-have typically had to be estimated: by extended extrapolation from linear free energy relationships, or from rate measurements at high temperatures. Calculation is free

  6. The Determination of the Strong Coupling Constant

    NASA Astrophysics Data System (ADS)

    Dissertori, Günther

    2016-10-01

    The strong coupling constant is one of the fundamental parameters of the Standard Theory of particle physics. In this review I will briefly summarise the theoretical framework, within which the strong coupling constant is defined and how it is connected to measurable observables. Then I will give an historical overview of its experimental determinations and discuss the current status and world average value. Among the many different techniques used to determine this coupling constant in the context of quantum chromodynamics, I will focus in particular on a number of measurements carried out at the Large Electron-Positron Collider (LEP) and the Large Hadron Collider (LHC) at CERN.

  7. Fundamentals of Physics

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-01-01

    No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.

  8. On the Khinchin Constant

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor)

    1995-01-01

    We prove known identities for the Khinchin constant and develop new identities for the more general Hölder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results.
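
    For orientation, the constant itself is the almost-everywhere limit of the geometric mean of continued-fraction terms, and the classical product formula gives a direct, if slowly converging, way to evaluate it. A quick sketch (the zeta-series identities of the paper converge far faster; this is only the naive baseline):

        import math

        def khinchin_partial(n_terms):
            """Partial product of Khinchin's constant:
            K0 = prod_{r>=1} (1 + 1/(r*(r+2)))**log2(r)."""
            log_k = 0.0
            for r in range(1, n_terms + 1):
                log_k += math.log2(r) * math.log1p(1.0 / (r * (r + 2)))
            return math.exp(log_k)

        print(khinchin_partial(10**6))  # ~2.6854 (literature value 2.685452...)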

  9. The cosmological constant

    NASA Technical Reports Server (NTRS)

    Carroll, Sean M.; Press, William H.; Turner, Edwin L.

    1992-01-01

    The cosmological constant problem is examined in the context of both astronomy and physics. Effects of a nonzero cosmological constant are discussed with reference to expansion dynamics, the age of the universe, distance measures, comoving density of objects, growth of linear perturbations, and gravitational lens probabilities. The observational status of the cosmological constant is reviewed, with attention given to the existence of high-redshift objects, age derivation from globular clusters and cosmic nuclear data, dynamical tests of Omega sub Lambda, quasar absorption line statistics, gravitational lensing, and astrophysics of distant objects. Finally, possible solutions to the physicist's cosmological constant problem are examined.

  10. Optical constants of solid methane

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.

    1989-01-01

    Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 over a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micron region. The constant k is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of the optical constants n and k for the liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4.
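
    The Lorentz-Lorenz step mentioned above relates refractive index to density, so a measured liquid-phase index can be rescaled to the solid; schematically (the standard relation, with ρ_s and ρ_l the solid and liquid densities and the specific refraction r assumed phase-invariant):

        \[ \frac{n^2 - 1}{n^2 + 2} = \rho\, r \quad\Longrightarrow\quad \frac{n_s^2 - 1}{n_s^2 + 2} = \frac{\rho_s}{\rho_l}\,\frac{n_l^2 - 1}{n_l^2 + 2} \]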

  11. Calculation of magnetostriction constants

    NASA Astrophysics Data System (ADS)

    Tatebayashi, T.; Ohtsuka, S.; Ukai, T.; Mori, N.

    1986-02-01

    The magnetostriction constants h1 and h2 for Ni and Fe metals and the anisotropy constants K1 and K2 for Fe metal are calculated on the basis of the approximate d bands obtained by Deegan's prescription, by using Gilat-Raubenheimer's method. The obtained results are compared with the experimental ones.

  12. Astronomical reach of fundamental physics.

    PubMed

    Burrows, Adam S; Ostriker, Jeremiah P

    2014-02-18

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law. PMID:24477692
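
    As a flavor of the dimensional analysis involved, both mass scales follow from a handful of constants. A quick numerical sketch (order of magnitude only; prefactors of order unity are dropped, as in the text):

        import math

        hbar  = 1.054571817e-34  # J s
        c     = 2.99792458e8     # m/s
        G     = 6.67430e-11      # m^3 kg^-1 s^-2
        m_p   = 1.67262192e-27   # kg (proton mass)
        M_sun = 1.989e30         # kg

        m_planck = math.sqrt(hbar * c / G)  # ~2.18e-8 kg
        m_chandra = m_planck**3 / m_p**2    # ~ (hbar*c/G)**1.5 / m_p**2

        print(f"Planck mass: {m_planck:.3e} kg")
        print(f"Chandrasekhar scale: {m_chandra:.3e} kg "
              f"~ {m_chandra / M_sun:.2f} M_sun")  # ~1.9 M_sun, right order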

  13. Fundamental studies of polymer filtration

    SciTech Connect

    Smith, B.F.; Lu, M.T.; Robison, T.W.; Rogers, Y.C.; Wilson, K.V.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The objectives of this project were (1) to develop an enhanced fundamental understanding of the coordination chemistry of hazardous-metal-ion complexation with water-soluble metal-binding polymers, and (2) to exploit this knowledge to develop improved separations for analytical methods, metals processing, and waste treatment. We investigated features of water-soluble metal-binding polymers that affect their binding constants and selectivity for selected transition metal ions. We evaluated backbone polymers using light scattering and ultrafiltration techniques to determine the effect of pH and ionic strength on the molecular volume of the polymers. The backbone polymers were incrementally functionalized with a metal-binding ligand. A procedure and analytical method to determine the absolute level of functionalization was developed and the results correlated with the elemental analysis, viscosity, and molecular size.

  14. Astronomical reach of fundamental physics.

    PubMed

    Burrows, Adam S; Ostriker, Jeremiah P

    2014-02-18

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.

  15. Astronomical reach of fundamental physics

    PubMed Central

    Burrows, Adam S.; Ostriker, Jeremiah P.

    2014-01-01

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law. PMID:24477692

  16. Universal constants and equations of turbulent motion

    NASA Astrophysics Data System (ADS)

    Baumert, Helmut

    2011-11-01

    For turbulence at high Reynolds number we present an analogy with the kinetic theory of gases, with dipoles made of vortex tubes as frictionless, incompressible but deformable quasi-particles. Their movements are governed by Helmholtz' elementary vortex rules applied locally. A contact interaction or "collision" leads either to random scatter of a trajectory or to the formation of two co-rotating, fundamentally unstable whirls forming a dissipative patch slowly rotating around its center of mass, the latter almost at rest. This approach predicts von Karman's constant as 1/√(2π) ≈ 0.399 and the spatio-temporal dynamics of the energy-containing time and length scales controlling turbulent mixing [Baumert 2005, 2009]. A link to turbulence spectra was missing so far. In the present contribution it is shown that the above image of dipole movements is compatible with Kolmogorov's spectra if dissipative patches, beginning as two co-rotating eddies, evolve locally into a space-filling bearing in the sense of Herrmann [1990], i.e. into an "Apollonian gear." Its parts and pieces are frictionless, excepting the dissipative scale of size zero. Our approach predicts the dimensionless pre-factor in the 3D Eulerian wavenumber spectrum (in terms of π) as 1.8, and in the Lagrangian frequency spectrum as the integer number 2. Our derivations are free of empirical relations and rest on geometry, methods from many-particle physics, and elementary conservation laws only. Department of the Navy Grant, ONR Global

  17. Optical constants of solid methane

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.

    1990-01-01

    Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented on the optical constants of solid methane for the 0.4 to 2.6 micrometer region. Deposition onto a substrate at 10 K produces glassy (semi-amorphous) material. Annealing this material at approximately 33 K for approximately 1 hour results in a crystalline material as seen by sharper, more structured bands and negligible background extinction due to scattering. The constant k is reported for both the amorphous and the crystalline (annealed) states. Typical values (at absorption maxima) are in the .001 to .0001 range. Below lambda = 1.1 micrometers the bands are too weak to be detected by transmission through the films less than or equal to 215 micrometers in thickness, employed in the studies to date. Using previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for

  18. Space Shuttle astrodynamical constants

    NASA Technical Reports Server (NTRS)

    Cockrell, B. F.; Williamson, B.

    1978-01-01

    Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project.

  19. The cosmological constant problem

    SciTech Connect

    Dolgov, A.D.

    1989-05-01

    A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.

  20. Constant potential pulse polarography

    USGS Publications Warehouse

    Christie, J.H.; Jackson, L.L.; Osteryoung, R.A.

    1976-01-01

    The new technique of constant potential pulse polarography, in which all pulses are made to the same potential, is presented theoretically and evaluated experimentally. The response obtained is in the form of a faradaic current wave superimposed on a constant capacitative component. Results obtained with a computer-controlled system exhibit a capillary response current similar to that observed in normal pulse polarography. Calibration curves for Pb obtained using a modified commercial pulse polarographic instrument are in good accord with theoretical predictions.

  1. Development of Monopole Interaction Models for Ionic Compounds. Part I: Estimation of Aqueous Henry’s Law Constants for Ions and Gas Phase pKa Values for Acidic Compounds

    EPA Science Inventory

    The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate Henry’s Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aq...

  2. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
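
    As a taste of the random-sampling fundamentals such a course covers, the canonical first exercise estimates a quantity from uniform draws, with the statistical error shrinking as 1/√N. A minimal sketch (illustrative only, not taken from the lecture notes):

        import random

        def mc_estimate_pi(n_samples, seed=12345):
            """Estimate pi by sampling points uniformly in the unit square
            and counting the fraction landing inside the quarter circle."""
            rng = random.Random(seed)
            hits = sum(1 for _ in range(n_samples)
                       if rng.random()**2 + rng.random()**2 <= 1.0)
            return 4.0 * hits / n_samples

        for n in (10**3, 10**5, 10**7):
            print(f"N = {n:>8}: pi ~ {mc_estimate_pi(n):.4f}")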

  3. Arguing against fundamentality

    NASA Astrophysics Data System (ADS)

    McKenzie, Kerry

    This paper aims to open up discussion on the relationship between fundamentality and naturalism, and in particular on the question of whether fundamentality may be denied on naturalistic grounds. A historico-inductive argument for an anti-fundamentalist conclusion, prominent within the contemporary metaphysical literature, is examined; finding it wanting, an alternative 'internal' strategy is proposed. By means of an example from the history of modern physics - namely S-matrix theory - it is demonstrated that (1) this strategy can generate similar (though not identical) anti-fundamentalist conclusions on more defensible naturalistic grounds, and (2) that fundamentality questions can be empirical questions. Some implications and limitations of the proposed approach are discussed.

  4. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the
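
    The link between the electrical quantum standards and h mentioned here is compact: the Josephson and von Klitzing constants combine so that electrical power measured against them is proportional to h (the standard relations, summarized):

        \[ K_J = \frac{2e}{h}, \qquad R_K = \frac{h}{e^2} \quad\Longrightarrow\quad K_J^2 R_K = \frac{4}{h} \]

    which is why a watt balance, comparing mechanical to electrical power, yields h directly.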

  5. Fundamentals of fluid lubrication

    NASA Technical Reports Server (NTRS)

    Hamrock, Bernard J.

    1991-01-01

    The aim is to coordinate the topics of design, engineering dynamics, and fluid dynamics in order to aid researchers in the area of fluid film lubrication. The lubrication principles that are covered can serve as a basis for the engineering design of machine elements. The fundamentals of fluid film lubrication are presented clearly so that students that use the book will have confidence in their ability to apply these principles to a wide range of lubrication situations. Some guidance on applying these fundamentals to the solution of engineering problems is also provided.

  6. Fundamentals of fluid sealing

    NASA Technical Reports Server (NTRS)

    Zuk, J.

    1976-01-01

    The fundamentals of fluid sealing, including seal operating regimes, are discussed and the general fluid-flow equations for fluid sealing are developed. Seal performance parameters such as leakage and power loss are presented. Included in the discussion are the effects of geometry, surface deformations, rotation, and both laminar and turbulent flows. The concept of pressure balancing is presented, as are differences between liquid and gas sealing. Mechanisms of seal surface separation, fundamental friction and wear concepts applicable to seals, seal materials, and pressure-velocity (PV) criteria are discussed.

  7. Reading Is Fundamental, 1977.

    ERIC Educational Resources Information Center

    Smithsonian Institution, Washington, DC. National Reading Is Fundamental Program.

    Reading Is Fundamental (RIF) is a national, nonprofit organization designed to motivate children to read by making a wide variety of inexpensive books available to them and allowing the children to choose and keep books that interest them. This annual report for 1977 contains the following information on the RIF project: an account of the…

  8. Fundamentals of Chemical Processes.

    ERIC Educational Resources Information Center

    Moser, William R.

    1985-01-01

    Describes a course that provides students with a fundamental understanding of the chemical, catalytic, and engineering sciences related to the chemical reactions taking place in a variety of reactors of different configurations. Also describes the eight major lecture topics, course examinations, and term papers. The course schedule is included.…

  9. Unification of Fundamental Forces

    NASA Astrophysics Data System (ADS)

    Salam, Abdus; Taylor, Foreword by John C.

    2005-10-01

    Foreword John C. Taylor; 1. Unification of fundamental forces Abdus Salam; 2. History unfolding: an introduction to the two 1968 lectures by W. Heisenberg and P. A. M. Dirac Abdus Salam; 3. Theory, criticism, and a philosophy Werner Heisenberg; 4. Methods in theoretical physics Paul Adrien Maurice Dirac.

  10. Fundamentals of Diesel Engines.

    ERIC Educational Resources Information Center

    Marine Corps Inst., Washington, DC.

    This student guide, one of a series of correspondence training courses designed to improve the job performance of members of the Marine Corps, deals with the fundamentals of diesel engine mechanics. Addressed in the three individual units of the course are the following topics: basic principles of diesel mechanics; principles, mechanics, and…

  11. Homeschooling and Religious Fundamentalism

    ERIC Educational Resources Information Center

    Kunzman, Robert

    2010-01-01

    This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…

  12. Laser Fundamentals and Experiments.

    ERIC Educational Resources Information Center

    Van Pelt, W. F.; And Others

    As a result of work performed at the Southwestern Radiological Health Laboratory with respect to lasers, this manual was prepared in response to the increasing use of lasers in high schools and colleges. It is directed primarily toward the high school instructor who may use the text for a short course in laser fundamentals. The definition of the…

  13. Elastic constants of calcite

    USGS Publications Warehouse

    Peselnick, L.; Robie, R.A.

    1962-01-01

    The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc and 25 °C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11 = 13.7, C33 = 8.11, C44 = 3.50, C12 = 4.82, C13 = 5.68, and C14 = -2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
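
    As an illustrative aside (not part of the 1962 paper), the reported stiffnesses can be folded into a single Voigt-average bulk modulus using the standard formula for trigonal crystals, K_V = [2(C11 + C12) + C33 + 4*C13]/9:

        # Voigt-average bulk modulus of calcite from the reported stiffnesses.
        # Trigonal/hexagonal formula: K_V = (2*(C11 + C12) + C33 + 4*C13) / 9.
        # Units: 10^11 dyn/cm^2 = 10 GPa.
        C11, C33, C44, C12, C13, C14 = 13.7, 8.11, 3.50, 4.82, 5.68, -2.00
        K_V = (2.0 * (C11 + C12) + C33 + 4.0 * C13) / 9.0
        print(f"K_V ~ {K_V:.2f} x 10^11 dyn/cm^2 (~{10.0 * K_V:.0f} GPa)")  # ~7.54 -> ~75 GPa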

  14. A Legal Constant

    ERIC Educational Resources Information Center

    Taylor, Kelley R.

    2009-01-01

    The 21st century has brought many technological, social, and economic changes--nearly all of which have affected schools and the students, administrators, and faculty members who are in them. Luckily, as some things change, other things remain the same. Such is true with the fundamental legal principles that guide school administrators' actions…

  15. Rotor-Liquid-Fundament System's Oscillation

    NASA Astrophysics Data System (ADS)

    Kydyrbekuly, A.

    This work investigates the oscillations and the stability of stationary rotation of a vertical, flexible, statically and dynamically unbalanced rotor with a cavity partly filled with liquid, mounted on an elastically supported foundation. Accounting for factors such as oscillation of the foundation, oscillation of the liquid, asymmetry in the mounting of the rotor on the shaft, anisotropy of the shaft supports and of the foundation, static and dynamic unbalance of the rotor, external friction, and internal friction in the shaft allows the kinematic and dynamic characteristics of the system to be computed more precisely.

  16. On the role of the Avogadro constant in redefining SI units for mass and amount of substance

    NASA Astrophysics Data System (ADS)

    Leonard, B. P.

    2007-02-01

    There is a common misconception that the Avogadro constant is one of the fundamental constants of nature, in the same category as the speed of light, the Planck constant and the invariant masses of atomic-scale particles. Although the absolute mass of any specified atomic-scale entity is an invariant universal constant of nature, the Avogadro constant relating this to a macroscopic quantity is not. Rather, it is a man-made construct, designed by convention to define a convenient unit relating the atomic and macroscopic scales. The misportrayal seems to stem from the widespread use of the term 'fixed-Avogadro-constant' for describing a redefinition of the kilogram that is, in fact, based on a fixed atomic-scale particle mass. This paper endeavours to clarify the role of the Avogadro constant in current definitions of SI units for mass and amount of substance as well as recently proposed redefinitions of these units—in particular, those based on fixing the numerical values of the Planck and Avogadro constants, respectively. Precise definitions lead naturally to a rational, straightforward and intuitively obvious construction of appropriate (exactly defined) atomic-scale units for these quantities. And this, in turn, suggests a direct and easily comprehended two-part statement of the fixed-Planck-constant kilogram definition involving a well-understood and physically meaningful de Broglie-Compton frequency.
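
    A brief numerical aside (mine, not the paper's): the de Broglie-Compton frequency invoked above is nu = m*c^2/h, which for one kilogram is an enormous number; this is why the fixed-Planck-constant definition is stated through frequency metrology rather than by counting oscillations directly.

        # de Broglie-Compton frequency of a 1 kg mass: nu = m * c^2 / h.
        c = 299_792_458.0    # speed of light, m/s (exact)
        h = 6.62606957e-34   # Planck constant, J s (CODATA 2010)
        m = 1.0              # mass, kg
        nu = m * c**2 / h
        print(f"nu ~ {nu:.3e} Hz")   # ~1.356e+50 Hz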

  17. Measuring Boltzmann's Constant with Carbon Dioxide

    ERIC Educational Resources Information Center

    Ivanov, Dragia; Nikolov, Stefan

    2013-01-01

    In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…
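
    The paper's experimental details are not reproduced here, but as a reminder of where Boltzmann's constant sits among the other constants, it is the gas constant per particle, k_B = R/N_A, a relation that any gas-based determination ultimately exploits:

        # Boltzmann's constant as the gas constant per particle: k_B = R / N_A.
        R = 8.3144621        # molar gas constant, J/(mol K) (CODATA 2010)
        N_A = 6.02214129e23  # Avogadro constant, 1/mol (CODATA 2010)
        k_B = R / N_A
        print(f"k_B ~ {k_B:.4e} J/K")   # ~1.3806e-23 J/K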

  18. Fundamentals of Polarized Light

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael

    2003-01-01

    The analytical and numerical basis for describing scattering properties of media composed of small discrete particles is formed by the classical electromagnetic theory. Although there are several excellent textbooks outlining the fundamentals of this theory, it is convenient for our purposes to begin with a summary of those concepts and equations that are central to the subject of this book and will be used extensively in the following chapters. We start by formulating Maxwell's equations and constitutive relations for time- harmonic macroscopic electromagnetic fields and derive the simplest plane-wave solution that underlies the basic optical idea of a monochromatic parallel beam of light. This solution naturally leads to the introduction of such fundamental quantities as the refractive index and the Stokes parameters. Finally, we define the concept of a quasi-monochromatic beam of light and discuss its implications.
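
    As a sketch of the fundamental quantities mentioned above (using one common sign convention; conventions for the V parameter differ between texts), the Stokes parameters of a monochromatic plane wave follow directly from the two complex transverse field amplitudes:

        # Stokes parameters from complex field amplitudes (one common convention).
        def stokes(Ex: complex, Ey: complex):
            I = abs(Ex)**2 + abs(Ey)**2            # total intensity
            Q = abs(Ex)**2 - abs(Ey)**2            # linear polarization along x/y
            U = 2.0 * (Ex * Ey.conjugate()).real   # linear polarization at +/-45 deg
            V = -2.0 * (Ex * Ey.conjugate()).imag  # circular polarization
            return I, Q, U, V

        # Circular polarization: Ey is 90 degrees out of phase with Ex.
        print(stokes(1.0, 1j))   # -> (2.0, 0.0, 0.0, 2.0)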

  19. Compassion is a constant.

    PubMed

    Scott, Tricia

    2015-11-01

    Compassion is a powerful word that describes an intense feeling of commiseration and a desire to help those struck by misfortune. Most people know intuitively how and when to offer compassion to relieve another person's suffering. In health care, compassion is a constant; it cannot be rationed because emergency nurses have limited time or resources to manage increasing demands.

  20. XrayOpticsConstants

    2005-06-20

    This application (XrayOpticsConstants) is a tool for displaying X-ray and Optical properties for a given material, x-ray photon energy, and in the case of a gas, pressure. The display includes fields such as the photo-electric absorption attenuation length, density, material composition, index of refraction, and emission properties (for scintillator materials).

  2. Fundamental studies in geodynamics

    NASA Technical Reports Server (NTRS)

    Anderson, D. L.; Hager, B. H.; Kanamori, H.

    1981-01-01

    Research in fundamental studies in geodynamics continued in a number of fields, including seismic observations and analysis, synthesis of geochemical data, theoretical investigation of geoid anomalies, extensive numerical experiments in a number of geodynamical contexts, and a new field, seismic volcanology. Summaries of work in progress or completed during this report period are given. Abstracts of publications submitted from work in progress during this report period are attached as an appendix.

  3. Value of Fundamental Science

    NASA Astrophysics Data System (ADS)

    Burov, Alexey

    Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of the fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by the technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the ``mystery of the universe''. Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?

  4. Rare Isotopes and Fundamental Symmetries

    NASA Astrophysics Data System (ADS)

    Brown, B. Alex; Engel, Jonathan; Haxton, Wick; Ramsey-Musolf, Michael; Romalis, Michael; Savard, Guy

    2009-01-01

    Experiments searching for new interactions in nuclear beta decay / Klaus P. Jungmann -- The beta-neutrino correlation in sodium-21 and other nuclei / P. A. Vetter ... [et al.] -- Nuclear structure and fundamental symmetries / B. Alex Brown -- Schiff moments and nuclear structure / J. Engel -- Superallowed nuclear beta decay: recent results and their impact on V[symbol] / J. C. Hardy and I. S. Towner -- New calculation of the isospin-symmetry breaking correction to superallowed Fermi beta decay / I. S. Towner and J. C. Hardy -- Precise measurement of the [symbol]H to [symbol]He mass difference / D. E. Pinegar ... [et al.] -- Limits on scalar currents from the 0+ to 0+ decay of [symbol]Ar and isospin breaking in [symbol]Cl and [symbol]Cl / A. Garcia -- Nuclear constraints on the weak nucleon-nucleon interaction / W. C. Haxton -- Atomic PNC theory: current status and future prospects / M. S. Safronova -- Parity-violating nucleon-nucleon interactions: what can we learn from nuclear anapole moments? / B. Desplanques -- Proposed experiment for the measurement of the anapole moment in francium / A. Perez Galvan ... [et al.] -- The Radon-EDM experiment / Tim Chupp for the Radon-EDM collaboration -- The lead radius experiment (PREX) and parity violating measurements of neutron densities / C. J. Horowitz -- Nuclear structure aspects of Schiff moment and search for collective enhancements / Naftali Auerbach and Vladimir Zelevinsky -- The interpretation of atomic electric dipole moments: Schiff theorem and its corrections / C. -P. Liu -- T-violation and the search for a permanent electric dipole moment of the mercury atom / M. D. Swallows ... [et al.] -- The new concept for FRIB and its potential for fundamental interactions studies / Guy Savard -- Collinear laser spectroscopy and polarized exotic nuclei at NSCL / K. Minamisono -- Environmental dependence of masses and coupling constants / M. Pospelov.

  5. Can compactifications solve the cosmological constant problem?

    NASA Astrophysics Data System (ADS)

    Hertzberg, Mark P.; Masoumi, Ali

    2016-06-01

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ = 0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  6. Fundamental Physics in Space: the French Contribution

    NASA Astrophysics Data System (ADS)

    Leon-Hirtz, S.

    2002-01-01

    Relativity and quantum physics provide the framework for contemporary physics, in which the relations between matter, space and time have been radically rethought during the past century. Physicists, however, cannot be satisfied with these two distinct theories, and they are seeking to unify them and thereby quantify the gravitational field. The key to this research lies in the highly precise study of the gravitational laws. The space environment, allowing large-distance experiments and isolation from terrestrial noise, is the ideal place for carrying out very precise experiments on gravitation and is highly suitable for seeking new interactions that could show up in low-energy conditions. Since 1993, when the scientific community gave its first recommendations, CNES has been working with French research laboratories on a variety of advanced technical instrumentation needed to fulfill such space experiments, especially in the fields of electrostatic microaccelerometers, cold-atom clocks and cold-atom inertial sensors, optical time-tagging, optical interferometry and drag-free control. A number of Fundamental Physics projects are now in progress, within the frame of the national programme and the participation in the ESA programme, such as: the MICROSCOPE microsatellite project, aimed at testing the Equivalence Principle between inertial mass and gravitational mass at a high level of precision, which is the fourth CNES scientific project based on the MYRIADE microsatellite series; the PHARAO cold-atom clock, which is the heart of the European ACES (Atomic Clock Ensemble in Space) project located on an external pallet of the International Space Station, together with a Swiss hydrogen maser and a microwave link for comparison with ground clocks, aimed at relativistic tests and measurement of universal constants; the T2L2 optical link allowing comparison of ultra-stable and ultra-precise clocks; and a contribution to the AMS spectrometer aimed at the search for cosmic antimatter, on

  7. Varying constants quantum cosmology

    SciTech Connect

    Leszczyńska, Katarzyna; Balcerzak, Adam; Dabrowski, Mariusz P. E-mail: abalcerz@wmf.univ.szczecin.pl

    2015-02-01

    We discuss minisuperspace models within the framework of varying physical constants theories including a Λ-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using the specific ansätze for the variability of the constants: c(a) = c_0 a^n and G(a) = G_0 a^q. We find that most of the varying-c and varying-G minisuperspace potentials are of the tunneling type, which allows the WKB approximation of quantum mechanics to be used. Using this method we show that the probability of tunneling of the universe ''from nothing'' (a = 0) to a Friedmann geometry with the scale factor a_t is large for growing-c models and is strongly suppressed for diminishing-c models. As for varying G, the probability of tunneling is large for diminishing G, while it is small for increasing G. In general, both varying c and G change the probability of tunneling in comparison to universe models with the standard matter content (cosmological term, dust, radiation).
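
    A generic illustration of the WKB tunneling estimate mentioned above (a one-dimensional toy barrier with hbar = m = 1, not the authors' minisuperspace potential): the suppression factor is P ≈ exp(-2 ∫ sqrt(2(V - E)) dx) over the classically forbidden region.

        import math

        # Toy WKB tunneling factor, hbar = m = 1; the parabolic barrier below is
        # a stand-in, NOT the varying-c/varying-G minisuperspace potential.
        def wkb_suppression(V, E, a, b, n=10_000):
            dx = (b - a) / n
            action = 0.0
            for i in range(n):
                x = a + (i + 0.5) * dx
                gap = V(x) - E
                if gap > 0.0:            # only the classically forbidden region counts
                    action += math.sqrt(2.0 * gap) * dx
            return math.exp(-2.0 * action)

        P = wkb_suppression(lambda x: 5.0 * (1.0 - x * x), E=1.0, a=-1.0, b=1.0)
        print(f"tunneling factor ~ {P:.3e}")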

  8. Fundamental "Uncertainty" in Science

    NASA Astrophysics Data System (ADS)

    Reichl, Linda E.

    The conference on "Uncertainty and Surprise" was concerned with our fundamental inability to predict future events. How can we restructure organizations to function effectively in an uncertain environment? One concern is that many large complex organizations are built on mechanical models, but mechanical models cannot always respond well to "surprises." An underlying assumption about mechanical models is that, if we give them enough information about the world, they will know the future accurately enough that there will be few or no surprises. The assumption is that the future is basically predictable and deterministic.

  9. Fundamental experiments in velocimetry

    SciTech Connect

    Briggs, Matthew Ellsworth; Hull, Larry; Shinas, Michael

    2009-01-01

    One can understand what velocimetry does and does not measure by understanding a few fundamental experiments. Photon Doppler Velocimetry (PDV) is an interferometer that will produce fringe shifts when the length of one of the legs changes, so we might expect the fringes to change whenever the distance from the probe to the target changes. However, by making PDV measurements of tilted moving surfaces, we have shown that fringe shifts from diffuse surfaces are actually measured only from the changes caused by the component of velocity along the beam. This is an important simplification in the interpretation of PDV results, arising because surface roughness randomizes the scattered phases.
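
    A minimal sketch of the kinematics this implies (the standard PDV beat-frequency relation, not code from the experiments; the wavelength and frequency below are assumed example values): a surface moving with velocity component v along the beam produces a beat frequency f = 2v/λ.

        # PDV: beat frequency f = 2 * v_parallel / wavelength, so v = f * wavelength / 2.
        wavelength = 1550e-9      # typical telecom-laser wavelength, m (assumed)
        f_beat = 1.0e9            # example measured beat frequency, Hz
        v = f_beat * wavelength / 2.0
        print(f"v ~ {v:.0f} m/s along the beam")   # ~775 m/s

        # For a tilted surface moving with speed s at angle theta to the beam,
        # only the line-of-sight component contributes: v = s * cos(theta).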

  10. Frontiers of Fundamental Physics 14

    NASA Astrophysics Data System (ADS)

    The 14th annual international symposium "Frontiers of Fundamental Physics" (FFP14) was organized by the OCEVU Labex. It was held in Marseille, on the Saint-Charles Campus of Aix Marseille University (AMU), and had over 280 participants coming from all over the world. The FFP Symposium began in India in 1997 and became itinerant in 2004, moving through Europe, Canada and Australia. It covers topics in fundamental physics with the objective of enabling scholars working in related areas to meet on a single platform and exchange ideas. In addition to highlighting the progress in these areas, the symposium invites the top researchers to reflect on the educational aspects of our discipline. Moreover, the scientific concepts are also discussed through philosophical and epistemological viewpoints. Several eminent scientists, such as the laureates of prestigious awards (Nobel Prize, Fields Medal, …), have already participated in these meetings. The FFP14 Symposium developed around seven main themes, namely: Astroparticle Physics, Cosmology, High Energy Physics, Quantum Gravity, Mathematical Physics, Physics Education, and Epistemology and Philosophy. The morning was devoted to the plenary session, with talks for a broad audience of physicists in its first half (9:00-10:30) and more specialized talks in its second half (11:00-12:30); this part was held in three amphitheaters. The parallel sessions of the Symposium took place during the afternoon (14:30-18:30), with seven thematic conferences and an additional conference on open topics named "Frontiers of Fundamental Physics". These eight conferences were organized around the contributions of participants, in addition to those of invited speakers. Altogether, there were some 250 contributions to the symposium (talks and posters). The plenary talks were webcast live and recorded. The slides of the talks and the videos of the plenary talks are available from the Symposium web site: http://ffp14.cpt.univ-mrs.fr/

  11. Testing Our Fundamental Assumptions

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics: an intrinsic delay (the photons may simply have been emitted at two different times by the astrophysical source); a delay due to Lorentz invariance violation (perhaps the assumption that all massless particles, even two photons with different energies, move at the exact same velocity in a vacuum is incorrect); a special-relativistic delay (maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong; this, too, would cause photon velocities to be energy-dependent); or a delay due to the gravitational potential (perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies; this would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect). If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these
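
    A rough order-of-magnitude sketch of the Lorentz-invariance-violation case (my simplification; real analyses integrate properly over cosmological redshift): for a modification linear in energy with scale E_QG, the extra delay is approximately dt ≈ (dE/E_QG)(D/c).

        # Linear-in-energy LIV delay estimate: dt ~ (dE / E_QG) * (D / c).
        SECONDS_PER_YEAR = 3.156e7
        dE = 10.0          # photon energy difference, GeV (example)
        E_QG = 1.22e19     # Planck energy, GeV (assumed quantum-gravity scale)
        D_ly = 1.0e9       # source distance, light-years (example)
        dt = (dE / E_QG) * D_ly * SECONDS_PER_YEAR   # D/c expressed in seconds
        print(f"dt ~ {dt:.3f} s")   # ~0.026 s over a billion light-years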

  12. Fundamentals of electrokinetics

    NASA Astrophysics Data System (ADS)

    Kozak, M. W.

    The study of electrokinetics is a very mature field. Experimental studies date from the early 1800s, and acceptable theoretical analyses have existed since the early 1900s. The use of electrokinetics in practical field problems is more recent, but it is still quite mature. Most developments in the fundamental understanding of electrokinetics are in the colloid science literature. A significant and increasing divergence between the theoretical understanding of electrokinetics found in the colloid science literature and the theoretical analyses used in interpreting applied experimental studies in soil science and waste remediation has developed. The soil science literature has to date restricted itself to the use of very early theories, with their associated limitations. The purpose of this contribution is to review fundamental aspects of electrokinetic phenomena from a colloid science viewpoint. It is hoped that a bridge can be built between the two branches of the literature, from which both will benefit. Attention is paid to special topics such as the effects of overlapping double layers, applications in unsaturated soils, the influence of dispersivity, and the differences between electrokinetic theory and conductivity theory.
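
    As a concrete anchor for the classical theory referred to (a sketch of the Helmholtz-Smoluchowski slip velocity, one of the early results in this literature; all parameter values below are illustrative): u = -ε ζ E / μ.

        # Helmholtz-Smoluchowski electroosmotic velocity: u = -eps * zeta * E / mu.
        eps0 = 8.854e-12      # vacuum permittivity, F/m
        eps = 78.5 * eps0     # permittivity of water near 25 C
        zeta = -0.050         # zeta potential, V (typical for silica)
        E = 1.0e4             # applied field, V/m (100 V/cm)
        mu = 1.0e-3           # viscosity of water, Pa s
        u = -eps * zeta * E / mu
        print(f"u ~ {u * 1e3:.2f} mm/s")   # ~0.35 mm/s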

  13. Biochemical Engineering Fundamentals

    ERIC Educational Resources Information Center

    Bailey, J. E.; Ollis, D. F.

    1976-01-01

    Discusses a biochemical engineering course that is offered as part of a chemical engineering curriculum and includes topics that influence the behavior of man-made or natural microbial or enzyme reactors. (MLH)

  14. Change is a Constant.

    PubMed

    Lubowitz, James H; Provencher, Matthew T; Brand, Jefferson C; Rossi, Michael J; Poehling, Gary G

    2015-06-01

    In 2015, Henry P. Hackett, Managing Editor, Arthroscopy, retires, and Edward A. Goss, Executive Director, Arthroscopy Association of North America (AANA), retires. Association is a positive constant, in a time of change. With change comes a need for continuing education, research, and sharing of ideas. While the quality of education at AANA and ISAKOS is superior and most relevant, the unique reason to travel and meet is the opportunity to interact with innovative colleagues. Personal interaction best stimulates new ideas to improve patient care, research, and teaching. Through our network, we best create innovation.

  15. Royal Society, Discussion on the Constants of Physics, London, England, May 25, 26, 1983, Proceedings

    NASA Astrophysics Data System (ADS)

    1983-12-01

    Various topics dealing with the constants of physics are addressed. The subjects considered include: measurement of the fundamental constants; the search for proton decay; the constancy of G; limits on the variability of coupling constants from the Oklo natural reactor; implications of quasar spectroscopy for constancy of constants; theoretical prospects for understanding the values of fundamental constants; the strong, electromagnetic, and weak couplings; and field theories without fundamental gauge symmetries. Also discussed are: Einstein gravitation as a long-wavelength effective field theory; unification and supersymmetry; phase transitions in the early universe; the cosmological constant; large numbers and ratios in astrophysics and cosmology; dependence of macrophysical phenomena on the values of the fundamental constants; dimensionality; and the anthropic principle and its implications for biological evolution.

  16. Voltammetry as a virtual potentiometric sensor in modelling of a metal-ligand system and refinement of stability constants. Part 4. An electrochemical study of NiII complexes with methylene diphosphonic acid.

    PubMed

    Cukrowski, Ignacy; Mogano, Daniel M; Zeevaart, Jan Rijn

    2005-12-01

    The Ni(II)-MDP-OH system (MDP = methylene diphosphonic acid) and the stability constants of the complexes formed at ionic strength 0.15 M at 298 K were established by direct current polarography (DCP) and glass electrode potentiometry (GEP). The final M-L-OH model could only be arrived at by employing the recent concept of virtual potentiometry (VP). VP data were generated from the non-equilibrium and dynamic DC polarographic technique. The VP and GEP data were refined simultaneously by software dedicated to potentiometric studies of metal complexes. Species distribution diagrams generated for the different experimental conditions employed in this work assisted in making the final choice of the metal-ligand model. The model established contains ML, ML2, ML(OH) and ML(OH)2, with stability constants, as log beta, of 7.94 ± 0.02, 13.75 ± 0.02, 12.04 (fixed value), and 16.75 ± 0.05, respectively. It has been demonstrated that virtual potential must be used in modelling operations (predictions of species formed) when a polarographic signal decreases significantly due to the formation of polarographically inactive species (or the formation of inert complexes). The linear free energy relationships that included the stability constant log K1 for Ni(II)-MDP established in this work, together with other available data, were used to predict log K1 values for Sm(III) and Ho(III) with MDP. The log K1 values for Sm(III)-MDP and Ho(III)-MDP were estimated to be 9.65 ± 0.10 and 9.85 ± 0.10, respectively. PMID:16213588
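
    A small numerical footnote (mine, not the paper's): because the reported values are cumulative constants log beta_n, the stepwise constants follow by subtraction, log K_n = log beta_n - log beta_(n-1).

        # Stepwise formation constants from the cumulative log(beta) values above.
        log_beta = {"ML": 7.94, "ML2": 13.75, "ML(OH)": 12.04, "ML(OH)2": 16.75}
        log_K1 = log_beta["ML"]                    # M + L -> ML
        log_K2 = log_beta["ML2"] - log_beta["ML"]  # ML + L -> ML2
        print(f"log K1 = {log_K1:.2f}, log K2 = {log_K2:.2f}")   # 7.94, 5.81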

  17. Fundamentals of Geophysics

    NASA Astrophysics Data System (ADS)

    Lowrie, William

    1997-10-01

    This unique textbook presents a comprehensive overview of the fundamental principles of geophysics. Unlike most geophysics textbooks, it combines both the applied and theoretical aspects to the subject. The author explains complex geophysical concepts using abundant diagrams, a simplified mathematical treatment, and easy-to-follow equations. After placing the Earth in the context of the solar system, he describes each major branch of geophysics: gravitation, seismology, dating, thermal and electrical properties, geomagnetism, paleomagnetism and geodynamics. Each chapter begins with a summary of the basic physical principles, and a brief account of each topic's historical evolution. The book will satisfy the needs of intermediate-level earth science students from a variety of backgrounds, while at the same time preparing geophysics majors for continued study at a higher level.

  18. Fundamentals in Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Basdevant, Jean-Louis; Rich, James; Spiro, Michael

    This course on nuclear physics leads the reader to the exploration of the field from nuclei to astrophysical issues. Much nuclear phenomenology can be understood from simple arguments such as those based on the Pauli principle and the Coulomb barrier. This book is concerned with extrapolating from such arguments and illustrating nuclear systematics with experimental data. Starting with the basic concepts in nuclear physics, nuclear models, and reactions, the book covers nuclear decays and the fundamental electro-weak interactions, radioactivity, and nuclear energy. After the discussions of fission and fusion leading into nuclear astrophysics, there is a presentation of the latest ideas about cosmology. As a primer this course will lay the foundations for more specialized subjects. This book emerged from a series of topical courses the authors delivered at the Ecole Polytechnique and will be useful for graduate students and for scientists in a variety of fields.

  19. Fundamentals of zoological scaling

    NASA Astrophysics Data System (ADS)

    Lin, Herbert

    1982-01-01

    Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.
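
    A short sketch in the spirit of the paper, using textbook allometric exponents (these rule-of-thumb values are standard in the scaling literature, not taken from the article itself): heart rate scales roughly as M^(-1/4) with body mass M.

        # Textbook allometric scaling: heart rate ~ M^(-1/4).
        def scaled_heart_rate(rate_ref, M_ref, M, exponent=-0.25):
            """Scale a reference heart rate from mass M_ref to mass M."""
            return rate_ref * (M / M_ref) ** exponent

        # Rough reference point: a 70 kg human at ~70 beats/min.
        for animal, M in [("mouse", 0.03), ("human", 70.0), ("elephant", 3000.0)]:
            print(f"{animal:8s} ~{scaled_heart_rate(70.0, 70.0, M):5.0f} beats/min")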

  1. Fundamentals of Plasma Physics

    NASA Astrophysics Data System (ADS)

    Bellan, Paul M.

    2008-07-01

    Preface; 1. Basic concepts; 2. The Vlasov, two-fluid, and MHD models of plasma dynamics; 3. Motion of a single plasma particle; 4. Elementary plasma waves; 5. Streaming instabilities and the Landau problem; 6. Cold plasma waves in a magnetized plasma; 7. Waves in inhomogeneous plasmas and wave energy relations; 8. Vlasov theory of warm electrostatic waves in a magnetized plasma; 9. MHD equilibria; 10. Stability of static MHD equilibria; 11. Magnetic helicity interpreted and Woltjer-Taylor relaxation; 12. Magnetic reconnection; 13. Fokker-Planck theory of collisions; 14. Wave-particle nonlinearities; 15. Wave-wave nonlinearities; 16. Non-neutral plasmas; 17. Dusty plasmas; Appendix A. Intuitive method for vector calculus identities; Appendix B. Vector calculus in orthogonal curvilinear coordinates; Appendix C. Frequently used physical constants and formulae; Bibliography; References; Index.

  2. Measurement of the solar constant

    NASA Technical Reports Server (NTRS)

    Crommelynck, D.

    1981-01-01

    The absolute value of the solar constant and its long-term variations were measured. The solar constant is the total irradiance of the Sun at a distance of one astronomical unit. An absolute radiometer, removed from the effects of the atmosphere and with its calibration tested in situ, was used to measure the solar constant. The importance of an accurate knowledge of the solar constant is emphasized.
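
    As a hedged numerical aside (standard modern values, not the paper's data): the solar constant is the solar luminosity spread over a sphere of radius one astronomical unit, S = L / (4 π d^2).

        import math

        # Solar constant from luminosity: S = L / (4 * pi * d^2).
        L_sun = 3.828e26    # nominal solar luminosity, W
        AU = 1.496e11       # astronomical unit, m
        S = L_sun / (4.0 * math.pi * AU**2)
        print(f"S ~ {S:.0f} W/m^2")   # ~1361 W/m^2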

  3. The Hubble constant.

    PubMed

    Tully, R B

    1993-06-01

    Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that the Hubble constant H0 = 90 ± 10 km s^-1 Mpc^-1 [1 parsec (pc) = 3.09 × 10^16 m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391

  4. When constants are important

    SciTech Connect

    Beiu, V.

    1997-04-01

    In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but few things are known as to the links among them. They start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e. precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff.

  6. Unitaxial constant velocity microactuator

    DOEpatents

    McIntyre, Timothy J.

    1994-01-01

    A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling, causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven; hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment.

  7. Unitaxial constant velocity microactuator

    DOEpatents

    McIntyre, T.J.

    1994-06-07

    A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment is disclosed. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 10 figs.

  8. Constant attitude orbit transfer

    NASA Astrophysics Data System (ADS)

    Cress, Peter; Evans, Michael

    A two-impulse orbital transfer technique is described in which the spacecraft attitude remains constant for both burns, eliminating the need for attitude maneuvers between the burns. This can lead to significant savings in vehicle weight, cost and complexity. Analysis is provided for a restricted class of applications of this transfer between circular orbits. For those transfers with a plane change less than 30 deg, the total velocity cost of the maneuver is less than twelve percent greater than that of an optimum plane split Hohmann transfer. While this maneuver does not minimize velocity requirement, it does provide a means of achieving necessary transfer while substantially reducing the cost and complexity of the spacecraft.
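
    A sketch of the benchmark the abstract compares against (the optimum plane-split Hohmann transfer; my illustrative implementation with a simple scan over the split angle, not the constant-attitude maneuver itself):

        import math

        MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2 (assumed central body)

        def plane_split_hohmann(r1, r2, alpha, steps=1000):
            """Minimum total delta-v for a Hohmann transfer between circular orbits
            of radii r1, r2 with a total plane change alpha split between burns."""
            v1 = math.sqrt(MU / r1)                          # initial circular speed
            v2 = math.sqrt(MU / r2)                          # final circular speed
            vt1 = math.sqrt(2 * MU * r2 / (r1 * (r1 + r2)))  # transfer-orbit speed at r1
            vt2 = math.sqrt(2 * MU * r1 / (r2 * (r1 + r2)))  # transfer-orbit speed at r2
            best = float("inf")
            for i in range(steps + 1):                       # scan how alpha is split
                a1 = alpha * i / steps
                a2 = alpha - a1
                dv1 = math.sqrt(v1**2 + vt1**2 - 2 * v1 * vt1 * math.cos(a1))
                dv2 = math.sqrt(v2**2 + vt2**2 - 2 * v2 * vt2 * math.cos(a2))
                best = min(best, dv1 + dv2)
            return best

        # Example: LEO (7000 km radius) to GEO with a 28.5 degree plane change.
        dv = plane_split_hohmann(7.0e6, 42.164e6, math.radians(28.5))
        print(f"total delta-v ~ {dv:.0f} m/s")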

  9. A Constant Pressure Bomb

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1924-01-01

    This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship, pv=rt for gaseous equilibrium conditions, to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.

  10. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  11. Fundamentals of Atmospheric Radiation

    NASA Astrophysics Data System (ADS)

    Bohren, Craig F.; Clothiaux, Eugene E.

    2006-02-01

    This textbook fills a gap in the literature for teaching material suitable for students of atmospheric science and courses on atmospheric radiation. It covers the fundamentals of emission, absorption, and scattering of electromagnetic radiation from ultraviolet to infrared and beyond. Much of the book applies to planetary atmospheres. The authors are physicists and teach at the largest meteorology department of the US at Penn State. Craig F. Bohren has taught the atmospheric radiation course there for the past 20 years with no book. Eugene Clothiaux has taken over and added to the course notes. Problems given in the text come from students, colleagues, and correspondents. The figures were designed especially for this book to ease comprehension. Discussions take a graded approach, with a thorough treatment of subjects, such as single scattering by particles, at different levels of complexity. The discussion of multiple scattering theory begins with piles of plates. This simple theory introduces concepts used in more advanced theories, i.e. optical thickness, single-scattering albedo, and the asymmetry parameter. A more complicated theory, the two-stream theory, then takes the reader beyond the pile-of-plates theory. Ideal for advanced undergraduate and graduate students of atmospheric science.

  12. Fundamentals of the Control of Gas-Turbine Power Plants for Aircraft. Part 2; Principles of Control Common to Jet, Turbine-Propeller Jet, and Ducted-Fan Jet Power Plants

    NASA Technical Reports Server (NTRS)

    Kuehl, H.

    1947-01-01

    After defining the aims and requirements to be set for a control system of gas-turbine power plants for aircraft, the report will deal with devices that prevent the quantity of fuel supplied per unit of time from exceeding the value permissible at a given moment. The general principles of the actuation of the adjustable parts of the power plant are also discussed.

  13. Hydrogen molecular ions: new schemes for metrology and fundamental physics tests

    NASA Astrophysics Data System (ADS)

    Karr, Jean-Philippe; Patra, Sayan; Koelemeij, Jeroen C. J.; Heinrich, Johannes; Sillitoe, Nicolas; Douillet, Albane; Hilico, Laurent

    2016-06-01

    High-accuracy spectroscopy of hydrogen molecular ions has important applications for the metrology of fundamental constants and tests of fundamental theories. Up to now, the experimental resolution has not surpassed the part-per-billion range. We discuss two methods by which it could be improved by a huge factor. Firstly, the feasibility of Doppler-free quasidegenerate two-photon spectroscopy of trapped and sympathetically cooled ensembles of HD+ ions is discussed, and it is shown that rovibrational transitions may be detected with a good signal-to-noise ratio. Secondly, the performance of a molecular quantum-logic ion clock based on a single Be+-H2+ ion pair is analyzed in detail. Such a clock could allow testing the constancy of the proton-to-electron mass ratio at the 10^-17/yr level.
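
    A back-of-envelope sketch of what the quoted 10^-17/yr level implies (my illustration; the sensitivity coefficient is the approximate one for a purely vibrational transition, for which the frequency scales roughly as mu^(-1/2) in the proton-to-electron mass ratio mu):

        # Translating a clock drift bound into a bound on d(mu)/mu per year.
        # For a vibrational transition, f ~ mu**(-1/2), so K = dln(f)/dln(mu) ~ -0.5.
        K = -0.5                    # approximate sensitivity coefficient (assumed)
        drift_bound = 1.0e-17       # fractional frequency drift bound per year
        mu_bound = drift_bound / abs(K)
        print(f"|d(mu)/mu| < {mu_bound:.0e} per year")   # ~2e-17 / yr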

  14. Fundamental constraints on two-time physics

    NASA Astrophysics Data System (ADS)

    Piceno, E.; Rosado, A.; Sadurní, E.

    2016-10-01

    We show that generalizations of classical and quantum dynamics with two times lead to a fundamentally constrained evolution. At the level of classical physics, Newton's second law is extended and exactly integrated in a (1+2)-dimensional space, leading to effective single-time evolution for any initial condition. The cases 2+2 and 3+2 are also analyzed. In the domain of quantum mechanics, we follow strictly the hypothesis of probability conservation by extending the Heisenberg picture to unitary evolution with two times. As a result, the observability of two temporal axes is constrained by a generalized uncertainty relation involving level spacings, the total duration of the effect, and Planck's constant.

  15. TASI Lectures on the cosmological constant

    SciTech Connect

    Bousso, Raphael; Bousso, Raphael

    2007-08-30

    The energy density of the vacuum, Lambda, is at least 60 orders of magnitude smaller than several known contributions to it. Approaches to this problem are tightly constrained by data ranging from elementary observations to precision experiments. Absent overwhelming evidence to the contrary, dark energy can only be interpreted as vacuum energy, so the venerable assumption that Lambda=0 conflicts with observation. The possibility remains that Lambda is fundamentally variable, though constant over large spacetime regions. This can explain the observed value, but only in a theory satisfying a number of restrictive kinematic and dynamical conditions. String theory offers a concrete realization through its landscape of metastable vacua.

  16. Fundamentals of Space Medicine

    NASA Astrophysics Data System (ADS)

    Clément, Gilles

    2005-03-01

    A total of more than 240 human space flights have been completed to date, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This readable text presents the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardio-vascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination of both the

  17. Fundamentals of Space Medicine

    NASA Astrophysics Data System (ADS)

    Clément, G.

    2003-10-01

    As of today, a total of more than 240 human space flights have been completed, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This book presents in a readable text the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardiovascular, bone and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination

  18. Maximum Entropy Fundamentals

    NASA Astrophysics Data System (ADS)

    Harremoeës, P.; Topsøe, F.

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategy in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over the development of natural

  19. Astronomers Gain Clues About Fundamental Physics

    NASA Astrophysics Data System (ADS)

    2005-12-01

    An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. [Image: The Robert C. Byrd Green Bank Telescope. Credit: NRAO/AUI/NSF] "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions." said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO), in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever" said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as

  20. Beyond the Hubble Constant

    NASA Astrophysics Data System (ADS)

    1995-08-01

    about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is by measuring its redshift, calculate its velocity from the redshift and divide this by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). From the universal expansion rate, described by the Hubble constant (H0 = 20 km/sec per million lightyears as found by some studies), this velocity would indicate a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively ``nearby'' objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time and it is not excluded that the universal expansion rate, i.e. the Hubble constant, may have been different at earlier epochs. This means that unless we know the change of the Hubble constant with time, we cannot determine reliable distances of distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such change or lack of the same will provide unique information about the time elapsed since the Universe began to expand (the ``Big Bang''), that is, the age of the Universe and also its ultimate fate. The Deceleration Parameter q0 Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0). Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf. also the Appendix at the
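
    The simple calculation described above can be reproduced directly. The numbers below are those quoted in the text (H0 = 20 km/sec per million lightyears, v = 116,000 km/sec), and the caveat about more distant objects applies unchanged:

    ```python
    # Back-of-envelope Hubble-law distance for SN 1995K's parent galaxy,
    # using only the values quoted in the text. Note that for z = 0.478 the
    # quoted velocity already includes a relativistic correction; the naive
    # v = c*z would give ~143,000 km/sec instead.
    H0 = 20.0        # km/sec per million lightyears, as used in the text
    v = 116_000.0    # km/sec, from the measured redshift of 0.478

    distance_Mly = v / H0    # Hubble's law: v = H0 * d
    print(distance_Mly)      # -> 5800 million lightyears, as quoted
    ```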

  1. IEC is fundamental.

    PubMed

    1994-02-01

    China's information, education, and communication [IEC] effort, as part of the family planning program, has been successful in Shaoxing County, Ningbo-Shaoxing Plain, Zhejiang Province: total 1992 population of 942.8 thousand, birth rate 14/1000, rate of natural increase 7/1000, and planned births 99%. Population education has been an important component of IEC and focuses on population theory, population policy, and population regulation through family planning. Organized workshops are available at the county level for training village leaders. There is inservice training of new administrators of family planning programs. Media have been used to promote the benefits of family planning for the state and families. Many couples have received only-child certificates, and many couples with permission to have a second child have willingly forfeited that opportunity. Recognition has been given to 300 units that have advanced and created better social environments. There has been an integration of ideological education and popular contraceptive knowledge. Reproductive and contraceptive knowledge has been enhanced through many lectures and training classes on producing healthy children, premarital planning, and pregnancy care. The county broadcasting station gives lectures on family planning. There is a family planning guidance station that provides counseling and publicity on commonly used methods. Social welfare projects are vigorously developed, such as old age homes in villages and kindergartens. Insurance plans have been initiated. Good contraceptive service provision and prenatal care are jointly part of the family planning programs. New methods such as the IUD have been successfully introduced and accepted by the population. IUD users since 1987 now number 6,581.

  2. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function is relatively smaller than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope; thus the "relatively smaller" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected, and we integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  3. Fundamentals and Techniques of Nonimaging

    SciTech Connect

    O'Gallagher, J. J.; Winston, R.

    2003-07-10

    This is the final report describing a long term basic research program in nonimaging optics that has led to major advances in important areas, including solar energy, fiber optics, illumination techniques, light detectors, and a great many other applications. The term ''nonimaging optics'' refers to the optics of extended sources in systems for which image forming is not important, but effective and efficient collection, concentration, transport, and distribution of light energy is. Although some of the most widely known developments of the early concepts have been in the field of solar energy, a broad variety of other uses have emerged. Most important, under the auspices of this program in fundamental research in nonimaging optics established at the University of Chicago with support from the Office of Basic Energy Sciences at the Department of Energy, the field has become very dynamic, with new ideas and concepts continuing to develop, while applications of the early concepts continue to be pursued. While the subject began as part of classical geometrical optics, it has been extended subsequently to the wave optics domain. Particularly relevant to potential new research directions are recent developments in the formalism of statistical and wave optics, which may be important in understanding energy transport on the nanoscale. Nonimaging optics permits the design of optical systems that achieve the maximum possible concentration allowed by physical conservation laws. The earliest designs were constructed by optimizing the collection of the extreme rays from a source to the desired target: the so-called ''edge-ray'' principle. Later, new concentrator types were generated by placing reflectors along the flow lines of the ''vector flux'' emanating from lambertian emitters in various geometries. A few years ago, a new development occurred with the discovery that making the design edge-ray a functional of some other system parameter permits the construction of whole
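
    The "maximum possible concentration allowed by physical conservation laws" mentioned above is the standard etendue-conservation (sine-law) bound. A short sketch of these textbook relations follows; they are standard results, not taken from the report itself:

    ```python
    import math

    # Etendue-conservation ("sine law") limits for concentrators, the kind
    # of bound the report says ideal nonimaging designs can reach. Here
    # theta is the half-acceptance angle and n the refractive index of the
    # medium at the absorber.
    def c_max_2d(theta_deg, n=1.0):
        return n / math.sin(math.radians(theta_deg))

    def c_max_3d(theta_deg, n=1.0):
        return (n / math.sin(math.radians(theta_deg))) ** 2

    # Example: the ~0.27 degree half-angle subtended by the sun.
    print(c_max_3d(0.27))   # ~45,000x, the ideal solar concentration limit
    ```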

  4. Fundamentals of air pollution. Third edition

    SciTech Connect

    Boubel, R.W.; Fox, D.L.; Turner, D.B.; Stern, A.C.

    1994-12-31

    This book presents an overview of air pollution. In Part I, the history of air pollution and the basic concepts involved with air pollution such as sources, scales, and definitions are covered. Part II describes how airborne pollutants damage materials, vegetation, animals, and humans. Six fundamental aspects of air pollution are included in the text: The Elements of Air Pollution; The Effects of Air Pollution; Measurement and Monitoring of Air Pollution; Meteorology of Air Pollution; Regulatory Control of Air Pollution; and Engineering Control of Air Pollution.

  5. Fundamentals of ICF Hohlraums

    SciTech Connect

    Rosen, M D

    2005-09-30

    On the Nova Laser at LLNL, we demonstrated many of the key elements required for assuring that the next laser, the National Ignition Facility (NIF), will drive an Inertial Confinement Fusion (ICF) target to ignition. The indirect drive (sometimes referred to as ''radiation drive'') approach converts laser light to x-rays inside a gold cylinder, which then acts as an x-ray ''oven'' (called a hohlraum) to drive the fusion capsule in its center. On Nova we've demonstrated good understanding of the temperatures reached in hohlraums and of the ways to control the uniformity with which the x-rays drive the spherical fusion capsules. In these lectures we will be reviewing the physics of these laser heated hohlraums, recent attempts at optimizing their performance, and then return to the ICF problem in particular to discuss scaling of ICF gain with scale size, and to compare indirect vs. direct drive gains. In ICF, spherical capsules containing Deuterium and Tritium (DT)--the heavy isotopes of hydrogen--are imploded, creating conditions of high temperature and density similar to those in the cores of stars required for initiating the fusion reaction. When DT fuses, an alpha particle (the nucleus of a helium atom) and a neutron are created, releasing large amounts of energy. If the surrounding fuel is sufficiently dense, the alpha particles are stopped and can heat it, allowing a self-sustaining fusion burn to propagate radially outward and a high gain fusion micro-explosion ensues. To create those conditions the outer surface of the capsule is heated (either directly by a laser or indirectly by laser produced x-rays) to cause rapid ablation and outward expansion of the capsule material. A rocket-like reaction to that outward flowing heated material leads to an inward implosion of the remaining part of the capsule shell. The pressure generated on the outside of the capsule can reach nearly 100 megabar (100 million times atmospheric pressure [1 bar = 10^6 cgs

  6. Constant-Pressure Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Galloway, C. W.

    1982-01-01

    Constant output pressure in gas-driven hydraulic pump would be assured in new design for gas-to-hydraulic power converter. With a force-multiplying ring attached to gas piston, expanding gas would apply constant force on hydraulic piston even though gas pressure drops. As a result, pressure of hydraulic fluid remains steady, and power output of the pump does not vary.

  7. Precision measurement of the Newtonian gravitational constant using cold atoms.

    PubMed

    Rosi, G; Sorrentino, F; Cacciapuoti, L; Prevedelli, M; Tino, G M

    2014-06-26

    About 300 experiments have tried to determine the value of the Newtonian gravitational constant, G, so far, but large discrepancies in the results have made it impossible to know its value precisely. The weakness of the gravitational interaction and the impossibility of shielding the effects of gravity make it very difficult to measure G while keeping systematic effects under control. Most previous experiments performed were based on the torsion pendulum or torsion balance scheme as in the experiment by Cavendish in 1798, and in all cases macroscopic masses were used. Here we report the precise determination of G using laser-cooled atoms and quantum interferometry. We obtain the value G = 6.67191(99) × 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 150 parts per million (the combined standard uncertainty is given in parentheses). Our value differs by 1.5 combined standard deviations from the current recommended value of the Committee on Data for Science and Technology. A conceptually different experiment such as ours helps to identify the systematic errors that have proved elusive in previous experiments, thus improving the confidence in the value of G. There is no definitive relationship between G and the other fundamental constants, and there is no theoretical prediction for its value, against which to test experimental results. Improving the precision with which we know G has not only a pure metrological interest, but is also important because of the key role that G has in theories of gravitation, cosmology, particle physics and astrophysics and in geophysical models. PMID:24965653
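
    Both uncertainty statements in this record can be checked in a few lines. The 2010 CODATA recommended value used below, 6.67384(80) × 10^-11 m^3 kg^-1 s^-2, is supplied by us from memory and should be treated as an assumption:

    ```python
    import math

    # Sanity check of the "150 parts per million" and "1.5 combined
    # standard deviations" statements in the abstract.
    G,  uG  = 6.67191e-11, 0.00099e-11   # this work
    Gc, uGc = 6.67384e-11, 0.00080e-11   # CODATA 2010 value (assumed)

    print(uG / G * 1e6)                        # ~148 ppm -> "150 ppm"
    print(abs(G - Gc) / math.hypot(uG, uGc))   # ~1.5 combined std. deviations
    ```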

  8. Precision measurement of the Newtonian gravitational constant using cold atoms

    NASA Astrophysics Data System (ADS)

    Rosi, G.; Sorrentino, F.; Cacciapuoti, L.; Prevedelli, M.; Tino, G. M.

    2014-06-01

    About 300 experiments have tried to determine the value of the Newtonian gravitational constant, G, so far, but large discrepancies in the results have made it impossible to know its value precisely. The weakness of the gravitational interaction and the impossibility of shielding the effects of gravity make it very difficult to measure G while keeping systematic effects under control. Most previous experiments performed were based on the torsion pendulum or torsion balance scheme as in the experiment by Cavendish in 1798, and in all cases macroscopic masses were used. Here we report the precise determination of G using laser-cooled atoms and quantum interferometry. We obtain the value G = 6.67191(99) × 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 150 parts per million (the combined standard uncertainty is given in parentheses). Our value differs by 1.5 combined standard deviations from the current recommended value of the Committee on Data for Science and Technology. A conceptually different experiment such as ours helps to identify the systematic errors that have proved elusive in previous experiments, thus improving the confidence in the value of G. There is no definitive relationship between G and the other fundamental constants, and there is no theoretical prediction for its value, against which to test experimental results. Improving the precision with which we know G has not only a pure metrological interest, but is also important because of the key role that G has in theories of gravitation, cosmology, particle physics and astrophysics and in geophysical models.

  10. Fundamental mechanisms of micromachine reliability

    SciTech Connect

    DE BOER,MAARTEN P.; SNIEGOWSKI,JEFFRY J.; KNAPP,JAMES A.; REDMOND,JAMES M.; MICHALSKE,TERRY A.; MAYER,THOMAS K.

    2000-01-01

    Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is independent of

  11. Fundamental performance differences between CMOS and CCD imagers: Part II

    NASA Astrophysics Data System (ADS)

    Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff

    2007-09-01

    A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting - Fano-noise limited soft x-ray applications. Digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency and dark current generation. Radiation damage data taken for the imagers is also reported.
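
    As a rough illustration of the digital correlated double sampling mentioned above: the pixel is sampled repeatedly during reset and again after signal integration, and differencing the averages removes the frozen reset offset while averaging down read noise. A minimal sketch with an illustrative noise model, not the paper's actual signal chain:

    ```python
    import numpy as np

    # Digital CDS sketch: average N reset samples and N signal samples,
    # then difference. The kTC reset offset cancels exactly; uncorrelated
    # read noise is reduced by ~sqrt(N). All numbers are illustrative.
    rng = np.random.default_rng(0)
    n_samples = 16
    reset_level = 1000.0 + rng.normal(0, 2.0)   # frozen kTC offset for this read
    signal_e = 50.0                             # photo-generated signal

    reset_samples = reset_level + rng.normal(0, 1.0, n_samples)
    signal_samples = reset_level + signal_e + rng.normal(0, 1.0, n_samples)

    cds_value = signal_samples.mean() - reset_samples.mean()
    print(cds_value)   # ~50: offset cancels, read noise averages down
    ```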

  12. Fundamental ignition study for material fire safety improvement, part 2

    NASA Technical Reports Server (NTRS)

    Paciorek, K. L.; Kratzer, R. H.; Kaufman, J.

    1971-01-01

    The autoignition behavior of polymeric compositions in oxidizing media was investigated, as well as the nature and relative concentration of the volatiles produced during oxidative decomposition culminating in combustion. The materials investigated were Teflon, Fluorel KF-2140 raw gum and its compounded versions Refset and Ladicote, 45B3 intumescent paint, and Ames isocyanurate foam. The majority of the tests were conducted using a stagnation burner arrangement which provided a laminar gas flow and allowed the sample block and gas temperatures to be varied independently. The oxidizing atmospheres were essentially air and oxygen, although in the case of the Fluorel family of materials, due to partial blockage of the gas inlet system, some tests were performed unintentionally in enriched air (not oxygen). The 45B3 paint was not amenable to sampling in a dynamic system, due to its highly intumescent nature. Consequently, selected experiments were conducted using a sealed tube technique in both air and oxygen media.

  13. Prevalidation in pharmaceutical analysis. Part I. Fundamentals and critical discussion.

    PubMed

    Grdinić, Vladimir; Vuković, Jadranka

    2004-05-28

    A complete prevalidation, as a basic strategy for quality control and standardization of analytical procedures, was introduced. A fast and simple prevalidation methodology based on mathematical/statistical evaluation of a reduced number of experiments (N ≤ 24) was elaborated, and guidelines as well as algorithms are given in detail. This strategy has been produced for pharmaceutical applications and is dedicated to the preliminary evaluation of analytical methods where a linear calibration model, which very often occurs in practice, is the most appropriate to fit experimental data. The requirements presented in this paper should therefore help the analyst to design and perform the minimum number of prevalidation experiments needed to obtain all the required information to evaluate and demonstrate the reliability of the analytical procedure. The complete prevalidation process includes characterization of analytical groups, checking of two limiting groups, testing of data homogeneity, establishment of analytical functions, recognition of outliers, evaluation of limiting values, and extraction of prevalidation parameters. Moreover, a system of diagnosis for each prevalidation step is suggested. As an illustrative example demonstrating the feasibility of the prevalidation methodology, a Vis-spectrophotometric procedure for the determination of tannins with Folin-Ciocalteu's phenol reagent was selected from the great number of available analytical procedures. The favourable metrological characteristics of this analytical procedure, expressed as prevalidation figures of merit, confirm prevalidation as a valuable concept in the preliminary evaluation of the quality of analytical procedures.

  14. Fundamentals of Mathematics, Part 1. Extended Time Frame. Experimental Edition.

    ERIC Educational Resources Information Center

    Goldberg, Judy, Ed.

    This curriculum guide is an adaptation for students who need to proceed more slowly with new concepts and who also require additional reinforcement. The materials have been designed to assist the teacher in developing plans to be utilized in a variety of classroom settings. The guide can be used to develop both individual and group lessons. In…

  15. Fundamentals and applications of solar energy. Part 2

    NASA Astrophysics Data System (ADS)

    Faraq, I. H.; Melsheimer, S. S.

    Applications of techniques of chemical engineering to the development of materials, production methods, and performance optimization and evaluation of solar energy systems are discussed. Solar thermal storage systems using phase change materials, liquid phase Diels-Alder reactions, aquifers, and hydrocarbon oil were examined. Solar electric systems were explored in terms of a chlorophyll solar cell, the nonequilibrium electric field effects developed at photoelectrode/electrolyte interfaces, and designs for commercial scale processing of solar cells using continuous thin-film coating production methods. Solar coal gasification processes were considered, along with multilayer absorber coatings for solar concentrator receivers, solar thermal industrial applications, the kinetics of anaerobic digestion of crop residues to produce methane, and a procedure for developing a computer simulation of a solar cooling system.

  16. Constants and Variables of Nature

    SciTech Connect

    Sean Carroll

    2009-04-03

    It is conventional to imagine that the various parameters which characterize our physical theories, such as the fine structure constant or Newton’s gravitational constant, are truly “constant”, in the sense that they do not change from place to place or time to time. Recent developments in both theory and observation have led us to re-examine this assumption, and to take seriously the possibility that our supposed constants are actually gradually changing. I will discuss why we might expect these parameters to vary, and what observation and experiment have to say about the issue.

  17. Recommending a value for the Newtonian gravitational constant.

    PubMed

    Wood, Barry M

    2014-10-13

    The primary objective of the CODATA Task Group on Fundamental Constants is 'to periodically provide the scientific and technological communities with a self-consistent set of internationally recommended values of the basic constants and conversion factors of physics and chemistry based on all of the relevant data available at a given point in time'. I discuss why the availability of these recommended values is important and how it simplifies and improves science. I outline the process of determining the recommended values and introduce the principles that are used to deal with discrepant results. In particular, I discuss the specific challenges posed by the present situation of gravitational constant experimental results and how these principles were applied to the most recent 2010 recommended value. Finally, I speculate about what may be expected for the next recommended value of the gravitational constant scheduled for evaluation in 2014.

  18. Intensities of Fundamental and Overtone Vibrational Transitions

    NASA Astrophysics Data System (ADS)

    Kjaergaard, Henrik G.

    2012-06-01

    We have measured and calculated vibrational XH-stretching overtone spectra (where X = C, N, O, S, ...) for a range of molecules and hydrated complexes (e.g. the water dimer). Spectroscopic studies of such systems are difficult because vibrational overtone transitions have low intensities, species that exhibit intramolecular hydrogen bonding typically have low vapor pressures, and hydrated complexes have small equilibrium constants. The use of coupled cluster theory including perturbative triples, CCSD(T) or CCSD(T)-F12, as well as a large augmented basis, aug-cc-pVTZ or VDZ-F12, is necessary to obtain calculated vibrational spectra of near experimental accuracy. We explain the interesting intensity patterns in terms of an anharmonic oscillator local mode model. The intensity ratio of the fundamental to the first XH-stretching overtone covers a wide range. In the past decade, we have used this local mode model to explain observed spectra of both molecules and complexes. I will show recent results for amines and complexes with amines and will illustrate how the ratio of calculated to measured intensity can provide the room temperature equilibrium constant for formation of the binary complex, a quantity that is difficult to calculate accurately.
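
    The last point can be made concrete in outline, in our own notation; this is a hedged sketch of the kind of bookkeeping involved, not necessarily the speaker's exact procedure. Beer-Lambert with the calculated cross-section converts a measured band area into a number density of the complex, which then yields its partial pressure and the equilibrium constant:

    ```latex
    \[
      N_{AB} \;\approx\; \frac{\int A(\tilde{\nu})\,\mathrm{d}\tilde{\nu}}
                              {\sigma_{\mathrm{calc}}\,\ell},
      \qquad
      p_{AB} = N_{AB}\,k_B T,
      \qquad
      K_p \;=\; \frac{p_{AB}/p^{\circ}}{(p_A/p^{\circ})\,(p_B/p^{\circ})},
    \]
    % where A is the measured absorbance, sigma_calc the calculated
    % integrated cross-section, l the path length, and p° standard pressure.
    ```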

  19. [Relation between fundamental and realized ecological niche].

    PubMed

    Severtsov, A S

    2012-01-01

    Since species are formed in the course of the evolutionary process, their ecological niches are formed in the evolutionary process, too. Species exist in a state of evolutionary stasis during hundreds of thousands and even millions of years. Stasis is sustained mainly by a counterbalance of vectors of directional selection. A niche can be viewed as a multidimensional structure. A multitude of environmental factors acts upon every population, causing elimination and, by that, selection for adaptation to each eliminating factor. The different directions of these vectors of selection lead to their counteraction; selection in one direction is interfered with by selection in an opposite direction. The counterbalance of the vectors of selection interferes with progressive evolution, thus supporting stasis. During its existence in a state of stasis, a species endures a whole set of various deteriorations of the environment. Such deteriorations lead to an imbalance of selective processes. Unbalanced vectors of selection form adaptations to extreme conditions of existence. Such adaptations are superfluous under usual conditions, but they define the borders of fitness and, by that, the borders of the fundamental niche. The realized niche, like the fundamental one, is a multidimensional structure. Each population occupies a subniche of the species' realized niche. Thus, it occupies habitats where conditions are as close to the ecological optimum as the conditions in the given part of the range admit. The sum of all subniches of the populations--the species' realized niche--coincides with a part of the fundamental niche, because only the part of the adaptive possibilities of a species sufficient for existence in the given environment is used. Interspecific competition, even when it is capable of restricting the consumption of limiting resources, is not the reason for the limitation of the realized niche. Restriction of one or two niche parameters does not influence all the other parameters of its multidimensional space.

  20. Geophysics Fatally Flawed by False Fundamental Philosophy

    NASA Astrophysics Data System (ADS)

    Myers, L. S.

    2004-05-01

    For two centuries scientists have failed to realize Laplace's nebular hypothesis (1796) of Earth's creation is false. As a consequence, geophysicists today are misinterpreting and miscalculating many fundamental aspects of the Earth and Solar System. Why scientists have deluded themselves for so long is a mystery. The greatest error is the assumption Earth was created 4.6 billion years ago as a molten protoplanet in its present size, shape and composition. This assumption ignores daily accretion of more than 200 tons/day of meteorites and dust, plus unknown volumes of solar insolation that created coal beds and other biomass that increased Earth's mass and diameter over time! Although the volume added daily is minuscule compared with Earth's total mass, logic and simple addition mandates an increase in mass, diameter and gravity. Increased diameter from accretion is proved by Grand Canyon stratigraphy that shows a one kilometer increase in depth and planetary radius at a rate exceeding three meters (10 ft) per Ma from start of the Cambrian (540 Ma) to end of the Permian (245 Ma)-each layer deposited onto Earth's surface. This is unequivocal evidence of passive external growth by accretion, part of a dual growth and expansion process called "Accreation" (creation by accretion). Dynamic internal core expansion, the second stage of Accreation, did not commence until the protoplanet reached spherical shape at 500-600 km diameter. At that point, gravity-powered compressive heating initiated core melting and internal expansion. Expansion quickly surpassed the external accretion growth rate and produced surface volcanoes to relieve explosive internal tectonic pressure and transfer excess mass (magma) to the surface. Then, 200-250 Ma, expansion triggered Pangaea's breakup, first sundering Asia and Australia to form the Pacific Ocean, followed by North and South America to form the Atlantic Ocean, by the mechanism of midocean ridges, linear underwater

  1. Simplified fundamental force and mass measurements

    NASA Astrophysics Data System (ADS)

    Robinson, I. A.

    2016-08-01

    The watt balance relates force or mass to the Planck constant h, the metre and the second. It enables the forthcoming redefinition of the unit of mass within the SI by measuring the Planck constant in terms of mass, length and time with an uncertainty of better than 2 parts in 10^8. To achieve this, existing watt balances require complex and time-consuming alignment adjustments limiting their use to a few national metrology laboratories. This paper describes a simplified construction and operating principle for a watt balance which eliminates the need for the majority of these adjustments and is readily scalable using either electromagnetic or electrostatic actuators. It is hoped that this will encourage the more widespread use of the technique for a wide range of measurements of force or mass: for example, thrust measurements for space applications, which would require only measurements of electrical quantities and velocity/displacement.
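
    The underlying principle is standard: in the weighing phase mg = BIl, in the moving phase U = Blv, so the geometric factor Bl cancels and mgv = UI. A minimal sketch with illustrative numbers (not values from this instrument):

    ```python
    # Watt-balance principle: mass from purely electrical and motion
    # quantities via mgv = UI. All numbers below are illustrative.
    g = 9.80665     # m/s^2, local gravitational acceleration (assumed)
    v = 2.0e-3      # m/s, coil velocity in the moving phase
    U = 1.018       # V, voltage induced at velocity v
    I = 5.0e-3      # A, current needed to balance the weight

    m = U * I / (g * v)   # kg
    print(m)              # ~0.26 kg for these illustrative values
    ```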

  2. Gauge unification of fundamental forces

    NASA Astrophysics Data System (ADS)

    Salam, Abdus

    The following sections are included: * I. Fundamental Particles, Fundamental Forces, and Gauge Unification * II. The Emergence of Spontaneously Broken SU(2)×U(1) Gauge Theory * III. The Present and Its Problems * IV. Direct Extrapolation from the Electroweak to the Electronuclear * A. The three ideas * B. Tests of electronuclear grand unification * V. Elementarity: Unification with Gravity and Nature of Charge * A. The quest for elementarity, prequarks (preons and pre-preons) * B. Post-Planck physics, supergravity, and Einstein's dreams * C. Extended supergravity, SU(8) preons, and composite gauge fields * Appendix A: Examples of Grand Unifying Groups * Appendix B: Does the Grand Plateau really exist? * References

  3. The Hubble Constant and the Expanding Universe

    NASA Astrophysics Data System (ADS)

    Freedman, Wendy

    2003-01-01

    In 1929 Edwin Hubble proved that our universe is expanding by showing that the farther a galaxy is from us, the faster it is speeding away into space. This velocity-distance relation came to be called Hubble's law, and the value that describes the rate of expansion is known as the Hubble constant, or H0 . Like the speed of light, H0 is a fundamental constant, and it is a key parameter needed to estimate both the age and size of the universe. Since the late 1950s astronomers have been arguing for an H0 value between 50 to 100 kilometers per second per megaparsec, a lack of precision that produced an unacceptably wide range of ages for the universe—anywhere from 10 to 20 billion years. Using the Hubble Space Telescope, Freedman and her colleagues measured H0 to an unprecedented level of accuracy, deriving a value of 72, with an uncertainty of 10 percent—a milestone achievement in cosmology. The new result suggests that our universe is about 13 billion years old, give or take a billion years, and it's a value that sits comfortably alongside the 12 billion years estimated for the age of the oldest stars.
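
    The age quoted above is essentially the Hubble time 1/H0 for H0 = 72 km/s/Mpc. A two-line check follows; the true age also depends on the expansion history, so this is only indicative:

    ```python
    # Hubble time 1/H0 for the measured H0 = 72 km/s/Mpc.
    km_per_Mpc = 3.0857e19
    H0 = 72.0 / km_per_Mpc          # s^-1
    hubble_time_yr = (1.0 / H0) / 3.156e7
    print(hubble_time_yr / 1e9)     # ~13.6 billion years
    ```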

  4. Direct Measures of the Hubble Constant

    NASA Astrophysics Data System (ADS)

    Schechter, P. L.

    1999-05-01

    When astronomers talk about Lutz-Kelker corrections, metallicity dependent zeropoints, statistical parallaxes, Tully-Fisher relations, "fundamental" planes, light curve decline rates and, worst of all, Malmquist bias, physicists begin heading for the exits, showing signs of severe allergic reaction. They respond less violently to so-called "direct" methods of measuring distances which bypass the traditional distance ladder. Two of these, gravitational lens time delay measurements (Refsdal's method) and the Sunyaev-Zeldovich (S-Z) effect, give distance measurements to objects at high redshift which appear to rival more traditional approaches. Present, model-mediated interpretations of such measurements give low values for the Hubble constant. But as is often the case with new techniques, initial enthusiasm is followed by increasing concern about systematic errors connected with messy astrophysical details. The single largest source of error in modelling lenses is the difficulty in constraining the degree of central concentration of the lensing galaxy. Sources of systematic error in S-Z distances include the clumpiness of intracluster gas, temperature variations within that gas and a bias toward selecting clusters that are elongated along the line of sight. Present best estimates of the Hubble constant, along with best estimates of the systematic uncertainties, and the prospects for improving upon these, will be presented. Support from NSF grant AST96-16866 is gratefully acknowledged.

  5. Constant fields and constant gradients in open ionic channels.

    PubMed

    Chen, D P; Barcilon, V; Eisenberg, R S

    1992-05-01

    Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between ion and charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant.
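
    The "traditional constant field equations" recovered here as a limiting case are the Goldman-Hodgkin-Katz relations. A minimal sketch of the GHK current equation for a single ionic species, with illustrative parameters rather than the paper's:

    ```python
    import numpy as np

    # GHK constant-field current density for one ionic species.
    F, R, T = 96485.0, 8.314, 298.0   # C/mol, J/(mol K), K

    def ghk_current(V, P, z, c_in, c_out):
        """Current density (A/m^2); V in volts, c in mol/m^3, P in m/s."""
        xi = z * F * V / (R * T)
        return P * z * F * xi * (c_in - c_out * np.exp(-xi)) / (1.0 - np.exp(-xi))

    # Avoid V = 0 exactly (the limit there is finite but the formula is 0/0).
    V = np.array([-0.10, -0.05, 0.05, 0.10])
    print(ghk_current(V, P=1e-8, z=1, c_in=140.0, c_out=5.0))
    ```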

  6. Constant fields and constant gradients in open ionic channels.

    PubMed Central

    Chen, D P; Barcilon, V; Eisenberg, R S

    1992-01-01

    Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between ion and charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID:1376159

  7. Effective cosmological constant induced by stochastic fluctuations of Newton's constant

    NASA Astrophysics Data System (ADS)

    de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi

    2016-09-01

    We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.

  8. Status of Fundamental Physics Program

    NASA Technical Reports Server (NTRS)

    Lee, Mark C.

    2003-01-01

    Update of the Fundamental Physics Program. JEM/EF slip: two-year delay. Reduced budget. Community support and advocacy led by Professor Nick Bigelow. Reprogramming led by the Fred O'Callaghan/JPL team. LTMPF M1 mission (DYNAMX and SUMO). PARCS. Carrier re-baselined on JEM/EF.

  9. Fundamental Practices of Curriculum Development.

    ERIC Educational Resources Information Center

    Usova, George M.; Gibson, Marcia

    Designed to give guidance to those involved in the curriculum development process within the Shipyard Training Modernization Program (STMP), this guide provides an understanding of the fundamental practices followed in the curriculum development process. It also demonstrates incorrect and correct approaches to the development of the curriculum…

  10. Light as a Fundamental Particle

    ERIC Educational Resources Information Center

    Weinberg, Steven

    1975-01-01

    Presents two arguments concerning the role of the photon. One states that the photon is just another particle distinguished by a particular value of charge, spin, mass, lifetime, and interaction properties. The second states that the photon plays a fundamental role with a deep relation to ultimate formulas of physics. (GS)

  11. Fundamentals of Microelectronics Processing (VLSI).

    ERIC Educational Resources Information Center

    Takoudis, Christos G.

    1987-01-01

    Describes a 15-week course in the fundamentals of microelectronics processing in chemical engineering, which emphasizes the use of very large scale integration (VLSI). Provides a listing of the topics covered in the course outline, along with a sample of some of the final projects done by students. (TW)

  12. Fundamentals of the Slide Library.

    ERIC Educational Resources Information Center

    Boerner, Susan Zee

    This paper is an introduction to the fundamentals of the art (including architecture) slide library, with some emphasis on basic procedures of the science slide library. Information in this paper is particularly relevant to the college, university, and museum slide library. Topics addressed include: (1) history of the slide library; (2) duties of…

  13. Chronometric cosmology and fundamental fermions

    PubMed Central

    Segal, I. E.

    1982-01-01

    It is proposed that the fundamental fermions of nature are modeled by fields on the chronometric cosmos that are not precisely spinors but become such only in the nonchronometric limit. The imbedding of the scale-extended Poincaré group in the linearizer of the Minkowskian conformal group defines such fields, by induction. PMID:16593266

  14. Museum Techniques in Fundamental Education.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    Some museum techniques and methods can be used in fundamental educational programs without elaborate buildings or equipment; exhibitions should be based on valid presumptions and should take into account the "common sense" beliefs of people for whom the exhibit is designed. They can be used profitably in the economic development of local cultural…

  15. Brake Fundamentals. Automotive Articulation Project.

    ERIC Educational Resources Information Center

    Cunningham, Larry; And Others

    Designed for secondary and postsecondary auto mechanics programs, this curriculum guide contains learning exercises in seven areas: (1) brake fundamentals; (2) brake lines, fluid, and hoses; (3) drum brakes; (4) disc brake system and service; (5) master cylinder, power boost, and control valves; (6) parking brakes; and (7) trouble shooting. Each…

  16. Fundamentals of Welding. Teacher Edition.

    ERIC Educational Resources Information Center

    Fortney, Clarence; And Others

    These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…

  17. Environmental Law: Fundamentals for Schools.

    ERIC Educational Resources Information Center

    Day, David R.

    This booklet outlines the environmental problems most likely to arise in schools. An overview provides a fundamental analysis of environmental issues rather than comprehensive analysis and advice. The text examines the concerns that surround superfund cleanups, focusing on the legal framework, and furnishes some practical pointers, such as what to…

  18. Fundamentals of Environmental Education. Report.

    ERIC Educational Resources Information Center

    1976

    An outline of fundamental definitions, relationships, and human responsibilities related to environment provides a basis from which a variety of materials, programs, and activities can be developed. The outline can be used in elementary, secondary, higher education, or adult education programs. The framework is based on principles of the science…

  19. Critique of Coleman's Theory of the Vanishing Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Susskind, Leonard

    In these lectures I would like to review some of the criticisms of the Coleman wormhole theory of the vanishing cosmological constant. In particular, I would like to focus on the most fundamental assumption, that the path integral over topologies defines a probability for the cosmological constant which has the form exp(A), with A being the Baum-Hawking-Coleman saddle point. Coleman argues that the Euclidean path integral over all geometries may be dominated by special configurations which consist of large smooth "spheres" connected by any number of narrow wormholes. Formally summing up such configurations gives a very divergent expression for the path integral…

  20. Cosmologies with variable gravitational constant

    SciTech Connect

    Narkikar, J.V.

    1983-03-01

    In 1937 Dirac presented an argument, based on the so-called large dimensionless numbers, which led him to the conclusion that the Newtonian gravitational constant G changes with epoch. Towards the end of the last century Ernst Mach had given plausible arguments linking the property of inertia of matter to the large-scale structure of the universe. Mach's principle also leads to cosmological models with a variable gravitational constant. Three cosmologies which predict a variable G are discussed in this paper from both theoretical and observational points of view.

  1. Elastic constants for 8-OCB

    NASA Astrophysics Data System (ADS)

    Czechowski, Grzegorz; Zywucki, B.; Jadzyn, Jan

    1993-10-01

    The Frederiks transitions for n-octyloxycyanobiphenyl (8-OCB) placed in external magnetic and electric fields have been studied as a function of temperature. On the basis of the threshold values Bc and Uc, the elastic constants for the splay, bend and twist modes are determined. The magnetic anisotropy of 8-OCB as a function of temperature has also been determined. The K11 and K33 elastic constants show the pretransitional nematic-smectic A effect. The values of the critical exponents obtained from the temperature dependence of K11 and K33 in the vicinity of the N-SA phase transition are discussed.
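
    The step from a threshold value to an elastic constant uses the standard magnetic Frederiks relation Bc = (pi/d) * sqrt(mu0 K / delta_chi). A minimal sketch follows; the cell thickness, threshold field and anisotropy below are illustrative assumptions, not 8-OCB data:

    ```python
    import math

    # Elastic constant from a magnetic Frederiks threshold:
    # K = delta_chi * (B_c * d / pi)^2 / mu0  (SI units).
    mu0 = 4.0e-7 * math.pi   # vacuum permeability, T m/A
    d = 25e-6                # m, cell thickness (assumed)
    B_c = 0.5                # T, measured threshold field (assumed)
    delta_chi = 1.2e-6       # volume magnetic anisotropy, SI (assumed)

    K = delta_chi * (B_c * d / math.pi) ** 2 / mu0
    print(K)   # ~1.5e-11 N, the right order for nematic elastic constants
    ```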

  2. Fundamentals of Managing Reference Collections

    ERIC Educational Resources Information Center

    Singer, Carol A.

    2012-01-01

    Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…

  3. Determination of the Vibrational Constants of Some Diatomic Molecules: A Combined Infrared Spectroscopic and Quantum Chemical Third Year Chemistry Project.

    ERIC Educational Resources Information Center

    Ford, T. A.

    1979-01-01

    In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…
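
    The chain of deductions this project asks for can be sketched numerically; the CO-like numbers below are illustrative textbook values, not the project's data:

    ```python
    import math

    # From a diatomic's band spectrum to its force constant: the harmonic
    # relation k = mu * (2*pi*c*omega)^2, with omega in cm^-1.
    c = 2.99792458e10      # cm/s
    amu = 1.66053907e-27   # kg

    omega = 2143.0         # cm^-1, fundamental wavenumber (CO-like example)
    m1, m2 = 12.0, 15.995  # atomic masses of 12C and 16O
    mu = m1 * m2 / (m1 + m2) * amu           # reduced mass, kg

    k = mu * (2.0 * math.pi * c * omega) ** 2
    print(k)   # ~1855 N/m, close to the accepted value for CO

    # Rotational side: adjacent R- or P-branch lines are spaced by ~2B,
    # and B (cm^-1) gives r via B = h / (8 * pi^2 * c * mu * r^2).
    ```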

  4. Boltzmann's constant: A laboratory experiment

    NASA Astrophysics Data System (ADS)

    Kruglak, Haym

    1989-03-01

    The mean-square displacement of a latex microsphere is determined from its projection on a TV monitor. The distribution of displacement is shown to be Gaussian. Boltzmann's constant, calculated from the pooled data of several observers, is in excellent agreement with the accepted value. The experiment is designed for one laboratory period in the advanced undergraduate laboratory.
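
    The calculation behind this experiment is the Einstein-Smoluchowski relation: for one-dimensional Brownian motion <x^2> = 2Dt with D = kT/(6 pi eta a), so k_B = 3 pi eta a <x^2> / (T t). A minimal sketch with illustrative numbers, not the pooled classroom data:

    ```python
    import math

    # Boltzmann's constant from the mean-square displacement of a
    # microsphere. All numbers are illustrative assumptions.
    eta = 1.0e-3    # Pa s, viscosity of water (assumed)
    a = 0.5e-6      # m, microsphere radius (assumed)
    T = 293.0       # K
    t = 30.0        # s between position readings
    msd = 2.6e-11   # m^2, measured 1-D mean-square displacement (illustrative)

    k_B = 3.0 * math.pi * eta * a * msd / (T * t)
    print(k_B)      # ~1.4e-23 J/K, near the accepted 1.38e-23
    ```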

  5. Ten Thousand Solar Constants Radiometer

    NASA Technical Reports Server (NTRS)

    Kendall, J. M., Sr.

    1985-01-01

    "Radiometer for Accurate (+ or - 1%) Measurement of Solar Irradiances Equal to 10,000 Solar Constants," gives additional information on radiometer described elsewhere. Self-calibrating, water-cooled, thermopile radiometer measures irradiance produced in solar image formed by parabolic reflector or by multiple-mirror solar installation.

  6. Fundamental neutron physics at LANSCE

    SciTech Connect

    Greene, G.

    1995-10-01

    Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter, with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously from the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.

  7. DOE Fundamentals Handbook: Classical Physics

    SciTech Connect

    Not Available

    1992-06-01

    The Classical Physics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of physical forces and their properties. The handbook includes information on the units used to measure physical properties; vectors, and how they are used to show the net effect of various forces; Newton's Laws of motion, and how to use these laws in force and motion applications; and the concepts of energy, work, and power, and how to measure and calculate the energy involved in various applications. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility systems and equipment.

  8. Characterization of a constant current charge detector.

    PubMed

    Mori, Masanobu; Chen, Yongjing; Ohira, Shin-Ichi; Dasgupta, Purnendu K

    2012-12-15

    Ion exchangers are ionic equivalents of doped semiconductors, where cations and anions are equivalents of holes and electrons as charge carriers in solid state semiconductors. We have previously demonstrated an ion exchange membrane (IEM) based electrolyte generator which behaves similarly to a light-emitting diode and a charge detector (ChD) which behaves analogously to a p-i-n photodiode. The previous work on the charge detector, operated at a constant voltage, established its unique ability to respond to the charge represented by the analyte ions regardless of their redox properties, rather than to their conductivities. It also suggested that electric field induced dissociation (EFID) of water occurs at one or both ion exchange membranes. A logical extension is to study the behavior of the same device, operated in a constant current mode (ChD(i)). The evidence indicates that in the present operational mode the device also responds to the charge represented by the analytes and not their conductivity. Injection of a base into a charge detector operated in the constant voltage mode was not previously examined; in the constant current mode, base injection appears to inhibit EFID. The effects of applied current, analyte residence time and outer channel fluid composition were individually examined; analyte ions of different mobilities as well as affinities for the respective IEMs were used. While the exact behavior is somewhat dependent on the applied current, strong electrolytes, both acids and salts, give the largest responses in a near-uniform fashion, weak acids and their salts respond in an intermediate fashion, and bases produce the lowest responses. A fundamentally asymmetric behavior is observed. Injected bases, but not injected acids, produce a poor response; the effect of incorporating a strong base as the electrolyte in the anion exchange membrane (AEM) compartment is far greater than that of incorporating an acid in the cation exchange membrane (CEM) compartment. These

  9. The 1% concordance Hubble constant

    SciTech Connect

    Bennett, C. L.; Larson, D.; Weiland, J. L.; Hinshaw, G.

    2014-10-20

    The determination of the Hubble constant has been a central goal in observational astrophysics for nearly a hundred years. Extraordinary progress has occurred in recent years on two fronts: the cosmic distance ladder measurements at low redshift and cosmic microwave background (CMB) measurements at high redshift. The CMB is used to predict the current expansion rate through a best-fit cosmological model. Complementary progress has been made with baryon acoustic oscillation (BAO) measurements at relatively low redshifts. While BAO data do not independently determine a Hubble constant, they are important for constraints on possible solutions and checks on cosmic consistency. A precise determination of the Hubble constant is of great value, but it is more important to compare the high and low redshift measurements to test our cosmological model. Significant tension would suggest either uncertainties not accounted for in the experimental estimates or the discovery of new physics beyond the standard model of cosmology. In this paper we examine in detail the tension between the CMB, BAO, and cosmic distance ladder data sets. We find that these measurements are consistent within reasonable statistical expectations and we combine them to determine a best-fit Hubble constant of 69.6 ± 0.7 km s^-1 Mpc^-1. This value is based upon WMAP9+SPT+ACT+6dFGS+BOSS/DR11+H_0/Riess; we explore alternate data combinations in the text. The combined data constrain the Hubble constant to 1%, with no compelling evidence for new physics.

  10. Varying Fine-Structure Constant and the Cosmological Constant Problem

    NASA Astrophysics Data System (ADS)

    Fujii, Yasunori

We start with a brief account of the latest analysis of the Oklo phenomenon, which still provides the most stringent constraint on the time variability of the fine-structure constant α. Comparing this with recent results from measurements of distant QSOs appears to indicate a non-uniform time dependence, which we argue is related to another recent finding, the accelerating universe. This view is implemented in terms of the scalar-tensor theory, applied specifically to the small but nonzero cosmological constant. Our detailed calculation shows that these two phenomena can be understood in terms of a common origin: a particular behavior of the scalar field, the dilaton. We also sketch how this theoretical approach makes it appropriate to revisit non-Newtonian gravity featuring a small violation of the Weak Equivalence Principle at medium distances.

  11. The spectroscopic constants and anharmonic force field of AgSH: An ab initio study

    NASA Astrophysics Data System (ADS)

    Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang

    2016-07-01

The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91 and MP2 methods, employing two basis sets, TZP and QZP. The calculated geometries, ground state rotational constants, harmonic vibrational wavenumbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP values are in good agreement with experimental observations, making MP2/TZP an advisable choice for studying the anharmonic force field of AgSH.
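
    To illustrate what an anharmonic constant does to a fundamental frequency, a one-mode, diatomic-style sketch (not a computed AgSH result) uses the standard term-value expansion G(v) = we(v+1/2) - wexe(v+1/2)^2, which gives a fundamental of we - 2wexe. The constants below are hypothetical.

        # Diatomic-style anharmonic term values, illustrating how a fundamental
        # frequency falls below its harmonic value. Placeholder constants only.
        we, wexe = 2700.0, 45.0  # harmonic wavenumber and anharmonicity, cm^-1 (hypothetical)

        def G(v: int) -> float:
            """Vibrational term value in cm^-1 for quantum number v."""
            return we * (v + 0.5) - wexe * (v + 0.5) ** 2

        fundamental = G(1) - G(0)          # equals we - 2*wexe
        print(f"harmonic: {we:.1f} cm^-1, fundamental: {fundamental:.1f} cm^-1")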

  12. The Not so Constant Gravitational "Constant" G as a Function of Quantum Vacuum

    NASA Astrophysics Data System (ADS)

    Maxmilian Caligiuri, Luigi

Gravitation is still the least understood of the fundamental forces of Nature. The ultimate physical origin of its ruling constant G could give key insights into this understanding. According to Einstein's Theory of General Relativity, a massive body determines a gravitational potential that alters the speed of light, the clock rate, and the particle size as a function of the distance from its own center. On the other hand, it has been shown that the presence of mass modifies the Zero-Point Field (ZPF) energy density within its volume and in the space surrounding it. All these considerations strongly suggest that the constant G could also be expressed as a function of the quantum vacuum energy density, in turn depending on the distance from the mass whose presence modifies the ZPF energy structure. In this paper, starting from a constitutive medium-based picture of space, we formulate a model of the gravitational constant G as a function of Planck's time and of the quantum vacuum energy density, the latter depending on the radial distance from the center of the mass originating the gravitational field, supposed spherically symmetric. According to this model, in which gravity arises from the unbalanced physical vacuum pressure, the gravitational "constant" G is not truly unchanging but varies slightly as a function of the distance from the mass sourcing the gravitational potential itself. An approximate analytical form of this dependence is discussed. The proposed model, apart from potentially having deep theoretical consequences for the commonly accepted picture of physical reality (from cosmology to matter stability), could also give a theoretical basis for hitherto unexplored applications related, for example, to the field of gravity control and space propulsion.

  13. Electrochemical metallization memories—fundamentals, applications, prospects

    NASA Astrophysics Data System (ADS)

    Valov, Ilia; Waser, Rainer; Jameson, John R.; Kozicki, Michael N.

    2011-06-01

This review focuses on electrochemical metallization memory cells (ECM), highlighting their advantages as next-generation memories. In a brief introduction, the basic switching mechanism of ECM cells is described and the historical development is sketched. In a second part, the full spectrum of materials and material combinations used for memory device prototypes and for dedicated studies is presented. In a third part, the specific thermodynamics and kinetics of nanosized electrochemical cells are described. The overlap of the space charge layers is found to be most relevant for the cell properties at rest. The major factors determining the functionality of ECM cells are the electrode reaction and the transport kinetics. Depending on the electrode and/or electrolyte material, electron transfer, electro-crystallization or slow diffusion under strong electric fields can be rate determining. In the fourth part, the major device characteristics of ECM cells are explained. Emphasis is placed on switching speed, forming and SET/RESET voltages, the RON to ROFF ratio, endurance and retention, and scaling potential. In the last part, circuit design aspects of ECM arrays are discussed, including the pros and cons of active and passive arrays. In the case of passive arrays, the fundamental sneak-path problem is described, as well as a possible solution using two anti-serially (complementarily) interconnected resistive switches per cell. Furthermore, the prospects of ECM with regard to further scalability and multi-bit data storage are addressed.

  14. New clinical findings on the longevity gene in disease, health, & longevity: Sirtuin 1 often decreases with advanced age & serious diseases in most parts of the human body, while relatively high & constant Sirtuin 1 regardless of age was first found in the hippocampus of supercentenarians.

    PubMed

    Omura, Yoshiaki; Lu, Dominic P; Jones, Marilyn; O'Young, Brian; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu

    2011-01-01

The expression of the longevity gene, Sirtuin 1, was non-invasively measured using the Electro-Magnetic Field (EMF) resonance phenomenon between a known amount of polyclonal antibody of the C-terminal of Sirtuin 1 and the Sirtuin 1 molecule inside the body. Our measurements of over 100 adult males and females, ranging from 20 to 122 years old, indicated that the majority of subjects had Sirtuin 1 levels of 5-10 pg BDORT units in most parts of the body. When Sirtuin 1 was less than 1 pg BDORT units, the majority of the people had various degrees of tumors or other serious diseases. When Sirtuin 1 levels were less than 0.25 pg BDORT units, a high incidence of AIDS was also detected. Very few people had Sirtuin 1 levels over 25 pg BDORT units in most parts of the body. We selected 7 internationally recognized supercentenarians who lived to between 110 and 122 years old. To our surprise, most of their body Sirtuin 1 levels were between 2.5-10 pg BDORT units. However, by evaluating different parts of the brain, we found that both sides of the hippocampus had a much higher amount of Sirtuin 1, between 25-100 pg BDORT units. With most subjects, Sirtuin 1 was found to be higher in the hippocampus than in the rest of the body and to remain relatively constant regardless of age. We found that Aspartame, plastic eye contact lenses, and asbestos in dental apparatuses, which reduce normal cell telomeres, also significantly reduce Sirtuin 1. In addition, we found that increasing normal cell telomere by electrical or mechanical stimulation of True ST-36 increases the expression of the Sirtuin 1 gene in people in whom expression is low. This measurement of Sirtuin 1 in the hippocampus has become a reliable indicator for detecting the potential longevity of an individual.

  15. How does Planck’s constant influence the macroscopic world?

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2016-09-01

    In physics, Planck’s constant is a fundamental physical constant accounting for the energy-quantization phenomenon in the microscopic world. The value of Planck’s constant also determines in which length scale the quantum phenomenon will become conspicuous. Some students think that if Planck’s constant were to have a larger value than it has now, the quantum effect would only become observable in a world with a larger size, whereas the macroscopic world might remain almost unchanged. After reasoning from some basic physical principles and theories, we found that doubling Planck’s constant might result in a radical change on the geometric sizes and apparent colors of macroscopic objects, the solar spectrum and luminosity, the climate and gravity on Earth, as well as energy conversion between light and materials such as the efficiency of solar cells and light-emitting diodes. From the discussions in this paper, students can appreciate how Planck’s constant affects various aspects of the world in which we are living now.
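
    One way to see the size effect described above: the Bohr radius a0 = 4*pi*eps0*hbar^2/(m_e*e^2) scales as hbar^2, so doubling Planck's constant, with the other constants held fixed (one of several scenarios one could consider), quadruples atomic sizes. A small sketch:

        import math

        # Scaling of atomic size with Planck's constant via the Bohr radius.
        eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
        hbar = 1.054571817e-34    # reduced Planck constant, J*s
        m_e  = 9.1093837015e-31   # electron mass, kg
        e    = 1.602176634e-19    # elementary charge, C

        def bohr_radius(hbar_value: float) -> float:
            return 4 * math.pi * eps0 * hbar_value**2 / (m_e * e**2)

        a0 = bohr_radius(hbar)
        a0_doubled = bohr_radius(2 * hbar)
        print(f"a0 = {a0:.3e} m; with doubled hbar: {a0_doubled:.3e} m "
              f"({a0_doubled / a0:.0f}x larger)")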

  16. Three pion nucleon coupling constants

    NASA Astrophysics Data System (ADS)

    Ruiz Arriola, E.; Amaro, J. E.; Navarro Pérez, R.

    2016-08-01

There exist four pion nucleon coupling constants, f_{π0pp}, -f_{π0nn}, f_{π+pn}/√2 and f_{π-np}/√2, which coincide when up and down quark masses are identical and the electron charge is zero. While there is no reason why the pion-nucleon-nucleon coupling constants should be identical in the real world, one expects that the small differences might be pinned down from a sufficiently large number of independent and mutually consistent data. Our discussion provides a rationale for our recent determination f_p² = 0.0759(4), f_0² = 0.079(1), f_c² = 0.0763(6), based on a partial wave analysis of the 3σ self-consistent nucleon-nucleon Granada-2013 database comprising 6713 published data in the period 1950-2013.

  17. Quaternions as astrometric plate constants

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.

    1987-01-01

    A new method for solving problems in relative astrometry is proposed. In it, the relationship between the measured quantities and the components of the position vector of a star is modeled using quaternions, in effect replacing the plate constants of a standard four-plate-constant solution with the four components of a quaternion. The method allows a direct solution for the position vectors of the stars, and hence for the equatorial coordinates. Distortions, magnitude, and color effects are readily incorporated into the formalism, and the method is directly applicable to overlapping-plate problems. The advantages of the method include the simplicity of the resulting equations, their freedom from singularities, and the fact that trigonometric functions and tangential point transformations are not needed to model the plate material. A global solution over the entire sky is possible.
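
    A minimal sketch of the quaternion machinery the method builds on (the full astrometric solution, with distortion, magnitude and color terms, is not reproduced here): a star's unit position vector v is rotated by a unit quaternion q via v' = q v q*. The rotation in the example is hypothetical.

        import numpy as np

        def qmul(a, b):
            """Hamilton product of quaternions stored as (w, x, y, z)."""
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def rotate(q, v):
            """Rotate 3-vector v by unit quaternion q: v' = q v q*."""
            qv = np.concatenate(([0.0], v))
            q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
            return qmul(qmul(q, qv), q_conj)[1:]

        # Hypothetical example: 30 degree rotation about the z-axis.
        theta = np.radians(30.0)
        q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
        star = np.array([1.0, 0.0, 0.0])   # unit vector toward a star
        print(rotate(q, star))              # -> approx [0.866, 0.5, 0.0]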

  19. Fundamental Limits to Cellular Sensing

    NASA Astrophysics Data System (ADS)

    ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew

    2016-03-01

    In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
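
    The scaling at the heart of this limit can be sketched as follows (prefactors vary between models, so this is illustrative only): time-averaging for a duration T over N receptors whose occupancy decorrelates on a timescale tau_c yields roughly N*T/tau_c independent readings, so the fractional concentration error falls as 1/sqrt(N*T/tau_c). The numbers below are hypothetical.

        import math

        # Illustrative sensing-limit scaling: more receptors, longer integration,
        # or faster receptor decorrelation all improve precision.
        def fractional_error(n_receptors: int, t_int: float, tau_c: float) -> float:
            n_independent = n_receptors * t_int / tau_c   # independent samples
            return 1.0 / math.sqrt(n_independent)

        # Hypothetical: 1000 receptors, 10 s integration, 10 ms correlation time.
        print(f"delta c / c ~ {fractional_error(1000, 10.0, 0.01):.1e}")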

  20. Solid Lubrication Fundamentals and Applications

    NASA Technical Reports Server (NTRS)

    Miyoshi, Kazuhisa

    2001-01-01

Solid Lubrication Fundamentals and Applications provides a description of the adhesion, friction, abrasion, and wear behavior of solid film lubricants and related tribological materials, including diamond and diamond-like solid films. The book details the properties of solid surfaces, clean surfaces, and contaminated surfaces, as well as discussing the structures and mechanical properties of natural and synthetic diamonds; chemical-vapor-deposited diamond film; and surface design and engineering toward wear-resistant, self-lubricating diamond films and coatings. The author provides selection and design criteria as well as applications for synthetic and natural coatings in the commercial, industrial and aerospace industries.

  1. Reconstruction of fundamental SUSY parameters

    SciTech Connect

    P. M. Zerwas et al.

    2003-09-25

    We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e{sup +}e{sup -} linear colliders in the TeV energy range, combined with results from LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.

  2. Are the Truly Constant Constants of Nature? How is the Real Material Space and its Structure?

    SciTech Connect

    Luz Montero Garcia, Jose de la; Novoa Blanco, Jesus Francisco

    2007-04-28

In a concise and simplified way, we present some matters of the authors' theories--the Unified Theory of the Physical and Mathematical Universal Constants and the Quantum Cellular Structural Geometry--which form a single theoretical body, MN2. This investigation has as its objective the search for the ultimate cells that underlie the existence, unicity and harmony of matter, as well as its structural-formal and dynamic-functional diversity. The quantitative hypothesis is demonstrated that 'World is one, is one; but it is one Arithmetic-Geometric-Topological-Dimensional and Structural-Cellular-Dynamic one, simultaneously'. In the Frontiers of Fundamental Physics, such ultimate cells are the cells of the Real Material Space itself, from whose whole accretion, interactive and staggered, all that exists at all hierarchical levels arises; below these cells it makes no sense to speak of structure and, therefore, of existence. The cells of the Real Material Space are its 'Atoms'. Law of Planetary Systems or '4th Kepler's Law'.

  3. Reflectance Spectra and Optical Constants of Iron Sulfates For Mars

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Noe Dobrea, E. Z.; Jamieson, C. S.; Dalton, J. B.; Abbey, W. J.

    2012-12-01

In this work, we present visible and near-infrared (VNIR, λ=0.35 - 5 μm) laboratory reflectance spectra obtained at Mars-relevant temperatures and corresponding optical constants (real and imaginary refractive indices) for iron sulfates that have been observed on Mars, e.g., via the Mars Reconnaissance Orbiter CRISM and Mars Express OMEGA spectrometers. Fe-sulfates have also been found by the MER rovers in a variety of forms in Meridiani Planum and Gusev Crater, suggesting acidic aqueous, evaporation, and desiccation processes were at work in these locations. We focus first on the Fe-sulfate szomolnokite and natural samples of jarosite, which have been found as distinct layers within polyhydrated non-Fe sulfate material at Columbus Crater on Mars and as outcrops at Mawrth Vallis. We also present data on five of the following Fe-sulfates in our library: butlerite, copiapite, coquimbite, ferricopiapite, melanterite, parabutlerite, rozenite, and rhomboclase. Determining the exact type of Mars sulfates (Fe- vs. Mg-rich) may lead to more information on the epoch of formation or humidity conditions on Mars during their formation. Therefore, these data will help to fully distinguish between and constrain the abundance and distribution of sulfates on the martian surface, which will lead to improvements in understanding the pressure, temperature, and humidity conditions and how active frost, groundwater, and atmospheric processes once were on Mars. This work was supported by NASA's Mars Fundamental Research Program (NNX10AP78G: PI Pitman) and partly performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.

  4. Constants of motion of the four-particle Calogero model

    SciTech Connect

    Saghatelian, A.

    2012-10-15

We present explicit expressions for the complete set of constants of motion of the four-particle Calogero model with excluded center of mass, i.e. of the A{sub 3} rational Calogero model. We then find the constants of motion of its spherical part, which defines a two-dimensional 12-center spherical oscillator with the force centers located at the vertices of a cuboctahedron.

  5. Dielectric constant of liquid alkanes and hydrocarbon mixtures

    NASA Technical Reports Server (NTRS)

    Sen, A. D.; Anicich, V. G.; Arakelian, T.

    1992-01-01

    The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mosotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.

  6. Dielectric constant of liquid alkanes and hydrocarbon mixtures.

    PubMed

    Sen, A D; Anicich, V G; Arakelian, T

    1992-01-01

    The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mosotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.

  7. Two Fundamental Principles of Nature's Interactions

    NASA Astrophysics Data System (ADS)

    Ma, Tian; Wang, Shouhong

    2014-03-01

    In this talk, we present two fundamental principles of nature's interactions, the principle of interaction dynamics (PID) and the principle of representation invariance (PRI). Intuitively, PID takes the variation of the action functional under energy-momentum conservation constraint. PID offers a completely different and natural way of introducing Higgs fields. PRI requires that physical laws be independent of representations of the gauge groups. These two principles give rise to a unified field model for four interactions, which can be naturally decoupled to study individual interactions. With these two principles, we are able to derive 1) a unified theory for dark matter and dark energy, 2) layered strong and weak interaction potentials, and 3) the energy levels of subatomic particles. Supported in part by NSF, ONR and Chinese NSF.

  8. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  9. An Alcohol Test for Drifting Constants

    NASA Astrophysics Data System (ADS)

    Jansen, P.; Bagdonaite, J.; Ubachs, W.; Bethlem, H. L.; Kleiner, I.; Xu, L.-H.

    2013-06-01

The Standard Model of physics is built on the fundamental constants of nature, without, however, providing an explanation for their values or requiring their constancy over space and time. Molecular spectroscopy can address this issue. Recently, we found that microwave transitions in methanol are extremely sensitive to a variation of the proton-to-electron mass ratio μ, due to a fortuitous interplay between classically forbidden internal rotation and rotation of the molecule as a whole. In this talk, we will explain the origin of this effect and how the sensitivity coefficients in methanol are calculated. In addition, we set a limit on a possible cosmological variation of μ by comparing transitions in methanol observed in the early Universe with those measured in the laboratory. Based on radio-astronomical observations of PKS1830-211, we deduce a constraint of Δμ/μ = (0.0 ± 1.0) × 10^{-7} at redshift z = 0.89, corresponding to a look-back time of 7 billion years. While this limit is more constraining and systematically more robust than previous ones, the methanol method opens a new search territory for probing μ-variation on cosmological timescales. P. Jansen, L.-H. Xu, I. Kleiner, W. Ubachs, and H. L. Bethlem, Phys. Rev. Lett. 106, 100801 (2011). J. Bagdonaite, P. Jansen, C. Henkel, H. L. Bethlem, K. M. Menten, and W. Ubachs, Science 339, 46 (2013).
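
    The core relation behind the method: a transition with sensitivity coefficient K shifts as Δν/ν = K Δμ/μ, so comparing an absorber's inferred rest-frame frequency against the laboratory value constrains Δμ/μ. The sketch below uses placeholder numbers, not the PKS1830-211 measurements; the frequency and K value are hypothetical stand-ins (some methanol lines have |K| of order tens).

        # Toy constraint on dmu/mu from one line; all inputs are placeholders.
        nu_lab = 12.178e9                          # laboratory frequency, Hz (hypothetical)
        nu_obs_rest = nu_lab * (1.0 + 3.0e-8)      # rest-frame frequency from the absorber
        K = -32.0                                  # sensitivity coefficient (hypothetical)

        dmu_over_mu = (nu_obs_rest / nu_lab - 1.0) / K
        print(f"dmu/mu = {dmu_over_mu:.1e}")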

  10. Chandra Independently Determines Hubble Constant

    NASA Astrophysics Data System (ADS)

    2006-08-01

A critically important number that specifies the expansion rate of the Universe, the so-called Hubble constant, has been independently determined using NASA's Chandra X-ray Observatory. This new value matches recent measurements using other methods and extends their validity to greater distances, thus allowing astronomers to probe earlier epochs in the evolution of the Universe. "The reason this result is so significant is that we need the Hubble constant to tell us the size of the Universe, its age, and how much matter it contains," said Max Bonamente from the University of Alabama in Huntsville and NASA's Marshall Space Flight Center (MSFC) in Huntsville, Ala., lead author on the paper describing the results. "Astronomers absolutely need to trust this number because we use it for countless calculations." [Illustration: Sunyaev-Zeldovich Effect] The Hubble constant is calculated by measuring the speed at which objects are moving away from us and dividing by their distance. Most of the previous attempts to determine the Hubble constant have involved using a multi-step, or distance ladder, approach in which the distance to nearby galaxies is used as the basis for determining greater distances. The most common approach has been to use a well-studied type of pulsating star known as a Cepheid variable, in conjunction with more distant supernovae to trace distances across the Universe. Scientists using this method and observations from the Hubble Space Telescope were able to measure the Hubble constant to within 10%. However, only independent checks would give them the confidence they desired, considering that much of our understanding of the Universe hangs in the balance. [Chandra X-ray Image of MACS J1149.5+223] By combining X-ray data from Chandra with radio observations of galaxy clusters, the team determined the distances to 38 galaxy clusters ranging from 1.4 billion to 9.3 billion

  11. Henry's law constants of polyols

    NASA Astrophysics Data System (ADS)

    Compernolle, S.; Müller, J.-F.

    2014-05-01

Henry's law constants (HLC) are derived for several polyols bearing between 2 and 6 hydroxyl groups, based on literature data for water activity, vapour pressure and/or solubility. Depending on the case, infinite dilution activity coefficients (IDACs), solid-state vapour pressures or activity coefficient ratios are obtained as intermediary results. For most compounds, these are the first values reported, while the others compare favourably with literature data in most cases. Using these values and those from a previous work (Compernolle and Müller, 2014), an assessment is made of the partitioning of polyols, diacids and hydroxy acids to droplet and aqueous aerosol.
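
    One route from the quantities named above to an HLC can be sketched as follows (a simplification of the paper's treatment): at infinite dilution, Raoult's law gives p = γ∞ x p_sat, so the aqueous-concentration-per-pressure form of the constant is roughly 55.5/(γ∞ p_sat) in M/atm. The γ∞ and p_sat values below are illustrative placeholders, not the paper's results.

        # Henry's law constant from an IDAC and a saturation vapour pressure.
        WATER_MOLARITY = 55.5   # mol of water per litre of dilute solution, approx.

        def henry_constant(gamma_inf: float, p_sat_atm: float) -> float:
            """Henry's law constant in M/atm (concentration per partial pressure)."""
            return WATER_MOLARITY / (gamma_inf * p_sat_atm)

        # Hypothetical polyol-like values: strongly hydrophilic, very low volatility.
        print(f"H = {henry_constant(gamma_inf=0.5, p_sat_atm=1e-9):.2e} M/atm")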

  12. Fundamental Travel Demand Model Example

    NASA Technical Reports Server (NTRS)

    Hanssen, Joel

    2010-01-01

    Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.

  13. Fundamental base closure environmental principles

    SciTech Connect

    Yim, R.A.

    1994-12-31

Military base closures present a paradox. The rate, scale and timing of military base closures are historically unique. However, each base itself typically does not present unique problems. Thus, the challenge is to design innovative solutions to base redevelopment and remediation issues, while simultaneously adopting common, streamlined or pre-approved strategies for shared problems. The author presents six environmental principles that are fundamental to base closure. They are: remediation, not cleanup; remediation will impact reuse; reuse will impact remediation; remediation and reuse must be coordinated; environmental contamination must be evaluated as any other initial physical constraint on development, not as an overlay after plans are created; and remediation will impact development, financing and marketability.

  14. Fundamental reaction pathways during coprocessing

    SciTech Connect

    Stock, L.M.; Gatsis, J.G.

    1992-12-01

    The objective of this research was to investigate the fundamental reaction pathways in coal petroleum residuum coprocessing. Once the reaction pathways are defined, further efforts can be directed at improving those aspects of the chemistry of coprocessing that are responsible for the desired results such as high oil yields, low dihydrogen consumption, and mild reaction conditions. We decided to carry out this investigation by looking at four basic aspects of coprocessing: (1) the effect of fossil fuel materials on promoting reactions essential to coprocessing such as hydrogen atom transfer, carbon-carbon bond scission, and hydrodemethylation; (2) the effect of varied mild conditions on the coprocessing reactions; (3) determination of dihydrogen uptake and utilization under severe conditions as a function of the coal or petroleum residuum employed; and (4) the effect of varied dihydrogen pressure, temperature, and residence time on the uptake and utilization of dihydrogen and on the distribution of the coprocessed products. Accomplishments are described.

  15. Holographic viscosity of fundamental matter.

    PubMed

    Mateos, David; Myers, Robert C; Thomson, Rowan M

    2007-03-01

A holographic dual of a finite-temperature SU(Nc) gauge theory with a small number of flavors Nf satisfies the conjectured bound on the shear viscosity to entropy density ratio, η/s ≥ 1/4π. Given the known results for the entropy density, the contribution of the fundamental matter η_fund is therefore enhanced at strong 't Hooft coupling λ; for example, η_fund ~ λ Nc Nf T³ in four dimensions. Other transport coefficients are analogously enhanced. These results hold with or without a baryon number chemical potential. PMID:17358523

  16. [INFORMATION, A FUNDAMENTAL PATIENT RIGHT?].

    PubMed

    Mémeteau, Gérard

    2015-03-01

    Although expressed before the "Lambert" case, which has led us to think about refusal and assent in the context of internal rights, conventional rights--and in the context of the patient's bed!--these simple remarks present the patient's right to medical information as a so-called fundamental right. But it can only be understood with a view to a treatment or other medical act; otherwise it has no reason to be and is only an academic exercise, however exciting, but not much use by itself. What if we reversed the terms of the problem: the right of the doctor to information? (The beautiful thesis of Ph. Gaston, Paris 8, 2 December 2014).

  17. Cognition is … Fundamentally Cultural

    PubMed Central

    Bender, Andrea; Beller, Sieghard

    2013-01-01

    A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition—on all of Marr’s levels of description—we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding. PMID:25379225

  18. Fundamental issues in questionnaire design.

    PubMed

    Murray, P

    1999-07-01

    The questionnaire is probably the most common form of data collection tool used in nursing research. There is a misconception that anyone with a clear grasp of English and a modicum of common sense can design an effective questionnaire. Contrary to such common belief, this article will demonstrate that questionnaire design is a complex and time consuming process, but a necessary labour to ensure valid and reliable data is collected. In addition, meticulous construction is more likely to yield data that can be utilized in the pursuit of objective, quantitative and generalizable truths, upon which practice and policy decisions can be formulated. This article examines a myriad of fundamental issues surrounding questionnaire design, which encompass question wording, question order, presentation, administration and data collection, amongst other issues.

  19. Fundamentals of air quality systems

    SciTech Connect

    Noll, K.E.

    1999-08-01

    The book uses numerous examples to demonstrate how basic design concepts can be applied to the control of air emissions from industrial sources. It focuses on the design of air pollution control devices for the removal of gases and particles from industrial sources, and provides detailed, specific design methods for each major air pollution control system. Individual chapters provide design methods that include both theory and practice with emphasis on the practical aspect by providing numerous examples that demonstrate how air pollution control devices are designed. Contents include air pollution laws, air pollution control devices; physical properties of air, gas laws, energy concepts, pressure; motion of airborne particles, filter and water drop collection efficiency; fundamentals of particulate emission control; cyclones; fabric filters; wet scrubbers; electrostatic precipitators; control of volatile organic compounds; adsorption; incineration; absorption; control of gaseous emissions from motor vehicles; practice problems (with solutions) for the P.E. examination in environmental engineering. Design applications are featured throughout.

  20. Fundamental enabling issues in nanotechnology :

    SciTech Connect

    Floro, Jerrold Anthony; Foiles, Stephen Martin; Hearne, Sean Joseph; Hoyt, Jeffrey John; Seel, Steven Craig; Webb, Edmund Blackburn,; Morales, Alfredo Martin; Zimmerman, Jonathan A.

    2007-10-01

To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in this close a fashion. Experiments revealed an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also support the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in

1. Stability constant estimator user's guide

    SciTech Connect

    Hay, B.P.; Castleton, K.J.; Rustad, J.R.

    1996-12-01

    The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.
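
    The generic idea of estimating from trends can be sketched as a linear fit of known log K values against a metal-ion descriptor, then reading off the value for a metal that lacks data. The descriptor and numbers below are hypothetical; the SCE's actual correlations are not reproduced here.

        import numpy as np

        # Estimate a 1:1 stability constant from a trend across related metals.
        descriptor = np.array([2.0, 4.5, 6.1, 8.0])   # hypothetical metal-ion property
        log_k      = np.array([3.1, 5.0, 6.4, 7.9])   # known 1:1 stability constants

        slope, intercept = np.polyfit(descriptor, log_k, 1)
        log_k_est = slope * 5.3 + intercept           # metal with descriptor 5.3
        print(f"estimated log K = {log_k_est:.1f}")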

  2. Separate Einstein-Eddington Spaces and the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Azri, Hemza

    2016-07-01

In the affine variational principle, a symmetric linear connection is taken as the fundamental field. The metric tensor is generated dynamically, and it appears as a canonical conjugate to the connection. From this picture, Einstein's gravity with a cosmological constant can be obtained by a covariant Legendre transformation of the affine Lagrangian. In this talk, we apply this formalism (first proposed by Kijowski) to product spaces and the cosmological constant problem. From a pure affine variational principle, we derive the separate Einstein space described by its Ricci tensor. The derived equations split into two field equations of motion that describe two maximally symmetric spaces with two non-independent cosmological constants. We propose that the invariance of the bi-field equations under projections onto the separate spaces may render one of the cosmological constants zero. We also formulate the model in the presence of matter fields. The resulting separate Einstein-Eddington spaces may be considered as two states that describe the universe before and after inflation. A possibly interesting affine action for a general perfect fluid is also proposed. It turns out that the condition which leads to a zero cosmological constant in the vacuum case eliminates here the effects of the gravitational mass density of the perfect fluid, and the dynamics of the universe in its final state is governed only by the inertial mass density of the fluid. We present no new solutions to the problems associated with inflation.

  3. Fundamental plant biology enabled by the space shuttle.

    PubMed

    Paul, Anna-Lisa; Wheeler, Ray M; Levine, Howard G; Ferl, Robert J

    2013-01-01

    The relationship between fundamental plant biology and space biology was especially synergistic in the era of the Space Shuttle. While all terrestrial organisms are influenced by gravity, the impact of gravity as a tropic stimulus in plants has been a topic of formal study for more than a century. And while plants were parts of early space biology payloads, it was not until the advent of the Space Shuttle that the science of plant space biology enjoyed expansion that truly enabled controlled, fundamental experiments that removed gravity from the equation. The Space Shuttle presented a science platform that provided regular science flights with dedicated plant growth hardware and crew trained in inflight plant manipulations. Part of the impetus for plant biology experiments in space was the realization that plants could be important parts of bioregenerative life support on long missions, recycling water, air, and nutrients for the human crew. However, a large part of the impetus was that the Space Shuttle enabled fundamental plant science essentially in a microgravity environment. Experiments during the Space Shuttle era produced key science insights on biological adaptation to spaceflight and especially plant growth and tropisms. In this review, we present an overview of plant science in the Space Shuttle era with an emphasis on experiments dealing with fundamental plant growth in microgravity. This review discusses general conclusions from the study of plant spaceflight biology enabled by the Space Shuttle by providing historical context and reviews of select experiments that exemplify plant space biology science.

  5. Holographic dark energy with cosmological constant

    NASA Astrophysics Data System (ADS)

    Hu, Yazhou; Li, Miao; Li, Nan; Zhang, Zhenhui

    2015-08-01

Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ω_hde are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find the model yields χ²_min = 426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordant ΛCDM model (431.35). At 68.3% CL, we obtain -0.07 < Ω_Λ0 < 0.68 and correspondingly 0.04 < Ω_hde0 < 0.79, implying that at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.

  6. Holographic dark energy with cosmological constant

    SciTech Connect

    Hu, Yazhou; Li, Nan; Zhang, Zhenhui; Li, Miao E-mail: mli@itp.ac.cn E-mail: zhangzhh@mail.ustc.edu.cn

    2015-08-01

    Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ω{sub hde} are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find the model yields χ{sup 2}{sub min}=426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordant ΛCDM model (431.35). At 68.3% CL, we obtain −0.07<Ω{sub Λ0}<0.68 and correspondingly 0.04<Ω{sub hde0}<0.79, implying at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.

  7. BOOK REVIEWS: Quantum Mechanics: Fundamentals

    NASA Astrophysics Data System (ADS)

    Whitaker, A.

    2004-02-01

    This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a

  8. Normal and torsional spring constants of atomic force microscope cantilevers

    NASA Astrophysics Data System (ADS)

    Green, Christopher P.; Lioe, Hadi; Cleveland, Jason P.; Proksch, Roger; Mulvaney, Paul; Sader, John E.

    2004-06-01

    Two methods commonly used to measure the normal spring constants of atomic force microscope cantilevers are the added mass method of Cleveland et al. [J. P. Cleveland et al., Rev. Sci. Instrum. 64, 403 (1993)], and the unloaded resonance technique of Sader et al. [J. E. Sader, J. W. M. Chon, and P. Mulvaney, Rev. Sci. Instrum. 70, 3967 (1999)]. The added mass method involves measuring the change in resonant frequency of the fundamental mode of vibration upon the addition of known masses to the free end of the cantilever. In contrast, the unloaded resonance technique requires measurement of the unloaded resonant frequency and quality factor of the fundamental mode of vibration, as well as knowledge of the plan view dimensions of the cantilever and properties of the fluid. In many applications, such as frictional force microscopy, the torsional spring constant is often required. Consequently, in this article, we extend both of these techniques to allow simultaneous calibration of both the normal and torsional spring constants. We also investigate the validity and applicability of the unloaded resonance method when a mass is attached to the free end of the cantilever due to its importance in practice.
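
    The added-mass method reduces to a two-frequency formula: since f = (1/2π)√(k/(m_eff + M)), measuring the fundamental resonance before and after attaching a known mass M gives k = (2π)² M / (1/f_loaded² - 1/f_unloaded²), independent of the unknown effective mass. The cantilever numbers in the sketch are hypothetical.

        import math

        def spring_constant(f_unloaded: float, f_loaded: float, added_mass: float) -> float:
            """Normal spring constant in N/m from resonant frequencies (Hz) and mass (kg)."""
            return (2 * math.pi) ** 2 * added_mass / (1 / f_loaded**2 - 1 / f_unloaded**2)

        # Hypothetical cantilever: 60 kHz unloaded, 45 kHz with a 10 ng sphere attached.
        print(f"k = {spring_constant(60e3, 45e3, 10e-12):.2f} N/m")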

9. Asymptotics with positive cosmological constant

    NASA Astrophysics Data System (ADS)

    Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna

    2014-03-01

Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. In current definitions, one mimics the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out a non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.

  10. Henry's law constants of polyols

    NASA Astrophysics Data System (ADS)

    Compernolle, S.; Müller, J.-F.

    2014-12-01

    Henry's law constants (HLC) are derived for several polyols bearing between 2 and 6 hydroxyl groups, based on literature data for water activity, vapour pressure and/or solubility. While deriving HLC and depending on the case, also infinite dilution activity coefficients (IDACs), solid state vapour pressures or activity coefficient ratios are obtained as intermediate results. An error analysis on the intermediate quantities and the obtained HLC is included. For most compounds, these are the first values reported, while others compare favourably with literature data in most cases. Using these values and those from a previous work (Compernolle and Müller, 2014), an assessment is made on the partitioning of polyols, diacids and hydroxy acids to droplet and aqueous aerosol.

  11. Philicities, Fugalities, and Equilibrium Constants.

    PubMed

    Mayr, Herbert; Ofial, Armin R

    2016-05-17

    The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known, and the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is just a prominent example for exceptions from the general rule. In a reaction series, the Leffler-Hammond parameter α = δΔG(⧧)/δΔG° describes the fraction of the change in the Gibbs energy of reaction, which is reflected in the change of the Gibbs energy of activation. It has long been considered as a measure for the position of the transition state; thus, an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered to be indicative of a late transition state. Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because only for few other reaction series complementary kinetic and thermodynamic data have been available. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics
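
    The Leffler-Hammond parameter defined above can be extracted numerically as the slope of activation Gibbs energies against reaction Gibbs energies across a series. A minimal sketch, using hypothetical energies in kJ/mol rather than any data set from the Account:

        import numpy as np

        # alpha = d(dG_act)/d(dG_rxn): fit activation energies against reaction
        # energies for a reaction series; the slope is alpha.
        dG_rxn = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # Gibbs energies of reaction
        dG_act = np.array([55.0, 60.5, 65.0, 70.2, 74.8])    # Gibbs energies of activation

        alpha, _ = np.polyfit(dG_rxn, dG_act, 1)
        print(f"alpha = {alpha:.2f}")   # ~0.5: rates reflect half the equilibrium change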

  12. Do goldfish miss the fundamental?

    NASA Astrophysics Data System (ADS)

    Fay, Richard R.

    2003-10-01

The perception of harmonic complexes was studied in goldfish using classical respiratory conditioning and a stimulus generalization paradigm. Groups of animals were initially conditioned to several harmonic complexes with a fundamental frequency (f0) of 100 Hz. In some cases the f0 component was present, and in other cases, the f0 component was absent. After conditioning, animals were tested for generalization to novel harmonic complexes having different f0's, some with f0 present and some with f0 absent. Generalization gradients always peaked at 100 Hz, indicating that the pitch value of the conditioning complexes was consistent with the f0, whether or not f0 was present in the conditioning or test complexes. Thus, goldfish do not miss the fundamental with respect to a pitch-like perceptual dimension. However, generalization gradients tended to have different skirt slopes for the f0-present and f0-absent conditioning and test stimuli. This suggests that goldfish distinguish between f0-present and f0-absent stimuli, probably on the basis of a timbre-like perceptual dimension. These and other results demonstrate that goldfish respond to complex sounds as if they possessed perceptual dimensions similar to pitch and timbre as defined for human and other vertebrate listeners. [Work supported by NIH/NIDCD.]

  13. Levitated Optomechanics for Fundamental Physics

    NASA Astrophysics Data System (ADS)

    Rashid, Muddassar; Bateman, James; Vovrosh, Jamie; Hempston, David; Ulbricht, Hendrik

    2015-05-01

Optomechanics with levitated nano- and microparticles is believed to form a platform for testing fundamental principles of quantum physics, as well as to find applications in sensing. We will report on a new scheme to trap nanoparticles, which is based on a parabolic mirror with a numerical aperture of 1. Combined with achromatic focussing, the setup is a cheap and straightforward solution to trapping nanoparticles for further study. Here, we report on the latest progress made in experimentation with levitated nanoparticles; this includes the trapping of 100 nm nanodiamonds (with NV-centres) down to 1 mbar, as well as the trapping of 50 nm silica spheres down to 10^{-4} mbar without any form of feedback cooling. We will also report on the progress made in implementing feedback stabilisation of the centre-of-mass motion of the trapped particle using digital electronics. Finally, we argue that such a stabilised particle trap can be the particle source for a nanoparticle matterwave interferometer. We will present our Talbot interferometer scheme, which holds promise to test the quantum superposition principle in the new mass range of 10^6 amu. EPSRC, John Templeton Foundation.

  14. Fluorescence lifetimes: fundamentals and interpretations.

    PubMed

    Noomnarm, Ulai; Clegg, Robert M

    2009-01-01

    Fluorescence measurements have been an established mainstay of photosynthesis experiments for many decades. Because in the photosynthesis literature the basics of excited states and their fates are not usually described, we have presented here an easily understandable text for biology students, in the style of a chapter in a textbook. In this review we give an educational overview of fundamental physical principles of fluorescence, with emphasis on the temporal response of emission. Escape from the excited state of a molecule is a dynamic event, and the fluorescence emission is in direct kinetic competition with several other pathways of de-excitation. It is essentially through this kinetic competition between all the pathways of de-excitation that we gain information about the fluorescent sample on the molecular scale. A simple probability allegory is presented that illustrates the basic ideas that are important for understanding and interpreting most fluorescence experiments. We also briefly point out challenges that confront the experimenter when interpreting time-resolved fluorescence responses.

  15. Fundamental studies of fusion plasmas

    SciTech Connect

    Aamodt, R.E.; Catto, P.J.; D'Ippolito, D.A.; Myra, J.R.; Russell, D.A.

    1992-05-26

    The major portion of this program is devoted to critical ICH phenomena. The topics include edge physics, fast wave propagation, ICH induced high frequency instabilities, and a preliminary antenna design for Ignitor. This research was strongly coordinated with the world's experimental and design teams at JET, Culham, ORNL, and Ignitor. The results have been widely publicized at both general scientific meetings and topical workshops including the speciality workshop on ICRF design and physics sponsored by Lodestar in April 1992. The combination of theory, empirical modeling, and engineering design in this program makes this research particularly important for the design of future devices and for the understanding and performance projections of present tokamak devices. Additionally, the development of a diagnostic of runaway electrons on TEXT has proven particularly useful for the fundamental understanding of energetic electron confinement. This work has led to a better quantitative basis for quasilinear theory and the role of magnetic vs. electrostatic field fluctuations on electron transport. An APS invited talk was given on this subject and collaboration with PPPL personnel was also initiated. Ongoing research on these topics will continue for the remainder of the contract period and the strong collaborations are expected to continue, enhancing both the relevance of the work and its immediate impact on areas needing critical understanding.

  16. Simulating Supercapacitors: Can We Model Electrodes As Constant Charge Surfaces?

    PubMed

    Merlet, Céline; Péan, Clarisse; Rotenberg, Benjamin; Madden, Paul A; Simon, Patrice; Salanne, Mathieu

    2013-01-17

    Supercapacitors based on an ionic liquid electrolyte and graphite or nanoporous carbon electrodes are simulated using molecular dynamics. We compare a simplified electrode model in which a constant, uniform charge is assigned to each carbon atom with a realistic model in which a constant potential is applied between the electrodes (the carbon charges are allowed to fluctuate). We show that the simulations performed with the simplified model do not provide a correct description of the properties of the system. First, the structure of the adsorbed electrolyte is partly modified. Second, dramatic differences are observed for the dynamics of the system during transient regimes. In particular, upon application of a constant applied potential difference, the increase in the temperature, due to the Joule effect, associated with the creation of an electric current across the cell follows Ohm's law, while unphysically high temperatures are rapidly observed when constant charges are assigned to each carbon atom. PMID:26283432
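
    A toy illustration (not the molecular dynamics of the paper) of the physics the constant-charge model misses: for a single ion between two ideal grounded plates, Green's reciprocity makes the induced plate charges track the ion's position, whereas a constant-charge electrode stays frozen no matter where the ion is.

        import numpy as np

        L = 1.0                       # electrode gap (arbitrary units)
        q = 1.0                       # ion charge
        z = np.linspace(0.1, 0.9, 5)  # ion positions between the plates

        # Constant-potential (grounded) plates: induced charges follow the
        # ion (Green's reciprocity for infinite parallel plates).
        Q_left = -q * (1 - z / L)
        Q_right = -q * z / L

        # Constant-charge model: plate charges frozen, e.g. at the mid-gap
        # values, regardless of where the ion actually is.
        Q_frozen = -q / 2

        for zi, ql, qr in zip(z, Q_left, Q_right):
            print(f"z = {zi:.1f}: Q_left = {ql:+.2f}, Q_right = {qr:+.2f} "
                  f"(frozen model: {Q_frozen:+.2f} each)")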

  18. On the fundamental role of dynamics in quantum physics

    NASA Astrophysics Data System (ADS)

    Hofmann, Holger F.

    2016-05-01

    Quantum theory expresses the observable relations between physical properties in terms of probabilities that depend on the specific context described by the "state" of a system. However, the laws of physics that emerge at the macroscopic level are fully deterministic. Here, it is shown that the relation between quantum statistics and deterministic dynamics can be explained in terms of ergodic averages over complex-valued probabilities, where the fundamental causality of motion is expressed by an action that appears as the phase of the complex probability multiplied by the fundamental constant ħ. Importantly, classical physics emerges as an approximation of this more fundamental theory of motion, indicating that the assumption of a classical reality described by differential geometry is merely an artefact of an extrapolation from the observation of macroscopic dynamics to a fictitious level of precision that does not exist within our actual experience of the world around us. It is therefore possible to completely replace the classical concept of trajectories with the more fundamental concept of action phase probabilities as a universally valid description of the deterministic causality of motion that is observed in the physical world.

  19. Fundamental Principles of Proper Space Kinematics

    NASA Astrophysics Data System (ADS)

    Wade, Sean

    It is desirable to understand the movement of both matter and energy in the universe based upon fundamental principles of space and time. Time dilation and length contraction are features of Special Relativity derived from the observed constancy of the speed of light. Quantum Mechanics asserts that motion in the universe is probabilistic and not deterministic. While the practicality of these dissimilar theories is well established through widespread application, inconsistencies in their marriage persist, marring their utility and preventing their full expression. After identifying an error in perspective, the current theories are tested by modifying logical assumptions to eliminate paradoxical contradictions. Analysis of simultaneous frames of reference leads to a new formulation of space and time that predicts the motion of both kinds of particles. Proper Space is a real, three-dimensional space clocked by proper time that is undergoing a densification at the rate of c. Coordinate transformations to a familiar object space and a mathematical stationary space clarify the counterintuitive aspects of Special Relativity. These symmetries demonstrate that within the local universe stationary observers are a forbidden frame of reference; all is in motion. In lieu of Quantum Mechanics and Uncertainty, the use of the imaginary number i is restricted to the labeling of mass as either material or immaterial. This material phase difference accounts for both the perceived constant velocity of light and its apparent statistical nature. The application of Proper Space Kinematics will advance more accurate representations of microscopic, macroscopic, and cosmological processes and serve as a foundation for further study and reflection, thereafter leading to greater insight.

  20. Fundamental structures of dynamic social networks.

    PubMed

    Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune

    2016-09-01

    Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision.
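
    A minimal sketch of the slicing step described here, using invented Bluetooth proximity events and taking gatherings to be the connected components of each 5-minute contact graph:

        import networkx as nx

        # (timestamp_s, person_a, person_b) proximity events; placeholder data
        events = [(0, 1, 2), (60, 2, 3), (90, 4, 5), (360, 1, 3)]
        SLICE = 300  # 5-minute time slices, as in the study

        def gatherings(events, t0):
            """Connected components of the contact graph in one time slice."""
            g = nx.Graph()
            g.add_edges_from((a, b) for t, a, b in events
                             if t0 <= t < t0 + SLICE)
            return [set(c) for c in nx.connected_components(g)]

        print(gatherings(events, 0))    # -> [{1, 2, 3}, {4, 5}]
        print(gatherings(events, 300))  # -> [{1, 3}]
        # A "core" is then a group of individuals recurring across many slices.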

  1. The dependency of timbre on fundamental frequency.

    PubMed

    Marozeau, Jeremy; de Cheveigné, Alain; McAdams, Stephen; Winsberg, Suzanne

    2003-11-01

    The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In experiment I subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In experiment II the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0's, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, suggesting further that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently from differences of pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0's. PMID:14650028
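
    A minimal sketch of the MDS step, with an invented 4×4 dissimilarity matrix standing in for the averaged ratings:

        import numpy as np
        from sklearn.manifold import MDS

        # Hypothetical symmetric timbre-dissimilarity matrix, four instruments
        D = np.array([[0.0, 0.3, 0.7, 0.8],
                      [0.3, 0.0, 0.6, 0.7],
                      [0.7, 0.6, 0.0, 0.2],
                      [0.8, 0.7, 0.2, 0.0]])

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        timbre_space = mds.fit_transform(D)  # 2-D coordinates per instrument
        print(timbre_space)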

  4. Searching for space-time variation of the fine structure constant using QSO spectra: overview and future prospects

    NASA Astrophysics Data System (ADS)

    Berengut, J. C.; Dzuba, V. A.; Flambaum, V. V.; King, J. A.; Kozlov, M. G.; Murphy, M. T.; Webb, J. K.

    2010-11-01

    Current theories that seek to unify gravity with the other fundamental interactions suggest that spatial and temporal variation of fundamental constants is a possibility, or even a necessity, in an expanding Universe. Several studies have tried to probe the values of constants at earlier stages in the evolution of the Universe, using tools such as big-bang nucleosynthesis, the Oklo natural nuclear reactor, quasar absorption spectra, and atomic clocks (see, e.g. Flambaum & Berengut (2009)).

  5. Is There a Cosmological Constant?

    NASA Technical Reports Server (NTRS)

    Kochanek, Christopher; Oliversen, Ronald J. (Technical Monitor)

    2002-01-01

    The grant contributed to the publication of 18 refereed papers and 5 conference proceedings. The primary uses of the funding have been for page charges, travel for invited talks related to the grant research, and the support of a graduate student, Charles Keeton. The refereed papers address four of the primary goals of the proposal: (1) the statistics of radio lenses as a probe of the cosmological model (#1), (2) the role of spiral galaxies as lenses (#3), (3) the effects of dust on statistics of lenses (#7, #8), and (4) the role of groups and clusters as lenses (#2, #6, #10, #13, #15, #16). Four papers (#4, #5, #11, #12) address general issues of lens models, calibrations, and the relationship between lens galaxies and nearby galaxies. One considered cosmological effects in lensing X-ray sources (#9), and two addressed issues related to the overall power spectrum and theories of gravity (#17, #18). Our theoretical studies, combined with the explosion in the number of lenses and the quality of the data obtained for them, are greatly increasing our ability to characterize and understand the lens population. We can now firmly conclude, both from our study of the statistics of radio lenses and our survey of extinctions in individual lenses, that the statistics of optically selected quasars were significantly affected by extinction. However, the limits on the cosmological constant remain at lambda < 0.65 at a 2-sigma confidence level, which is in mild conflict with the results of the Type Ia supernova surveys. We continue to find that neither spiral galaxies nor groups and clusters contribute significantly to the production of gravitational lenses. The lack of group and cluster lenses is strong evidence for the role of baryonic cooling in increasing the efficiency of galaxies as lenses compared to groups and clusters of higher mass but lower central density. Unfortunately for the ultimate objective of the proposal, improved constraints on the cosmological constant, the next

  6. Fundamental Solutions and Optimal Control of Neutral Systems

    NASA Astrophysics Data System (ADS)

    Liu, Kai

    In this work, we shall consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation of constants formula of mild solutions is established. Necessary conditions in terms of the solutions of neutral adjoint systems are established to deal with the fixed time integral convex cost problem of optimality. Based on optimality conditions, the maximum principle for time varying control domain is presented.
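
    Schematically, writing G(t) for the fundamental solution, the variation of constants formula for the mild solution takes the familiar form (stated loosely here, suppressing the neutral delay terms that complicate the actual construction):

        \[ u(t) = G(t)\,\varphi + \int_0^t G(t-s)\, f(s)\, \mathrm{d}s , \]

    where φ is the initial state and f the forcing or control term.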

  7. Fundamental questions relating to ion conduction in disordered solids

    NASA Astrophysics Data System (ADS)

    Dyre, Jeppe C.; Maass, Philipp; Roling, Bernhard; Sidebottom, David L.

    2009-04-01

    A number of basic scientific questions relating to ion conduction in homogeneously disordered solids are discussed. The questions deal with how to define the mobile ion density, what can be learnt from electrode effects, what the ion transport mechanism is, the role of dimensionality and what the origins of the mixed-alkali effect, the time-temperature superposition, and the nearly constant loss are. Answers are suggested to some of these questions, but the main purpose of the paper is to draw attention to the fact that this field of research still presents several fundamental challenges.

  8. Fundamental Mechanisms of Interface Roughness

    SciTech Connect

    Randall L. Headrick

    2009-01-06

    Publication quality results were obtained for several experiments and materials systems including: (i) Patterning and smoothening of sapphire surfaces by energetic Ar+ ions. Grazing Incidence Small Angle X-ray Scattering (GISAXS) experiments were performed in the system at the National Synchrotron Light Source (NSLS) X21 beamline. Ar+ ions in the energy range from 300 eV to 1000 eV were used to produce ripples on the surfaces of single-crystal sapphire. It was found that the ripple wavelength varies strongly with the angle of incidence of the ions, increasing significantly as the angle from normal is varied from 55° to 35°. A smooth region was found for ion incidence less than 35° away from normal incidence. In this region a strong smoothening mechanism, with strength proportional to the second derivative of the height of the surface, was found to be responsible for the effect. The discovery of this phase transition between stable and unstable regimes as the angle of incidence is varied has also stimulated new work by other groups in the field. (ii) Growth of Ge quantum dots on Si(100) and (111). We discovered the formation of quantum wires on 4° misoriented Si(111) using real-time GISAXS during the deposition of Ge. The results represent the first time-resolved GISAXS study of Ge quantum dot formation. (iii) Sputter deposition of amorphous thin films and multilayers composed of WSi2 and Si. Our in-situ GISAXS experiments reveal fundamental roughening and smoothing phenomena on surfaces during film deposition. The main result of this work is that the WSi2 layers actually become smoother during deposition due to the smoothening effect of energetic particles in the sputter deposition process.

  9. Constant strain frequency sweep measurements on granite rock.

    PubMed

    Haller, Kristian C E; Hedberg, Claes M

    2008-02-15

    Like many materials, granite exhibits both nonlinear acoustic distortion and slow nonequilibrium dynamics. Measurements to date have shown a response from both phenomena simultaneously, thus cross-contaminating the results. In this Letter, constant strain frequency sweep measurements eliminate the slow dynamics and, for the first time, permit evaluation of the nonlinearity by itself, which is characterized by lower resonance frequencies and a steeper slope. Measurements such as these are necessary for the fundamental understanding of material dynamics, and for the creation and validation of descriptive models.

  10. Ground-state rotational constants of ¹²CH₃D

    NASA Astrophysics Data System (ADS)

    Chackerian, C.; Guelachvili, G.

    1980-12-01

    An analysis of ground-state combination differences in the ν₂(A₁) fundamental band of ¹²CH₃D (ν₀ = 2200.03896 cm⁻¹) has been made to yield values for the rotational constants B₀, D₀ᴶ, D₀ᴶᴷ, H₀ᴶᴶᴶ, H₀ᴶᴶᴷ, H₀ᴶᴷᴷ, L₀ᴶᴶᴶᴶ, L₀ᴶᴶᴶᴷ, and order-of-magnitude values for L₀ᴶᴶᴷᴷ and L₀ᴶᴷᴷᴷ. These constants should be useful in assisting radio searches for this molecule in astrophysical sources. In addition, splittings of the A₁A₂ levels (J ≥ 17, K = 3) have been measured in both the ground and excited vibrational states of this band.
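
    For orientation, the ground-state term values of a prolate symmetric top such as CH₃D have the standard form (a sketch, truncated after the quartic terms; constants as defined above):

        \[ F_0(J,K) = B_0 J(J+1) + (A_0 - B_0)K^2 - D_0^J J^2(J+1)^2
                      - D_0^{JK} J(J+1)K^2 - D_0^K K^4 + \cdots \]

    so that, at fixed K, the ground-state combination differences extracted from a fundamental band obey

        \[ R(J-1) - P(J+1) = F_0(J+1,K) - F_0(J-1,K)
           \approx \left(4B_0 - 6D_0^J - 4D_0^{JK}K^2\right)\left(J+\tfrac{1}{2}\right)
                   - 8D_0^J\left(J+\tfrac{1}{2}\right)^3 , \]

    which is how fits of such differences determine B₀ and the centrifugal distortion constants.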

  11. Induced cosmological constant and other features of asymmetric brane embedding

    SciTech Connect

    Shtanov, Yuri; Sahni, Varun; Shafieloo, Arman; Toporensky, Alexey

    2009-04-15

    We investigate the cosmological properties of an 'induced gravity' brane scenario in the absence of mirror symmetry with respect to the brane. We find that brane evolution can proceed along one of four distinct branches. By contrast, when mirror symmetry is imposed, only two branches exist, one of which represents the self-accelerating brane, while the other is the so-called normal branch. This model incorporates many of the well-known possibilities of brane cosmology including phantom acceleration (w < -1), self-acceleration, transient acceleration, quiescent singularities, and cosmic mimicry. Significantly, the absence of mirror symmetry also provides an interesting way of inducing a sufficiently small cosmological constant on the brane. A small (positive) Λ-term in this case is induced by a small asymmetry in the values of bulk fundamental constants on the two sides of the brane.

  12. The Fine-Structure Constant and Wavelength Calibration

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan

    The fine-structure constant is a fundamental constant of the universe, and widely thought to have an unchanging value. However, the past decade has witnessed a controversy unfold over the claimed detection that the fine-structure constant had a different value in the distant past. These astrophysical measurements were made with spectrographs at the world's largest optical telescopes. The spectrographs make precise measurements of the wavelength spacing of absorption lines in the metals in the gas between the quasar background source and our telescopes on Earth. The wavelength spacing gives a snapshot of the atomic physics at the time of the interaction. Whether the fine-structure constant has changed is determined by comparing the atomic physics in the distant past with the atomic physics of today. We present our contribution to the discussion by analyzing three nights of data taken with the HIRES instrument (High Resolution Echelle Spectrograph) on the Keck telescope. We provide an independent measurement of the fine-structure constant from the damped Lyman-alpha system at redshift z = 2.309 (10.8 billion years ago) toward the quasar PHL957. We developed a new method for calibrating the wavelength scale of a quasar exposure to a much higher precision than previously achieved. In our subsequent analysis, we discovered unexpected wavelength calibration errors that had not been taken into account in previously reported measurements. After characterizing the wavelength miscalibrations on the Keck-HIRES instrument, we obtained several nights of data from the main competing instrument, the VLT (Very Large Telescope) with UVES (Ultraviolet and Visual Echelle Spectrograph). We applied our new wavelength calibration method and uncovered systematic errors similar in nature to those found on Keck-HIRES. Finally, we make a detailed Monte Carlo exploration of the effects that these miscalibrations have on precision fine-structure constant measurements.
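
    For context, the sensitivity relation underlying such measurements (the standard many-multiplet parametrization) is

        \[ \omega_z = \omega_0 + q\,x, \qquad
           x \equiv \left(\alpha_z/\alpha_0\right)^2 - 1 \approx 2\,\Delta\alpha/\alpha , \]

    where ω₀ is the laboratory wavenumber, ω_z the wavenumber measured at redshift z, and q a transition-dependent sensitivity coefficient; a wavelength miscalibration Δλ therefore mimics a velocity shift Δv/c = Δλ/λ and can masquerade as a nonzero Δα/α.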

  13. Capacitive Cells for Dielectric Constant Measurement

    ERIC Educational Resources Information Center

    Aguilar, Horacio Munguía; Maldonado, Rigoberto Franco

    2015-01-01

    A simple capacitive cell for dielectric constant measurement in liquids is presented. As an illustrative application, the cell is used for measuring the degradation of overheated edible oil through the evaluation of its dielectric constant.

  14. High voltage compliance constant current ballast

    NASA Technical Reports Server (NTRS)

    Rosenthal, L. A.

    1976-01-01

    A ballast circuit employing a constant-current diode and a vacuum tube can provide a constant current over a voltage range of 1000 volts. This simple circuit can prove useful in studying voltage breakdown characteristics.

  15. Astronomia Motivadora no Ensino Fundamental

    NASA Astrophysics Data System (ADS)

    Melo, J.; Voelzke, M. R.

    2008-09-01

    The main objective of this work is to develop students' interest in the sciences through Astronomy. A survey with questions about Astronomy was carried out among 161 students of Ensino Fundamental (elementary school), with the aim of discovering the students' prior knowledge of the subject. It was found, for example, that 29.3% of 6th-grade students answered correctly what an eclipse is, 30.0% of the 8th grade knew what Astronomy studies, while 42.3% of 5th-grade students could define the Sun. The intention is to broaden the participating classes and to work, mainly in a practical way, with: dimensions and scales in the Solar System, construction of a small telescope, and questions such as day and night, the seasons of the year, and eclipses. The aim is also to address other Physics content, such as optics in the construction of the telescope, and mechanics in the work with scales and measurements and in the use of a lamp to represent the Sun in the eclipse activity, as well as other disciplines, such as Mathematics in unit conversions and rule-of-three proportions; Arts in the modeling or drawing of the planets; History in the search for the origin of the universe; and Informatics, which enables faster searches for information as well as simulations and the visualization of important images. It is believed that Astronomy is important in the teaching-learning process, since it allows the discussion of intriguing topics such as the origin of the universe, space travel, and the existence or not of life on other planets, in addition to current topics such as new technologies.

  16. Elastic constants of Ultrasonic Additive Manufactured Al 3003-H18.

    PubMed

    Foster, D R; Dapino, M J; Babu, S S

    2013-01-01

    Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a layered manufacturing process in which thin metal foils are ultrasonically bonded to a previously bonded foil substrate to create a net-shape part. Optimization of process variables (amplitude, normal load, and velocity) is done to minimize voids along the bonded interfaces. This work pertains to the evaluation of bonds in UAM builds through ultrasonic testing of a build's elastic constants. Results from ultrasonic testing of UAM parts indicate orthotropic material symmetry and a reduction of up to 48% in elastic constant values compared to a control sample. The reduction in elastic constant values is attributed to interfacial voids. In addition, the elastic constants in the plane of the Al foils have nearly the same value, while the constants normal to the foil direction have much different values. In contrast, measurements from builds made with Very High Power Ultrasonic Additive Manufacturing (VHP UAM) show a drastic improvement in elastic properties, approaching values similar to those of bulk aluminum.
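
    A minimal sketch of how one diagonal stiffness follows from an ultrasonic time-of-flight measurement via C = ρv²; the numbers are illustrative placeholders, not the paper's data:

        rho = 2730.0       # nominal density of Al 3003, kg/m^3
        thickness = 0.010  # sample height normal to the foils, m
        t_flight = 1.6e-6  # one-way longitudinal time of flight, s (invented)

        v = thickness / t_flight  # wave speed, m/s
        C33 = rho * v ** 2        # stiffness normal to the foil plane, Pa
        print(f"v = {v:.0f} m/s, C33 = {C33 / 1e9:.1f} GPa")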

  17. Fundamentals of clinical outcomes assessment for spinal disorders: study designs, methodologies, and analyses.

    PubMed

    Vavken, Patrick; Ganal-Antonio, Anne Kathleen B; Shen, Francis H; Chapman, Jens R; Samartzis, Dino

    2015-04-01

    Study Design: A broad narrative review. Objective: Management of spinal disorders is continuously evolving, with new technologies being constantly developed. Regardless, assessment of patient outcomes is key to understanding the safety and efficacy of various therapeutic interventions. As such, evidence-based spine care is an essential component of the spine specialist's armamentarium for critically analyzing the reported literature and executing studies that improve patient care and change clinical practice. The following article, part one of a two-part series, is meant to bring attention to the pros and cons of various study designs, their methodological issues, and statistical considerations. Methods: An extensive review of the peer-reviewed literature was performed, irrespective of the language of publication, addressing study designs and their methodologies as well as statistical concepts. Results: Numerous articles and concepts addressing study designs and their methodological considerations as well as statistical analytical concepts have been reported. Their applications in the context of spine-related conditions and disorders were noted. Conclusion: Understanding the fundamental principles of study designs and their methodological considerations as well as statistical analyses can further advance and improve future spine-related research.

  18. Empirical Examination of Fundamental Indexation in the German Market

    NASA Astrophysics Data System (ADS)

    Mihm, Max; Locarek-Junge, Hermann

    Index Funds, Exchange Traded Funds and Derivatives give investors easy access to well diversified index portfolios. These index-based investment products exhibit low fees, which make them an attractive alternative to actively managed funds. Against this background, a new class of stock indices has been established based on the concept of “Fundamental Indexation”. The selection and weighting of index constituents is conducted by means of fundamental criteria like total assets, book value or number of employees. This paper examines the performance of fundamental indices in the German equity market. For this purpose, a backtest of five fundamental indices is conducted over the last 20 years. Furthermore the index returns are analysed under the assumption of an efficient as well as an inefficient market. Index returns in efficient markets are explained by applying the three factor model for stock returns of Fama and French (J Financ Econ 33(1):3-56, 1993). The results show that the outperformance of fundamental indices is partly due to a higher risk exposure, particularly to companies with a low price to book ratio. By relaxing the assumption of market efficiency, a return drag of capitalisation weighted indices can be deduced. Given a mean-reverting movement of prices, a direct connection between market capitalisation and index weighting leads to inferior returns.
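
    A minimal sketch of the weighting idea with invented numbers: index weights are set by a fundamental size measure (here book value) instead of market capitalisation:

        import numpy as np

        # Hypothetical data for four stocks (arbitrary units)
        market_cap = np.array([100.0, 60.0, 25.0, 15.0])
        book_value = np.array([40.0, 35.0, 20.0, 15.0])
        returns = np.array([0.04, 0.07, 0.10, 0.02])  # next-period returns

        cap_weights = market_cap / market_cap.sum()
        fund_weights = book_value / book_value.sum()  # fundamental indexation

        print("cap-weighted return:", cap_weights @ returns)
        print("fundamental return: ", fund_weights @ returns)

    Under mean reversion, overpriced stocks carry inflated market-cap weights, which is the return drag the paper deduces for capitalisation-weighted indices.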

  19. Ultralight porous metals: From fundamentals to applications

    NASA Astrophysics Data System (ADS)

    Tianjian, Lu

    2002-10-01

    Over the past few years a number of low cost metallic foams have been produced and used as the core of sandwich panels and net shaped parts. The main aim is to develop lightweight structures which are stiff, strong, able to absorb large amounts of energy, and cheap, for application in the transport and construction industries. For example, the firewall between the engine and passenger compartment of an automobile must have adequate mechanical strength, good energy and sound absorbing properties, and adequate fire retardance. Metal foams provide all of these features, and are under serious consideration for this application by a number of automobile manufacturers (e.g., BMW and Audi). Additional specialized applications for foam-cored sandwich panels range from heat sinks for electronic devices to crash barriers for automobiles, from the construction panels in lifts on aircraft carriers to the luggage containers of aircraft, from sound proofing walls along railway tracks and highways to acoustic absorbers in lean premixed combustion chambers. But there is a problem. Before metallic foams can find a widespread application, their basic properties must be measured, and ideally modeled as a function of microstructural details, in order to be included in a design. This work aims at reviewing the recent progress and presenting some new results on fundamental research regarding the micromechanical origins of the mechanical, thermal, and acoustic properties of metallic foams.

  20. Fundamentals of materials accounting for nuclear safeguards

    SciTech Connect

    Pillay, K.K.S.

    1989-04-01

    Materials accounting is essential to providing the necessary assurance for verifying the effectiveness of a safeguards system. The use of measurements, analyses, records, and reports to maintain knowledge of the quantities of nuclear material present in a defined area of a facility and the use of physical inventories and materials balances to verify the presence of special nuclear materials are collectively known as materials accounting for nuclear safeguards. This manual, prepared as part of the resource materials for the Safeguards Technology Training Program of the US Department of Energy, addresses fundamental aspects of materials accounting, enriching and complementing them with the first-hand experiences of authors from varied disciplines. The topics range from highly technical subjects to site-specific system designs and policy discussions. This collection of papers is prepared by more than 25 professionals from the nuclear safeguards field. Representing research institutions, industries, and regulatory agencies, the authors create a unique resource for the annual course titled "Materials Accounting for Nuclear Safeguards," which is offered at the Los Alamos National Laboratory.

  1. Remote Sensing of Salinity: The Dielectric Constant of Sea Water

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.

    2011-01-01

    Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.
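
    For context, small-sample cavity perturbation relates the shifts in resonant frequency f and quality factor Q to the complex permittivity; schematically (A is a mode- and geometry-dependent factor, V_c and V_s the cavity and sample volumes, subscripts 0 and s for the empty and loaded cavity):

        \[ \varepsilon' - 1 \;\approx\; A\, \frac{f_0 - f_s}{f_s}\, \frac{V_c}{V_s},
           \qquad
           \varepsilon'' \;\approx\; \frac{A}{2} \left(\frac{1}{Q_s} - \frac{1}{Q_0}\right) \frac{V_c}{V_s} . \]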

  2. Moving-Gradient Furnace With Constant-Temperature Cold Zone

    NASA Technical Reports Server (NTRS)

    Gernert, Nelson J.; Shaubach, Robert M.

    1993-01-01

    Outer heat pipe helps in controlling temperature of cold zone of furnace. Part of heat-pipe furnace that includes cold zone surrounded by another heat pipe equipped with heater at one end and water cooling coil at other end. Temperature of heat pipe maintained at desired constant value by controlling water cooling. Serves as constant-temperature heat source or heat sink, as needed, for gradient of temperature as gradient region moved along furnace. Proposed moving-gradient heat-pipe furnace used in terrestrial or spaceborne experiments on directional solidification in growth of crystals.

  3. Fundamental limits in heat-assisted magnetic recording and methods to overcome it with exchange spring structures

    SciTech Connect

    Suess, D.; Abert, C.; Bruckner, F.; Windl, R.; Vogler, C.; Breth, L.; Fidler, J.

    2015-04-28

    The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as in conventional recording, which is about 0.5–0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10⁻⁴ if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As for single FePt grains, in the composite structure an increase in switching probability is bought at the cost of an increase in thermal jitter. Structures utilizing first-order phase transitions that break the thermal jitter and writability dilemma are discussed.
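
    In the simplest single-grain picture (a schematic Néel-Arrhenius estimate, not the authors' full micromagnetic model), the written-in error rate is governed by

        \[ P_\mathrm{sw}(t) = 1 - \exp\!\left[-f_0\, t\, e^{-\Delta E/k_B T}\right],
           \qquad \Delta E \approx K_u V \left(1 - H/H_K\right)^2 , \]

    with attempt frequency f₀, grain volume V, anisotropy constant K_u, and anisotropy field H_K; near the Curie temperature K_u collapses, ΔE becomes comparable to k_B T, and thermally written-in errors result.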

  4. Statistical Modelling of the Soil Dielectric Constant

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy

    2010-05-01

    the soil type, and in that way it enables clear comparison with results from other soil-type-dependent models. The paper is focused on properly representing the possible range of porosity in commonly occurring soils. This work is done with the aim of implementing the statistical-physical model of the dielectric constant for use in CMEM (Community Microwave Emission Model), applicable to SMOS (Soil Moisture and Ocean Salinity ESA Mission) data. The model accepts input data defining the soil fractions in common physical measures and, unlike other empirical models, does not need calibration. It does not depend on recognizing the soil by type, but instead offers control of accuracy through proper determination of the soil constituent fractions. SMOS employs CMEM, which is driven only by the sand-clay-silt composition. Soil data in common use are split into tens or even hundreds of soil types, depending on the region. We hope that determining the three-component sand-clay-silt composition in a few fractions may help resolve the question of the relevance of soil data as input to CMEM for SMOS. At present, traditionally employed soil types are converted to sand-clay-silt compositions, but these hardly cover the effects of other specific properties such as porosity. This should bring advantages in validating SMOS observation data, and it is the aim of Cal/Val project 3275, within the campaigns for SVRT (SMOS Validation and Retrieval Team). Acknowledgements. This work was funded in part by the PECS - Programme for European Cooperating States, No. 98084 "SWEX/R - Soil Water and Energy Exchange/Research".
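
    For comparison only: the simplest alternative to such a statistical-physical model is a refractive (Birchak-type) volumetric mixing rule; the sketch below implements that simpler rule, not the model of the paper.

        import numpy as np

        def birchak_eps(fractions, eps, alpha=0.5):
            """Refractive mixing rule: eps_eff**alpha = sum_i f_i*eps_i**alpha."""
            fractions = np.asarray(fractions, dtype=float)
            eps = np.asarray(eps, dtype=float)
            return (fractions * eps ** alpha).sum() ** (1 / alpha)

        # Illustrative three-phase soil: solid matrix, water, air
        # (volume fractions sum to 1; permittivities are rough textbook values)
        print(birchak_eps([0.5, 0.2, 0.3], [4.7, 80.0, 1.0]))  # ~10.1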

  5. Investigating the Fundamental Theorem of Calculus

    ERIC Educational Resources Information Center

    Johnson, Heather L.

    2010-01-01

    The fundamental theorem of calculus, in its simplified complexity, connects differential and integral calculus. The power of the theorem comes not merely from recognizing it as a mathematical fact but from using it as a systematic tool. As a high school calculus teacher, the author developed and taught lessons on this fundamental theorem that were…
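
    For reference, the theorem in its two usual parts (f continuous on [a, b], F an antiderivative of f):

        \[ \frac{d}{dx}\int_a^x f(t)\,dt = f(x), \qquad
           \int_a^b f(x)\,dx = F(b) - F(a) . \]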

  6. Fundamentals of fossil simulator instructor training

    SciTech Connect

    Not Available

    1984-01-01

    This single-volume, looseleaf text introduces the beginning instructor to fundamental instructor training principles, and then shows how to apply those principles to fossil simulator training. Topics include the fundamentals of classroom instruction, the learning process, course development, and the specifics of simulator training program development.

  7. Individual differences in fundamental social motives.

    PubMed

    Neel, Rebecca; Kenrick, Douglas T; White, Andrew Edward; Neuberg, Steven L

    2016-06-01

    Motivation has long been recognized as an important component of how people both differ from, and are similar to, each other. The current research applies the biologically grounded fundamental social motives framework, which assumes that human motivational systems are functionally shaped to manage the major costs and benefits of social life, to understand individual differences in social motives. Using the Fundamental Social Motives Inventory, we explore the relations among the different fundamental social motives of Self-Protection, Disease Avoidance, Affiliation, Status, Mate Seeking, Mate Retention, and Kin Care; the relationships of the fundamental social motives to other individual difference and personality measures including the Big Five personality traits; the extent to which fundamental social motives are linked to recent life experiences; and the extent to which life history variables (e.g., age, sex, childhood environment) predict individual differences in the fundamental social motives. Results suggest that the fundamental social motives are a powerful lens through which to examine individual differences: They are grounded in theory, have explanatory value beyond that of the Big Five personality traits, and vary meaningfully with a number of life history variables. A fundamental social motives approach provides a generative framework for considering the meaning and implications of individual differences in social motivation.

  8. Fundamentals of Physics, Problem Supplement No. 1

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2000-05-01

    No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.

  9. Fundamentals of Physics, 7th Edition

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-05-01

    No other book on the market today can match the 30-year success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving. This book offers a unique combination of authoritative content and stimulating applications.

  10. Fundamentals of Physics, Student's Solutions Manual

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2000-07-01

    No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.

  11. Dimensionless constants, cosmology, and other dark matters

    SciTech Connect

    Tegmark, Max; Aguirre, Anthony; Rees, Martin J.; Wilczek, Frank

    2006-01-15

    We identify 31 dimensionless physical constants required by particle physics and cosmology, and emphasize that both microphysical constraints and selection effects might help elucidate their origin. Axion cosmology provides an instructive example, in which these two kinds of arguments must both be taken into account, and work well together. If a Peccei-Quinn phase transition occurred before or during inflation, then the axion dark matter density will vary from place to place with a probability distribution. By calculating the net dark matter halo formation rate as a function of all four relevant cosmological parameters and assessing other constraints, we find that this probability distribution, computed at stable solar systems, is arguably peaked near the observed dark matter density. If cosmologically relevant weakly interacting massive particle (WIMP) dark matter is discovered, then one naturally expects comparable densities of WIMPs and axions, making it important to follow up with precision measurements to determine whether WIMPs account for all of the dark matter or merely part of it.

  12. Fundamental Frequency Variation with an Electrolarynx Improves Speech Understanding: A Case Study

    ERIC Educational Resources Information Center

    Watson, Peter J.; Schlauch, Robert S.

    2009-01-01

    Purpose: This study examined the effect of fundamental frequency (F0) variation on the intelligibility of speech in an alaryngeal talker who used an electrolarynx (EL). Method: One experienced alaryngeal talker produced variable F0 and a constant F0 with his EL as he read sentences aloud. As a control, a group of sentences with variable F0 was…

  13. Emergent cosmological constant from colliding electromagnetic waves

    SciTech Connect

    Halilsoy, M.; Mazharimousavi, S. Habib; Gurtug, O. E-mail: habib.mazhari@emu.edu.tr

    2014-11-01

    In this study we advocate the view that the cosmological constant is of electromagnetic (em) origin and can be generated from the collision of em shock waves coupled with gravitational shock waves. The wave profiles that participate in the collision have different amplitudes. It is shown that circular polarization with equal-amplitude waves does not generate a cosmological constant. We also prove that the generation of the cosmological constant is related to linear polarization. The addition of cross polarization generates no cosmological constant. Depending on the value of the wave amplitudes, the generated cosmological constant can be positive or negative. We show additionally that the collision of nonlinear em waves in a particular class of Born-Infeld theory also yields a cosmological constant.

  14. The gaseous explosive reaction at constant pressure : the reaction order and reaction rate

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1931-01-01

    The data given in this report covers the explosive limits of hydrocarbon fuels. Incidental to the purpose of the investigation here reported, the explosive limits will be found to be expressed for the condition of constant pressure, in the fundamental terms of concentrations (partial pressures) of fuel and oxygen.

  15. Effective elastic constants of polycrystalline aggregates

    NASA Astrophysics Data System (ADS)

    Bonilla, Luis L.

    A method is presented for the determination of the effective elastic constants of a transversely isotropic aggregate of weakly anisotropic crystallites with cubic symmetry. The results obtained generalize those given in the literature for the second- and third-order elastic constants. In addition, the second moments and the binary angular correlations of the second-order stiffnesses are obtained. It is also explained how these moments can be used to find the two-point correlations of the elastic constants.
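
    As a baseline for such averaging schemes, the standard isotropic Voigt and Reuss results for an aggregate of cubic crystallites are (quoted for orientation; the transversely isotropic case treated here generalizes them):

        \[ K = \frac{C_{11} + 2C_{12}}{3}, \qquad
           G_V = \frac{C_{11} - C_{12} + 3C_{44}}{5}, \qquad
           G_R = \frac{5\,(C_{11} - C_{12})\,C_{44}}{4C_{44} + 3\,(C_{11} - C_{12})} . \]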

  16. Constant voltage electro-slag remelting control

    DOEpatents

    Schlienger, Max E.

    1996-01-01

    A system for controlling electrode gap in an electro-slag remelt furnace has a constant regulated voltage and an electrode which is fed into the slag pool at a constant rate. The impedance of the circuit through the slag pool is directly proportional to the gap distance. Because of the constant voltage, the system current changes are inversely proportional to changes in gap. This negative feedback causes the gap to remain stable.

  17. Constant voltage electro-slag remelting control

    DOEpatents

    Schlienger, M.E.

    1996-10-22

    A system for controlling electrode gap in an electro-slag remelt furnace has a constant regulated voltage and an electrode which is fed into the slag pool at a constant rate. The impedance of the circuit through the slag pool is directly proportional to the gap distance. Because of the constant voltage, the system current changes are inversely proportional to changes in gap. This negative feedback causes the gap to remain stable. 1 fig.
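
    A toy numerical model of this negative feedback (all coefficients invented for illustration; the patent specifies none): the slag-gap impedance is taken proportional to the gap, so at constant voltage the current, and hence the melt-off rate, rises whenever the gap shrinks.

        V = 40.0         # regulated voltage, V
        k_z = 0.02       # impedance per mm of gap, ohm/mm (hypothetical)
        feed = 1.0       # constant electrode feed rate, mm/s
        k_melt = 2.5e-4  # melt-off rate per watt, mm/(s*W) (hypothetical)
        g = 30.0         # initial electrode gap, mm
        dt = 0.1         # time step, s

        for _ in range(20000):
            current = V / (k_z * g)      # constant voltage: I ~ 1/gap
            melt = k_melt * V * current  # melt-off rate follows Joule power
            g += (melt - feed) * dt      # melting opens gap, feeding closes it

        print(f"gap settles near {g:.1f} mm")  # k_melt*V**2/(k_z*feed) = 20 mm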

  18. BOOK REVIEWS: Quantum Mechanics: Fundamentals

    NASA Astrophysics Data System (ADS)

    Whitaker, A.

    2004-02-01

    This review is of three books, all published by Springer, all on quantum theory at a level above introductory, but very different in content, style and intended audience. That of Gottfried and Yan is of exceptional interest, historical and otherwise. It is a second edition of Gottfried’s well-known book published by Benjamin in 1966. This was written as a text for a graduate quantum mechanics course, and has become one of the most used and respected accounts of quantum theory, at a level mathematically respectable but not rigorous. Quantum mechanics was already solidly established by 1966, but this second edition gives an indication of progress made and changes in perspective over the last thirty-five years, and also recognises the very substantial increase in knowledge of quantum theory obtained at the undergraduate level. Topics absent from the first edition but included in the second include the Feynman path integral, seen in 1966 as an imaginative but not very useful formulation of quantum theory. Feynman methods were given only a cursory mention by Gottfried. Their practical importance has now been fully recognised, and a substantial account of them is provided in the new book. Other new topics include semiclassical quantum mechanics, motion in a magnetic field, the S matrix and inelastic collisions, radiation and scattering of light, identical particle systems and the Dirac equation. A topic that was all but totally neglected in 1966, but which has flourished increasingly since, is that of the foundations of quantum theory. John Bell’s work of the mid-1960s has led to genuine theoretical and experimental achievement, which has facilitated the development of quantum optics and quantum information theory. Gottfried’s 1966 book played a modest part in this development. When Bell became increasingly irritated with the standard theoretical approach to quantum measurement, Viki Weisskopf repeatedly directed him to Gottfried’s book. Gottfried had devoted a

  19. Modification of the characteristic gravitational constants

    NASA Astrophysics Data System (ADS)

    Vujičić, V. A.

    2006-08-01

    In the educational and scientific literature the numerical values of gravitational constants are seen as only approximately correct. The numerical values differ between the works of various researchers, as do the formulae and definitions of the constants employed. In this paper, on the basis of Newton's laws and Kepler's laws, we prove that it is necessary to modify the characteristic gravitational constants and their definitions. The geocentric gravitational constant is calculated from the orbits of the Kosmos N satellites and the Moon.
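
    The underlying relation is Kepler's third law for a satellite of negligible mass,

        \[ GM = \frac{4\pi^2 a^3}{T^2} , \]

    so, for example, the Moon's orbit (a ≈ 3.844 × 10⁸ m, sidereal period T ≈ 27.32 d ≈ 2.361 × 10⁶ s) gives GM ≈ 4.0 × 10¹⁴ m³ s⁻², close to the adopted geocentric constant 3.986 × 10¹⁴ m³ s⁻² once the Moon's own mass and solar perturbations are taken into account.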

  20. Variación temporal de las constantes fundamentales

    NASA Astrophysics Data System (ADS)

    Landau, S. J.; Vucetich, H.

    The time variation of the fundamental constants is a problem that has motivated numerous theoretical and experimental works since Dirac's large numbers hypothesis of 1937. Among the experimental and observational methods for setting bounds on the variation of the fundamental constants, it is important to mention: comparison between atomic clocks [1], geophysical methods [2][3], analysis of absorption systems in quasars [4][5][6], and bounds from primordial nucleosynthesis [7]. In a recent work [5], a significant variation of the fine-structure constant was reported. Attempts to unify the four fundamental interactions have resulted in theories with multiple dimensions, such as Kaluza-Klein theories and superstring theories. These theories provide a natural theoretical framework for the study of the time variation of the fundamental constants. In turn, a simple model for studying the variation of the fine-structure constant was proposed in [8], starting from very general premises such as covariance, gauge invariance, causality, and invariance under time reversal in electromagnetism. Different versions of the aforementioned theories agree in predicting time variations of the fundamental constants but differ in the form of this variation [9][10]. In this way, the bounds established experimentally on the variation of the fundamental constants can be an important tool for testing these different theories. In this work, we use the bounds obtained from various experimental techniques to test whether they are consistent with any of the aforementioned theories. In particular, we set bounds on the variation of the free parameters of the different theories, such as the radius of the extra dimensions in Kaluza-Klein-type theories.

  1. Varying constant cosmologies and cosmic singularities

    NASA Astrophysics Data System (ADS)

    Dabrowski, Mariusz P.; Marosek, Konrad

    2013-02-01

    We review standard and non-standard cosmological singularities, paying special attention to those which are of a weak type and do not necessarily exhibit geodesic incompleteness. Then, we discuss how these singularities can be weakened, strengthened, or avoided due to the time variation of physical constants such as the speed of light c and the gravitational constant G.

  2. Theoretical Analysis of One-Dimensional Pressure Diffusion from a Constant Upstream Pressure to a Constant Downstream Storage

    NASA Astrophysics Data System (ADS)

    Song, Insun

    2016-05-01

    The one-dimensional diffusion equation was solved to understand the pressure and flow behaviors along a cylindrical rock specimen for experimental boundary conditions of constant upstream pressure and constant downstream storage. The solution consists of a time-constant asymptotic part and a transient part that is a negative exponential function of time. This means that the transient flow decays exponentially with time and is eventually followed by a steady-state condition. For a given rock sample, the transient stage is shortest when the downstream storage is minimized. For this boundary condition, a simple equation was derived from the analytic solution to determine the hydraulic permeability from the initial flow rate during the transient stage. The specific storage of a rock sample can be obtained simply from the total flow into the sample during the entire transient stage if there is no downstream storage. In theory, both of these hydraulic properties could be obtained simultaneously from transient-flow stage measurements without a complicated curve fitting or inversion process. Sensitivity analysis showed that the derived permeability is more reliable for lower-permeability rock samples. In conclusion, the constant head method with no downstream storage might be more applicable to extremely low-permeability rocks if the upstream flow rate is measured precisely.
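
    A minimal numerical sketch of the measurement idea described above, assuming the transient flow follows the single-exponential form stated in the abstract, Q(t) = Q_ss + (Q_0 - Q_ss) exp(-t/tau), with permeability then obtained from Darcy's law at steady state. The paper's own closed-form equation is not reproduced here, so a simple exponential fit stands in for it; all symbols and sample values are illustrative, not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def transient_flow(t, q_ss, q0, tau):
            """Single-exponential transient decaying to a steady-state flow rate."""
            return q_ss + (q0 - q_ss) * np.exp(-t / tau)

        # Illustrative upstream flow-rate record (m^3/s) sampled during the transient.
        t = np.linspace(0, 600, 61)                      # s
        rng = np.random.default_rng(0)
        q = transient_flow(t, 2e-10, 8e-10, 90) + rng.normal(0, 5e-12, t.size)

        (q_ss, q0, tau), _ = curve_fit(transient_flow, t, q, p0=(1e-10, 1e-9, 60))

        # Darcy's law at steady state: k = Q * mu * L / (A * dP)
        mu, L, A, dP = 1e-3, 0.05, 1e-3, 1e6             # Pa.s, m, m^2, Pa
        k = q_ss * mu * L / (A * dP)
        print(f"tau = {tau:.1f} s, permeability k = {k:.2e} m^2")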

  3. String theory, cosmology and varying constants

    NASA Astrophysics Data System (ADS)

    Damour, Thibault

    In string theory the coupling 'constants' appearing in the low-energy effective Lagrangian are determined by the vacuum expectation values of some (a priori) massless scalar fields (dilaton, moduli). This naturally leads one to expect a correlated variation of all the coupling constants, and an associated violation of the equivalence principle. We review some string-inspired theoretical models which incorporate such a spacetime variation of coupling constants while remaining naturally compatible both with phenomenological constraints coming from geochemical data (Oklo; Rhenium decay) and with present equivalence principle tests. Barring a very unnatural fine-tuning of parameters, a variation of the fine-structure constant as large as that recently 'observed' by Webb et al. in quasar absorption spectra appears to be incompatible with these phenomenological constraints. Independently of any model, it is emphasized that the best experimental probes of varying constants are high-precision tests of the universality of free fall, such as MICROSCOPE and STEP.

  4. Observables in loop quantum gravity with a cosmological constant

    NASA Astrophysics Data System (ADS)

    Dupuis, Maïté; Girelli, Florian

    2014-11-01

    In many quantum gravity approaches, the cosmological constant is introduced by deforming the gauge group into a quantum group. In three dimensions, the quantization of the Chern-Simons formulation of gravity provided the first example of such a deformation. The Turaev-Viro model, which is an example of a spin-foam model, is also defined in terms of a quantum group. By extension, it is believed that in four dimensions a quantum group structure could encode the presence of Λ ≠ 0. In this article, we introduce by hand the quantum group Uq(su(2)) into the loop quantum gravity (LQG) framework; that is, we deal with Uq(su(2))-spin networks. We explore some of the consequences, focusing in particular on the structure of the observables. Our fundamental tools are tensor operators for Uq(su(2)). We review their properties and give an explicit realization of the spinorial and vectorial ones. We construct the generalization of the U(N) formalism in this deformed case, which is given by the quantum group Uq(u(N)). We are then able to build geometrical observables, such as the length, area or angle operators, etc. We show that these operators characterize a quantum discrete hyperbolic geometry in the three-dimensional LQG case. Our results confirm that a quantum group structure in LQG can be a tool to introduce a nonzero cosmological constant into the theory. Our construction is relevant both for three-dimensional Euclidean quantum gravity with a negative cosmological constant and for four-dimensional Lorentzian quantum gravity with a positive cosmological constant.

  5. Fundamental Insights into Combustion Instability Predictions in Aerospace Propulsion

    NASA Astrophysics Data System (ADS)

    Huang, Cheng

    in conjunction with a Galerkin procedure to reduce the governing partial differential equation to an ordinary differential equation, which constitutes the ROM. Once the ROM is established, it can be used as a lower-order test-bed to predict detailed results within certain parametric ranges at a fraction of the cost of solving the full governing equations. A detailed assessment of the method is performed in two parts. In part one, a one-dimensional scalar reaction-advection model equation is used for fundamental investigations, which include verification of the POD eigen-basis calculation and of the ROM development procedure. Moreover, certain criteria for ROM development are established: (1) a necessary number of POD modes must be included to guarantee a stable ROM; (2) the numerical discretization scheme must be consistent between the original CFD and the developed ROM. Furthermore, the predictive capabilities of the resulting ROM are evaluated to test its limits and to validate the value of applying broadband forcing to improve ROM performance. In part two, the exploration is extended to a vector system of equations, using the one-dimensional Euler equation as the model equation. A numerical stability issue is identified during ROM development, the cause of which is further studied and attributed to the normalization methods implemented to generate coupled POD eigen-bases for vector variables. (Abstract shortened by UMI.)
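
    A minimal sketch of the POD/Galerkin reduction procedure assessed above, applied to a linear one-dimensional advection operator. The discretization, snapshot source, and mode count are illustrative assumptions, not the dissertation's setup.

        import numpy as np

        # Snapshot matrix X: columns are solution snapshots u(x, t_k) of a 1-D PDE.
        nx, nt = 200, 80
        x = np.linspace(0, 1, nx)
        X = np.array([np.exp(-200 * (x - 0.2 - 0.005 * k) ** 2) for k in range(nt)]).T

        # POD eigen-basis from the (mean-subtracted) snapshots via thin SVD.
        X_mean = X.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(X - X_mean, full_matrices=False)
        r = 10                      # number of retained POD modes (stability-critical)
        Phi = U[:, :r]

        # Upwind discretization of the advection operator du/dt = -c du/dx.
        c, dx = 1.0, x[1] - x[0]
        A = (-c / dx) * (np.eye(nx) - np.eye(nx, k=-1))

        # Galerkin projection: the ROM is da/dt = (Phi^T A Phi) a, an r x r ODE system.
        A_rom = Phi.T @ A @ Phi
        print("full operator:", A.shape, "-> ROM operator:", A_rom.shape)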

  6. Vicinal coupling constants and protein dynamics.

    PubMed

    Hoch, J C; Dobson, C M; Karplus, M

    1985-07-16

    The effects of motional averaging on the analysis of vicinal spin-spin coupling constants derived from proton NMR studies of proteins have been examined. Trajectories obtained from molecular dynamics simulations of bovine pancreatic trypsin inhibitor and of hen egg white lysozyme were used in conjunction with an expression for the dependence of the coupling constant on the intervening dihedral angle to calculate the time-dependent behavior of the coupling constants. Despite large fluctuations, the time-average values of the coupling constants are not far from those computed for the average structure in the cases where fluctuations occur about a single potential well. The calculated differences show a high correlation with the variation in the magnitude of the fluctuations of individual dihedral angles. For the cases where fluctuations involve multiple sites, large differences are found between the time-average values and the average structure values for the coupling constants. Comparison of the simulation results with the experimental trends suggests that side chains with more than one position are more common in proteins than is inferred from X-ray results. It is concluded that for the main chain, motional effects do not introduce significant errors where vicinal coupling constants are used in structure determinations; however, for side chains, the motional average can alter deductions about the structure. Accurately measured coupling constants are shown to provide information concerning the magnitude of dihedral angle fluctuations.
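
    A small sketch of the motional-averaging comparison described above, using a generic Karplus relation 3J(θ) = A cos²θ + B cos θ + C. The coefficients and the simulated dihedral fluctuations are illustrative assumptions, not values from the paper; the point is that single-well averaging leaves J nearly unchanged while multi-site hopping does not.

        import numpy as np

        def karplus(theta_deg, A=6.5, B=-1.8, C=1.6):
            """Generic Karplus relation for a vicinal coupling constant (Hz)."""
            th = np.radians(theta_deg)
            return A * np.cos(th) ** 2 + B * np.cos(th) + C

        rng = np.random.default_rng(1)

        # Single-well case: fluctuations about one mean dihedral angle.
        theta = rng.normal(loc=-60.0, scale=15.0, size=10_000)
        print("time-average <J>      :", karplus(theta).mean())
        print("J of average structure:", karplus(theta.mean()))

        # Multi-site case: a side chain hopping between two rotamers.
        theta2 = np.where(rng.random(10_000) < 0.5,
                          rng.normal(-60, 10, 10_000), rng.normal(180, 10, 10_000))
        print("two-site <J>          :", karplus(theta2).mean())
        print("J at average angle    :", karplus(theta2.mean()))  # misleading value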

  7. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    ERIC Educational Resources Information Center

    Vargas, Francisco M.

    2014-01-01

    The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although, this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
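
    For reference, the Gibbs-Helmholtz route mentioned above reduces, for a temperature-independent reaction enthalpy, to the integrated van 't Hoff equation; a brief sketch with illustrative numbers follows (this is the standard textbook relation, not the article's alternative method).

        import numpy as np

        R = 8.314  # J/(mol K)

        def k_at_T2(k1, T1, T2, dH):
            """Integrated van 't Hoff equation (constant reaction enthalpy dH, J/mol):
            ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
            return k1 * np.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

        # Illustrative: K = 1.8e-5 at 298.15 K, dH = +50 kJ/mol -> K at 310 K.
        print(k_at_T2(1.8e-5, 298.15, 310.0, 50_000.0))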

  8. Challenging fundamental limits in the fabrication of vector vortex waveplates

    NASA Astrophysics Data System (ADS)

    Hakobyan, R. S.; Tabiryan, N. V.; Serabyn, E.

    Vector vortex waveplates (VVWs) are at the heart of vortex coronagraphs aimed at detecting exoplanets close to bright stars. VVWs made of liquid crystal polymers (LCPs) provide structural continuity, the possibility of high-order singularities, large area, and an inexpensive manufacturing technology. To date, however, the performance of such devices is compromised by imperfections in the singularity area that allow some residual starlight leakage. Reducing the singularity to subwavelength sizes increases the energy of elastic deformations of the LC. As a result, the azimuthally symmetric orientation pattern gives way to 3D deformations that reduce the elastic energy of the LC. The stability of the radial orientation is determined by the elastic constants of the LC, the thickness of the layer, and the boundary conditions. In the current paper, we examine the role of those factors to determine the fundamental limits to which the singularity area can be reduced for LCP VVWs.

  9. Fundamental theories of waves and particles formulated without classical mass

    NASA Astrophysics Data System (ADS)

    Fry, J. L.; Musielak, Z. E.

    2010-12-01

    Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, the physical consequences of using the classical mass in both theories are explored, and a novel approach is presented that allows fundamental (Galilean invariant) theories of waves and particles to be formulated without formally introducing the classical mass. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy of calculation can be attained with such a theory. Natural units in connection with the presented approach are also discussed, and justification beyond dimensional analysis is given for the particular choice of such units.

  10. The role of orbital mechanics in fundamental physics

    NASA Astrophysics Data System (ADS)

    Exertier, Pierre; Metris, Gilles

    The contribution of space techniques to fundamental physics is at two levels. First, very interesting results have been obtained using precise tracking and orbitography of natural bodies or space probes not initially designed for this aim; this is the case, for example, of the precise estimation of the GM gravitational constant and of some PPN parameters, of the confirmation of the Lense-Thirring effect, and of the test of the strong Equivalence Principle. Second, dedicated missions have been developed to perform in space experiments which cannot be realized on the ground, at least not at the same level of precision; this is in particular the case of the time transfer experiment T2L2 and of the MicroSCOPE mission for the test of the weak EP.

  11. Fundamental experiments on hydride reorientation in zircaloy

    NASA Astrophysics Data System (ADS)

    Colas, Kimberly B.

    In the current study, an in-situ X-ray diffraction technique using synchrotron radiation was used to follow directly the kinetics of hydride dissolution and precipitation during thermomechanical cycles. This technique was combined with conventional microscopy (optical, SEM and TEM) to gain an overall understanding of the process of hydride reorientation. Thus this part of the study emphasized the time-dependent nature of the process, studying large volume of hydrides in the material. In addition, a micro-diffraction technique was also used to study the spatial distribution of hydrides near stress concentrations. This part of the study emphasized the spatial variation of hydride characteristics such as strain and morphology. Hydrided samples in the shape of tensile dog-bones were used in the time-dependent part of the study. Compact tension specimens were used during the spatial dependence part of the study. The hydride elastic strains from peak shift and size and strain broadening were studied as a function of time for precipitating hydrides. The hydrides precipitate in a very compressed state of stress, as measured by the shift in lattice spacing. As precipitation proceeds the average shift decreases, indicating average stress is reduced, likely due to plastic deformation and morphology changes. When nucleation ends the hydrides follow the zirconium matrix thermal contraction. When stress is applied below the threshold stress for reorientation, hydrides first nucleate in a very compressed state similar to that of unstressed hydrides. After reducing the average strain similarly to unstressed hydrides, the average hydride strain reaches a constant value during cool-down to room temperature. This could be due to a greater ease of deforming the matrix due to the applied far-field strain which would compensate for the strains due to thermal contraction. Finally when hydrides reorient, the average hydride strains become tensile during the first precipitation regime and

  12. On geometrically unified fields and universal constants

    NASA Astrophysics Data System (ADS)

    Fabbri, Luca

    2013-07-01

    We consider the Cartan extension of Riemann geometry as the basis upon which to build the Sciama-Kibble completion of Einstein gravity, developing the most general theory in which torsion and metric have two independent coupling constants: the main problem of the ESK theory was that torsion, carrying the Newton constant, was negligible beyond the Planck scale, but in this ESK2 theory torsion, with its own coupling constant, may remain relevant well beyond the Planck scale; further consequences of these torsionally-induced interactions are also discussed.

  13. Laser Propulsion and the Constant Momentum Mission

    NASA Astrophysics Data System (ADS)

    Larson, C. William; Mead, Franklin B.; Knecht, Sean D.

    2004-03-01

    We show that perfect propulsion requires a constant momentum mission, as a consequence of Newton's second law. Perfect propulsion occurs when the velocity of the propelled mass in the inertial frame of reference matches the velocity of the propellant jet in the rocket frame of reference. We compare constant momentum to constant specific impulse propulsion, which, for a given specification of the mission delta V, has an optimum specific impulse that maximizes the propelled mass per unit jet kinetic energy investment. We also describe findings of more than 50 % efficiency for conversion of laser energy into jet kinetic energy by ablation of solids.
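
    A small sketch of the classic optimization implied above: for a fixed mission delta-V, the exhaust velocity that maximizes propelled mass per unit jet kinetic energy solves a simple transcendental equation. The derivation assumes only the ideal rocket equation and is offered as an illustration, not as the authors' exact analysis; the delta-V value is invented.

        import numpy as np
        from scipy.optimize import brentq

        # Rocket equation: m0 = mf * exp(dv/ve); jet energy E = 0.5 * mprop * ve^2.
        # Propelled mass per unit energy ~ 2 / (ve^2 * (exp(dv/ve) - 1)); maximizing
        # over ve with x = dv/ve gives the condition x = 2 * (1 - exp(-x)).
        x_opt = brentq(lambda x: x - 2.0 * (1.0 - np.exp(-x)), 0.5, 5.0)

        dv = 9500.0                      # illustrative mission delta-V, m/s
        ve = dv / x_opt
        print(f"x* = {x_opt:.4f}  ->  optimal ve = {ve:.0f} m/s  ({ve/dv:.3f} dv)")
        print(f"optimal specific impulse ~ {ve/9.81:.0f} s")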

  14. Laser Propulsion and the Constant Momentum Mission

    SciTech Connect

    Larson, C. William; Mead, Franklin B. Jr.; Knecht, Sean D.

    2004-03-30

    We show that perfect propulsion requires a constant momentum mission, as a consequence of Newton's second law. Perfect propulsion occurs when the velocity of the propelled mass in the inertial frame of reference matches the velocity of the propellant jet in the rocket frame of reference. We compare constant momentum to constant specific impulse propulsion, which, for a given specification of the mission delta V, has an optimum specific impulse that maximizes the propelled mass per unit jet kinetic energy investment. We also describe findings of more than 50 % efficiency for conversion of laser energy into jet kinetic energy by ablation of solids.

  15. Constants and Pseudo-Constants of Coupled Beam Motion in the PEP-II Rings

    SciTech Connect

    Decker, F.J.; Colocho, W.S.; Wang, M.H.; Yan, Y.T.; Yocky, G.; /SLAC

    2011-11-01

    Constants of beam motion help as cross checks to analyze beam diagnostics and the modeling procedure. Pseudo-constants, like the betatron mismatch parameter or the coupling parameter det C, are constant until certain elements in the beam line change them. This can be used to visually find non-desired changes, pinpointing errors by comparison with the model.

  16. A constant size extension drives bacterial cell size homeostasis

    PubMed Central

    Campos, Manuel; Surovtsev, Ivan V.; Kato, Setsu; Paintdakhi, Ahmad; Beltran, Bruno; Ebmeier, Sarah E.; Jacobs-Wagner, Christine

    2014-01-01

    Cell size control is an intrinsic feature of the cell cycle. In bacteria, cell growth and division are thought to be coupled through a cell size threshold. Here, we provide direct experimental evidence disproving the critical size paradigm. Instead, we show through single-cell microscopy and modeling that the evolutionarily distant bacteria Escherichia coli and Caulobacter crescentus achieve cell size homeostasis by growing on average the same amount between divisions, irrespective of cell length at birth. This simple mechanism provides remarkably robust cell size control without the need to be precise, abating size deviations exponentially within a few generations. This size homeostasis mechanism is broadly applicable for symmetric and asymmetric divisions as well as for different growth rates. Furthermore, our data suggest that constant size extension is implemented at or close to division. Altogether, our findings provide fundamentally distinct governing principles for cell size and cell cycle control in bacteria. PMID:25480302
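
    A minimal simulation of the "adder" behaviour reported above: every cell adds a fixed length Δ between birth and division, so for symmetric division birth-size deviations are halved each generation, which is the exponential abatement the abstract describes. The numbers are illustrative.

        import numpy as np

        delta = 2.0                        # fixed added size per cell cycle (um)
        s = np.array([0.5, 1.0, 2.0, 4.0]) # heterogeneous birth sizes (um)

        for gen in range(8):
            s = (s + delta) / 2.0          # divide symmetrically after adding delta
            print(f"gen {gen + 1}: birth sizes = {np.round(s, 3)}")

        # All lineages converge to the fixed point s* = delta: deviations from s*
        # are halved every generation, regardless of the size at birth.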

  17. Microwave measurement of the permittivity for high dielectric constant materials using an extra-cavity evanescent waveguide

    NASA Astrophysics Data System (ADS)

    Ni, Erhu; Jiang, Xing

    2002-11-01

    This article is concerned with a TE01n-mode resonant cavity coupled, through a hole located in the center of the end wall, to a cylindrical waveguide (equal in diameter to the hole) supporting the evanescent TE01 mode. When the evanescent guide contains a dielectric sample, propagation of the TE01 wave is permitted in the dielectric-filled part. The air-filled part in front of the sample is used to adjust the coupling level; an air-filled part of sufficient length behind the sample is used to form a matched reactance termination; otherwise, a metal block is inserted to form a short-circuit or reactance termination. It is shown that with these arrangements quite a large change in the resonant length (or resonant frequency) and Q factor of the cavity resonator is obtained when a sample of suitable electrical thickness is inserted into the evanescent guide. The technique should therefore be capable of yielding accurate values of the complex permittivity for high-dielectric-constant materials. Fundamental principles are discussed, together with the theoretical error of complex-permittivity measurements, as a function of the electrical thickness of the sample, due to the uncertainty of measurements of the resonant length and Q factor. Measured results at several frequencies in the X band and Ka band on two ceramics are given. The technique is compared with the parallel-plate method, showing that the dielectric properties have comparable values in both methods.

  18. Faculty beliefs on fundamental dimensions of scholarship

    NASA Astrophysics Data System (ADS)

    Finnegan, Brian

    scholarship, the policies, activities, and rewards of institutions must reflect a similar belief on the part of faculty. By understanding faculty beliefs on the fundamental dimensions of scholarship, an important step in building this new culture can be taken.

  19. How the cosmological constant affects gravastar formation

    SciTech Connect

    Chan, R.; Silva, M.F.A. da; Rocha, P. E-mail: mfasnic@gmail.com

    2009-12-01

    Here we generalized a previous gravastar model consisting of an internal de Sitter spacetime and a dynamical infinitely thin shell with an equation of state, but now we consider an external de Sitter-Schwarzschild spacetime. We have shown explicitly that the final output can be a black hole, a "bounded excursion" stable gravastar, a stable gravastar, or a de Sitter spacetime, depending on the total mass of the system, the cosmological constants, the equation of state of the thin shell, and the initial position of the dynamical shell. We have found that the exterior cosmological constant imposes a limit on gravastar formation, i.e., the exterior cosmological constant must be smaller than the interior cosmological constant. Besides, we have also shown that, in the particular case where the Schwarzschild mass vanishes, no stable gravastar can be formed, but we still have formation of black holes.

  20. The Solar Constant: A Take Home Lab

    ERIC Educational Resources Information Center

    Eaton, B. G.; And Others

    1977-01-01

    Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)

  1. Dielectric constant of water in the interface.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2016-07-01

    We define the dielectric constant (susceptibility) that should enter the Maxwell boundary value problem when applied to microscopic dielectric interfaces polarized by external fields. The dielectric constant (susceptibility) of the interface is defined by exact linear-response equations involving correlations of statistically fluctuating interface polarization and the Coulomb interaction energy of external charges with the dielectric. The theory is applied to the interface between water and spherical solutes of varying size studied by molecular dynamics (MD) simulations. The effective dielectric constant of interfacial water is found to be significantly lower than its bulk value, and it also depends on the solute size. For TIP3P water used in MD simulations, the interface dielectric constant changes from 9 to 4 when the solute radius is increased from ∼5 to 18 Å.

  2. The Rate Constant for Fluorescence Quenching

    ERIC Educational Resources Information Center

    Legenza, Michael W.; Marzzacco, Charles J.

    1977-01-01

    Describes an experiment that utilizes fluorescence intensity measurements from a Spectronic 20 to determine the rate constant for the fluorescence quenching of various aromatic hydrocarbons by carbon tetrachloride in an ethanol solvent. (MLH)
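
    The standard reduction of such data is a Stern-Volmer analysis (the abstract does not name the method, so this is an assumption): I0/I = 1 + kq·τ0·[Q], so the quenching rate constant follows from a linear fit of I0/I against quencher concentration. All values below are illustrative stand-ins for the Spectronic 20 readings.

        import numpy as np

        # Illustrative fluorescence intensities vs. quencher (CCl4) concentration.
        conc = np.array([0.0, 0.05, 0.10, 0.15, 0.20])   # mol/L
        I = np.array([100.0, 71.0, 55.0, 45.0, 38.0])    # relative intensities

        ratio = I[0] / I                                  # I0/I
        K_sv, intercept = np.polyfit(conc, ratio, 1)      # slope = K_SV = kq * tau0

        tau0 = 10e-9                                      # assumed unquenched lifetime, s
        kq = K_sv / tau0
        print(f"K_SV = {K_sv:.2f} L/mol, kq = {kq:.2e} L/(mol s)")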

  3. Measurements of the gravitational constant - why we need new ideas

    NASA Astrophysics Data System (ADS)

    Schlamminger, Stephan

    2016-03-01

    In this presentation, I will summarize measurements of the Newtonian constant of gravitation, big G, that have been carried out in the last 30 years. I will describe key techniques that were used by researchers around the world to determine G. Unfortunately, the data set is inconsistent with itself under the assumption that the gravitational constant does not vary in space or time, an assumption that has been tested by other experiments. Currently, several research groups have reported measurements with relative uncertainties below 2 × 10^-5; however, the relative difference between the smallest and largest reported number exceeds 5 × 10^-4. It is embarrassing that after over 200 years of measuring the gravitational constant, we do not have a better understanding of the numerical value of this constant. Clearly, we need new ideas to tackle this problem and now is the time to come forward with new ideas. The National Science Foundation is currently soliciting proposals for an Ideas Lab on measuring big G. In the second part of the presentation, I will introduce the Ideas Lab on big G and I am hoping to motivate the audience to think about new ideas to measure G and encourage them to apply to participate in the Ideas Lab.

  4. Controlled Crystallinity and Fundamental Coupling Interactions in Nanocrystals

    NASA Astrophysics Data System (ADS)

    Ouyang, Min

    2009-03-01

    Metal and semiconductor nanocrystals show many unusual properties and functionalities, and can serve as model systems to explore fundamental quantum and classical coupling interactions as well as building blocks of many practical applications. However, because of their small size, these nanoparticles typically exhibit different crystalline properties compared with their bulk counterparts, and controlling crystallinity (and structural defects) within nanoparticles has posed significant technical challenges. In this talk, I will first use silver metal nanoparticles as an example and present a novel chemical synthetic technique to achieve unprecedented crystallinity control at the nanoscale. This engineering of nanocrystallinity enables manipulation of intrinsic chemical functionalities, physical properties, and nano-device performance [1]. For example, I will highlight that the electron-phonon coupling constant can be significantly reduced, by about four times, and the elastic modulus increased by approximately 40% in perfect single-crystalline silver nanoparticles as compared with disordered twinned nanoparticles. One important application of metal nanoparticles is nanoscale sensors. I will thus demonstrate that the performance of nanoparticle-based molecular sensing devices can be optimized, with a threefold improvement in figure-of-merit, if perfect single-crystalline nanoparticles are applied. Lastly, I will present our related studies on semiconductor nanocrystals as well as their hybrid heterostructures. These discussions should offer important implications for our understanding of fundamental properties at the nanoscale and potential applications of metal nanoparticles. [1] Yun Tang and Min Ouyang, Nature Materials, 6, 754, 2007.

  5. Effect of speed matching on fundamental diagram of pedestrian flow

    NASA Astrophysics Data System (ADS)

    Fu, Zhijian; Luo, Lin; Yang, Yue; Zhuang, Yifan; Zhang, Peitong; Yang, Lizhong; Yang, Hongtai; Ma, Jian; Zhu, Kongjin; Li, Yanlai

    2016-09-01

    Properties of pedestrians may change along their moving path, for example as a result of fatigue or injury, which has not been properly investigated in past research. This paper studies the speed-matching effect (a pedestrian adjusts his velocity constantly to the average velocity of his neighbors) and its influence on the density-velocity relationship (a pedestrian adjusts his velocity to the surrounding density), known as the fundamental diagram of pedestrian flow. By means of a cellular automaton, the simulation results fit well with the empirical data, indicating the advance of the discrete model for pedestrian dynamics. The results suggest that the system velocity and flow rate increase noticeably under a big noise, i.e., a diverse composition of the pedestrian crowd, especially in the region of middle or high density. Because of its temporary effect, speed matching has little influence on the fundamental diagram. Over the entire density range, the relationship between the step length and the average pedestrian velocity is a piecewise function combining two linear functions. The number of conflicts reaches its maximum at a pedestrian density of 2.5 m^-2, and decreases by 5.1% with speed matching.

  6. Traffic dynamics: Its impact on the Macroscopic Fundamental Diagram

    NASA Astrophysics Data System (ADS)

    Knoop, Victor L.; van Lint, Hans; Hoogendoorn, Serge P.

    2015-11-01

    Literature shows that, under specific conditions, the Macroscopic Fundamental Diagram (MFD) describes a crisp relationship between the average flow (production) and the average density in an entire network. The limiting condition is that traffic conditions must be homogeneous over the whole network. Recent works describe hysteresis effects: systematic deviations from the MFD as a result of loading and unloading. This article proposes a two-dimensional generalization of the MFD, the so-called Generalized Macroscopic Fundamental Diagram (GMFD), which relates the average flow to both the average density and the (spatial) inhomogeneity of density. The most important contribution is that we show this is a continuous function, of which the MFD is a projection. Using the GMFD, we can describe the mentioned hysteresis patterns in the MFD. The underlying traffic phenomenon explaining the two-dimensional surface described by the GMFD is that congestion concentrates (and subsequently spreads out) around the bottlenecks that oversaturate first. We call this the nucleation effect. Due to this effect, the network flow is not constant for a fixed number of vehicles as predicted by the MFD, but decreases due to local queueing and spill-back processes around the congestion "nuclei". During this build-up of congestion, the production hence decreases, which gives the hysteresis effects.

  7. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll, defined by φ̈/(Hφ̇), remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.

  8. RNA structure and scalar coupling constants

    SciTech Connect

    Tinoco, I. Jr.; Cai, Z.; Hines, J.V.; Landry, S.M.; SantaLucia, J. Jr.; Shen, L.X.; Varani, G.

    1994-12-01

    Signs and magnitudes of scalar coupling constants (spin-spin splittings) comprise a very large amount of data that can be used to establish the conformations of RNA molecules. Proton-proton and proton-phosphorus splittings have been used the most, but the availability of 13C- and 15N-labeled molecules allows many more coupling constants to be used for determining conformation. We will systematically consider the torsion angles that characterize a nucleotide unit and the coupling constants that depend on the values of these torsion angles. Karplus-type equations have been established relating many three-bond coupling constants to torsion angles. However, one- and two-bond coupling constants can also depend on conformation. Serianni and coworkers measured carbon-proton coupling constants in ribonucleosides and have calculated their values as a function of conformation. The signs of two-bond couplings can be very useful because it is easier to measure a sign than an accurate magnitude.

  9. A Postulation of a Concept in Fundamental Physics

    NASA Astrophysics Data System (ADS)

    Goradia, Shantilal

    2006-10-01

    I am postulating that all fermions have a quantum mouth (Planck size) that radiates a flux density of gravitons as a function of the mass of the particle. Nucleons are not hard balls like light bulbs radiating photons, challenging Newtonian concepts of centers and surfaces. The hardball analogy is implicit in coupling constants that compare the strong force relative to gravity. The radiating mouth is not localized at the center like a hypothetical point-size filament of a light bulb with a hard surface. A point invokes mass of zero volume. It is too precise, inconsistent and illogical. Nothing can be localized with more accuracy than the Planck length. Substituting the hard glass bulb surface with a flexible plastic surface would clearly make the interacting mouths of particles approach each other as closely as possible, but no closer than the quantum limit of the Planck length. Therefore, surface distance in Newtonian gravity would be a close approximation at the particle scale and fits Feynman's road map [1]. My postulation, reflected by Fig. 2 of gr-qc/0507130, explains observations of increasing values of coupling constants resulting from decreasing values of the Planck length (see physics/0210040 v1). Since the Planck length is the fundamental unit of length of nature, its variation can impact our observation of the universe and the evolutionary process.

  10. Fundamental Interventions: How Clinicians Can Address the Fundamental Causes of Disease.

    PubMed

    Reich, Adam D; Hansen, Helena B; Link, Bruce G

    2016-06-01

    In order to enhance the "structural competency" of medicine-the capability of clinicians to address social and institutional determinants of their patients' health-physicians need a theoretical lens to see how social conditions influence health and how they might address them. We consider one such theoretical lens, fundamental cause theory, and propose how it might contribute to a more structurally competent medical profession. We first describe fundamental cause theory and how it makes the social causes of disease and health visible. We then outline the sorts of "fundamental interventions" that physicians might make in order to address the fundamental causes. PMID:27022923

  11. Fundamental Vocabulary Selection Based on Word Familiarity

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki

    This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile sufficiently balanced corpora. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.
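
    A toy sketch of the selection criterion described above: keep every word whose familiarity rating meets the threshold of 5 on the 1-7 scale. The mini-lexicon here is invented for illustration; Lexeed's actual data format is not assumed.

        # Familiarity ratings on a 1 (least familiar) to 7 (most familiar) scale.
        familiarity = {
            "taberu": 6.8,   # to eat
            "hashiru": 6.1,  # to run
            "shomo": 2.3,    # rarely used word
            "inu": 6.9,      # dog
            "gengo": 4.4,    # language
        }

        THRESHOLD = 5.0
        fundamental_vocabulary = sorted(
            word for word, rating in familiarity.items() if rating >= THRESHOLD
        )
        print(fundamental_vocabulary)   # ['hashiru', 'inu', 'taberu']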

  12. DOE Fundamentals Handbook: Electrical Science, Volume 4

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  13. DOE Fundamentals Handbook: Electrical Science, Volume 1

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  14. Fundamental ethical principles in health care.

    PubMed

    Thompson, I E

    1987-12-01

    In an attempt to clarify which requirements of morality are logically primary to the ethics of health care, two questions are examined: is there sufficient common ground among the medical, nursing, paramedical, chaplaincy, and social work professions to justify looking for ethical principles common to health care? Do sufficient logical grounds or consensus among health workers and the public exist to speak of "fundamental ethical principles in health care"? While respect for persons, justice, and beneficence are fundamental principles in a formal sense, how we view these principles in practice will depend on our particular culture and experience and the kinds of metaethical criteria we use for applying these principles.

  15. Fundamentals of Pharmacogenetics in Personalized, Precision Medicine.

    PubMed

    Valdes, Roland; Yin, DeLu Tyler

    2016-09-01

    This article introduces fundamental principles of pharmacogenetics as applied to personalized and precision medicine. Pharmacogenetics establishes relationships between pharmacology and genetics by connecting phenotypes and genotypes in predicting the response of therapeutics in individual patients. We describe differences between precision and personalized medicine and relate principles of pharmacokinetics and pharmacodynamics to applications in laboratory medicine. We also review basic principles of pharmacogenetics, including its evolution, how it enables the practice of personalized therapeutics, and the role of the clinical laboratory. These fundamentals are a segue for understanding specific clinical applications of pharmacogenetics described in subsequent articles in this issue.

  16. DOE Fundamentals Handbook: Electrical Science, Volume 2

    SciTech Connect

    Not Available

    1992-06-01

    The Electrical Science Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of electrical theory, terminology, and application. The handbook includes information on alternating current (AC) and direct current (DC) theory, circuits, motors, and generators; AC power and reactive components; batteries; AC and DC voltage regulators; transformers; and electrical test instruments and measuring devices. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility electrical equipment.

  17. Dark Energy: A Crisis for Fundamental Physics

    SciTech Connect

    Stubbs, Christopher

    2010-04-12

    Astrophysical observations provide robust evidence that our current picture of fundamental physics is incomplete. The discovery in 1998 that the expansion of the Universe is accelerating (apparently due to gravitational repulsion between regions of empty space!) presents us with a profound challenge, at the interface between gravity and quantum mechanics. This "Dark Energy" problem is arguably the most pressing open question in modern fundamental physics. The first talk will describe why the Dark Energy problem constitutes a crisis, with wide-reaching ramifications. One consequence is that we should probe our understanding of gravity at all accessible scales, and the second talk will present experiments and observations that are exploring this issue.

  18. DOE Fundamentals Handbook: Mathematics, Volume 2

    SciTech Connect

    Not Available

    1992-06-01

    The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

  19. Fundamentals of Pharmacogenetics in Personalized, Precision Medicine.

    PubMed

    Valdes, Roland; Yin, DeLu Tyler

    2016-09-01

    This article introduces fundamental principles of pharmacogenetics as applied to personalized and precision medicine. Pharmacogenetics establishes relationships between pharmacology and genetics by connecting phenotypes and genotypes in predicting the response of therapeutics in individual patients. We describe differences between precision and personalized medicine and relate principles of pharmacokinetics and pharmacodynamics to applications in laboratory medicine. We also review basic principles of pharmacogenetics, including its evolution, how it enables the practice of personalized therapeutics, and the role of the clinical laboratory. These fundamentals are a segue for understanding specific clinical applications of pharmacogenetics described in subsequent articles in this issue. PMID:27514461

  20. Dark Energy: A Crisis for Fundamental Physics

    ScienceCinema

    Stubbs, Christopher [Harvard University, Cambridge, Massachusetts, USA]

    2016-07-12

    Astrophysical observations provide robust evidence that our current picture of fundamental physics is incomplete. The discovery in 1998 that the expansion of the Universe is accelerating (apparently due to gravitational repulsion between regions of empty space!) presents us with a profound challenge, at the interface between gravity and quantum mechanics. This "Dark Energy" problem is arguably the most pressing open question in modern fundamental physics. The first talk will describe why the Dark Energy problem constitutes a crisis, with wide-reaching ramifications. One consequence is that we should probe our understanding of gravity at all accessible scales, and the second talk will present experiments and observations that are exploring this issue.

  1. The efficiency of combustion turbines with constant-pressure combustion

    NASA Technical Reports Server (NTRS)

    Piening, Werner

    1941-01-01

    Of the two fundamental cycles employed in combustion turbines, namely the explosion (or constant-volume) cycle and the constant-pressure cycle, the latter is considered in more detail and its efficiency is derived, with the aid of cycle diagrams, for the several cases with adiabatic and isothermal compression and expansion strokes and with and without utilization of the exhaust heat. Account is also taken of the separate efficiencies of the turbine and compressor and of the pressure losses and heat transfer in the piping. The results show that, without utilization of the exhaust heat, the efficiencies for the two cases of adiabatic and isothermal compression are comparable, since the gain from isothermal compression is offset by the increase in the heat supplied. It may be seen from the curves that separate efficiencies of at least 80 percent must be attained for useful results to be obtained. The considerable effect on the efficiency of pressure losses in piping or heat exchangers is also shown.
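
    For orientation, the ideal constant-pressure (Joule/Brayton) cycle with adiabatic compression and expansion and no exhaust-heat recovery has the classical textbook efficiency that depends only on the pressure ratio; a short sketch follows. This is the idealized limit only, not Piening's full analysis with component efficiencies and piping losses.

        def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
            """Ideal constant-pressure-cycle thermal efficiency:
            eta = 1 - r^(-(gamma - 1)/gamma)."""
            return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

        for r in (4, 8, 16):
            print(f"pressure ratio {r:2d}: eta = {brayton_efficiency(r):.3f}")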

  2. Variations of a Constant -- On the History of Precession

    NASA Astrophysics Data System (ADS)

    Kokott, W.

    The precession of the equinoxes, the phenomenon which defines one of the fundamental constants of astronomy, has been with us for more than two millennia. Discovered by Hipparchos, who noticed a systematic difference between his star positions and older observations, and subsequently adopted by Ptolemaios, its correct value became the object of prolonged controversy. The apparent variability of the precession led to the superimposition of a so-called "trepidation", an oscillation of typically +/- 9 deg amplitude and 7000 years period, over a linear precession of only 26 arcsec per annum. This construction, finalized in the Alfonsine Tables (ca. 1280), worked for less than two centuries. The motion of the vernal equinox, already too small at 39 arcsec per annum from the outset, decreases according to this theory to 34 arcsec in the year 1475, the first year covered by the printed version of Johannes Regiomontanus' Ephemerides. Regiomontanus had to re-adjust his longitudes to the real situation, but the difficulties caused by the apparent nonlinearity persisted, leading to a prolonged debate which was finally put to rest by Tycho Brahe. Subsequent to Edmond Halley's successful derivation of a modern value of the precessional constant, again by comparing contemporary star positions with the Almagest catalogue, and Bradley's discovery of the nutation, the last long-term comparison of modern with Ptolemaic coordinates was published by Bode (1795). Shortly after, the analytical theory of precession was established by Bessel in his Fundamenta Astronomiae (1818).

  3. Intrinsic fundamental frequency of vowels is moderated by regional dialect.

    PubMed

    Jacewicz, Ewa; Fox, Robert Allen

    2015-10-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  4. Intrinsic fundamental frequency of vowels is moderated by regional dialect.

    PubMed

    Jacewicz, Ewa; Fox, Robert Allen

    2015-10-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts.

  5. Radiation scales on which standard values of the solar constant and solar spectral irradiance are based

    NASA Technical Reports Server (NTRS)

    Thekaekara, M. P.

    1972-01-01

    The question of radiation scales is critically examined. There are two radiation scales which are of fundamental validity, and there are several calibration standards and radiation scales which have been set up for practical convenience. The interrelation between these scales is investigated. It is shown that, within the limits of accuracy of irradiance measurements in general and solar irradiance measurements in particular, the proposed standard values of the solar constant and solar spectrum should be considered to be on radiation scales of fundamental validity: those based on absolute electrical units and on the thermodynamic Kelvin temperature scale.

  6. Initial conditions of inhomogeneous universe and the cosmological constant problem

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori

    2016-06-01

    Deriving the Einstein field equations (EFE) with matter fluid from the action principle is not straightforward, because mass conservation must be added as an additional constraint to make rest-frame mass density variable in reaction to metric variation. This can be avoided by introducing a constraint δ(√-g) = 0 on metric variations δgμν, and then the cosmological constant Λ emerges as an integration constant. This is a removal of one of the four constraints on initial conditions forced by EFE at the birth of the universe, and it may imply that EFE are unnecessarily restrictive about initial conditions. I then adopt a principle that the theory of gravity should be able to solve time evolution starting from arbitrary inhomogeneous initial conditions about spacetime and matter. The equations of gravitational fields satisfying this principle are obtained by setting four auxiliary constraints on δgμν to extract six degrees of freedom for gravity. The cost of achieving this is a loss of general covariance, but these equations constitute a consistent theory if they hold in the special coordinate systems that can be uniquely specified with respect to the initial space-like hypersurface when the universe was born. This theory predicts that gravity is described by EFE with non-zero Λ in a homogeneous patch of the universe created by inflation, but Λ changes continuously across different patches. Then both the smallness and coincidence problems of the cosmological constant are solved by the anthropic argument. This is just a result of inhomogeneous initial conditions, not requiring any change of the fundamental physical laws in different patches.
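
    The mechanism by which Λ appears as an integration constant is the standard unimodular-gravity argument; sketched below in the notation of the abstract (this is the textbook derivation, not necessarily the paper's exact route):

        % Varying the action under the constraint \delta(\sqrt{-g}) = 0 keeps only
        % the traceless part of the Einstein field equations:
        R_{\mu\nu} - \tfrac{1}{4} g_{\mu\nu} R
          = 8\pi G \left( T_{\mu\nu} - \tfrac{1}{4} g_{\mu\nu} T \right).
        % The Bianchi identity together with \nabla^{\mu} T_{\mu\nu} = 0 then forces
        \partial_{\nu} \left( R + 8\pi G\, T \right) = 0
        \quad\Longrightarrow\quad
        R + 8\pi G\, T = 4\Lambda \quad (\text{an integration constant}),
        % and substituting back recovers the full EFE with a cosmological constant:
        R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu}
          = 8\pi G\, T_{\mu\nu}.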

  7. Initial conditions of inhomogeneous universe and the cosmological constant problem

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori

    2016-06-01

    Deriving the Einstein field equations (EFE) with matter fluid from the action principle is not straightforward, because mass conservation must be added as an additional constraint to make rest-frame mass density variable in reaction to metric variation. This can be avoided by introducing a constraint δ(√-g) = 0 on metric variations δgμν, and then the cosmological constant Λ emerges as an integration constant. This is a removal of one of the four constraints on initial conditions forced by EFE at the birth of the universe, and it may imply that EFE are unnecessarily restrictive about initial conditions. I then adopt a principle that the theory of gravity should be able to solve time evolution starting from arbitrary inhomogeneous initial conditions about spacetime and matter. The equations of gravitational fields satisfying this principle are obtained by setting four auxiliary constraints on δgμν to extract six degrees of freedom for gravity. The cost of achieving this is a loss of general covariance, but these equations constitute a consistent theory if they hold in the special coordinate systems that can be uniquely specified with respect to the initial space-like hypersurface when the universe was born. This theory predicts that gravity is described by EFE with non-zero Λ in a homogeneous patch of the universe created by inflation, but Λ changes continuously across different patches. Then both the smallness and coincidence problems of the cosmological constant are solved by the anthropic argument. This is just a result of inhomogeneous initial conditions, not requiring any change of the fundamental physical laws in different patches.

  8. Modulation of fundamental frequency by laryngeal muscles during vibrato.

    PubMed

    Hsiao, T Y; Solomon, N P; Luschei, E S; Titze, I R

    1994-09-01

    The variations in voice fundamental frequency (F0) that occur during vibrato production may be produced, at least in part, by modulation of laryngeal muscle activity. We have quantified this relation by using a cross-correlation analysis of the changes in F0 during vibrato and the changes either in motor unit firing rate or in gross electromyographic activity from the cricothyroid (CT) and the thyroarytenoid (TA) muscles. Two trained amateur tenors provided the data. Correlations were generally quite strong (mean r for the CT was 0.72 for singer 1 and 0.50 for singer 2; mean r for the TA was 0.31 for singer 2), thus providing support for previous evidence that fundamental frequency modulation in vibrato involves active modulation of the laryngeal motoneuron pool, especially by the CT muscle. In addition, phase delays between muscle modulation and changes in fundamental frequency were substantial (averaging approximately 130 degrees for the CT and 140 degrees for the TA). This finding may help provide insight regarding the mechanisms responsible for the production of vibrato. PMID:7987424
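
    A compact sketch of the cross-correlation step described above: correlate the EMG envelope with the F0 contour, take the lag of the correlation peak, and convert it to a phase angle at the vibrato rate. The signals below are synthetic stand-ins for the study's recordings, and the vibrato rate and phase are invented for the demo.

        import numpy as np

        fs, f_vib, dur = 1000.0, 5.5, 2.0          # sample rate (Hz), vibrato rate, s
        t = np.arange(0, dur, 1 / fs)

        # Synthetic signals: EMG modulation leads F0 by a known phase for the demo.
        true_phase = np.radians(130.0)
        emg = 1.0 + 0.3 * np.sin(2 * np.pi * f_vib * t + true_phase)
        f0 = 440.0 + 4.0 * np.sin(2 * np.pi * f_vib * t)

        a = (emg - emg.mean()) / emg.std()
        b = (f0 - f0.mean()) / f0.std()
        xc = np.correlate(a, b, mode="full") / len(t)
        lag = (np.argmax(xc) - (len(t) - 1)) / fs  # signed lag of the peak, s

        # Phase (modulo one vibrato cycle) by which the EMG modulation leads F0.
        phase_deg = (-lag * f_vib * 360.0) % 360.0
        print(f"peak r = {xc.max():.2f}, lag = {lag*1000:.1f} ms, "
              f"phase = {phase_deg:.0f} deg at {f_vib} Hz")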

  9. Course Objectives: Electronic Fundamentals, EL16.

    ERIC Educational Resources Information Center

    Wilson, David H.

    The general objective, recommended text, and specific objectives of a course titled "Electronic Fundamentals," as offered at St. Lawrence College of Applied Arts and Technology, are provided. The general objective of the course is "to acquire an understanding of diodes, transistors, and tubes, and so be able to analyze the operation of single…

  10. Getting a Better Grasp on Flu Fundamentals

    MedlinePlus

  11. Uncovering Racial Bias in Nursing Fundamentals Textbooks.

    ERIC Educational Resources Information Center

    Byrne, Michelle M.

    2001-01-01

    The portrayal of African Americans in nursing fundamentals textbooks was analyzed, resulting in 11 themes in the areas of history, culture, and physical assessment. Few African American leaders were included, and racial bias and stereotyping were apparent. Differences were often discussed using Eurocentric norms, and language tended to minimize…

  12. Fundamental Theorems of Algebra for the Perplexes

    ERIC Educational Resources Information Center

    Poodiak, Robert; LeClair, Kevin

    2009-01-01

    The fundamental theorem of algebra for the complex numbers states that a polynomial of degree n has n roots, counting multiplicity. This paper explores the "perplex number system" (also called the "hyperbolic number system" and the "spacetime number system"). In this system (which has extra roots of +1 besides the usual ±1 of the…
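
    A tiny sketch of the perplex (hyperbolic) arithmetic behind the theorem: the unit j satisfies j² = +1, so x² = 1 already has four roots, and non-zero zero divisors appear. The class and names here are ad hoc illustrations, not the paper's notation.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Perplex:
            """Perplex (hyperbolic) number a + b*j with j*j = +1."""
            a: float
            b: float

            def __mul__(self, other: "Perplex") -> "Perplex":
                return Perplex(self.a * other.a + self.b * other.b,
                               self.a * other.b + self.b * other.a)

        one, j = Perplex(1, 0), Perplex(0, 1)
        minus_one, minus_j = Perplex(-1, 0), Perplex(0, -1)

        # x^2 = 1 has four roots, so a degree-n polynomial can exceed n roots here.
        for root in (one, minus_one, j, minus_j):
            print(root, "->", root * root)

        # Zero divisors: (1 + j)(1 - j) = 0 although neither factor is zero.
        print(Perplex(1, 1) * Perplex(1, -1))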

  13. Solar Energy: Solar System Design Fundamentals.

    ERIC Educational Resources Information Center

    Knapp, Henry H., III

    This module on solar system design fundamentals is one of six in a series intended for use as supplements to currently available materials on solar energy and energy conservation. Together with the recommended texts and references (sources are identified), these modules provide an effective introduction to energy conservation and solar energy…

  14. Fundamental Concepts Bridging Education and the Brain

    ERIC Educational Resources Information Center

    Masson, Steve; Foisy, Lorie-Marlène Brault

    2014-01-01

    Although a number of papers have already discussed the relevance of brain research for education, the fundamental concepts and discoveries connecting education and the brain have not been systematically reviewed yet. In this paper, four of these concepts are presented and evidence concerning each one is reviewed. First, the concept of…

  15. The Case for Fundamentals of Oral Communication

    ERIC Educational Resources Information Center

    Emanuel, Richard

    2005-01-01

    Dozens of studies support the fact that communication skills are essential for success in a number of areas. After citing a sampling of these studies, a survey of the communication course offerings in Alabama's 2-year-college system reveals that most students are required to take only one communication course-either Fundamentals of Oral…

  16. Measurement and Fundamental Processes in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Jaeger, Gregg

    2015-07-01

    In the standard mathematical formulation of quantum mechanics, measurement is an additional, exceptional fundamental process rather than an often complex, but ordinary process which happens also to serve a particular epistemic function: during a measurement of one of its properties which is not already determined by a preceding measurement, a measured system, even if closed, is taken to change its state discontinuously rather than continuously as is usual. Many, including Bell, have been concerned about the fundamental role thus given to measurement in the foundation of the theory. Others, including the early Bohr and Schwinger, have suggested that quantum mechanics naturally incorporates the unavoidable uncontrollable disturbance of physical state that accompanies any local measurement without the need for an exceptional fundamental process or a special measurement theory. Disturbance is unanalyzable for Bohr, but for Schwinger it is due to physical interactions' being borne by fundamental particles having discrete properties and behavior which is beyond physical control. Here, Schwinger's approach is distinguished from more well known treatments of measurement, with the conclusion that, unlike most, it does not suffer under Bell's critique of quantum measurement. Finally, Schwinger's critique of measurement theory is explicated as a call for a deeper investigation of measurement processes that requires the use of a theory of quantum fields.

  17. Drafting Fundamentals. Drafting Module 1. Instructor's Guide.

    ERIC Educational Resources Information Center

    Missouri Univ., Columbia. Instructional Materials Lab.

    This Missouri Vocational Instruction Management System instructor's drafting guide has been keyed to the drafting competency profile developed by state industry and education professionals. The guide contains a cross-reference table of instructional materials. Ten units cover drafting fundamentals: (1) introduction to drafting; (2) general safety;…

  18. Fundamentals of Athletic Training. Second Edition.

    ERIC Educational Resources Information Center

    Behling, Fred L.; And Others

    This book provides an authoritative reference on the fundamentals of athletic training for people with varied backgrounds but a common interest in the health and education of the high school athlete. The book is designed especially for the novice athletic trainer. Section 1 of the book concerns the organization and administration of athletic…

  19. Mathematical Literacy--It's Become Fundamental

    ERIC Educational Resources Information Center

    McCrone, Sharon Soucy; Dossey, John A.

    2007-01-01

    The rising tide of numbers and statistics in daily life signals a need for a fundamental broadening of the concept of literacy: mathematical literacy assuming a coequal role in the curriculum alongside language-based literacy. Mathematical literacy is not about studying higher levels of formal mathematics, but about making math relevant and…

  20. A Fundamental Theorem on Particle Acceleration

    SciTech Connect

    Xie, Ming

    2003-05-01

    A fundamental theorem on particle acceleration is derived from the reciprocity principle of electromagnetism, and a rigorous proof of the theorem is presented. The theorem establishes a relation between acceleration and radiation, which is particularly useful for insight into, and practical calculation of, first-order acceleration, in which the energy gain of the accelerated particle is linearly proportional to the accelerating field.

  1. Euler and the Fundamental Theorem of Algebra.

    ERIC Educational Resources Information Center

    Dunham, William

    1991-01-01

    The complexity of the proof of the Fundamental Theorem of Algebra makes it inaccessible to lower level students. Described are more understandable attempts of proving the theorem and a historical account of Euler's efforts that relates the progression of the mathematical process used and indicates some of the pitfalls encountered. (MDH)

  2. The equivalent fundamental-mode source

    SciTech Connect

    Spriggs, G.D.; Busch, R.D.; Sakurai, Takeshi; Okajima, Shigeaki

    1997-02-01

    In 1960, Hansen analyzed the problem of assembling fissionable material in the presence of a weak neutron source. Using point kinetics, he defined the weak source condition and analyzed the consequences of delayed initiation during ramp reactivity additions. Although not clearly stated in Hansen's work, the neutron source strength that appears in the weak source condition corresponds to the equivalent fundamental-mode source. In this work, we describe the concept of an equivalent fundamental-mode source and we derive a deterministic expression for a factor, g*, that converts any arbitrary source distribution to an equivalent fundamental-mode source. We also demonstrate a simplified method for calculating g* in subcritical systems. And finally, we present a new experimental method that can be employed to measure the equivalent fundamental-mode source strength in a multiplying assembly. We demonstrate the method on the zero-power XIX-1 assembly at the Fast Critical Assembly (FCA) Facility, Japan Atomic Energy Research Institute (JAERI).

  3. Fundamental studies on passivity and passivity breakdown

    SciTech Connect

    Macdonald, D.D.; Urquidi-Macdonald, M.

    1993-06-01

    Using photoelectrochemical impedance and admittance spectroscopies, a fundamental and quantitative understanding of the mechanisms for the growth and breakdown of passive films on metal and alloy surfaces in contact with aqueous environments is being developed. A point defect model has been extended to explain the breakdown of passive films, leading to pitting and crack growth and thus development of damage due to localized corrosion.

  4. Retention of Electronic Fundamentals: Differences Among Topics.

    ERIC Educational Resources Information Center

    Johnson, Kirk A.

    Criterion-referenced tests were used to measure the learning and retention of a sample of material taught by means of programed instruction in the Avionics Fundamentals Course, Class A. It was found that the students knew about 30 percent of the material before reading the programs, that mastery rose to a very high level on the immediate posttest,…

  5. Why quarks cannot be fundamental particles

    NASA Astrophysics Data System (ADS)

    Kalman, C. S.

    2005-05-01

    Many reasons why quarks should be considered composite particles are found in the book Preons by D'Souza and Kalman. One reason not found in the book is that all the quarks except for the u quark decay. The electron and the electron neutrino do not decay. A model of fundamental particles based upon the weak charge is presented.

  6. Fundamental and Gradient Differences in Language Development

    ERIC Educational Resources Information Center

    Herschensohn, Julia

    2009-01-01

    This article reexamines Bley-Vroman's original (1990) and evolved fundamental difference hypothesis that argues that differences in path and endstate of first language acquisition and adult foreign language learning result from differences in the acquisition procedure (i.e., language faculty and cognitive strategies, respectively). The evolved…

  7. Fundamental Movement Skills and Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Staples, Kerri L.; Reid, Greg

    2010-01-01

    Delays and deficits may both contribute to atypical development of movement skills by children with ASD. Fundamental movement skills of 25 children with autism spectrum disorders (ASD) (ages 9-12 years) were compared to three typically developing groups using the "Test of Gross Motor Development" ("TGMD-2"). The group matched on chronological age…

  8. On some fundamental concepts of galactic dynamics

    NASA Astrophysics Data System (ADS)

    Ossipkov, L. P.

    2013-10-01

    We discuss the following fundamental concepts of galactic dynamics: (a) regular (smoothed) and irregular (random) forces, (b) truncation of the impact parameter, (c) the invariance of the Maxwellian velocity distribution, and (d) the Jeans theorem. Dedicated to Felix Alexandrovich Tsitsin (1931-2005)

  9. Reversing: A Fundamental Idea in Computer Science

    ERIC Educational Resources Information Center

    Armoni, Michal; Ginat, David

    2008-01-01

    Reversing is the notion of thinking or working in reverse. Computer science textbooks and tutors recognize it primarily in the form of recursion. However, recursion is only one form of reversing. Reversing appears in the computer science curriculum in many other forms, at various intellectual levels, in a variety of fundamental courses. As such,…

  10. Radio and Television Repairer Fundamentals. Student's Manual.

    ERIC Educational Resources Information Center

    Maul, Chuck

    This self-contained student manual on fundamentals of radio and television repair is designed to help trade and industrial students relate work experience on the job to information studied at school. Designed for individualized instruction under the supervision of a coordinator or instructor, the manual has 9 sections, each containing 2 to 10…

  11. Fundamental Plane of Sunyaev-Zel'dovich Clusters

    NASA Astrophysics Data System (ADS)

    Afshordi, Niayesh

    2008-10-01

    Sunyaev-Zel'dovich (SZ) cluster surveys are considered among the most promising methods for probing dark energy up to large redshifts. However, their premise is hinged on an accurate mass-observable relationship, which could be affected by the (rather poorly understood) physics of the intracluster gas. In this paper, using a semianalytic model of the intracluster gas that accommodates various theoretical uncertainties, I develop a fundamental plane relationship between the observed size, thermal energy, and mass of galaxy clusters. In particular, I find that M ∝ (Y_SZ/R_SZ,2)^(3/4), where M is the mass, Y_SZ is the total SZ flux or thermal energy, and R_SZ,2 is the SZ half-light radius of the cluster. I first show that, within this model, using the fundamental plane relationship reduces the (systematic+random) errors in mass estimates to 14%, from 22% for a simple mass-flux relationship. Since measurement of the cluster sizes is an inevitable part of observing the SZ clusters, the fundamental plane relationship can be used to reduce the error of the cluster mass estimates by ~34%, improving the accuracy of the resulting cosmological constraints without any extra cost. I then argue why our fundamental plane is distinctly different from the virial relationship that one may naively expect between the cluster parameters. Finally, I argue that while including more details of the observed SZ profile cannot significantly improve the accuracy of mass estimates, a better understanding of the impact of nongravitational heating/cooling processes on the outskirts of the intracluster medium (apart from external calibrations) might be the best way to reduce these errors.
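
    A toy numerical reading of the scaling quoted above (Python; the normalization A is a placeholder, since the paper's calibrated prefactor is not reproduced here):

        def fp_mass(y_sz, r_sz2, A=1.0):
            """Cluster mass from the fundamental-plane scaling M ∝ (Y_SZ/R_SZ,2)^(3/4)."""
            return A * (y_sz / r_sz2) ** 0.75

        # Doubling the SZ flux at fixed half-light radius raises the mass
        # estimate by a factor 2**0.75 ≈ 1.68:
        print(fp_mass(2.0, 1.0) / fp_mass(1.0, 1.0))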

  12. Absolute radiometry and the solar constant

    NASA Technical Reports Server (NTRS)

    Willson, R. C.

    1974-01-01

    A series of active cavity radiometers (ACRs) are described which have been developed as standard detectors for the accurate measurement of irradiance in absolute units. It is noted that the ACR is an electrical substitution calorimeter, is designed for automatic remote operation in any environment, and can make irradiance measurements in the range from low-level IR fluxes up to 30 solar constants with small absolute uncertainty. The instrument operates in a differential mode by chopping the radiant flux to be measured at a slow rate, and irradiance is determined from two electrical power measurements together with the instrumental constant. Results are reported for measurements of the solar constant with two types of ACRs. The more accurate measurement yielded a value of 136.6 ± 0.7 mW/cm² (1.958 ± 0.010 cal/cm² per min).
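
    The electrical-substitution principle behind the ACR can be sketched as follows (Python; the function, the simple form of the instrument constant, and the numbers are illustrative assumptions, not the instrument's actual calibration): with the cavity alternately shuttered and exposed, the difference between the two electrical heater powers needed to hold the cavity at constant temperature equals the absorbed radiant power.

        def irradiance(p_shuttered_W, p_open_W, aperture_m2, k_instr=1.0):
            """Irradiance (W/m^2) from two electrical power readings; k_instr
            stands in for the instrumental constant mentioned in the abstract."""
            return k_instr * (p_shuttered_W - p_open_W) / aperture_m2

        # 0.1366 W absorbed through a 1 cm^2 aperture is one solar constant:
        print(irradiance(0.2000, 0.0634, 1.0e-4))   # -> 1366.0 W/m^2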

  13. Optimizing constant wavelength neutron powder diffractometers

    NASA Astrophysics Data System (ADS)

    Cussen, Leo D.

    2016-06-01

    This article describes an analytic method to optimize constant wavelength neutron powder diffractometers. It recasts the accepted mathematical description of resolution and intensity in terms of new variables and includes terms for vertical divergence, wavelength and some sample scattering effects. An undetermined multiplier method is applied to the revised equations to minimize the RMS value of resolution width at constant intensity and fixed wavelength. A new understanding of primary spectrometer transmission (presented elsewhere) can then be applied to choose beam elements to deliver an optimum instrument. Numerical methods can then be applied to choose the best wavelength.

  14. Dielectric constants of soils at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Geiger, F. E.; Williams, D.

    1972-01-01

    A knowledge of the complex dielectric constant of soils is essential in the interpretation of microwave airborne radiometer data of the earth's surface. Measurements were made at 37 GHz on various soils from the Phoenix, Ariz., area. Extensive data have been obtained for dry soil and soil with water content in the range from 0.6 to 35 percent by dry weight. Measurements were made in a two arm microwave bridge and results were corrected for reflections at the sample interfaces by solution of the parallel dielectric plate problem. The maximum dielectric constants are about a factor of 3 lower than those reported for similar soils at X-band frequencies.

  15. Microfabricated microengine with constant rotation rate

    SciTech Connect

    Romero, L.A.; Dickey, F.M.

    1999-09-21

    A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.

  16. Atomic Weights No Longer Constants of Nature

    SciTech Connect

    Coplen, T.B.; Holden, N.

    2011-03-01

    Many of us grew up being taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.

  17. Atomic weights: no longer constants of nature

    USGS Publications Warehouse

    Coplen, Tyler B.; Holden, Norman E.

    2011-01-01

    Many of us were taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature.

  18. TOPICAL REVIEW The cosmological constant puzzle

    NASA Astrophysics Data System (ADS)

    Bass, Steven D.

    2011-04-01

    The accelerating expansion of the Universe points to a small positive vacuum energy density and negative vacuum pressure. A strong candidate is the cosmological constant in Einstein's equations of general relativity. Possible contributions are zero-point energies and the condensates associated with spontaneous symmetry breaking. The vacuum energy density extracted from astrophysics is 10^56 times smaller than the value expected from quantum fields and standard model particle physics. Is the vacuum energy density time dependent? We give an introduction to the cosmological constant puzzle and ideas on how to solve it.

  19. Cosmological constant in scale-invariant theories

    SciTech Connect

    Foot, Robert; Kobakhidze, Archil; Volkas, Raymond R.

    2011-10-01

    The incorporation of a small cosmological constant within radiatively broken scale-invariant models is discussed. We show that phenomenologically consistent scale-invariant models can be constructed which allow a small positive cosmological constant, provided a certain relation between the particle masses is satisfied. As a result, the mass of the dilaton is generated at two-loop level. Another interesting consequence is that the electroweak symmetry-breaking vacuum in such models is necessarily a metastable "false" vacuum which, fortunately, is not expected to decay on cosmological time scales.

  20. Environmental dependence of masses and coupling constants

    SciTech Connect

    Olive, Keith A.; Pospelov, Maxim

    2008-02-15

    We construct a class of scalar field models coupled to matter that lead to the dependence of masses and coupling constants on the ambient matter density. Such models predict a deviation of couplings measured on the Earth from values determined in low-density astrophysical environments, but do not necessarily require the evolution of coupling constants with the redshift in the recent cosmological past. Additional laboratory and astrophysical tests of δα and δ(m_p/m_e) as functions of the ambient matter density are warranted.

  1. Our Universe from the cosmological constant

    SciTech Connect

    Barrau, Aurélien; Linsefors, Linda

    2014-12-01

    The issue of the origin of the Universe and of its contents is addressed in the framework of bouncing cosmologies, as described for example by loop quantum gravity. If the current acceleration is due to a true cosmological constant, this constant is naturally conserved through the bounce and the Universe should also be in a (contracting) de Sitter phase in the remote past. We investigate here the possibility that the de Sitter temperature in the contracting branch fills the Universe with radiation that causes the bounce and the subsequent inflation and reheating. We also consider the possibility that this gives rise to a cyclic model of the Universe and suggest some possible tests.

  2. Microfabricated microengine with constant rotation rate

    DOEpatents

    Romero, Louis A.; Dickey, Fred M.

    1999-01-01

    A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.

  3. Relation of the diffuse reflectance remission function to the fundamental optical parameters.

    NASA Technical Reports Server (NTRS)

    Simmons, E. L.

    1972-01-01

    The Kubelka-Munk equations describing the diffuse reflectance of a powdered sample were compared to equations obtained using a uniformly-sized rough-surfaced spherical particle model. The comparison resulted in equations relating the remission function and the Kubelka-Munk constants to the index of refraction, the absorption coefficient, and the average particle diameter of a powdered sample. Published experimental results were used to test the equation relating the remission function to the fundamental optical parameters.
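
    For reference, the Kubelka-Munk remission function discussed here has a simple closed form, F(R) = (1 - R)^2 / (2R) = K/S for an optically thick layer; a minimal sketch (Python):

        def remission(R):
            """Kubelka-Munk remission function F(R) = (1-R)^2 / (2R) = K/S,
            with R the diffuse reflectance of an optically thick powder."""
            return (1.0 - R) ** 2 / (2.0 * R)

        print(remission(0.5))   # 50% diffuse reflectance corresponds to K/S = 0.25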

  4. The Rates of Change of the Fundamental and Overtone Periods of SX Phoenicis

    NASA Astrophysics Data System (ADS)

    Coates, D. W.; Halprin, L.; Thompson, K.

    1982-04-01

    Using existing data for SX Phe, we calculate the rates of change in the fundamental period and the overtone period, assuming that both are changing at a constant rate. We find the values dP_0/dt = (-87 ± 5) × 10^-13 and dP_1/dt = (-174 ± 9) × 10^-13. The method used is applicable to any pulsating variable star whose times of maximum light exhibit measurable multiperiodicity.
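
    To put the quoted rates on a human scale (assuming, as the values suggest, that dP/dt is dimensionless, i.e. seconds per second), a one-line check:

        dP0_dt = -87e-13                          # fundamental period rate, as quoted
        seconds_per_century = 86400 * 365.25 * 100
        print(dP0_dt * seconds_per_century)       # ≈ -0.027 s per century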

  5. Constant capacitance in nanopores of carbon monoliths.

    PubMed

    García-Gómez, Alejandra; Moreno-Fernández, Gelines; Lobato, Belén; Centeno, Teresa A

    2015-06-28

    The results obtained for binder-free electrodes made of carbon monoliths with narrow micropore size distributions confirm that the specific capacitance in the electrolyte (C2H5)4NBF4/acetonitrile does not depend significantly on the micropore size and support the foregoing constant result of 0.094 ± 0.011 F m^-2.
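
    Given the area-normalized constant quoted above, converting to a gravimetric capacitance is a one-liner (Python; the specific surface area below is an assumed, illustrative value, not one from the paper):

        C_AREAL = 0.094                  # F per m^2, from the abstract
        ssa_m2_per_g = 1000.0            # assumed specific surface area of the carbon
        print(C_AREAL * ssa_m2_per_g)    # -> 94 F/g gravimetric capacitance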

  6. Damping constant estimation in magnetoresistive readers

    SciTech Connect

    Stankiewicz, Andrzej; Hernandez, Stephanie

    2015-05-07

    The damping constant is a key design parameter in magnetic reader design. Its value can be derived from bulk or sheet film ferromagnetic resonance (FMR) line width. However, dynamics of nanodevices is usually defined by presence of non-uniform modes. It triggers new damping mechanisms and produces stronger damping than expected from traditional FMR. This work proposes a device-level technique for damping evaluation, based on time-domain analysis of thermally excited stochastic oscillations. The signal is collected using a high bandwidth oscilloscope, by direct probing of a biased reader. Recorded waveforms may contain different noise signals, but free layer FMR is usually a dominating one. The autocorrelation function is a reflection of the damped oscillation curve, averaging out stochastic contributions. The damped oscillator formula is fitted to autocorrelation data, producing resonance frequency and damping constant values. Restricting lag range allows for mitigation of the impact of other phenomena (e.g., reader instability) on the damping constant. For a micromagnetically modeled reader, the technique proves to be much more accurate than the stochastic FMR line width approach. Application to actual reader waveforms yields a damping constant of ∼0.03.
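
    A minimal sketch of this estimation chain (Python; the noise-driven AR(2) resonator standing in for a thermally excited free layer, the sampling rate, and the fit window are all assumptions, not the authors' setup): autocorrelate the waveform, then fit a damped-oscillator form over a restricted lag range.

        import numpy as np
        from scipy.optimize import curve_fit

        def damped(t, a, f0, alpha, phi):
            """Damped-oscillator model; alpha plays the role of the damping constant."""
            return a * np.exp(-2 * np.pi * f0 * alpha * t) * np.cos(2 * np.pi * f0 * t + phi)

        def estimate(x, fs, max_lag):
            """Fit the damped form to the (FFT-based) autocorrelation of waveform x."""
            x = x - x.mean()
            nfft = int(2 ** np.ceil(np.log2(2 * x.size)))
            s = np.fft.rfft(x, nfft)
            ac = np.fft.irfft(s * np.conj(s))[:max_lag]
            ac /= ac[0]
            t = np.arange(max_lag) / fs
            popt, _ = curve_fit(damped, t, ac, p0=(1.0, fs / 8, 0.02, 0.0))
            return popt[1], abs(popt[2])          # resonance frequency, damping

        # Stand-in for thermally excited stochastic oscillations:
        fs, f0, alpha = 40e9, 5e9, 0.03
        r, w = np.exp(-2 * np.pi * f0 * alpha / fs), 2 * np.pi * f0 / fs
        rng = np.random.default_rng(1)
        x = np.zeros(100_000)
        for n in range(2, x.size):
            x[n] = 2 * r * np.cos(w) * x[n - 1] - r * r * x[n - 2] + rng.standard_normal()
        print(estimate(x, fs, max_lag=150))       # ≈ (5e9, 0.03)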

  7. Variations of the Solar Constant. [conference

    NASA Technical Reports Server (NTRS)

    Sofia, S. (Editor)

    1981-01-01

    The variations in data received from rocket-borne and balloon-borne instruments are discussed. Indirect techniques to measure and monitor the solar constant are presented. Emphasis is placed on the correlation of data from the Solar Maximum Mission and the Nimbus 7 satellites.

  8. Unified Technical Concepts. Module 12: Time Constants.

    ERIC Educational Resources Information Center

    Technical Education Research Center, Waco, TX.

    This concept module on time constants is one of thirteen modules that provide a flexible, laboratory-based physics instructional package designed to meet the specialized needs of students in two-year, postsecondary technical schools. Each of the thirteen concept modules discusses a single physics concept and how it is applied to each energy…

  9. The ideal Kolmogorov inertial range and constant

    NASA Technical Reports Server (NTRS)

    Zhou, Ye

    1993-01-01

    The energy transfer statistics measured in numerically simulated flows are found to be nearly self-similar for wavenumbers in the inertial range. Using the measured self-similar form, an 'ideal' energy transfer function and the corresponding energy flux rate were deduced. From this flux rate, the Kolmogorov constant was calculated to be 1.5, in excellent agreement with experiments.
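
    The final step described above amounts to reading the constant off the inertial-range law E(k) = C ε^(2/3) k^(-5/3); a toy version (Python, with synthetic data standing in for the simulation spectra):

        import numpy as np

        eps = 0.1                                   # assumed energy flux rate
        k = np.logspace(0.0, 2.0, 50)               # inertial-range wavenumbers
        E = 1.5 * eps ** (2 / 3) * k ** (-5 / 3)    # synthetic spectrum with C = 1.5

        logC = np.mean(np.log(E) - (2 / 3) * np.log(eps) + (5 / 3) * np.log(k))
        print(np.exp(logC))                         # recovers the constant, 1.5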

  10. The Elastic Constants for Wrought Aluminum Alloys

    NASA Technical Reports Server (NTRS)

    Templin, R L; Hartmann, E C

    1945-01-01

    There are several constants which have been devised as numerical representations of the behavior of metals under the action of loadings which stress the metal within the range of elastic action. Some of these constants, such as Young's modulus of elasticity in tension and compression, shearing modulus of elasticity, and Poisson's ratio, are regularly used in engineering calculations. Precise tests and experience indicate that these elastic constants are practically unaffected by many of the factors which influence the other mechanical properties of materials and that a few careful determinations under properly controlled conditions are more useful and reliable than many determinations made under less favorable conditions. It is the purpose of this paper to outline the methods employed by the Aluminum Research Laboratories for the determination of some of these elastic constants, to list the values that have been determined for some of the wrought aluminum alloys, and to indicate the variations in the values that may be expected for some of the commercial products of these alloys.

  11. The Cosmological Constant and its Interpretation

    NASA Astrophysics Data System (ADS)

    Liddle, A.; Murdin, P.

    2002-12-01

    The cosmological constant was first introduced into the equations of general relativity by Einstein himself, who later famously criticized this move as his `greatest blunder'. His main motivation had been to allow cosmological models featuring a static universe, but this possibility swiftly became redundant with Edwin Hubble's discovery of the expansion of the universe. Despite this, it has period...

  12. Spray Gun With Constant Mixing Ratio

    NASA Technical Reports Server (NTRS)

    Simpson, William G.

    1987-01-01

    Conceptual mechanism mounted in handle of spray gun maintains constant ratio between volumetric flow rates in two channels leading to spray head. With mechanism, possible to keep flow ratio near 1:1 (or another desired ratio) over range of temperatures, orifice or channel sizes, or clogging conditions.

  13. Asymptotically Vanishing Cosmological Constant in the Multiverse

    NASA Astrophysics Data System (ADS)

    Kawai, Hikaru; Okada, Takashi

    We study the problem of the cosmological constant in the context of the multiverse in Lorentzian space-time, and show that the cosmological constant will vanish in the future. This sort of argument was started by Sidney Coleman in 1989, who argued that Euclidean wormholes make the multiverse partition function a superposition of various values of the cosmological constant Λ, which has a sharp peak at Λ = 0. However, the implication of the Euclidean analysis for our Lorentzian space-time is unclear. With this motivation, we analyze the quantum state of the multiverse in Lorentzian space-time by the WKB method, and calculate the density matrix of our universe by tracing out the other universes. Our result predicts a vanishing cosmological constant. While Coleman obtained the enhancement at Λ = 0 through the action itself, in our Lorentzian analysis a similar enhancement arises from the front factor of e^(iS) in the universe wave function, which is next-to-leading order in the WKB approximation.

  14. A tunable CMOS constant current source

    NASA Technical Reports Server (NTRS)

    Thelen, D.

    1991-01-01

    A constant current source has been designed which makes use of on-chip electrically erasable memory to adjust the magnitude and temperature coefficient of the output current. The current source includes a voltage reference based on the difference between enhancement and depletion transistor threshold voltages. Accuracy is ±3% over the full range of power supply, process variations, and temperature using eight bits for tuning.

  15. Factorization of the constants of motion

    NASA Astrophysics Data System (ADS)

    Nash, P. L.; Chen, L. Y.

    2006-08-01

    A complete set of first integrals, or constants of motion, for a model system is constructed using "factorization", as described below. The system is described by the effective Feynman Lagrangian L = (1/4)[m ẍ(t) + 2mλ ẋ(t) + ∂V/∂x(x(t))]^2, with one of the simplest nontrivial potentials, V(x) = (1/2) m ω^2 x^2, selected for study. Four new, explicitly time-dependent constants of the motion c_(i±), i = 1, 2, are defined for this system. While ∂c_(i±)/∂t ≠ 0, dc_(i±)/dt = ∂c_(i±)/∂t + ẋ ∂c_(i±)/∂x + ẍ ∂c_(i±)/∂ẋ + ... = 0 along an extremal of L. The Hamiltonian H is shown to equal a sum of products of the c_(i±), and verifies ∂H/∂t = 0. A second, functionally independent constant of motion is also constructed as a sum of the quadratic products of c_(i±). It is shown that these derived constants of motion are in involution.

  16. Teaching Nanochemistry: Madelung Constants of Nanocrystals

    ERIC Educational Resources Information Center

    Baker, Mark D.; Baker, A. David

    2010-01-01

    The Madelung constants for binary ionic nanoparticles are determined. The computational method described here sums the Coulombic interactions of each ion in the particle without the use of partial charges commonly used for bulk materials. The results show size-dependent lattice energies. This is a useful concept in teaching how properties such as…
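
    A minimal sketch, in Python, of the direct Coulomb sum the article describes (the cube size, rock-salt geometry, and unit nearest-neighbor spacing are illustrative assumptions): every ion carries its full formal charge, and each ion's Madelung constant is its pairwise Coulomb sum over the finite particle.

        import numpy as np
        from itertools import product

        n = 4                                        # a 4 x 4 x 4-ion rock-salt cube
        sites = np.array(list(product(range(n), repeat=3)), dtype=float)
        charges = (-1.0) ** sites.sum(axis=1)        # alternating +1/-1 formal charges

        def madelung(i):
            d = np.linalg.norm(sites - sites[i], axis=1)   # distances in units of a0
            d[i] = np.inf                                  # exclude self-interaction
            return -charges[i] * np.sum(charges / d)

        # Size-dependent mean per-ion value; tends toward the bulk NaCl
        # Madelung constant 1.7476 as the particle grows:
        print(np.mean([madelung(i) for i in range(sites.shape[0])]))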

  17. Fourier and Gegenbauer expansions for a fundamental solution of the Laplacian in the hyperboloid model of hyperbolic geometry

    NASA Astrophysics Data System (ADS)

    Cohl, H. S.; Kalnins, E. G.

    2012-04-01

    Due to the isotropy of d-dimensional hyperbolic space, there exists a spherically symmetric fundamental solution for its corresponding Laplace-Beltrami operator. The R-radius hyperboloid model of hyperbolic geometry with R > 0 represents a Riemannian manifold with negative-constant sectional curvature. We obtain a spherically symmetric fundamental solution of Laplace’s equation on this manifold in terms of its geodesic radius. We give several matching expressions for this fundamental solution including a definite integral over reciprocal powers of the hyperbolic sine, finite summation expressions over hyperbolic functions, Gauss hypergeometric functions and in terms of the associated Legendre function of the second kind with order and degree given by d/2 - 1 with real argument greater than unity. We also demonstrate uniqueness for a fundamental solution of Laplace’s equation on this manifold in terms of a vanishing decay at infinity. In rotationally invariant coordinate systems, we compute the azimuthal Fourier coefficients for a fundamental solution of Laplace’s equation on the R-radius hyperboloid. For d ⩾ 2, we compute the Gegenbauer polynomial expansion in geodesic polar coordinates for a fundamental solution of Laplace’s equation on this negative-constant curvature Riemannian manifold. In three dimensions, an addition theorem for the azimuthal Fourier coefficients of a fundamental solution for Laplace’s equation is obtained through comparison with its corresponding Gegenbauer expansion.

  18. Fundamental Physics for Probing and Imaging

    NASA Astrophysics Data System (ADS)

    Allison, Wade

    2006-12-01

    This book addresses the question 'What is physics for?' Physics has provided many answers for mankind by extending his ability to see. Modern technology has enabled the power of physics to see into objects to be used in archaeology, medicine including therapy, geophysics, forensics and other spheres important to the good of society. The book looks at the fundamental physics of the various methods and how they are used by technology. These methods are magnetic resonance, ionising radiation and sound. By taking a broad view over the whole field it encourages comparisons, but also addresses questions of risk and benefit to society from a fundamental viewpoint. This textbook has developed from a course given to third year students at Oxford and is written so that it can be used coherently as a basis for shortened courses by omitting a number of chapters.

  19. DOE fundamentals handbook: Material science. Volume 1

    SciTech Connect

    Not Available

    1993-01-01

    The Mechanical Science Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mechanical components and mechanical science. The handbook includes information on diesel engines, heat exchangers, pumps, valves, and miscellaneous mechanical components. This information will provide personnel with a foundation for understanding the construction and operation of mechanical components that are associated with various DOE nuclear facility operations and maintenance.

  20. [Reduction of biology to fundamental physics].

    PubMed

    Okhonin, V A

    2001-01-01

    It was shown that, while interpreting life as a physical phenomenon, fundamental physics allows for the following alternatives: relativity of animate and inanimate upon canonical transformations; the impossibility of the change from animate to inanimate state of isolated systems; the abandonment of attempts to reduce biology to the physics of isolated systems. The possibility of reducing biology to phenomenological physics was considered. A number of equations for the general phenomenological dynamics of density matrix was proposed.

  1. Baryogenesis and its implications to fundamental physics

    SciTech Connect

    Yoshimura, M.

    2008-08-08

    In this talk I shall explain some basic concepts of baryogenesis and leptogenesis theory, and a new idea for an experimental method of verifying fundamental ingredients of leptogenesis theory: the Majorana nature and the absolute magnitude of neutrino masses. Both of these are important to the quest for physics beyond the standard theory, and have far-reaching implications irrespective of any particular model of leptogenesis. If this new method works ideally, there is even a further possibility of detecting relic neutrinos.

  2. Fundamental plasma emission involving ion sound waves

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.

    1987-01-01

    The theory for fundamental plasma emission by the three-wave processes L ± S → T (where L, S, and T denote Langmuir, ion sound, and transverse waves, respectively) is developed. Kinematic constraints on the characteristics and growth lengths of waves participating in the wave processes are identified. In addition, the rates, path-integrated wave temperatures, and limits on the brightness temperature of the radiation are derived.

  3. DOE fundamentals handbook: Material science. Volume 1

    SciTech Connect

    Not Available

    1993-01-01

    This handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of the structure and properties of metals. This volume contains two modules: structure of metals (bonding, common lattice types, grain structure/boundary, polymorphism, alloys, imperfections in metals) and properties of metals (stress, strain, Young's modulus, stress-strain relation, physical properties, working of metals, corrosion, hydrogen embrittlement, tritium/material compatibility).

  4. DOE fundamentals handbook: Mechanical science. Volume 2

    SciTech Connect

    Not Available

    1993-01-01

    The Mechanical Science Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mechanical components and mechanical science. The handbook includes information on diesel engines, heat exchangers, pumps, valves, and miscellaneous mechanical components. This information will provide personnel with a foundation for understanding the construction and operation of mechanical components that are associated with various DOE nuclear facility operations and maintenance.

  5. Fundamental Processes in Plasmas. Final report

    SciTech Connect

    O'Neil, Thomas M.; Driscoll, C. Fred

    2009-11-30

    This research focuses on fundamental processes in plasmas, and emphasizes problems for which precise experimental tests of theory can be obtained. Experiments are performed on non-neutral plasmas, utilizing three electron traps and one ion trap with a broad range of operating regimes and diagnostics. Theory is focused on fundamental plasma and fluid processes underlying collisional transport and fluid turbulence, using both analytic techniques and medium-scale numerical simulations. The simplicity of these systems allows a depth of understanding and a precision of comparison between theory and experiment which is rarely possible for neutral plasmas in complex geometry. The recent work has focused on three areas in basic plasma physics. First, experiments and theory have probed fundamental characteristics of plasma waves: from the low-amplitude thermal regime, to inviscid damping and fluid echoes, to cold fluid waves in cryogenic ion plasmas. Second, the wide-ranging effects of dissipative separatrices have been studied experimentally and theoretically, finding novel wave damping and coupling effects and important plasma transport effects. Finally, correlated systems have been investigated experimentally and theoretically: UCSD experiments have now measured the Salpeter correlation enhancement, and theory work has characterized the 'guiding center atoms' of antihydrogen created at CERN.

  6. Fundamentals of Physics, Extended 7th Edition

    NASA Astrophysics Data System (ADS)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2004-05-01

    No other book on the market today can match the 30-year success of Halliday, Resnick and Walker's Fundamentals of Physics! Fundamentals of Physics, 7th Edition and the Extended Version, 7th Edition offer a solid understanding of fundamental physics concepts, helping readers apply this conceptual understanding to quantitative problem solving, in a breezy, easy-to-understand style. A unique combination of authoritative content and stimulating applications.
    * Numerous improvements in the text, based on feedback from the many users of the sixth edition (both instructors and students)
    * Several thousand end-of-chapter problems have been rewritten to streamline both the presentations and answers
    * 'Chapter Puzzlers' open each chapter with an intriguing application or question that is explained or answered in the chapter
    * Problem-solving tactics are provided to help beginning physics students solve problems and avoid common errors
    * The first section in every chapter introduces the subject of the chapter by asking and answering, "What is Physics?" as the question pertains to the chapter
    * Numerous supplements available to aid teachers and students
    The extended edition provides coverage of developments in Physics in the last 100 years, including: Einstein and Relativity, Bohr and others and Quantum Theory, and the more recent theoretical developments like String Theory.

  7. Construction of Lines of Constant Density and Constant Refractive Index for Ternary Liquid Mixtures.

    ERIC Educational Resources Information Center

    Tasic, Aleksandar Z.; Djordjevic, Bojan D.

    1983-01-01

    Demonstrates the construction of constant-density and constant-refractive-index lines in a triangular coordinate system on the basis of systematic experimental determinations of density and refractive index for both homogeneous (single-phase) ternary liquid mixtures (of known composition) and the corresponding binary compositions. Background information,…

  8. U.S. Geological Survey Fundamental Science Practices

    USGS Publications Warehouse


    2011-01-01

    The USGS has a long and proud tradition of objective, unbiased science in service to the Nation. A reputation for impartiality and excellence is one of our most important assets. To help preserve this vital asset, in 2004 the Executive Leadership Team (ELT) of the USGS was charged by the Director to develop a set of fundamental science practices, philosophical premises, and operational principles as the foundation for all USGS research and monitoring activities. In a concept document, 'Fundamental Science Practices of the U.S. Geological Survey', the ELT proposed 'a set of fundamental principles to underlie USGS science practices.' The document noted that protecting the reputation of USGS science for quality and objectivity requires the following key elements:
    - Clearly articulated, Bureau-wide fundamental science practices.
    - A shared understanding at all levels of the organization that the health and future of the USGS depend on following these practices.
    - The investment of budget, time, and people to ensure that the USGS reputation and high-quality standards are maintained.
    The USGS Fundamental Science Practices (FSP) encompass all elements of research investigations, including data collection, experimentation, analysis, writing results, peer review, management review, and Bureau approval and publication of information products. The focus of FSP is on how science is carried out and how products are produced and disseminated. FSP is not designed to address the question of what work the USGS should do; that is addressed in USGS science planning handbooks and other documents. Building from longstanding existing USGS policies and the ELT concept document, in May 2006, FSP policies were developed with input from all parts of the organization and were subsequently incorporated into the Bureau's Survey Manual. In developing an implementation plan for FSP policy, the intent was to recognize and incorporate the best of USGS current practices to obtain the optimum

  9. Fundamental Physics Explored with High Intensity Laser

    NASA Astrophysics Data System (ADS)

    Tajima, T.; Homma, K.

    2012-10-01

    Over the last century the method of particle acceleration to high energies has become the prime approach to exploring the fundamental nature of matter in the laboratory. It appears that the latest search with contemporary accelerators based on colliders shows a sign of saturation (or at least a slow-down) in increasing energy and the other parameters needed to extend this frontier. We suggest a two-pronged approach enabled by recent progress in high-intensity lasers. First, we envision that the laser-driven plasma accelerator may be able to extend the reach of the collider. For this approach to bear fruit, we need to develop high-average-power laser technology in addition to high intensity. Here we note that the latest research effort of ICAN is an encouraging sign. In addition, we introduce the concept of the noncollider paradigm for exploring fundamental physics with high-intensity (and large-energy) lasers. One example we mention is laser wakefield acceleration (LWFA) far beyond TeV without large luminosity. If we relax or drop the requirement of the large luminosity necessary for colliders and aim solely at the ultrahigh-energy frontier, we are still capable of exploring such fundamental issues. Given such a highly energetic particle source and high-intensity laser fields simultaneously, we expect to be able to access new aspects of matter and the vacuum structure from a fundamental physical point of view. LWFA naturally exploits the nonlinear optical effects in the plasma when the laser reaches relativistic intensity. Normally nonlinear optical effects are discussed based upon the polarization susceptibility of matter to external fields. We suggest applying this concept even to the vacuum structure as a new kind of order parameter to discuss vacuum-originating phenomena at semimacroscopic scales. This viewpoint unifies the following observables with the unprecedented experimental environment we envision; the dispersion relation of

  10. Measurement of the dielectric constant of lunar minerals and regolith

    NASA Astrophysics Data System (ADS)

    Trigwell, S.; Starnes, J.; Brown, C.; White, C.; White, T.; Su, M.; Mahdi, H. H.; Al-Shukri, H. J.; Biris, A.; Non Invasive Prospecting of Lunar Ores and Minerals Team

    2010-12-01

    For long-term lunar exploration, the priorities are excavation and beneficiation of lunar regolith for water, oxygen, energy production, and structural and shielding fabrication. This work is part of a project focusing on the utilization of Ground Penetrating Radar (GPR) to identify the presence of enriched areas of sub-surface minerals for excavation and ore processing. GPR detection of sub-surface minerals depends significantly on the differences in dielectric constant of the various minerals. One of the minerals in lunar regolith of interest is ilmenite for its use in oxygen production and a supply of titanium and iron. Several pure minerals (feldspar, spodumene, olivine, and ilmenite) and lunar simulant JSC-1A were sieved into several size fractions (<25, 25-50, 50-75, and 75-100 µm). A test cell with an attached shaker was constructed in a vacuum chamber and measurements of the dielectric constant of the minerals and simulant were taken as a function of particle size and packing density. The results showed that there was a direct correlation between the measured dielectric constant and packing density and that ilmenite had a much higher dielectric constant than the other minerals. Measurements were also taken on Apollo 14 lunar regolith as a comparison and compared to the literature to validate the results. Mixtures of pure silica powder and ilmenite in various concentrations (2, 5, 10, and 15%) were measured and it was determined that approximately 2-4% ilmenite in the mixtures could be distinguished. Core samples taken on the moon for all Apollo missions showed ilmenite concentrations ranging from 0.3-12%, depending upon whether it was in the mare or highlands regions, and so this data may significantly contribute to the use of GPR for mineral prospecting on the moon.
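
    The sensitivity of GPR to dielectric contrast that motivates these measurements can be illustrated with the normal-incidence reflection coefficient (Python; the permittivity values are illustrative placeholders, not the measured ones):

        import math

        def reflection(eps1, eps2):
            """Normal-incidence amplitude reflection coefficient between two
            non-magnetic, low-loss media with relative permittivities eps1, eps2."""
            a, b = math.sqrt(eps1), math.sqrt(eps2)
            return (a - b) / (a + b)

        # Loose regolith (~3) over an ilmenite-enriched pocket (higher permittivity):
        print(reflection(3.0, 30.0))    # ≈ -0.52, a strong radar reflection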

  11. Components of Dielectric Constants of Ionic Liquids

    NASA Astrophysics Data System (ADS)

    Izgorodina, Ekaterina I.

    2010-03-01

    In this study ab initio-based methods were used to calculate the electronic polarizability and dipole moment of ions comprising ionic liquids [1]. The test set consisted of a number of anions and cations routinely used in the ionic liquid field. As expected, to a first approximation the electronic polarizability volume turned out to be proportional to the ion volume, also calculated by means of ab initio theory. For ionic liquid ions this means that their electronic polarizabilities are at least an order of magnitude larger than those of traditional molecular solvents like water and DMSO. On this basis it may seem surprising that most ionic liquids actually possess modest dielectric constants, falling in the narrow range between 10 and 15. The lower-than-expected dielectric constants of ionic liquids have been explored in this work via explicit calculations of the electronic and orientation polarization contributions to the dielectric constant, using the Clausius-Mossotti equation and the Onsager theory for polar dielectric materials. We determined that the electronic polarization contribution to the dielectric constant is rather small (between 1.9 and 2.2) and comparable to that of traditional molecular solvents. These findings were explained by the interplay between two quantities, increasing electronic polarizability of ions and decreasing number of ions present in the unit volume; although electronic polarizability is usually relatively large for ionic liquid ions, due to their size there are fewer ions present per unit volume (by a factor of 10 compared to traditional molecular solvents). For ionic liquids consisting of ions with zero (e.g. BF4) or negligible (e.g. NTf2) dipole moments the calculated orientation polarization does not contribute enough to account for the whole of the measured values of the dielectric constants. We suggest that in ionic liquids an additional type of polarization, "ionic polarization", originating from small movements of the
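
    The electronic contribution referred to above follows from the Clausius-Mossotti relation, (ε − 1)/(ε + 2) = (4π/3)Nα′, with α′ the polarizability volume and N the number density; a sketch with illustrative numbers, not the paper's values (Python):

        import math

        def eps_electronic(alpha_A3, volume_A3):
            """Optical dielectric constant from Clausius-Mossotti;
            alpha_A3: polarizability volume per formula unit (Angstrom^3),
            volume_A3: volume per formula unit (Angstrom^3)."""
            x = (4 * math.pi / 3) * alpha_A3 / volume_A3
            return (1 + 2 * x) / (1 - x)

        # ~30 A^3 polarizability in ~500 A^3 per ion pair gives eps ~ 2,
        # inside the 1.9-2.2 range quoted in the abstract:
        print(eps_electronic(30.0, 500.0))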

  12. Proposed new determination of the gravitational constant G and tests of Newtonian gravitation

    NASA Astrophysics Data System (ADS)

    Sanders, Alvin J.; Deeds, W. E.

    1992-07-01

    The first "constant of nature" to be identified, Newton's constant of universal gravitation G, is presently the least accurately known. The currently accepted value, (6.67259 ± 0.00085) × 10^-11 m^3 kg^-1 s^-2, has an uncertainty of 128 parts per million (ppm), whereas most other fundamental constants are known to less than 1 ppm. Moreover, the inverse-square law and the equivalence principle are not well validated at distances of the order of meters. We propose measurements within an orbiting satellite which would improve the accuracy of G by two orders of magnitude and also place new upper limits on the field-strength parameter α of any Yukawa-type force, assuming a null result. Preliminary analysis indicates that a test of the time variation of G may also be possible. Our proposed tests would place new limits on α = α_5(q_5/μ)_1(q_5/μ)_2 for characteristic lengths Λ between 30 cm and 30 m and for Λ > 1000 km. In terms of the mass m_b of a vector boson presumed to mediate such a Yukawa-type force, the proposed experiment would place new limits on α for 7 × 10^-9 eV … parts in 10^7 for Λ > 5 m (m_b c^2 < 4 × 10^-8 eV), while the longer-distance interaction would test the equivalence principle to 4 parts in 10^13 for Λ > R_Earth (m_b c^2 < 3 × 10^-14 eV). Specifically, we propose to observe the motion of a small mass during the encounter phase of a "horseshoe" orbit, that is, in the vicinity of its closest approach to a large mass in a nearly
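
    For orientation, the Yukawa-type modification that the α and Λ limits refer to can be written V(r) = -(G m1 m2 / r)(1 + α e^(-r/Λ)); a sketch of the corresponding force (Python; masses, separation, and couplings are illustrative):

        import math

        G = 6.674e-11   # m^3 kg^-1 s^-2 (modern value; the abstract quotes 6.67259e-11)

        def yukawa_force(m1, m2, r, alpha, lam):
            """Radial force from V(r) = -(G m1 m2 / r) * (1 + alpha * exp(-r/lam))."""
            return G * m1 * m2 / r**2 * (1.0 + alpha * (1.0 + r / lam) * math.exp(-r / lam))

        # Fractional deviation from pure Newtonian gravity at r = Lambda = 5 m
        # for a part-in-1e7 coupling:
        dev = yukawa_force(1.0, 1.0, 5.0, 1e-7, 5.0) / yukawa_force(1.0, 1.0, 5.0, 0.0, 5.0) - 1.0
        print(dev)    # ≈ 7.4e-8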

  13. Parts, Cavities, and Object Representation in Infancy

    ERIC Educational Resources Information Center

    Hayden, Angela; Bhatt, Ramesh S.; Kangas, Ashley; Zieber, Nicole

    2011-01-01

    Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape…

  14. Iron metal optical constants: Assessing the effects of metal composition and oxidation on laboratory reflectance spectra of planetary materials

    NASA Astrophysics Data System (ADS)

    Blewett, D. T.; Cahill, J. T.; Lawrence, S. J.; Denevi, B. W.; Nguyen, N. V.

    2012-12-01

    Many planetary surfaces contain Fe or FeNi metal. These metals are present as macroscopic grains (larger than the wavelength of light) in a variety of meteorites and are inferred to exist on/in their asteroid parent bodies. In addition, much smaller (nano- to micrometer) grains of metallic Fe are produced to varying degrees in the surfaces of airless bodies by exposure to the space environment. Space weathering, which includes solar wind sputtering and micrometeoroid impact melting and vaporization, results in the reduction of ferrous Fe harvested from silicates and oxides to a single-domain metallic state, present as nanophase blebs and coatings on and within regolith particles. Nanophase Fe (npFe0) is optically active and has a strong effect on reflectance spectra. For example, a mature lunar soil that has accumulated npFe0 is darker and has a redder spectral slope compared with an unweathered powder of the same lithology; mineralogical absorption bands are also attenuated in space-weathered material. Here we report progress on a comprehensive program undertaken to measure the optical constants of Fe and Ni. The optical constants (real and imaginary parts of the index of refraction) are fundamental physical parameters that govern how light reflects from and transmits through a material. We use ellipsometry to measure the optical constants of high-purity metal films from 160 to 4000 nm, including bare films exposed to the atmosphere and films protected from the atmosphere via a novel technique involving a metal coating on a fused silica prism. Air-exposed Fe films have optical constants that are markedly different from those of the protected film, despite the fact that the air-exposed films appear bright and mirror-like to the eye. X-ray photoelectron spectroscopy confirms the presence of Fe2O3 on the surface of the air-exposed Fe film. Hence, we conclude that oxidation layers form rapidly (minutes to hours) on air-exposed metal and measurably alter the optical

  15. Influence on isotope effect calculations of the method of obtaining force constants from vibrational data

    SciTech Connect

    Goodson, D.Z.; Sarpal, S.K.; Bopp, P.; Wolfsberg, M.

    1982-01-01

    Reduced isotopic partition function ratios (s2/s1)f are employed in the calculation of isotope effects on thermodynamic equilibrium constants. The quadratic force constants of the molecular force field are needed to evaluate (s2/s1)f. Often these force constants are directly deduced from observed fundamentals in vibrational spectra, and the (s2/s1)f values so obtained are labeled (ANHARM). In a theoretically more valid procedure that is more difficult, one corrects observed fundamentals for anharmonicity on the basis of observed overtone and combination bands and then deduces force constants from these observed harmonic frequencies. The (s2/s1)f values obtained from these force constants are labeled (HARM). (HARM) values and (ANHARM) values are evaluated and the isotope effects calculated with these values are discussed. It is concluded that the consistent use of (ANHARM) values in such calculations is a valid procedure.
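
    In the harmonic approximation the reduced partition function ratio takes the standard Bigeleisen-Mayer product over normal-mode frequencies; a minimal sketch (Python; the frequencies are illustrative, with 1 = light and 2 = heavy isotopologue, not values from the paper):

        import math

        H, C_CM, KB = 6.62607015e-34, 2.99792458e10, 1.380649e-23   # SI, c in cm/s

        def rpfr(freqs1_cm, freqs2_cm, T=300.0):
            """(s2/s1)f from harmonic wavenumbers (cm^-1), Bigeleisen-Mayer form."""
            f = 1.0
            for v1, v2 in zip(freqs1_cm, freqs2_cm):
                u1, u2 = H * C_CM * v1 / (KB * T), H * C_CM * v2 / (KB * T)
                f *= (u2 / u1) * math.exp((u1 - u2) / 2.0) \
                     * (1.0 - math.exp(-u1)) / (1.0 - math.exp(-u2))
            return f

        # Toy diatomic: an H -> D substitution shifting a 3000 cm^-1 stretch to 2200 cm^-1
        print(rpfr([3000.0], [2200.0]))   # ≈ 5, a large RPFR typical of H/D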

  16. Density perturbations and the cosmological constant from inflationary landscapes

    SciTech Connect

    Feldstein, Brian; Hall, Lawrence J.; Watari, Taizan

    2005-12-15

    An anthropic understanding of the cosmological constant requires that the vacuum energy at late time scans from one patch of the universe to another. If the vacuum energy during inflation also scans, the various patches of the universe acquire exponentially differing volumes. In a generic landscape with slow-roll inflation, we find that this gives a steeply varying probability distribution for the normalization of the primordial density perturbations, resulting in an exponentially small fraction of observers measuring the Cosmic Background Explorer value of 10^-5. Inflationary landscapes should avoid this "σ problem," and we explore features that can allow them to do that. One possibility is that, prior to slow-roll inflation, the probability distribution for vacua is extremely sharply peaked, selecting essentially a single anthropically allowed vacuum. Such a selection could occur in theories of eternal inflation. A second possibility is that the inflationary landscape has a special property: although scanning leads to patches with volumes that differ exponentially, the value of the density perturbation does not vary under this scanning. This second case is preferred over the first, partly because a flat inflaton potential can result from anthropic selection, and partly because the anthropic selection of a small cosmological constant is more successful.

  17. Dynamic polarizabilities and hyperfine-structure constants for Sc2+

    NASA Astrophysics Data System (ADS)

    Dutta, Narendra Nath; Roy, Sourav; Deshmukh, P. C.

    2015-11-01

    In this work, we calculate dynamic polarizabilities and hyperfine-structure A and B constants of a few low-lying states of Sc2+. The sum-over-states technique is applied to calculate the polarizabilities of the 3d 2D3/2, 3d 2D5/2, and 4s 2S1/2 states. The most important and correlation-sensitive part of the sum is calculated using a highly correlated relativistic coupled-cluster theory. The remaining part of the sum is calculated using lower-order many-body perturbation theory and Dirac-Fock theory. The present dynamic polarizabilities are important for investigating the Stark shifts in the 4s 2S1/2 - 3d 2D5/2 and 4s 2S1/2 - 3d 2D3/2 clock transitions of Sc2+. Magic wavelengths for zero Stark shift in these transitions are found in the vacuum-ultraviolet region. The coupled-cluster theory is used to estimate the hyperfine A and B constants with very high accuracy.
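
    A minimal sketch of the sum-over-states evaluation and of locating a magic frequency where the two clock-state polarizabilities cross. The reduced matrix elements, excitation energies, and truncation to two intermediate states are hypothetical stand-ins, not the paper's coupled-cluster values.

```python
import numpy as np

def dynamic_polarizability(omega, J0, d_elems, delta_E):
    """Scalar dynamic polarizability (atomic units) via sum over states:
    alpha(w) = 2/(3(2J0+1)) * sum_k |<k||D||0>|^2 * dE_k / (dE_k^2 - w^2)."""
    d2 = np.asarray(d_elems) ** 2
    dE = np.asarray(delta_E)
    return 2.0 / (3.0 * (2 * J0 + 1)) * np.sum(d2 * dE / (dE**2 - omega**2))

# Hypothetical reduced dipole matrix elements (a.u.) and excitation
# energies (a.u.) for two clock states -- NOT the paper's values:
alpha_s = lambda w: dynamic_polarizability(w, 0.5, [2.1, 1.4], [0.55, 0.70])
alpha_d = lambda w: dynamic_polarizability(w, 2.5, [1.8, 2.6], [0.40, 0.62])

# A magic frequency is a root of alpha_s(w) - alpha_d(w) = 0; scan a grid
# for sign changes (scipy.optimize.brentq could then refine the root).
ws = np.linspace(0.01, 0.35, 2000)
diff = np.array([alpha_s(w) - alpha_d(w) for w in ws])
sign_flips = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
print(ws[sign_flips])  # candidate magic frequencies (a.u.)
```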

  18. Optical constants of liquid and solid methane

    NASA Technical Reports Server (NTRS)

    Martonchik, John V.; Orton, Glenn S.

    1994-01-01

    The optical constants n_r + i n_i of liquid methane and phase 1 solid methane were determined over the entire spectral range by the use of various data sources published in the literature. Kramers-Kronig analyses were performed on the absorption spectra of liquid methane at the boiling point (111 K) and the melting point (90 K) and on the absorption spectra of phase 1 solid methane at the melting point and at 30 K. Measurements of the static dielectric constant at these temperatures and refractive indices determined over limited spectral ranges were used as constraints in the analyses. Applications of methane optical properties to studies of outer solar system bodies are described.
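
    A Kramers-Kronig analysis recovers the real index n(w) from the absorption (extinction) spectrum k(w) via a principal-value integral. The sketch below is a simple discrete version on a synthetic absorption band, assuming a uniform grid and a crude principal-value treatment; it illustrates the transform, not the paper's constrained analysis.

```python
import numpy as np

def kramers_kronig_n(omega, k, n_inf=1.0):
    """Real index n(w) from extinction k(w) via a discrete principal-value
    Kramers-Kronig transform:
        n(w) = n_inf + (2/pi) P int w' k(w') / (w'^2 - w^2) dw'
    The singular point w' = w is simply skipped (adequate on a fine grid)."""
    dw = omega[1] - omega[0]               # assumes a uniform grid
    n = np.empty_like(omega)
    for i, w in enumerate(omega):
        mask = np.arange(omega.size) != i  # principal value: drop w' == w
        integrand = omega[mask] * k[mask] / (omega[mask]**2 - w**2)
        n[i] = n_inf + (2.0 / np.pi) * np.sum(integrand) * dw
    return n

# Synthetic Lorentzian-like absorption band on an arbitrary frequency grid:
w = np.linspace(0.1, 50.0, 5000)
k = 0.5 / (1.0 + ((w - 10.0) / 0.8) ** 2)
n = kramers_kronig_n(w, k)
print(n[np.argmin(np.abs(w - 10.0))])  # dispersion feature near the band
```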

  19. Some Dynamical Effects of the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Axenides, M.; Floratos, E. G.; Perivolaropoulos, L.

    Newton's law is modified in the presence of a cosmological constant by a small repulsive term (antigravity) that is proportional to the distance. Assuming a value of the cosmological constant consistent with the recent SN Ia data (Λ ≈ 10^-52 m^-2), we investigate the significance of this term on various astrophysical scales. We find that on galactic scales or smaller (less than a few tens of kpc), the dynamical effects of the vacuum energy are negligible by several orders of magnitude. On scales of 1 Mpc or larger, however, we find that the vacuum energy can significantly affect the dynamics. For example, we show that the velocity data in the Local Group of galaxies correspond to galactic masses increased by 35% in the presence of vacuum energy. The effect is even more important for larger low-density systems like clusters of galaxies or superclusters.
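
    The scale dependence quoted in the abstract can be checked directly: the repulsive acceleration Λc²r/3 grows with r while the Newtonian term GM/r² falls, so their ratio scales as r³. A minimal sketch, assuming a 10^12 solar-mass system (an illustrative choice, not a value from the paper):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
LAMBDA = 1e-52       # cosmological constant, m^-2 (value used in the abstract)
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def accel_ratio(M, r):
    """Ratio of the repulsive Lambda term (Lambda c^2 r / 3) to the
    Newtonian attraction (G M / r^2) at radius r for mass M."""
    return (LAMBDA * C**2 * r / 3.0) / (G * M / r**2)

M_galaxy = 1e12 * M_SUN
for r_kpc in (10, 100, 1000):   # 1000 kpc = 1 Mpc
    print(r_kpc, "kpc:", accel_ratio(M_galaxy, r_kpc * KPC))
# -> roughly 7e-7 at 10 kpc, 7e-4 at 100 kpc, and 0.7 at 1 Mpc:
#    negligible on galactic scales, order unity at Mpc scales.
```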

  20. BOREAS RSS-17 Dielectric Constant Profile Measurements

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); McDonald, Kyle C.; Zimmerman, Reiner; Way, JoBea

    2000-01-01

    The BOREAS RSS-17 team acquired and analyzed imaging radar data from the ESA's ERS-1 over a complete annual cycle at the BOREAS sites in Canada in 1994 to detect shifts in radar backscatter related to varying environmental conditions. This data set consists of dielectric constant profile measurements from selected trees at various BOREAS flux tower sites. The relative dielectric constant was measured at C-band (frequency = 5 GHz) as a function of depth into the trunk of three trees at each site. Measurements were made during April 1994 with an Applied Microwave Corporation field PDP fitted with a 0.358-cm (0.141-inch) diameter coaxial probe tip. The data are available in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
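
    A minimal sketch for working with one of the tabular ASCII profile files. The filename and column layout below are hypothetical; consult the headers of the actual RSS-17 files for the real format.

```python
import numpy as np

# Hypothetical filename and column names -- check the actual file headers.
data = np.genfromtxt("rss17_dielectric_profile.txt",
                     names=("depth_cm", "eps_rel"))

# Relative dielectric constant at C-band (5 GHz) vs. depth into the trunk:
for depth, eps in zip(data["depth_cm"], data["eps_rel"]):
    print(f"{depth:6.2f} cm  eps_r = {eps:5.2f}")
```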