Science.gov

Sample records for global volume averaged

  1. A volume averaged global model for inductively coupled HBr/Ar plasma discharge

    NASA Astrophysics Data System (ADS)

    Chung, Sang-Young; Kwon, Deuk-Chul; Choi, Heechol; Song, Mi-Young

    2015-09-01

A global model for inductively coupled HBr/Ar plasma was developed. The model was based on the self-consistent global model developed by Kwon et al., and a set of chemical reactions in the HBr/Ar plasma was compiled by surveying theoretical, experimental, and evaluative studies. The model takes into account vibrational excitations of diatomic molecules and electronic excitations of the hydrogen atom. Neutralization by collisions between positive and negative ions was treated with Hakman's approximate formula, obtained by fitting theoretical results. For reactions whose parameters were not available in the literature, the corresponding parameters of Cl2 and HCl were adopted for Br2 and HBr, respectively. For validation, calculation results from this model were compared with experimental results from the literature over a range of plasma discharge parameters, and overall good agreement was found.
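    A global (volume-averaged) model of this kind reduces the plasma chemistry to a set of particle-balance rate equations. The following toy sketch shows only the basic structure, with a single made-up source/loss pair in place of the actual HBr/Ar reaction set; all numbers are illustrative placeholders.

```python
# Toy particle balance for a volume-averaged ("global") plasma model:
# dn_e/dt = S - n_e / tau, one ionization source S balanced against an
# effective wall-loss time tau. Values are placeholders, not HBr/Ar data.
S = 1.0e16     # electron source rate, m^-3 s^-1
tau = 1.0e-4   # effective wall-loss time, s

n_e = 0.0      # start from zero electron density
dt = 1.0e-6    # explicit Euler time step, s (dt << tau for stability)
for _ in range(5000):
    n_e += dt * (S - n_e / tau)

# At steady state the source balances the loss, so n_e approaches S * tau.
```

    A real global model carries one such balance equation per species, with rate coefficients from the compiled reaction set, plus a power-balance equation for the electron temperature.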

  2. Averaging of globally coupled oscillators

    NASA Astrophysics Data System (ADS)

    Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt

    1992-03-01

We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.

  3. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
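    The geostrophic construction mentioned above can be sketched numerically. A minimal example on a synthetic height field with a uniform Coriolis parameter (not the NOAA/NMC analyses used in the report): the geostrophic wind follows from horizontal gradients of geopotential height as u_g = -(g/f) dZ/dy, v_g = (g/f) dZ/dx.

```python
import numpy as np

# Geostrophic wind from a synthetic geopotential height field.
g = 9.81        # gravitational acceleration, m s^-2
f = 1.0e-4      # Coriolis parameter, s^-1 (mid-latitude value)

dy = dx = 1.0e5                        # 100 km grid spacing, m
ny, nx = 40, 40
yy, xx = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")
Z = 5500.0 + 1.0e-4 * yy               # height rising linearly northward, m

dZdy, dZdx = np.gradient(Z, dy, dx)    # axis 0 = y, axis 1 = x
u_g = -(g / f) * dZdy                  # zonal geostrophic wind, m s^-1
v_g = (g / f) * dZdx                   # meridional geostrophic wind, m s^-1
```

    For this height slope (100 m per 1000 km) the result is a uniform westward wind of about 9.8 m/s, the expected magnitude for mid-latitude flow.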

  4. Global Average Brightness Temperature for April 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site] Figure 1

    This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  5. Technical note: Revisiting the geometric theorems for volume averaging

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.

    2013-12-01

The geometric theorems reported by Quintard and Whitaker [5, Appendix B] are re-examined. We show that (1) the geometric theorems can be interpreted in terms of the raw spatial moments of the pore structure within the averaging volume; (2) for the case where the first spatial moment is aligned with the center of mass of the averaging volume, the geometric theorems can be expressed in terms of the central moments of the porous medium; (3) when the spatial moments of the pore structure are spatially stationary, the geometric theorems allow substantial simplification of nonlocal terms arising in the averaged equations; and (4) in the context of volume averaging, the geometric theorems of Quintard and Whitaker [5, Appendix B] are better interpreted as statements regarding the spatial stationarity of specific volume-averaged quantities than as explicit statements about the disorder of the media.

  6. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization that employs global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator that recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is presented more clearly and correctly under the automatically generated light sources.
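    The tone mapping step compresses overexposed highlights while keeping contrast in dark regions. A minimal sketch using the well-known global Reinhard operator as a stand-in; the paper's own operator is more elaborate, so this only illustrates the idea.

```python
import numpy as np

# Global Reinhard-style tone mapping: scale luminance by its log-average,
# then compress with L / (1 + L) so highlights map into [0, 1).
def tone_map(hdr, key=0.18, eps=1e-6):
    # Rec. 709 luminance of the HDR image (last axis = RGB)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
    scaled = key * lum / log_avg                   # expose to a mid-grey "key"
    mapped = scaled / (1.0 + scaled)               # compress highlights
    # Rescale RGB so each pixel takes on the mapped luminance
    return hdr * (mapped / (lum + eps))[..., None]
```

    Per-pixel output luminance stays below 1 regardless of how bright the input is, which is what recovers detail in overexposed areas.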

  7. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
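    The climatological averaging described above (all Januaries averaged together, and so on) can be sketched in a few lines. The synthetic data here stand in for the AVHRR MCSST images; missing cells are NaN, and multi-year averaging suppresses them, just as the record notes.

```python
import numpy as np

# Synthetic 10-year monthly SST stack with ~20% of cells missing (NaN),
# standing in for cloud-contaminated grid cells in the weekly images.
rng = np.random.default_rng(0)
years, months, ny, nx = 10, 12, 4, 4
sst = rng.uniform(0.0, 30.0, size=(years, months, ny, nx))
sst[rng.random(sst.shape) < 0.2] = np.nan

# Average over the year axis for each calendar month: 12 climatology maps.
# nanmean ignores missing cells, so most gaps are filled by other years.
climatology = np.nanmean(sst, axis=0)   # shape (12, ny, nx)
```

    The same grouping over week-of-year instead of month gives the 52 weekly images per region.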

  8. Particle filtration: An analysis using the method of volume averaging

    SciTech Connect

    Quintard, M.; Whitaker, S.

    1994-12-01

    The process of filtration of non-charged, submicron particles is analyzed using the method of volume averaging. The particle continuity equation is represented in terms of the first correction to the Smoluchowski equation that takes into account particle inertia effects for small Stokes numbers. This leads to a cellular efficiency that contains a minimum in the efficiency as a function of the particle size, and this allows us to identify the most penetrating particle size. Comparison of the theory with results from Brownian dynamics indicates that the first correction to the Smoluchowski equation gives reasonable results in terms of both the cellular efficiency and the most penetrating particle size. However, the results for larger particles clearly indicate the need to extend the Smoluchowski equation to include higher order corrections. Comparison of the theory with laboratory experiments, in the absence of adjustable parameters, provides interesting agreement for particle diameters that are equal to or less than the diameter of the most penetrating particle.

  9. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Scale (ETDRS) grid using beta version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as either minor (<12 lines adjusted), moderate (>12 and <25 lines adjusted), severe (>26 and <48 lines adjusted), or fail (>48 lines adjusted or could not adjust due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3–18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of the retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macula volumes intended for multilayer segmentation. PMID:27570711
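    The edit-severity rubric above can be expressed as a small classifier. This is a hypothetical helper, not part of the study's software; the abstract's stated thresholds leave boundary counts (12, 25–26, 48) ambiguous, so the cutoffs chosen here are one reasonable reading.

```python
# Classify a scan by how many automated-segmentation lines needed manual
# adjustment. Thresholds follow the rubric in the abstract; exact boundary
# handling (e.g. exactly 12 or 48 lines) is an assumption.
def classify_edits(lines_adjusted, image_usable=True):
    if not image_usable or lines_adjusted > 48:
        return "fail"       # unadjustable or >48 lines
    if lines_adjusted >= 26:
        return "severe"     # roughly 26-48 lines
    if lines_adjusted > 12:
        return "moderate"   # roughly 13-25 lines
    return "minor"          # up to ~12 lines
```

    Tallying these labels per volume setting reproduces the frequency comparison reported in the Results.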

  10. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    PubMed

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and a lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but determining the average volume per swallow is much quicker and simpler.
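    The screening measure is simple arithmetic: the fixed 100 ml total divided by the number of swallows needed to drink it. A one-line sketch (the function name is ours, not the study's):

```python
# Average volume per swallow for a fixed total volume of water drunk.
def average_volume_per_swallow(total_volume_ml, n_swallows):
    return total_volume_ml / n_swallows

# e.g. drinking 100 ml in 5 swallows gives 20 ml per swallow
assert average_volume_per_swallow(100, 5) == 20.0
```

    A low value (many small swallows for the same 100 ml) is the pattern the study flags as a possible early sign of dysphagia.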

  11. Local volume-time averaged equations of motion for dispersed, turbulent, multiphase flows

    SciTech Connect

    Sha, W.T.; Slattery, J.C.

    1980-11-01

    In most flows of liquids and their vapors, the phases are dispersed randomly in both space and time. These dispersed flows can be described only statistically or in terms of averages. Local volume-time averaging is used here to derive a self-consistent set of equations governing momentum and energy transfer in dispersed, turbulent, multiphase flows. The empiricisms required for use with these equations are the subject of current research.

  12. Surface-Based Display of Volume-Averaged Cerebellar Imaging Data

    PubMed Central

    Diedrichsen, Jörn; Zotow, Ewa

    2015-01-01

    The paper presents a flat representation of the human cerebellum, useful for visualizing functional imaging data after volume-based normalization and averaging across subjects. Instead of reconstructing individual cerebellar surfaces, the method uses a white- and grey-matter surface defined on volume-averaged anatomical data. Functional data can be projected along the lines of corresponding vertices on the two surfaces. The flat representation is optimized to yield a roughly proportional relationship between the surface area of the 2D-representation and the volume of the underlying cerebellar grey matter. The map allows users to visualize the activation state of the complete cerebellar grey matter in one concise view, equally revealing both the anterior-posterior (lobular) and medial-lateral organization. As examples, published data on resting-state networks and task-related activity are presented on the flatmap. The software and maps are freely available and compatible with most major neuroimaging packages. PMID:26230510

  13. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  14. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    SciTech Connect

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  15. Macroscopic Conduction Models by Volume Averaging for Two-Phase Systems

    NASA Astrophysics Data System (ADS)

    Goyeau, Benoît

    The aim here is to describe macroscopic models of conductive heat transfer within systems comprising two solid phases, using the method of volume averaging. The presentation of this technique largely stems from work by Carbonell, Quintard, and Whitaker [1-3]. The macroscopic conservation equations are set up under the assumption of local thermal equilibrium, leading to a model governed by a single equation. The effective thermal conductivity of the equivalent medium is obtained by solving the associated closure problems. The case where thermal equilibrium does not pertain, leading to a model with two energy conservation equations, is discussed briefly.
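    Whatever effective thermal conductivity the closure problems produce, it must lie between the classical series and parallel (Wiener) bounds for a two-phase mixture, which makes for a quick sanity check. A minimal sketch, not taken from the chapter:

```python
# Wiener bounds on the effective conductivity of a two-phase medium with
# phase conductivities k1, k2 and volume fraction phi1 of phase 1.
def wiener_bounds(k1, k2, phi1):
    phi2 = 1.0 - phi1
    lower = 1.0 / (phi1 / k1 + phi2 / k2)   # series layering (harmonic mean)
    upper = phi1 * k1 + phi2 * k2           # parallel layering (arithmetic mean)
    return lower, upper
```

    Under local thermal equilibrium the single-equation model's effective conductivity must fall inside this interval; the bounds coincide only when the two phases have equal conductivity.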

  16. Volume averaging: Local and nonlocal closures using a Green’s function approach

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and posed, whenever possible, in terms of measurable properties of (1) the
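    As background, the standard volume-averaging decomposition and the schematic Green's-function form of a closure variable can be written as follows (these are the method's standard textbook forms, not equations quoted from the paper):

```latex
% Spatial average of a field \psi over an averaging volume V, and the
% Gray decomposition into average plus spatial deviation:
\langle \psi \rangle = \frac{1}{V} \int_{V} \psi \, dV ,
\qquad
\psi = \langle \psi \rangle + \tilde{\psi} .

% Schematic closure via a Green's function G: the deviation field is the
% time-space integrated influence of source terms s built from the average.
\tilde{\psi}(\mathbf{x},t) =
  \int_{0}^{t}\!\!\int_{V} G(\mathbf{x},t;\mathbf{y},\tau)\,
  s\bigl(\langle \psi \rangle(\mathbf{y},\tau)\bigr)\, d\mathbf{y}\, d\tau .
```

    The paper's local closures then arise as limits of this nonlocal form when the Green's function is sharply peaked in space and time.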

  17. Globally averaged exospheric temperatures derived from CHAMP and GRACE accelerometer measurements

    NASA Astrophysics Data System (ADS)

    Wise, J. O.; Burke, W. J.; Sutton, E. K.

    2012-04-01

Neutral densities (ρ) inferred from accelerometer measurements on the polar-orbiting Challenging Minisatellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites are used to compile exospheric temperatures (T∞) during extended periods in 2003 and 2004 when their orbital planes were nearly parallel and at quadrature, respectively. Exospheric temperatures were first estimated using ρ-h-T∞ relationships implicit within the Jacchia models, then averaged over individual orbits. We found good agreement between the orbit-averaged T∞ obtained from CHAMP and GRACE accelerometer data as well as with globally averaged exospheric temperatures derived from drag measurements from the constellation of satellites used in the High-Accuracy Satellite Drag Model. Our analysis confirms a critical conjecture by Burke (2008) that globally averaged T∞ is essentially the same as orbit-averaged values obtained by polar-orbiting satellites, independent of the local time of their orbital planes. Unlike the symmetric 0200-1400 LT distribution of T∞ minima and maxima found in early Jacchia models, the presented data indicate that the minima are located closer to the dawn meridian. We also demonstrate that the averaging technique used to estimate T∞ affects the outcomes. Statistical analyses provide an empirical basis for improving estimates of the thermosphere's total energy budget.

  18. The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin

    2016-06-01

On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by the irreversible displacement of magnetic domain walls under an applied magnetic field in ferromagnetic materials, the functional relationship between the saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating the AVBJ v̄, stress σ, applied magnetic field H, and temperature T. The dependence of the AVBJ v̄ on temperature T is then examined using this expression. Moreover, tensile and compressive stress experiments were carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature-compensation problem of the Barkhausen testing method.

  19. Measurement of average density and relative volumes in a dispersed two-phase fluid

    SciTech Connect

    Sreepada, S.R.; Rippel, R.R.

    1990-12-19

An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser doppler anemometry.

  20. Measurement of average density and relative volumes in a dispersed two-phase fluid

    SciTech Connect

    Sreepada, S.R.; Rippel, R.R.

    1992-05-05

An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser doppler anemometry. 3 figs.

  1. Measurement of average density and relative volumes in a dispersed two-phase fluid

    DOEpatents

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser doppler anemometry.

  2. Efficient simulation of fuel cell stacks with the volume averaging method

    NASA Astrophysics Data System (ADS)

    Roos, M.; Batawi, E.; Harnisch, U.; Hocker, Th.

    In fuel cell systems, a multitude of coupled physical and chemical processes take place within the assembly: fluid flow, diffusion, charge and heat transport, as well as electrochemical reactions. For design and optimisation purposes, direct numerical simulation of the full three-dimensional (3D) structure (using CFD tools) is often not feasible due to the large range of length scales that are associated with the various physical and chemical phenomena. However, since many fuel cell components such as gas ducts or current collectors are made of repetitive structures, volume averaging techniques can be employed to replace details of the original structure by their averaged counterparts. In this study, we present simulation results for SOFC fuel cells that are based on a two-step procedure: first, for all repetitive structures detailed 3D finite element simulations are used to obtain effective parameters for the transport equations and interaction terms for averaged quantities. Bipolar plates, for example, are characterised by their porosity and permeability with respect to fluid flow and by anisotropic material tensors for heat and charge transport. Similarly one obtains effective values for the Nernst potential and various kinetic parameters. The complex structural information is thereby cast into effective material properties. In a second step, we utilise these quantities to simulate fuel cells in 2D, thereby decreasing the computation time by several orders of magnitude. Depending on the design and optimisation goals, one chooses appropriate cuts perpendicular or along the stack axis. The resulting models provide current densities, temperature and species distributions as well as operation characteristics. We tested our method with the FEM-based multiphysics software NMSeses, which offers the flexibility to specify the necessary effective models. Results of simulation runs for Sulzer HEXIS-SOFC stacks are presented.

  3. The Global 2000 Report to the President. Volume Three. Documentation on the Government's Global Sectoral Models: The Government's "Global Model."

    ERIC Educational Resources Information Center

    Barney, Gerald O., Ed.

    The third volume of the Global 2000 study presents basic information ("documentation") on the long-term sectoral models used by the U.S. government to project global trends in population, resources, and the environment. Its threefold purposes are: (1) to present all this basic information in a single volume, (2) to provide an explanation, in the…

  4. LETTERS AND COMMENTS: Spherical volume averages of static electric and magnetic fields using Coulomb and Biot-Savart laws

    NASA Astrophysics Data System (ADS)

    Hu, Ben Yu-Kuang

    2009-05-01

    Virtually identical derivations of the expressions for the spherical volume averages of static electric and magnetic fields are presented. These derivations utilize the Coulomb and Biot-Savart laws, and make no use of vector calculus identities or potentials.
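    For orientation, the results such derivations arrive at are the standard textbook expressions, stated here from general knowledge rather than quoted from the letter: for all sources contained inside a sphere of radius R,

```latex
% Volume averages over a sphere of radius R enclosing all sources, with
% p the total electric dipole moment and m the total magnetic dipole moment:
\langle \mathbf{E} \rangle = -\,\frac{1}{4\pi\varepsilon_0}\,\frac{\mathbf{p}}{R^{3}} ,
\qquad
\langle \mathbf{B} \rangle = \frac{\mu_0}{4\pi}\,\frac{2\,\mathbf{m}}{R^{3}} .
% For sources lying entirely outside the sphere, each volume average
% instead equals the field evaluated at the sphere's center.
```

    The sign difference between the two averages is the well-known asymmetry between electric and magnetic dipole fields inside the source region.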

  5. Biotechnology in a global economy. Volume 2. Part 1

    SciTech Connect

    Not Available

    1991-06-01

    Volume 2, Part 1 of Biotechnology in a Global Economy is comprised of various papers relating to the biotechnology industry and its level of development in various countries. Major topics discussed include current status of the industry in these countries, financing sources, future strategies, special projects being pursued, and technology transfer.

  6. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

The use of the individual Global Navigation Satellite Systems (GPS, GLONASS, Galileo, and Beidou/COMPASS) for position, navigation, and timing in the Space Service Volume at altitudes of 300 km, 3,000 km, 8,000 km, 15,000 km, 25,000 km, 36,500 km, and 70,000 km is examined, and the percent availability of at least one and at least four satellites is presented.

  7. Sensitivity to environmental properties in globally averaged synthetic spectra of Earth

    NASA Astrophysics Data System (ADS)

    Tinetti, G.; Meadows, V. S.; Crisp, D.; Fong, W.; Velusamy, T.; Fishbein, E.

    2003-12-01

We are using computer models to explore the observational sensitivity to changes in atmospheric and surface properties, and the detectability of biosignatures, in the globally averaged spectrum of the Earth. Using AIRS (Atmospheric Infrared Sounder) data as input for atmospheric and surface properties, we have generated spatially resolved high-resolution synthetic spectra using the SMART radiative transfer model (developed by D. Crisp) for a variety of conditions, from the UV to the far-IR (beyond the range of current Earth-based satellite data). We have then averaged over the visible disk for a number of different viewing geometries to quantify the sensitivity to surface types and atmospheric features as a function of viewing geometry and of spatial and spectral resolution. These results have been processed with an instrument simulator to improve our understanding of the detectable characteristics of Earth-like planets as viewed by the first (and probably second) generation of extrasolar terrestrial planet detection and characterization missions (Terrestrial Planet Finder/Darwin and Life Finder). This model can also be used to analyze Earthshine data for the detectability of planetary characteristics in disk-averaged spectra.

  8. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) to terrestrial users, GPS is currently used for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Beidou, and Galileo), it will be possible to provide these services by using other GNSS constellations. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume and analyzed the performance of GPS out to 70,000 km. This paper reports a similar analysis of the performance of each of the additional GNSS and compares them with GPS alone. The Space Service Volume is defined as the volume between 3,000 km altitude and geosynchronous altitude, as compared with the Terrestrial Service Volume between the surface and 3,000 km. In the Terrestrial Service Volume, GNSS performance will be similar to performance on the Earth's surface. The GPS system has established signal requirements for the Space Service Volume. A separate paper presented at the conference covers the use of multiple GNSS in the Space Service Volume.

  9. Local and Global Illumination in the Volume Rendering Integral

    SciTech Connect

    Max, N; Chen, M

    2005-10-21

    This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1] and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models, and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is treated as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of spectral power distribution. This survey can provide a basis for, and encourage, new efforts to develop and use complex illumination models to achieve better realism and perception through optical correctness.
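    The discrete emission-absorption form of the volume rendering integral that such surveys revisit is commonly evaluated by front-to-back compositing along each viewing ray. A minimal wavelength-independent sketch (not code from the article itself):

```python
def composite_front_to_back(samples):
    """Discretized emission-absorption volume rendering integral.

    samples: (color, alpha) pairs along one viewing ray, front first.
    Returns the accumulated color and the remaining transmittance.
    """
    color, trans = 0.0, 1.0
    for c, a in samples:
        color += trans * a * c      # emission attenuated by what lies in front
        trans *= 1.0 - a            # absorption reduces transmittance
        if trans < 1e-4:            # early-ray termination
            break
    return color, trans

# two semi-transparent samples on a ray
intensity, transmittance = composite_front_to_back([(1.0, 0.5), (1.0, 0.5)])
```

    Spectral variants evaluate the same recurrence per wavelength sample rather than per RGB channel; local and global illumination models differ in how the per-sample color term is computed.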

  10. The intrinsic dependence structure of peak, volume, duration, and average intensity of hyetographs and hydrographs

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Kilsby, Chris G.

    2013-06-01

    The information contained in hyetographs and hydrographs is often synthesized by using key properties such as the peak or maximum value Xp, volume V, duration D, and average intensity I. These variables play a fundamental role in hydrologic engineering as they are used, for instance, to define design hyetographs and hydrographs as well as to model and simulate the rainfall and streamflow processes. Given their inherent variability and the empirical evidence of the presence of a significant degree of association, such quantities have been studied as correlated random variables suitable to be modeled by multivariate joint distribution functions. The advent of copulas in geosciences simplified the inference procedures by allowing for splitting the analysis of the marginal distributions and the study of the so-called dependence structure, or copula. However, the attention paid to the modeling task has overlooked a more thorough study of the true nature and origin of the relationships that link Xp, V, D, and I. In this study, we apply a set of ad hoc bootstrap algorithms to investigate these aspects by analyzing the hyetographs and hydrographs extracted from 282 daily rainfall series from central eastern Europe, three 5 min rainfall series from central Italy, 80 daily streamflow series from the continental United States, and two sets of 200 simulated universal multifractal time series. Our results show that all the pairwise dependence structures between Xp, V, D, and I exhibit some key properties that can be reproduced by simple bootstrap algorithms that rely on a standard univariate resampling without resort to multivariate techniques. Therefore, the strong similarities between the observed dependence structures and the agreement between the observed and bootstrap samples suggest the existence of a numerical generating mechanism based on the superposition of the effects of sampling data at finite time steps and the process of summing realizations of independent random variables.

  11. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
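    For context, the Hargreaves-Samani estimate that the new model builds on relates global solar radiation to extraterrestrial radiation and the diurnal temperature range, Rs = krs * sqrt(Tmax - Tmin) * Ra. A sketch with illustrative numbers (the coefficient and input values below are assumptions for demonstration, not data from the paper):

```python
import math

def hargreaves_samani(t_max, t_min, ra, krs=0.16):
    """Global solar radiation estimate Rs = krs * sqrt(Tmax - Tmin) * Ra.

    t_max, t_min: monthly mean daily max/min air temperature (deg C)
    ra:           extraterrestrial radiation (e.g. MJ m^-2 day^-1)
    krs:          empirical coefficient (~0.16 interior, ~0.19 coastal)
    """
    return krs * math.sqrt(t_max - t_min) * ra

# illustrative (assumed) monthly values
rs = hargreaves_samani(t_max=28.0, t_min=16.0, ra=38.0)
```

    Modified models such as those compared in the paper typically recalibrate krs or add terms, while keeping the attraction of the approach: only air temperature records are required.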

  12. Predicting Climate Change Using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Ragone, Francesco; Lunkeit, Frank

    2016-04-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source general circulation model of the atmosphere featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Response theory allows one to practically compute the time-dependent measure supported on the pullback attractor of the climate system, whose dynamics is non-autonomous as a result of time-dependent forcings. We propose a simple yet efficient method for predicting, at any lead time and in an ensemble sense, the change in climate properties resulting from an increase in the concentration of CO2, using test perturbation model runs. We assess strengths and limitations of the response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as in their spatial patterns. The quality of the predictions obtained for the surface temperature fields is rather good, while in the case of precipitation a good skill is observed only for the global average. We also show how it is possible to define accurately concepts like the inertia of the climate system or to predict when climate change is detectable given a scenario of forcing. Our analysis can be extended to deal with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of the subsequent reports of the Intergovernmental Panel on Climate Change.

  13. The dependence on solar elevation of the correlation between monthly average hourly diffuse and global radiation

    SciTech Connect

    Soler, A. )

    1988-01-01

    In the present work, the dependence on the mean solar elevation γ̄ of the correlation between K̄d = Īd/Īo and K̄t = Ī/Īo is studied, where Ī, Īd, and Īo are respectively the monthly average hourly values of the global, diffuse, and extraterrestrial radiation, all on a horizontal surface, and γ̄ is the solar elevation at midhour. The dependence is studied for Uccle for the following sky conditions. Condition A: clear skies (fraction of possible sunshine = 1) and the maximum values of direct radiation measured during the period considered (each of the hours before or after solar noon for which radiation is received); Condition B: corresponding to all the values of radiation measured when the sunshine fraction is 1 during the period considered; Condition C: corresponding to all the data collected, independently of the state of the sky; Condition D: corresponding to overcast skies (Ī = Īd). From the available values of Ī and of the monthly average hourly direct radiation on a horizontal surface, values of K̄d and K̄t for 5° ≤ γ̄ ≤ 45° and Δγ̄ = 5° are calculated using Newton's divided difference interpolation formula.
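    Newton's divided difference interpolation formula, used in the study to tabulate the monthly average hourly ratios at 5-degree steps of solar elevation, can be sketched generically; the node values below are hypothetical, not data from the study:

```python
def divided_differences(xs, ys):
    """Coefficients of the Newton form of the interpolating polynomial."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # update in place, highest index first, to reuse lower-order entries
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form at x, Horner style."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [5.0, 15.0, 25.0, 35.0, 45.0]    # solar elevation nodes (degrees)
ys = [0.62, 0.48, 0.41, 0.37, 0.35]   # hypothetical diffuse-fraction values
coef = divided_differences(xs, ys)
kd_20 = newton_eval(xs, coef, 20.0)   # interpolated value at 20 degrees
```

    The Newton form is convenient here because adding a node only appends one coefficient rather than refitting the whole polynomial.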

  14. Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.

    2012-12-01

    We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over past 5 Ma, in particular during the Brunhes and Matuyama. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized), and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78oS to 71oN; of these ˜40% are from the Brunhes chron and ˜20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling, and the limited age information available for many sites. Results from Hawaii indicate that resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high latitude field behavior and low latitude differences between the Pacific and Atlantic hemispheres.

  15. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) services to traditional terrestrial and airborne users, GPS is also being increasingly used as a tool to enable precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite System (GNSS) constellations being replenished and coming into service (GLONASS, Beidou, and Galileo), it will become possible to benefit from greater signal availability and robustness by using evolving multi-constellation receivers. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to seventy thousand kilometers. This paper will report a similar analysis of the signal coverage of GPS in the space domain; however, the analyses will also consider signal coverage from each of the additional GNSS constellations noted earlier to specifically demonstrate the expected benefits to be derived from using GPS in conjunction with other foreign systems. The Space Service Volume is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude circa 36,000 km, as compared with the Terrestrial Service Volume between 3,000 km and the surface of the Earth. In the Terrestrial Service Volume, GNSS performance is the same as on or near the Earth's surface due to satellite vehicle availability and geometry similarities. The core GPS system has thereby established signal requirements for the Space Service Volume as part of technical Capability Development Documentation (CDD) that specifies system performance. 
Besides the technical discussion, we also present diplomatic efforts to extend the GPS Space Service Volume concept to other PNT service providers in an effort to assure that all space users will benefit from the enhanced

  16. Analytical solutions for the coefficient of variation of the volume-averaged solute concentration in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Kabala, Z. J.

    1997-08-01

    Under the assumption that local solute dispersion is negligible, a new general formula (in the form of a convolution integral) is found for the arbitrary k-point ensemble moment of the local concentration of a solute convected in arbitrary m spatial dimensions with general sure initial conditions. From this general formula, new closed-form solutions in m=2 spatial dimensions are derived for 2-point ensemble moments of the local solute concentration for the impulse (Dirac delta) and Gaussian initial conditions. When integrated over an averaging window, these solutions lead to new closed-form expressions for the first two ensemble moments of the volume-averaged solute concentration and to the corresponding concentration coefficients of variation (CV). Also, for the impulse (Dirac delta) solute concentration initial condition, the second ensemble moment of the solute point concentration in two spatial dimensions and the corresponding CV are demonstrated to be unbounded. For impulse initial conditions, the CVs for volume-averaged concentrations are compared with each other for a tracer from the Borden aquifer experiment. The point-concentration CV is unacceptably large in the whole domain, implying that the ensemble mean concentration is inappropriate for predicting the actual concentration values. The volume-averaged concentration CV decreases significantly with an increasing averaging volume. Since local dispersion is neglected, the new solutions should be interpreted as upper limits for the yet-to-be-derived solutions that account for local dispersion, and so should the presented CVs for the Borden tracers. The new analytical solutions may be used to test the accuracy of Monte Carlo simulations or other numerical algorithms that deal with stochastic solute transport. They may also be used to determine the size of the averaging volume needed to make a quasi-sure statement about the solute mass contained in it.

  17. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical, and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma visits, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity, and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
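    The full ARIMA machinery used in the study involves differencing and moving-average terms; as a minimal sketch of its autoregressive core only, an AR(1) model with drift can be fitted by ordinary least squares and iterated forward for forecasting (the series below is synthetic, not the hospital data):

```python
def fit_ar1(series):
    """Ordinary least squares fit of x[t] = c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    return my - phi * mx, phi

def forecast(c, phi, last, steps):
    """Iterate the fitted recursion forward from the last observation."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# synthetic noise-free AR(1) series: x[t] = 2 + 0.5 * x[t-1]
series = [1.0]
for _ in range(12):
    series.append(2.0 + 0.5 * series[-1])
c, phi = fit_ar1(series)
preds = forecast(c, phi, series[-1], 3)
```

    A production ARIMA(p,d,q) fit would add differencing for trend (d), higher-order lags (p), and moving-average error terms (q), typically via a statistics package rather than hand-rolled least squares.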

  19. Average Spatial Distribution of Cosmic Rays behind the Interplanetary Shock—Global Muon Detector Network Observations

    NASA Astrophysics Data System (ADS)

    Kozai, M.; Munakata, K.; Kato, C.; Kuwabara, T.; Rockenbach, M.; Dal Lago, A.; Schuch, N. J.; Braga, C. R.; Mendonça, R. R. S.; Jassar, H. K. Al; Sharma, M. M.; Duldig, M. L.; Humble, J. E.; Evenson, P.; Sabbah, I.; Tokumaru, M.

    2016-07-01

    We analyze the galactic cosmic ray (GCR) density and its spatial gradient in Forbush Decreases (FDs) observed with the Global Muon Detector Network (GMDN) and neutron monitors (NMs). By superposing the GCR density and density gradient observed in FDs following 45 interplanetary shocks (IP-shocks), each associated with an identified eruption on the Sun, we infer the average spatial distribution of GCRs behind IP-shocks. We find two distinct modulations of GCR density in FDs, one in the magnetic sheath and the other in the coronal mass ejection (CME) behind the sheath. The density modulation in the sheath is dominant in the western flank of the shock, while the modulation in the CME ejecta stands out in the eastern flank. This east-west asymmetry is more prominent in GMDN data responding to ˜60 GV GCRs than in NM data responding to ˜10 GV GCRs, because of the softer rigidity spectrum of the modulation in the CME ejecta than in the sheath. The geocentric solar ecliptic y component of the density gradient, Gy, shows a negative (positive) enhancement in FDs caused by the eastern (western) eruptions, while Gz shows a negative (positive) enhancement in FDs caused by the northern (southern) eruptions. This implies that the GCR density minimum is located behind the central flank of IP-shocks and propagating radially outward from the location of the solar eruption. We also confirmed that the average Gz changes its sign above and below the heliospheric current sheet, in accord with the prediction of the drift model for the large-scale GCR transport in the heliosphere.

  20. Exploring Granger causality between global average observed time series of carbon dioxide and temperature

    SciTech Connect

    Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R

    2010-01-01

    Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
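    The classic Granger test that the modified procedure extends compares a restricted autoregression of one series on its own lag against an unrestricted one that adds the lagged second series, via an F-statistic on the residual sums of squares. A self-contained one-lag sketch with synthetic data (this is the textbook test, not the authors' reverse cumulative variant):

```python
import math

def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit y ~ [1, X]."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # normal equations A beta = b
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination, pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bb * xi for bb, xi in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def granger_f(x, y):
    """F-statistic: does one lag of x improve a one-lag autoregression of y?"""
    T = len(y)
    Xr = [[y[t - 1]] for t in range(1, T)]            # restricted: own lag only
    Xu = [[y[t - 1], x[t - 1]] for t in range(1, T)]  # unrestricted: + lagged x
    yy = y[1:]
    rss_r, rss_u = ols_rss(Xr, yy), ols_rss(Xu, yy)
    if rss_u == 0.0:
        return float("inf")
    n, q, k_u = len(yy), 1, 3   # q extra regressors, k_u unrestricted params
    return ((rss_r - rss_u) / q) / (rss_u / (n - k_u))

# synthetic example where lagged x drives y
x = [math.sin(0.7 * t) for t in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1])
f_xy = granger_f(x, y)   # very large: lagged x clearly helps predict y
```

    The F-statistic is compared against an F(q, n - k_u) distribution; the paper's forecasting check on a holdout set plays an analogous confirmatory role.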

  1. Predicting Climate Change using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Lunkeit, Frank; Ragone, Francesco

    2016-04-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source climate model featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Using the theoretical framework of the pullback attractor and the tools of response theory, we propose a simple yet efficient method for predicting, at any lead time and in an ensemble sense, the change in climate properties resulting from an increase in the concentration of CO2, using test perturbation model runs. We assess strengths and limitations of the response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as their spatial patterns. We also show how it is possible to define accurately concepts like the inertia of the climate system or to predict when climate change is detectable given a scenario of forcing. Our analysis can be extended to deal with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of the subsequent reports of the Intergovernmental Panel on Climate Change.

  2. Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) Cost-Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2008-01-01

    Lightning Launch Commit Criteria (LLCC) are designed to prevent space launch vehicles from flight through environments conducive to natural or triggered lightning and are used for all U.S. government and commercial launches at government and civilian ranges. They are maintained by a committee known as the NASA/USAF Lightning Advisory Panel (LAP). The previous LLCC for anvil cloud, meant to avoid triggered lightning, have been shown to be overly restrictive. Some of these rules have had such high safety margins that they prohibited flight under conditions that are now thought to be safe 90% of the time, leading to costly launch delays and scrubs. The LLCC for anvil clouds was upgraded in the summer of 2005 to incorporate results from the Airborne Field Mill (ABFM) experiment at the Eastern Range (ER). Numerous combinations of parameters were considered to develop the best correlation of operational weather observations to in-cloud electric fields capable of rocket triggered lightning in anvil clouds. The Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) was the best metric found. Dr. Harry Koons of Aerospace Corporation conducted a risk analysis of the VAHIRR product. The results indicated that the LLCC based on the VAHIRR product would pose a negligible risk of flying through hazardous electric fields. Based on these findings, the Kennedy Space Center Weather Office is considering seeking funding for development of an automated VAHIRR algorithm for the new ER 45th Weather Squadron (45 WS) RadTec 431250 weather radar and Weather Surveillance Radar-1988 Doppler (WSR-88D) radars. Before developing an automated algorithm, the Applied Meteorology Unit (AMU) was tasked to determine the frequency with which VAHIRR would have allowed a launch to safely proceed during weather conditions otherwise deemed "red" by the Launch Weather Officer. To do this, the AMU manually calculated VAHIRR values based on candidate cases from past launches with known anvil cloud

  3. Grade Point Average and Student Outcomes. Data Notes. Volume 5, Number 1, January/February 2010

    ERIC Educational Resources Information Center

    Clery, Sue; Topper, Amy

    2010-01-01

    Using data from Achieving the Dream: Community Colleges Count, this issue of Data Notes investigates the academic achievement patterns of students attending Achieving the Dream colleges. The data show that 21 percent of students at Achieving the Dream colleges had grade point averages (GPAs) of 3.50 or higher at the end of their first year. At…

  4. The global volume and distribution of modern groundwater

    NASA Astrophysics Data System (ADS)

    Gleeson, Tom; Befus, Kevin M.; Jasechko, Scott; Luijendijk, Elco; Cardenas, M. Bayani

    2016-02-01

    Groundwater is important for energy and food security, human health and ecosystems. The time since groundwater was recharged--or groundwater age--can be important for diverse geologic processes, such as chemical weathering, ocean eutrophication and climate change. However, measured groundwater ages range from months to millions of years. The global volume and distribution of groundwater less than 50 years old--modern groundwater that is the most recently recharged and also the most vulnerable to global change--are unknown. Here we combine geochemical, geologic, hydrologic and geospatial data sets with numerical simulations of groundwater and analyse tritium ages to show that less than 6% of the groundwater in the uppermost portion of Earth’s landmass is modern. We find that the total groundwater volume in the upper 2 km of continental crust is approximately 22.6 million km3, of which 0.1-5.0 million km3 is less than 50 years old. Although modern groundwater represents a small percentage of the total groundwater on Earth, the volume of modern groundwater is equivalent to a body of water with a depth of about 3 m spread over the continents. This water resource dwarfs all other components of the active hydrologic cycle.
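    The quoted equivalent depth of about 3 m follows from dividing the modern groundwater volume by the continental area. As a quick arithmetic check, taking an illustrative volume of 0.45 million km³ (an assumed value within the paper's 0.1-5.0 million km³ range) and a land area of roughly 149 million km²:

```python
# illustrative volume within the paper's 0.1-5.0 million km^3 range (assumed)
modern_volume_km3 = 0.45e6
continent_area_km2 = 149e6          # approximate global land area

depth_km = modern_volume_km3 / continent_area_km2
depth_m = depth_km * 1000.0         # roughly 3 m, matching the abstract
```

    The same division applied to the full 22.6 million km³ in the upper 2 km of crust gives an equivalent depth well over a hundred metres, which is what makes the "dwarfs all other components of the active hydrologic cycle" comparison vivid.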

  5. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2013-01-01

    Besides providing position, velocity, and timing (PVT) for terrestrial users, the Global Positioning System (GPS) is also being used to provide PVT information for Earth orbiting satellites. In 2006, F. H. Bauer et al. defined the Space Service Volume in the paper "GPS in the Space Service Volume," presented at ION's 19th International Technical Meeting of the Satellite Division, and looked at GPS coverage for orbiting satellites. With GLONASS already operational, and the first satellites of the Galileo and Beidou/COMPASS constellations already in orbit, it is time to look at the use of the new Global Navigation Satellite Systems (GNSS) coming into service to provide PVT information for Earth orbiting satellites. This presentation extends "GPS in the Space Service Volume" by examining the coverage capability of combinations of the new constellations with GPS. GPS was first explored as a system for refining the position, velocity, and timing of other spacecraft equipped with GPS receivers in the early eighties. Because of this, a new GPS utility developed beyond the original purpose of providing position, velocity, and timing services for land, maritime, and aerial applications. GPS signals are now received and processed by spacecraft both above and below the GPS constellation, including signals that spill over the limb of the Earth. Support of GPS space applications is now part of the system plan for GPS, and support of the Space Service Volume by other GNSS providers has been proposed to the UN International Committee on GNSS (ICG). GPS has been demonstrated to provide decimeter-level position accuracy in real time for satellites in low Earth orbit (centimeter level in non-real-time applications). GPS has been proven useful for satellites in geosynchronous orbit, and also for satellites in highly elliptical orbits.
Depending on how many satellites are in view, one can keep time locked to the GNSS standard, and through that to Universal Time as long as at least one

  6. Average Volume-Assured Pressure Support in a 16-Year-Old Girl with Congenital Central Hypoventilation Syndrome

    PubMed Central

    Vagiakis, Emmanouil; Koutsourelakis, Ioannis; Perraki, Eleni; Roussos, Charis; Mastora, Zafeiria; Zakynthinos, Spyros; Kotanidou, Anastasia

    2010-01-01

    Congenital central hypoventilation syndrome (CCHS) is an uncommon disorder characterized by the absence of adequate autonomic control of respiration, which results in alveolar hypoventilation and decreased sensitivity to hypercarbia and hypoxemia, especially during sleep.1 Patients with CCHS need lifelong ventilatory support. The treatment options for CCHS include intermittent positive pressure ventilation administered via tracheostomy, noninvasive positive pressure ventilation, negative-pressure ventilation by body chamber or cuirass, and phrenic nerve pacing.2 However, it may be necessary to alter the mode of ventilation according to age, psychosocial reasons, complications of therapy, and emergence of new modes of ventilation.3 We present a case of a 16-year-old girl with CCHS who was mechanically ventilated via tracheostomy for 16 years and was successfully transitioned to a new modality of noninvasive ventilation (average volume-assured pressure support [AVAPS]) that automatically adjusts the pressure support level in order to provide a consistent tidal volume. Citation: Vagiakis E; Koutsourelakis I; Perraki E; Roussos C; Mastora Z; Zakynthinos S; Kotanidou A. Average volume-assured pressure support in a 16-year-old girl with central congenital hypoventilation syndrome. J Clin Sleep Med 2010;6(6):609-612. PMID:21206552

  7. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems.

    PubMed

    Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-21

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to
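    The core idea above — a measured profile is the true profile blurred by the detector response — can be sketched in a few lines. The Gaussian response below is an illustrative assumption, not the CC13's actual response function, and all numbers are hypothetical.

```python
import numpy as np

def detector_response(x, fwhm_mm=6.0):
    """Hypothetical Gaussian detector response (normalized kernel)."""
    sigma = fwhm_mm / 2.355
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def convolve_profile(profile, x):
    """Blur a calculated profile with the detector response, mimicking
    the volume averaging an ionization chamber applies to a measurement."""
    kernel = detector_response(x - x[x.size // 2])
    return np.convolve(profile, kernel, mode="same")

# Idealized field profile with sigmoid penumbrae on a 0.5 mm grid
x = np.arange(-50, 50.5, 0.5)
true_profile = 1.0 / (1.0 + np.exp((np.abs(x) - 30) / 1.5))
measured_like = convolve_profile(true_profile, x)  # broader penumbra
```

    In the reoptimization loop described above, it is the TPS-calculated profile that is convolved this way before being compared with the measurement, so both sides carry the identical volume averaging.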

  8. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to

  9. Effects of volume averaging on the line spectra of vertical velocity from multiple-Doppler radar observations

    NASA Technical Reports Server (NTRS)

    Gal-Chen, T.; Wyngaard, J. C.

    1982-01-01

    Calculations of the ratio of the true one-dimensional spectrum of vertical velocity to that measured with multiple-Doppler radar beams are presented. It was assumed that the effect of pulse volume averaging and objective analysis routines is the replacement of a point measurement with a volume integral. A u and v estimate was assumed to be feasible when orthogonal radars are not available. Also, the target fluid was configured as having an infinite vertical dimension, zero vertical velocity at the top and bottom, and homogeneous and isotropic turbulence with a Kolmogorov energy spectrum. The ratio obtained indicated that equal resolution among radars yields a monotonically decreasing, wavenumber-dependent response function. A gain of 0.95 was demonstrated in an experimental situation with 40 levels. Possible errors introduced when using radars of unequal resolution were discussed. Finally, it was found that, for some flows, the extent of attenuation depends on the number of vertical levels resolvable by the radars.
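    As a simplified one-dimensional stand-in for the pulse-volume filtering discussed above (an assumption, not the paper's full 3-D treatment), a box average of length L attenuates the power spectrum at wavenumber k by |sin(kL/2)/(kL/2)|², which decreases monotonically out to its first zero:

```python
import numpy as np

def box_average_response(k, pulse_length):
    """Power-spectral response of a 1-D box average of length L:
    |sin(kL/2) / (kL/2)|^2.  Illustrative stand-in for the full
    3-D pulse-volume filter."""
    arg = np.asarray(k, float) * pulse_length / 2.0
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(arg == 0.0, 1.0, (np.sin(arg) / arg) ** 2)

k = np.logspace(-3, 0, 200)                              # wavenumber (rad/m)
response = box_average_response(k, pulse_length=150.0)   # hypothetical 150 m gate
```

    Dividing a measured spectrum by such a response function is the usual way to estimate how much variance the averaging has removed at each wavenumber.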

  10. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  11. Calculation of area-averaged vertical profiles of the horizontal wind velocity from volume-imaging lidar data

    NASA Technical Reports Server (NTRS)

    Schols, J. L.; Eloranta, E. W.

    1992-01-01

    Area-averaged horizontal wind measurements are derived from the motion of spatial inhomogeneities in aerosol backscattering observed with a volume-imaging lidar. Spatial averaging provides high precision, reducing sample variations of wind measurements well below the level of turbulent fluctuations, even under conditions of very light mean winds and strong convection or under the difficult conditions represented by roll convection. Wind velocities are measured using the two-dimensional spatial cross correlation computed between successive horizontal plane maps of aerosol backscattering, assembled from three-dimensional lidar scans. Prior to calculation of the correlation function, three crucial steps are performed: (1) the scans are corrected for image distortion by the wind during a finite scan time; (2) a temporal high pass median filtering is applied to eliminate structures that do not move with the wind; and (3) a histogram equalization is employed to reduce biases to the brightest features.
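    The wind-from-correlation step above can be sketched on synthetic data. This FFT-based, integer-pixel version is an illustrative assumption; the actual processing also applies the distortion correction, temporal median filtering, and histogram equalization described in the abstract.

```python
import numpy as np

def estimate_displacement(frame_a, frame_b):
    """Locate the peak of the 2-D cross correlation between two
    backscatter maps (FFT-based, periodic boundaries) and return the
    integer (dy, dx) shift of frame_b relative to frame_a."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    xcorr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Convert wrapped peak indices to signed shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))

# Synthetic aerosol inhomogeneity advected 3 pixels east and 2 pixels north
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((x - 30.0) ** 2 + (y - 30.0) ** 2) / 50.0)
frame1 = blob + 0.01 * rng.standard_normal(blob.shape)
frame2 = np.roll(np.roll(blob, 2, axis=0), 3, axis=1) + 0.01 * rng.standard_normal(blob.shape)
dy, dx = estimate_displacement(frame1, frame2)
# Wind components then follow as (dx * pixel_size / dt, dy * pixel_size / dt)
```

    Averaging the correlation over a large horizontal area is what gives the method its precision relative to point measurements.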

  12. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition), as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and to slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are then refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input to our algorithm. Further results demonstrate the ability of our method to fully extract different organelles from the cell volume with another few seconds of user interaction.

  13. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes

    PubMed Central

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D.; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H.; Fonov, Vladimir S.; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals, including non-human primates, dogs, and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM) and white matter (WM) as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; indeed, ewes showed significantly more GM per body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species. PMID:26089780

  14. A multi-moment finite volume method for incompressible Navier-Stokes equations on unstructured grids: Volume-average/point-value formulation

    NASA Astrophysics Data System (ADS)

    Xie, Bin; Ii, Satoshi; Ikebata, Akio; Xiao, Feng

    2014-11-01

    A robust and accurate finite volume method (FVM) is proposed for incompressible viscous fluid dynamics on triangular and tetrahedral unstructured grids. Unlike conventional FVMs, where the volume-integrated average (VIA) is the only computational variable, the present formulation treats both the VIA and the point value (PV) as computational variables, which are updated separately at each time step. The VIA is computed from a finite volume scheme of flux form and is thus numerically conservative. The PV is updated from the differential form of the governing equation, which does not have to be conservative and can be solved very efficiently. Including the PV as an additional variable enables higher-order reconstructions over a compact mesh stencil to improve the accuracy; moreover, the resulting numerical model is more robust on unstructured grids. We present the numerical formulations in both two and three dimensions on triangular and tetrahedral mesh elements. Numerical results of several benchmark tests are also presented to verify the proposed numerical method as an accurate and robust solver for incompressible flows on unstructured grids.
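    A one-dimensional toy analogue of the two-moment idea, written in the spirit of the CIP-CSL family (an assumption for illustration; the paper's own scheme targets unstructured 2-D/3-D grids), shows how the VIA can be advanced conservatively in flux form while the PV is advanced from the differential form through a quadratic reconstruction constrained by both moments:

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on a periodic domain.  Each cell carries
# a volume-integrated average (VIA); each cell boundary carries a point
# value (PV).  The quadratic reconstruction in a cell matches its two
# boundary PVs and its VIA.
n, a, cfl = 64, 1.0, 0.4
dx = 1.0 / n
dt = cfl * dx / a
xb = np.arange(n) * dx                         # left boundary of each cell
pv = np.sin(2 * np.pi * xb)                    # point values at boundaries
via = (np.cos(2 * np.pi * xb)                  # exact cell averages of sin
       - np.cos(2 * np.pi * (xb + dx))) / (2 * np.pi * dx)
mass0 = via.sum() * dx

def step(pv, via):
    pL, pR, m = pv, np.roll(pv, -1), via       # per-cell constraints
    c0, c1, c2 = pL, 6 * m - 4 * pL - 2 * pR, 3 * pL + 3 * pR - 6 * m
    s0 = 1.0 - a * dt / dx                     # departure point (local coord)
    # PV at boundary i: evaluate the upwind cell's (i-1) reconstruction
    pv_new = np.roll(c0 + c1 * s0 + c2 * s0 ** 2, 1)
    # Mass crossing each cell's right boundary during dt
    F = dx * (c0 * (1 - s0) + c1 * (1 - s0 ** 2) / 2 + c2 * (1 - s0 ** 3) / 3)
    F = np.roll(F, 1)                          # F[i] = flux through boundary i
    via_new = via - (np.roll(F, -1) - F) / dx  # conservative, telescoping update
    return pv_new, via_new

for _ in range(int(round(1.0 / dt))):          # one full advection period
    pv, via = step(pv, via)
```

    Because the VIA update telescopes, total mass is conserved to round-off, while the PV carries the sub-cell information that makes a compact higher-order reconstruction possible.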

  15. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    PubMed

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms.

  16. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  17. Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume

    NASA Astrophysics Data System (ADS)

    Berto, Filippo; Lazzarin, Paolo

    2014-01-01

    When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessment of welded structures under high-cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local strain energy density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not necessary at all. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The SED-based procedure has also proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work, an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
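    The link between the averaged SED and the NSIF mentioned above has a standard closed form for a sharp V-notch under mode I loading, commonly written in this framework (quoted as a textbook result, not from this paper's text) as

```latex
\bar{W} \;=\; \frac{1}{V(R_0)} \int_{V(R_0)} W \,\mathrm{d}V
        \;=\; \frac{e_1}{E}\left(\frac{K_1}{R_0^{\,1-\lambda_1}}\right)^{2},
```

    where \(K_1\) is the mode I NSIF, \(\lambda_1\) Williams' eigenvalue for the given opening angle, \(e_1\) a tabulated function of the opening angle and Poisson's ratio, \(E\) Young's modulus, and \(R_0\) the radius of the control volume.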

  18. Novel brachytherapy treatment planning system utilizing dose rate dependent average cell survival, CT-simulator, and dose-volume histogram

    SciTech Connect

    Mayer, R.; Fong, W.; Frankel, T.

    1995-12-31

    This report describes a new brachytherapy planning program that provides an evaluation of a given low or high dose rate treatment, taking into account spatial dose heterogeneity and cell response to radiation. This brachytherapy scheme uses the images from a CT-Simulator (AcQSim, Picker International, Cleveland, Ohio) to simultaneously localize the seed positions and to axially scan the patient. This procedure helps to ensure accurate registration of the putative seed positions with the patient tissues and organs. The seed positions are determined by back-projecting positions of seeds or dummy seeds from the CT-Simulator setup scout images. Physicians delineate the tissues of interest on the axial slices. Dose is computed after assigning activity (low dose rate) or dwell times (high dose rate) to the ¹⁹²Ir or ¹²⁵I seeds. The planar isodose distribution is superimposed onto axial cuts of the tissues and onto coronal or sagittal views of the tissues following image reconstruction. Areal or volumetric calculations of the dose distribution within a given tissue are computed from the tissue outlines. The treatment plan computes (1) differential and cumulative dose-volume histograms of the dose delivered to individual tissues, (2) the average, standard deviation, and coefficient of skewness of the dose distribution delivered to the individual tissues, and (3) the average survival probability for a given radiation treatment.
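    The dose-volume bookkeeping in items (1)-(2) can be sketched as follows; the voxel doses and bin count are hypothetical, and the cell-survival model of item (3) is omitted.

```python
import numpy as np

def dvh_stats(dose, nbins=50):
    """Differential and cumulative dose-volume histograms for one tissue,
    plus the summary statistics the planning program reports: average,
    standard deviation, and coefficient of skewness of the dose."""
    dose = np.asarray(dose, float)
    edges = np.linspace(0.0, dose.max(), nbins + 1)
    diff_dvh, _ = np.histogram(dose, bins=edges)
    # Cumulative DVH: fraction of tissue volume receiving at least each bin edge
    cum_dvh = diff_dvh[::-1].cumsum()[::-1] / dose.size
    mean, std = dose.mean(), dose.std()
    skew = float(np.mean(((dose - mean) / std) ** 3))
    return diff_dvh, cum_dvh, mean, std, skew

# Hypothetical voxel doses (Gy) for one delineated tissue
rng = np.random.default_rng(1)
voxel_doses = rng.gamma(shape=4.0, scale=10.0, size=5000)
diff_dvh, cum_dvh, mean, std, skew = dvh_stats(voxel_doses)
```

    A skewed differential histogram is exactly the situation where the mean alone misrepresents the delivered dose, which is why the program also reports the coefficient of skewness.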

  19. Monthly Averages of Aerosol Properties: A Global Comparison Among Models, Satellite Data, and AERONET Ground Data

    SciTech Connect

    Kinne, S.; Lohmann, U; Feichter, J; Schulz, M.; Timmreck, C.; Ghan, Steven J.; Easter, Richard C.; Chin, M; Ginoux, P.; Takemura, T.; Tegen, I.; Koch, D; Herzog, M.; Penner, J.; Pitari, G.; Holben, B. N.; Eck, T.; Smirnov, A.; Dubovik, O.; Slutsker, I.; Tanre, D.; Torres, O.; Mishchenko, M.; Geogdzhayev, I.; Chu, D. A.; Kaufman, Yoram J.

    2003-10-21

    Aerosol introduces the largest uncertainties in model-based estimates of the impact of anthropogenic sources on the Earth's climate. A better representation of aerosol in climate models can be expected from individual processing of aerosol types, and new aerosol modules have been developed that distinguish among at least five aerosol types: sulfate, organic carbon, black carbon, sea salt, and dust. In this study, intermediate results for aerosol mass and aerosol optical depth from the new aerosol modules of seven global models are evaluated. Among models, differences in predicted mass fields are expected, owing to differences in initialization and processing. Nonetheless, unusual discrepancies in source strength and in removal rates for particular aerosol types were identified. With simultaneous data for mass and optical depth, type conversion factors were compared. Differences among the tested models cover a factor of 2 for each aerosol type, even hydrophobic ones. This is alarming and suggests that efforts at good mass simulations could be wasted, or that conversions are misused to cover for poor mass simulations. An individual assessment, however, is difficult, as only part of the factors determining the conversion (size assumption, permitted humidification, and prescribed ambient relative humidity) were revealed. These differences need to be understood and minimized if conclusions on aerosol processing in models are to be drawn from comparisons to aerosol optical depth measurements.

  20. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene

    PubMed Central

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-01-01

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet’s dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to −134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5–14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100–150 y ago, with no evidence of oscillations exceeding ∼15–20 cm in time intervals ≥200 y from 6 to 0.15 ka BP. PMID:25313072

  1. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene.

    PubMed

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-10-28

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet's dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to -134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5-14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100-150 y ago, with no evidence of oscillations exceeding ∼15-20 cm in time intervals ≥200 y from 6 to 0.15 ka BP.

  2. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene

    NASA Astrophysics Data System (ADS)

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-10-01

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet's dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to -134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5-14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100-150 y ago, with no evidence of oscillations exceeding ∼15-20 cm in time intervals ≥200 y from 6 to 0.15 ka BP.

  3. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    SciTech Connect

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types.

  4. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high-latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. The current PSV models, Model G and TK03, do not fit the observed PSV or TAF latitudinal behavior in PSV10, or in subsets of normal- and reverse-polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 showed slight statistical improvements but still exceeded acceptable errors. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal-polarity subset of PSV10 than for the reverse-polarity data. Two-thirds of the data in PSV10 are of normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.
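    The VGP dispersion statistic referred to above is, in its simplest form, the root-mean-square angular distance of the poles from the spin axis. The sketch below is that textbook form with hypothetical pole latitudes; PSV10's actual pipeline additionally applies statistically derived selection criteria and serial-correlation averaging.

```python
import numpy as np

def vgp_dispersion(vgp_lat_deg):
    """Angular dispersion S of virtual geomagnetic poles (VGPs) about the
    geographic (spin) axis: S = sqrt( sum(delta_i^2) / (N - 1) ), where
    delta_i = 90 - VGP latitude is each pole's angular distance from the
    axis, in degrees.  No latitudinal cutoff is applied here."""
    delta = 90.0 - np.asarray(vgp_lat_deg, float)
    return float(np.sqrt(np.sum(delta ** 2) / (delta.size - 1)))

# Hypothetical site VGP latitudes (degrees)
S = vgp_dispersion([88.0, 75.0, 81.0, 69.0, 84.0])
```

    Comparing S as a function of site latitude against model predictions (e.g. Model G's S(λ) curve) is the standard PSV test described in the abstract.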

  5. Global Education: What the Research Shows. Information Capsule. Volume 0604

    ERIC Educational Resources Information Center

    Blazer, Christie

    2006-01-01

    Teaching from a global perspective is important because the lives of people around the world are increasingly interconnected through politics, economics, technology, and the environment. Global education teaches students to understand and appreciate people from different cultural backgrounds; view events from a variety of perspectives; recognize…

  6. Computation and use of volume-weighted-average concentrations to determine long-term variations of selected water-quality constituents in lakes and reservoirs

    USGS Publications Warehouse

    Wells, Frank C.; Schertz, Terry L.

    1984-01-01

    A computer program using the Statistical Analysis System has been developed to perform the arithmetic calculations and regression analyses to determine volume-weighted-average concentrations of selected water-quality constituents in lakes and reservoirs. The program has been used in Texas to show decreasing trends in dissolved-solids and total-phosphorus concentrations in Lake Arlington after the discharge of sewage effluent into the reservoir was stopped. The program also was used to show that the August 1978 and October 1981 floods on the Brazos River greatly decreased the volume-weighted-average concentrations of selected constituents in Hubbard Creek Reservoir and Possum Kingdom Lake.
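    The underlying arithmetic is a weighted mean over storage increments. A minimal sketch, with hypothetical layer concentrations and volumes (the original program is SAS-based; Python is used here only for illustration):

```python
import numpy as np

def volume_weighted_average(conc, vol):
    """Volume-weighted-average concentration C_vw = sum(C_i V_i) / sum(V_i),
    with one (C_i, V_i) pair per depth layer or storage increment."""
    conc = np.asarray(conc, float)
    vol = np.asarray(vol, float)
    return float(np.sum(conc * vol) / np.sum(vol))

# Hypothetical reservoir layers: dissolved-solids concentration vs layer volume
c_vw = volume_weighted_average([120.0, 150.0, 400.0],   # mg/L
                               [5.0, 3.0, 1.0])         # volume units
```

    Weighting by volume keeps a small, concentrated bottom layer from dominating the reservoir-wide value, which is what makes the statistic suitable for trend analysis.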

  7. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than just the near-surface component, the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, which is impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. A computer aerosol simulation is used to determine how much of the satellite-retrieved total-column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
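    The column-to-surface conversion described above reduces, per grid cell, to applying a model-derived scaling factor to the retrieved optical depth. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np

# A chemical-transport model supplies both a simulated column AOD and a
# simulated surface PM2.5; their ratio eta tells what fraction of the
# column signal corresponds to near-surface aerosol.  That ratio is then
# applied to the satellite-retrieved AOD.
model_aod = np.array([0.30, 0.55, 0.12])          # simulated column AOD
model_pm25 = np.array([24.0, 50.0, 8.0])          # simulated surface PM2.5, ug/m^3
eta = model_pm25 / model_aod                      # ug/m^3 per unit AOD, per cell

satellite_aod = np.array([0.28, 0.60, 0.15])      # retrieved column AOD
estimated_pm25 = eta * satellite_aod              # near-surface estimate
```

    The accuracy of the estimate therefore hinges on how well the model captures the aerosol vertical profile in each cell, not just the column total.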

  8. Potential Impact of Dietary Choices on Phosphorus Recycling and Global Phosphorus Footprints: The Case of the Average Australian City

    PubMed Central

    Metson, Geneviève S.; Cordell, Dana; Ridoutt, Brad

    2016-01-01

    Changes in human diets, population increases, farming practices, and globalized food chains have led to dramatic increases in the demand for phosphorus fertilizers. Long-term food security and water quality are, however, threatened by such increased phosphorus consumption, because the world’s main source, phosphate rock, is an increasingly scarce resource. At the same time, losses of phosphorus from farms and cities have caused widespread water pollution. As one of the major factors contributing to increased phosphorus demand, dietary choices can play a key role in changing our resource consumption pathway. Importantly, the effects of dietary choices on phosphorus management are twofold: First, dietary choices affect a person or region’s “phosphorus footprint” – the magnitude of mined phosphate required to meet food demand. Second, dietary choices affect the magnitude of phosphorus content in human excreta and hence the recycling- and pollution-potential of phosphorus in sanitation systems. When considering options and impacts of interventions at the city scale (e.g., potential for recycling), dietary changes may be undervalued as a solution toward phosphorus sustainability. For example, in an average Australian city, a vegetable-based diet could marginally increase phosphorus in human excreta (an 8% increase). However, such a shift could simultaneously dramatically decrease the mined phosphate required to meet the city resident’s annual food demand by 72%. Taking a multi-scalar perspective is therefore key to fully exploring dietary choices as one of the tools for sustainable phosphorus management.

  12. A planet under siege: Are we changing earth`s climate?. Global Systems Science, Volume 1

    SciTech Connect

    Sneider, C.; Golden, R.

    1992-12-31

Global Systems Science is an interdisciplinary course for high school students. It is divided into five volumes. Each volume contains laboratory experiments; home investigations; descriptions of recent scientific work; historical background; and consideration of the political, economic, and ethical issues associated with each problem area. Collectively, these volumes constitute a unique combination of studies in the natural and social sciences from which high school students may view the global environmental problems that they will confront within their lifetimes. The five volumes are: A Planet Under Siege: Are We Changing Earth`s Climate; A History of Fire and Ice: The Earth`s Climate System; Energy Paths: Use and Conservation of Energy; Ecological Systems: Evolution and Interdependence of Life; and The Case of the Missing Ozone: Chemistry of the Earth`s Atmosphere.

  13. Transforming Education: Global Perspectives, Experiences and Implications. Educational Psychology: Critical Pedagogical Perspectives. Volume 24

    ERIC Educational Resources Information Center

    DeVillar, Robert A., Ed.; Jiang, Binbin, Ed.; Cummins, Jim, Ed.

    2013-01-01

    This research-based volume presents a substantive, panoramic view of ways in which Australia and countries in Africa, Asia, Europe, and North and South America engage in educational programs and practices to transform the learning processes and outcomes of their students. It reveals and analyzes national and global trajectories in key areas of…

  14. Global Inventory of Regional and National Qualifications Frameworks. Volume II: National and Regional Cases

    ERIC Educational Resources Information Center

    UNESCO Institute for Lifelong Learning, 2015

    2015-01-01

This second volume of the "Global Inventory of Regional and National Qualifications Frameworks" focuses on national and regional cases of national qualifications frameworks for eighty-six countries from Afghanistan to Uzbekistan and seven regional qualifications frameworks. Each country profile provides a thorough review of the main…

  15. Plantation Pedagogy: A Postcolonial and Global Perspective. Global Studies in Education. Volume 16

    ERIC Educational Resources Information Center

    Bristol, Laurette S. M.

    2012-01-01

    "Plantation Pedagogy" originates from an Afro-Caribbean primary school teacher's experience. It provides a discourse which extends and illuminates the limitations of current neo-liberal and global rationalizations of the challenges posed to a teacher's practice. Plantation pedagogy is distinguished from critical pedagogy by its historical presence…

  16. Mass and volume contributions to twentieth-century global sea level rise.

    PubMed

    Miller, Laury; Douglas, Bruce C

    2004-03-25

    The rate of twentieth-century global sea level rise and its causes are the subjects of intense controversy. Most direct estimates from tide gauges give 1.5-2.0 mm yr(-1), whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, fall far below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm yr(-1) (ref. 8) and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. Therefore, either the tide gauge estimates are too high, as has been suggested recently, or one (or both) of the mass and volume estimates is too low. Here we present an analysis of sea level measurements at tide gauges combined with observations of temperature and salinity in the Pacific and Atlantic oceans close to the gauges. We find that gauge-determined rates of sea level rise, which encompass both mass and volume changes, are two to three times higher than the rates due to volume change derived from temperature and salinity data. Our analysis supports earlier studies that put the twentieth-century rate in the 1.5-2.0 mm yr(-1) range, but more importantly it suggests that mass increase plays a larger role than ocean warming in twentieth-century global sea level rise.
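The budget argument in this abstract is simple bookkeeping: if tide gauges measure the total rate and hydrographic data give the steric (volume) rate, the mass contribution follows as the residual. A minimal sketch, with the paper's approximate rates used only as illustrative inputs:

```python
# Illustrative closure of the sea-level budget discussed above: the tide-gauge
# rate should equal the steric (volume) rate plus the mass rate, so the mass
# term can be inferred as a residual. Numbers are rough ranges from the
# abstract, used here only to show the bookkeeping.

def mass_component(total_rate_mm_yr, steric_rate_mm_yr):
    """Infer the mass contribution as the residual of the budget."""
    return total_rate_mm_yr - steric_rate_mm_yr

# Tide gauges: ~1.8 mm/yr; steric estimate: ~0.5 mm/yr
residual = mass_component(1.8, 0.5)
print(round(residual, 1))  # 1.3
```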

  17. SU-D-213-04: Accounting for Volume Averaging and Material Composition Effects in An Ionization Chamber Array for Patient Specific QA

    SciTech Connect

    Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K

    2015-06-15

Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly improved.
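The volume-averaging step described here (assigning a volume-averaged TPS dose to each element of a 2-D matrix) can be illustrated with a simple block average; the grid and block size are hypothetical, not the Matrixx geometry:

```python
# Hedged sketch of the volume-averaging step described above: the exported TPS
# dose grid is block-averaged over each chamber's footprint so that calculated
# and measured doses are compared like-for-like. Grid and block sizes are
# illustrative only.

def block_average(grid, block):
    """Average a 2-D dose grid over non-overlapping block x block cells."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, block):
        out.append([
            sum(grid[r + i][c + j] for i in range(block) for j in range(block))
            / (block * block)
            for c in range(0, cols, block)
        ])
    return out

dose = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(block_average(dose, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```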

18. Insolation data manual: long-term monthly averages of solar radiation, temperature, degree-days and global K̄/sub T/ for 248 national weather service stations

    SciTech Connect

    Knapp, C L; Stoffel, T L; Whitaker, S D

    1980-10-01

Monthly averaged data are presented which describe the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3/sup 0/C (65/sup 0/F). For each station, global K̄/sub T/ (cloudiness index) values were calculated on a monthly and annual basis. (MHR)
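The two derived quantities tabulated in the manual can be sketched as follows, under the usual conventions (degree-days against an 18.3 °C base; the clearness index as measured global insolation divided by extraterrestrial insolation). Function names and sample values are illustrative:

```python
# Minimal sketch of the two derived quantities in the manual, under assumed
# conventions: heating/cooling degree-days relative to an 18.3 C (65 F) base,
# and the monthly clearness index K_T = H / H_0. Sample numbers are
# illustrative only.

BASE_C = 18.3  # 65 F

def degree_days(daily_mean_temps_c, base=BASE_C):
    """Return (heating, cooling) degree-days for a list of daily means."""
    heating = sum(max(base - t, 0.0) for t in daily_mean_temps_c)
    cooling = sum(max(t - base, 0.0) for t in daily_mean_temps_c)
    return heating, cooling

def clearness_index(h_global, h_extraterrestrial):
    """Monthly K_T: global horizontal insolation over extraterrestrial."""
    return h_global / h_extraterrestrial

hdd, cdd = degree_days([10.0, 18.3, 25.0])
print(round(hdd, 1), round(cdd, 1))       # 8.3 6.7
print(clearness_index(15.0, 30.0))        # 0.5
```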

  19. The Global 2000 Report to the President: Entering the Twenty-First Century. Volume Two--The Technical Report.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This second volume of the Global 2000 study presents a technical report of detailed projections and analyses. It is a U.S. government effort to present a long-term global perspective on population, resources, and environment. The volume has four parts. Approximately half of the report, part one, deals with projections for the future in the areas…

  20. Shifting Tides in Global Higher Education: Agency, Autonomy, and Governance in the Global Network. Global Studies in Education, Volume 9

    ERIC Educational Resources Information Center

    Witt, Mary Allison

    2011-01-01

    The increasing connection among higher education institutions worldwide is well documented. What is less understood is how this connectivity is enacted and manifested on specific levels of the global education network. This book details the planning process of a multi-institutional program in engineering between institutions in the US and…

  1. International conference on the role of the polar regions in global change: Proceedings. Volume 1

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

    The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with polar regions which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier and permafrost responses and feedbacks; (6) paleoenvironmental studies; and, (7) aerosols and trace gases.

  2. International conference on the role of the polar regions in global change: Proceedings. Volume 2

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with the polar regions which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier and permafrost responses and feedbacks; (6) paleoenvironmental studies; and (7) aerosols and trace gases.

  3. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

This paper extends the results I reported at this year's ION International Technical Meeting on multi-constellation GNSS coverage by showing how the use of multi-constellation GNSS improves Geometric Dilution of Precision (GDOP). Originally developed to provide position, navigation, and timing for terrestrial users, GPS has found increasing use in space for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth-orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Galileo, and Beidou) and the development of Satellite Based Augmentation Services, it is possible to obtain improved precision by using evolving multi-constellation receivers. The Space Service Volume (SSV) is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude (approximately 36,500 km), with the volume below three thousand kilometers defined as the Terrestrial Service Volume (TSV). The USA has established signal requirements for the SSV as part of the GPS Capability Development Documentation (CDD). Diplomatic efforts are underway to extend Space Service Volume commitments to the other Position, Navigation, and Timing (PNT) service providers in an effort to assure that all space users will benefit from the enhanced capabilities of interoperating GNSS services in the space domain.
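GDOP, the figure of merit this abstract discusses, is computed from the geometry matrix of receiver-to-satellite unit vectors augmented with a clock column: GDOP = sqrt(trace((H^T H)^-1)). A self-contained sketch with an illustrative four-satellite geometry (not taken from the paper):

```python
# Hedged sketch of the GDOP figure of merit discussed above: from unit
# line-of-sight vectors to the visible satellites, build the geometry matrix
# H (with a clock column of ones) and take GDOP = sqrt(trace((H^T H)^-1)).
# The four-satellite geometry below is illustrative only.
import math

def gdop(unit_vectors):
    """unit_vectors: list of (x, y, z) line-of-sight unit vectors."""
    h = [[x, y, z, 1.0] for (x, y, z) in unit_vectors]
    a = [[sum(r[i] * r[j] for r in h) for j in range(4)] for i in range(4)]
    # Gauss-Jordan inversion of the 4x4 normal matrix A = H^T H
    m = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(a)]
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(m[r][c]))  # partial pivoting
        m[c], m[p] = m[p], m[c]
        m[c] = [v / m[c][c] for v in m[c]]
        for r in range(4):
            if r != c:
                m[r] = [v - m[r][c] * w for v, w in zip(m[r], m[c])]
    return math.sqrt(sum(m[i][4 + i] for i in range(4)))  # trace of inverse

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Three low satellites 120 degrees apart in azimuth, plus one at zenith
sats = [unit(v) for v in [(1, 0, 0.2), (-0.5, 0.866, 0.2),
                          (-0.5, -0.866, 0.2), (0, 0, 1)]]
print(round(gdop(sats), 2))  # 2.01
```

Adding satellites from a second constellation enlarges H and can only shrink the trace, which is the mechanism behind the multi-constellation GDOP improvement the paper reports.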

  4. Mars: Crustal pore volume, cryospheric depth, and the global occurrence of groundwater

    NASA Technical Reports Server (NTRS)

    Clifford, Stephen M.

    1987-01-01

    It is argued that most of the Martian hydrosphere resides in a porous outer layer of crust that, based on a lunar analogy, appears to extend to a depth of about 10 km. The total pore volume of this layer is sufficient to store the equivalent of a global ocean of water some 500 to 1500 m deep. Thermal modeling suggests that about 300 to 500 m of water could be stored as ice within the crust. Any excess must exist as groundwater.
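The storage estimate has a simple back-of-envelope form: integrate a porosity profile over the ~10 km porous layer and read the column-integrated pore space as an equivalent global water depth. The exponential profile, 20% surface porosity, and 2.8 km decay constant below are assumptions in the spirit of the lunar analogy, not the paper's fitted values:

```python
# Back-of-envelope version of the storage estimate above: integrate an assumed
# exponentially decaying porosity profile over a 10 km crust and express the
# pore volume per unit area as an equivalent global ocean depth. Surface
# porosity and decay constant are illustrative assumptions.
import math

def equivalent_ocean_depth_m(surface_porosity, decay_km, crust_km=10.0):
    """Column-integrated pore space (m) = equivalent global water depth."""
    # integral of phi0 * exp(-z / decay) dz from 0 to the crustal depth
    return surface_porosity * decay_km * 1000.0 * (
        1.0 - math.exp(-crust_km / decay_km))

depth = equivalent_ocean_depth_m(surface_porosity=0.20, decay_km=2.8)
print(round(depth))  # 544
```

The result of a few hundred metres is consistent with the 500 to 1500 m range quoted in the abstract.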

  5. Size and distribution of the global volume of surgery in 2012

    PubMed Central

    Haynes, Alex B; Molina, George; Lipsitz, Stuart R; Esquivel, Micaela M; Uribe-Leitz, Tarsicio; Fu, Rui; Azad, Tej; Chao, Tiffany E; Berry, William R; Gawande, Atul A

    2016-01-01

Objective To estimate global surgical volume in 2012 and compare it with estimates from 2004. Methods For the 194 Member States of the World Health Organization, we searched PubMed for studies and contacted key informants for reports on surgical volumes between 2005 and 2012. We obtained data on population and total health expenditure per capita for 2012 and categorized Member States as very-low, low, middle and high expenditure. Data on caesarean delivery were obtained from validated statistical reports. For Member States without recorded surgical data, we estimated volumes by multiple imputation using data on total health expenditure. We estimated caesarean deliveries as a proportion of all surgery. Findings We identified 66 Member States reporting surgical data. We estimated that 312.9 million operations (95% confidence interval, CI: 266.2–359.5) took place in 2012, an increase from the 2004 estimate of 226.4 million operations. Only 6.3% (95% CI: 1.7–22.9) and 23.1% (95% CI: 14.8–36.7) of operations took place in very-low- and low-expenditure Member States representing 36.8% (2573 million people) and 34.2% (2393 million people) of the global population of 7001 million people, respectively. Caesarean deliveries comprised 29.6% (5.8/19.6 million operations; 95% CI: 9.7–91.7) of the total surgical volume in very-low-expenditure Member States, but only 2.7% (5.1/187.0 million operations; 95% CI: 2.2–3.4) in high-expenditure Member States. Conclusion Surgical volume is large and growing, with caesarean delivery comprising nearly a third of operations in most resource-poor settings. Nonetheless, there remains disparity in the provision of surgical services globally. PMID:26966331

  6. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  7. Modelling the flow of a second order fluid through and over a porous medium using the volume averages. I. The generalized Brinkman's equation

    NASA Astrophysics Data System (ADS)

    Minale, Mario

    2016-02-01

In this paper, the generalized Brinkman's equation for a viscoelastic fluid is derived using volume averaging. The generalized Darcy equation is consequently obtained by neglecting the first and second Brinkman corrections with respect to the drag term. The latter differs from the Newtonian drag because of an additional term quadratic in the velocity and inversely proportional to a "viscoelastic" permeability defined in the paper. The viscoelastic permeability tensor can be calculated by solving a boundary value problem, but it must in fact be experimentally measured. To isolate the elastic contribution, the constitutive equation of the second order fluid of Coleman and Noll is chosen because, in simple shear at steady state, second order fluids show a constant viscosity and first and second normal stress differences quadratic in the shear rate. The model predictions are compared with literature data from a Darcy experiment, and the agreement is good.
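A schematic way to write the drag law this abstract describes, in assumed notation (not the paper's): mu is the viscosity, K the Newtonian permeability, K_v the "viscoelastic" permeability, and beta a material coefficient of the second-order fluid:

```latex
% Generalized Darcy law with the additional drag term described above,
% quadratic in the averaged velocity; symbols are assumed, not the paper's.
\nabla \langle p \rangle
  = -\frac{\mu}{K}\,\langle \mathbf{u} \rangle
    \;-\; \frac{\beta}{K_v}\,\lvert \langle \mathbf{u} \rangle \rvert\,
          \langle \mathbf{u} \rangle
```

The first term is the classical Darcy drag; the second vanishes for a Newtonian fluid, consistent with the abstract's statement that the extra term is purely an elastic contribution.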

  8. Effects of Respiration-Averaged Computed Tomography on Positron Emission Tomography/Computed Tomography Quantification and its Potential Impact on Gross Tumor Volume Delineation

    SciTech Connect

    Chi, Pai-Chun Melinda; Mawlawi, Osama; Luo Dershan; Liao Zhongxing; Macapinlac, Homer A.; Pan Tinsu

    2008-07-01

    Purpose: Patient respiratory motion can cause image artifacts in positron emission tomography (PET) from PET/computed tomography (CT) and change the quantification of PET for thoracic patients. In this study, respiration-averaged CT (ACT) was used to remove the artifacts, and the changes in standardized uptake value (SUV) and gross tumor volume (GTV) were quantified. Methods and Materials: We incorporated the ACT acquisition in a PET/CT session for 216 lung patients, generating two PET/CT data sets for each patient. The first data set (PET{sub HCT}/HCT) contained the clinical PET/CT in which PET was attenuation corrected with a helical CT (HCT). The second data set (PET{sub ACT}/ACT) contained the PET/CT in which PET was corrected with ACT. We quantified the differences between the two datasets in image alignment, maximum SUV (SUV{sub max}), and GTV contours. Results: Of the patients, 68% demonstrated respiratory artifacts in the PET{sub HCT}, and for all patients the artifact was removed or reduced in the corresponding PET{sub ACT}. The impact of respiration artifact was the worst for lesions less than 50 cm{sup 3} and located below the dome of the diaphragm. For lesions in this group, the mean SUV{sub max} difference, GTV volume change, shift in GTV centroid location, and concordance index were 21%, 154%, 2.4 mm, and 0.61, respectively. Conclusion: This study benchmarked the differences between the PET data with and without artifacts. It is important to pay attention to the potential existence of these artifacts during GTV contouring, as such artifacts may increase the uncertainties in the lesion volume and the centroid location.

  9. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    SciTech Connect

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-04-01

    The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos (t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A{sub 1}) was set in the range of 0.0-12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of {gamma} index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within {+-} 0.7%. From the dose area histograms on the film, the mean {+-} standard deviation of the dose covering 100% of the cross section of the target was 102.32 {+-} 1.20% (range, 100.59-103.49%). By contrast, the irradiated areas receiving more than 95% dose for A{sub 1} = 12 mm were 1.46 and 1.33 times larger than those for A{sub 1} = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.
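The gamma-index passing rate quoted above combines a distance-to-agreement and a dose-difference criterion (here 3 mm / 3%). A minimal 1-D sketch of the comparison on hypothetical profiles (real QA gamma analysis is 2-D or 3-D):

```python
# Minimal 1-D sketch of the gamma-index comparison used above (3 mm / 3%
# criteria). Real QA gamma analysis is 2-D/3-D; this illustrative version
# compares two dose profiles sampled on the same 1-mm grid, with dose
# differences normalized to the reference maximum.

def gamma_1d(reference, measured, spacing_mm=1.0, dta_mm=3.0, dd_pct=3.0):
    """Return the fraction of measured points with gamma <= 1."""
    passing = 0
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dr in enumerate(reference):
            dist = (i - j) * spacing_mm
            dose_diff = 100.0 * (dm - dr) / max(reference)
            g2 = (dist / dta_mm) ** 2 + (dose_diff / dd_pct) ** 2
            best = min(best, g2)
        passing += best <= 1.0
    return passing / len(measured)

ref = [0, 10, 50, 100, 100, 50, 10, 0]
meas = [0, 11, 52, 101, 99, 49, 10, 0]
print(gamma_1d(ref, meas))  # 1.0
```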

  10. Transports and budgets of volume, heat, and salt from a global eddy-resolving ocean model

    SciTech Connect

    McCann, M.P.; Semtner, A.J. Jr.; Chervin, R.M.

    1994-07-01

The results from an integration of a global ocean circulation model have been condensed into an analysis of the volume, heat, and salt transports among the major ocean basins. Transports are also broken down between the model`s Ekman, thermocline, and deep layers. Overall, the model does well. Horizontal exchanges of mass, heat, and salt between ocean basins have reasonable values, and the volume of North Atlantic Deep Water (NADW) transport is in general agreement with what limited observations exist. On a global basis the zonally integrated meridional heat transport is poleward at all latitudes except for the latitude band 30{degrees}S to 45{degrees}S. This anomalous transport is most likely a signature of the model`s inability to form Antarctic Intermediate Water (AAIW) and Antarctic Bottom Water (AABW) properly. Eddy heat transport is strong at the equator where its convergence heats the equatorial Pacific about twice as much as it heats the equatorial Atlantic. The greater heating in the Pacific suggests that mesoscale eddies may be a vital mechanism for warming and maintaining an upwelling portion of the global conveyor-belt circulation. The model`s fresh water transport compares well with observations. However, in the Atlantic there is an excessive southward transport of fresh water due to the absence of the Mediterranean outflow and weak northward flow of AAIW. Perhaps the model`s greatest weakness is the lack of strong AAIW and AABW circulation cells. Accurate thermohaline forcing in the North Atlantic (based on numerous hydrographic observations) helps the model adequately produce NADW. In contrast, the southern ocean is an area of sparse observation. Better thermohaline observations in this area may be needed if models such as this are to produce the deep convection that will achieve more accurate simulations of the global 3-dimensional circulation. 41 refs., 18 figs., 1 tab.

  11. Global average concentration and trend for hydroxyl radicals deduced from ALE/GAGE trichloroethane (methyl chloroform) data for 1978-1990

    NASA Technical Reports Server (NTRS)

    Prinn, R.; Cunnold, D.; Simmonds, P.; Alyea, F.; Boldi, R.; Crawford, A.; Fraser, P.; Gutzler, D.; Hartley, D.; Rosen, R.

    1992-01-01

An optimal estimation inversion scheme is utilized with atmospheric data and emission estimates to determine the globally averaged CH3CCl3 tropospheric lifetime and OH concentration. The data are taken from atmospheric measurements of 1,1,1-trichloroethane at surface stations and show an annual increase of 4.4 +/- 0.2 percent. Industrial emission estimates and a small oceanic loss rate are included, and the trend in OH concentration for the same period (1978-1990) is deduced to be 1.0 +/- 0.8 percent/yr. The positive OH trend is consistent with theories regarding OH and ozone trends with respect to land use and global warming. Attention is given to the effects of the ENSO on the CH3CCl3 data and the assumption of continuing current industrial anthropogenic emissions. A novel tropical atmospheric tracer-transport mechanism is noted with respect to the CH3CCl3 data.

  12. A finite-volume module for simulating global all-scale atmospheric flows

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Deconinck, Willem; Hamrud, Mats; Kühnlein, Christian; Mozdzynski, George; Szmelter, Joanna; Wedi, Nils P.

    2016-06-01

    The paper documents the development of a global nonhydrostatic finite-volume module designed to enhance an established spectral-transform based numerical weather prediction (NWP) model. The module adheres to NWP standards, with formulation of the governing equations based on the classical meteorological latitude-longitude spherical framework. In the horizontal, a bespoke unstructured mesh with finite-volumes built about the reduced Gaussian grid of the existing NWP model circumvents the notorious stiffness in the polar regions of the spherical framework. All dependent variables are co-located, accommodating both spectral-transform and grid-point solutions at the same physical locations. In the vertical, a uniform finite-difference discretisation facilitates the solution of intricate elliptic problems in thin spherical shells, while the pliancy of the physical vertical coordinate is delegated to generalised continuous transformations between computational and physical space. The newly developed module assumes the compressible Euler equations as default, but includes reduced soundproof PDEs as an option. Furthermore, it employs semi-implicit forward-in-time integrators of the governing PDE systems, akin to but more general than those used in the NWP model. The module shares the equal regions parallelisation scheme with the NWP model, with multiple layers of parallelism hybridising MPI tasks and OpenMP threads. The efficacy of the developed nonhydrostatic module is illustrated with benchmarks of idealised global weather.
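The flux-form update at the heart of any finite-volume scheme can be illustrated in one dimension with a first-order upwind discretisation. The following is only a toy sketch under hypothetical parameters, far simpler than the unstructured-mesh, semi-implicit module the abstract describes:

```python
import numpy as np

def upwind_advect(q, u, dx, dt, nsteps):
    """Periodic 1D finite-volume advection with first-order upwind fluxes.

    Flux-form update: q_i <- q_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
    with F_{i+1/2} = u * q_i for constant u > 0 (the upwind side).
    Stable for Courant number c = u*dt/dx <= 1."""
    q = np.asarray(q, dtype=float).copy()
    for _ in range(nsteps):
        flux = u * q                        # flux leaving each cell to the right
        q -= (dt / dx) * (flux - np.roll(flux, 1))
    return q

# Hypothetical setup: carry a Gaussian bump once around a periodic domain
n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
q0 = np.exp(-200.0 * (x - 0.5) ** 2)
q1 = upwind_advect(q0, u=1.0, dx=1.0 / n, dt=0.005, nsteps=200)  # c = 0.5
```

Because the update is written in flux form, the cell sum is conserved to machine precision, which is the property that makes finite-volume discretisations attractive for global transport.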

  13. SU-C-304-01: Investigation of Various Detector Response Functions and Their Geometry Dependence in a Novel Method to Address Ion Chamber Volume Averaging Effect

    SciTech Connect

    Barraclough, B; Lebron, S; Li, J; Fan, Qiyong; Liu, C; Yan, G

    2015-06-15

Purpose: A novel convolution-based approach has been proposed to address the ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and their impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of the respective IC. The beam model parameters were reoptimized using the Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the dependence of the kernels on field geometry (depth and field size). The profiles calculated with the beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. The Gaussian function had the best performance, with mean penumbra width difference within 0.5 mm. The use of geometry-dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. A significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and parabolic functions.
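The forward model in this approach amounts to smearing a sharp calculated profile with a detector response function. A minimal sketch of that convolution step with a Gaussian DRF follows; the profile shape, sampling, and kernel width below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def gaussian_drf(x, sigma):
    """Gaussian detector response function, normalized to unit area."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def volume_average(profile, dx, sigma):
    """Simulate the ion-chamber volume averaging effect by convolving a
    calculated beam profile with a Gaussian DRF of width sigma (mm)."""
    half = int(np.ceil(4 * sigma / dx))
    xk = np.arange(-half, half + 1) * dx
    return np.convolve(profile, gaussian_drf(xk, sigma), mode="same")

# Hypothetical penumbra sampled every 0.1 mm; sigma is illustrative only
dx = 0.1
x = np.arange(-20.0, 20.0, dx)
true_profile = 0.5 * (1.0 - np.tanh(x / 1.0))          # sharp "true" penumbra
measured = volume_average(true_profile, dx, sigma=2.0)  # broadened, as an IC sees it
```

The optimization loop the abstract describes would adjust the beam model until `measured` (the convolved calculation) matches the IC measurement, leaving `true_profile` as the deconvolved estimate.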

  14. Modelling the flow of a second order fluid through and over a porous medium using the volume averages. II. The stress boundary condition

    NASA Astrophysics Data System (ADS)

    Minale, Mario

    2016-02-01

In this paper, a stress boundary condition at the interface between a porous medium saturated by a viscoelastic fluid and the free viscoelastic fluid is derived. The volume averages are used to upscale the problem. The boundary condition is obtained on the assumption that the free fluid stress is transferred partially to the fluid within the porous medium and partially to the solid skeleton. To this end, the momentum balance on the solid skeleton saturated by the viscoelastic fluid is derived and a generalised Biot's equation is obtained, which is coupled with the generalised Brinkman's equation derived in Part I of the paper. Together they state that the whole stress carried by the porous medium, the sum of the fluid stress and the solid-skeleton stress, is not dissipated. The boundary condition derived here does not show any stress jump and, as in Part I, to emphasize the effect of elasticity, a second-order fluid of Coleman and Noll is considered as the viscoelastic fluid. The stress boundary condition at the interface between a homogeneous solid and the porous medium saturated by the viscoelastic fluid is also obtained.

  15. Estimating the global volume of deeply recycled continental crust at continental collision zones

    NASA Astrophysics Data System (ADS)

    Scholl, D. W.; Huene, R. V.

    2006-12-01

CRUSTAL RECYCLING AT OCEAN MARGINS: Large volumes of rock and sediment are missing from the submerged forearcs of ocean margin subduction zones (OMSZs). This observation means that (1) oceanic sediment is transported beneath the margin to either crustally underplate the coastal region or reach mantle depths, and that (2) the crust of the forearc is vertically thinned and horizontally truncated and the removed material transported toward the mantle. Transport of rock and sediment debris occurs in the subduction channel that separates the upper and lower plates. At OMSZs the solid-volume flux of recycling crustal material is estimated to be globally ~2.5 km3/yr (i.e., 2.5 Armstrong units or AU). The corresponding rate of forearc truncation (migration of the trench axis toward a fixed reference on the continent) is a sluggish 2-3 km/Myr (about 1/50th the orthogonal convergence rate). Nonetheless, during the past 2.5 Gyr (i.e., since the beginning of the Proterozoic) a volume of continental material roughly equal to the existing volume (~7 billion cubic km) has been recycled to the mantle at OMSZs. The amount of crust that has been destroyed is so large that recycling must have been a major factor creating the mapped rock pattern and age-fabric of continental crust. RECYCLING AT CONTINENT/ARC COLLISIONS: The rate at which arc magmatism globally adds juvenile crust at OMSZs has commonly been estimated at ~1 AU. But new geophysical and dating information from the Aleutian and IBM arcs implies that the addition rate is at least ~5 AU (equivalent to ~125 km3/Myr/km of arc). If Armstrong's posit is correct that since the early Archean a balance has existed between additions and losses of crust, then a recycling sink for an additional 2-3 AU of continental material must exist. As the exposure of exhumed masses of high P/T blueschist bodies documents that subcrustal streaming of continental material occurs at OMSZs, so does the occurrence of exhumed masses of UHP

  16. An analysis of the global spatial variability of column-averaged CO2 from SCIAMACHY and its implications for CO2 sources and sinks

    USGS Publications Warehouse

    Zhang, Zhen; Jiang, Hong; Liu, Jinxun; Zhang, Xiuying; Huang, Chunlin; Lu, Xuehe; Jin, Jiaxin; Zhou, Guomo

    2014-01-01

Satellite observations of carbon dioxide (CO2) are important because of their potential for improving the scientific understanding of global carbon cycle processes and budgets. We present an analysis of the column-averaged dry air mole fractions of CO2 (denoted XCO2) from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) retrievals, which were derived from a satellite instrument with relatively long-term records (2003–2009) and with measurements sensitive to the near surface. The spatial-temporal distributions of remotely sensed XCO2 have significant spatial heterogeneity, with about 6–8% variations (367–397 ppm) during 2003–2009, challenging the traditional view that the spatial heterogeneity of atmospheric CO2 is not significant. Significant correlations between XCO2 and surface CO2 were found for major ecosystems, with the exception of tropical forest. In addition, when compared with a simulated terrestrial carbon uptake from the Integrated Biosphere Simulator (IBIS) and the Emissions Database for Global Atmospheric Research (EDGAR) carbon emission inventory, the latitudinal gradient of XCO2 seasonal amplitude was influenced by the combined effect of terrestrial carbon uptake, carbon emission, and atmospheric transport, suggesting no direct implications for terrestrial carbon sinks. From the investigation of the growth rate of XCO2 we found that the increase of CO2 concentration was dominated by temperature in the northern hemisphere (20–90°N) and by precipitation in the southern hemisphere (20–90°S), with the major contribution to the global average occurring in the northern hemisphere. These findings indicated that the satellite measurements of atmospheric CO2 improve not only the estimations of atmospheric inversion, but also the understanding of the terrestrial ecosystem carbon dynamics and its feedback to atmospheric CO2.

  17. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
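A widely used form of this optimal average (commonly associated with the authors of this Note) takes the average as the dominant eigenvector of the weighted accumulator matrix M = Σ w_i q_i q_iᵀ, which maximizes Σ w_i (q·q_i)². A minimal numpy sketch, with function name and quaternion layout chosen for illustration:

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Optimal weighted quaternion average via the dominant eigenvector of
    M = sum_i w_i * q_i q_i^T. Quaternions are rows of `quats`; the sign
    ambiguity q ~ -q is handled automatically, since q and -q contribute
    identically to M."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = q / np.linalg.norm(q)          # guard against unnormalized input
        M += w * np.outer(q, q)
    # The unit vector maximizing q^T M q is the eigenvector of the largest
    # eigenvalue; np.linalg.eigh returns eigenvalues in ascending order.
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]
```

Note that simple component-wise averaging fails when the input quaternions differ in sign; the eigenvector formulation avoids that pitfall entirely.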

  18. The Global Classroom: A Thematic Multicultural Model for the K-6 and ESL Classroom. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    De Cou-Landberg, Michelle

    This two-volume resource guide is designed to help K-6 and ESL teachers implement multicultural whole language learning through thematic social studies units. The four chapters in Volume 1 address universal themes: (1) "Climates and Seasons: Watching the Weather"; (2) "Trees and Plants: Our Rich, Green World"; (3) "Animals around the World: Tame,…

  19. The impact of time-averaged volcanic sulphur emissions on the global cloud condensation nuclei budget in the pre-industrial era

    NASA Astrophysics Data System (ADS)

    Schmidt, Anja; Carslaw, Kenneth; Mann, Graham; Merikanto, Joonas

    2010-05-01

Volcanoes are a strong source of sulphur dioxide (SO2), with time-averaged emission inventories (e.g. Andres and Kasgnoc, 1998) indicating that volcanoes account for around 40% of the total annual SO2 flux in the pre-industrial atmosphere. We use a global aerosol microphysics model (GLOMAP-mode) to quantify the contribution of time-averaged volcanic sulphur emissions (from both continuous passive degassing and explosive volcanoes) to the global cloud condensation nuclei (CCN) budget. GLOMAP-mode is capable of simulating microphysical processes, such as binary homogeneous nucleation, hygroscopic growth, coagulation, condensation, cloud processing (oxidation of dissolved SO2 to SO4 in cloud droplets), as well as dry and wet deposition. For this study we use a sulphur chemistry scheme which includes 7 species (DMS, DMSO, MSA, SO2, H2SO4, COS, CS2). The runs were conducted using four internally mixed aerosol components: sulphate (SO4), sea salt, black carbon (BC) and organic carbon (OC). We simulated the impact of volcanic degassing in a pre-industrial setting (i.e. using 1750 BC and OC emissions in the absence of any anthropogenic emissions) using the volcanic emission inventory of Dentener et al. (2006). This volcanic inventory is based on datasets by Andres and Kasgnoc (1998) and Halmer et al. (2002) and accounts for an annual flux of ~13 Tg(S) of volcanic SO2. Our simulations suggest that volcanic degassing contributes on average ~50 CCN (>35 nm in radius) per cubic centimetre to the annual zonal mean CCN concentrations in the tropical boundary layer. The simulations also reveal complex changes in annual zonal mean total particle concentrations (CN). CN concentrations more than double in large parts of the tropical boundary layer when comparing the unperturbed run (i.e. without volcanic degassing) to the run featuring time-averaged volcanic degassing. However, the simulations also reveal that the additional SO2 and its subsequent conversion to sulphate aerosol

  20. Analysis of the variation in OCT measurements of a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head, averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semi-automatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of 5 times higher than the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated to 1.00 +/- 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent-group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent-group comparison of averages of within-subject differences between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can be detected.
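The nested random-effects design described here (volumes within visits within subjects) yields variance components from nested-ANOVA mean squares by the method of moments. A sketch on simulated balanced data; all sample sizes and standard deviations below are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced nested design: subjects > visits > volumes
S, V, N = 200, 3, 3                              # subjects, visits, volumes
sd_subj, sd_visit, sd_vol = 0.20, 0.09, 0.04     # illustrative SDs (mm)

subj = rng.normal(0, sd_subj, (S, 1, 1))
visit = rng.normal(0, sd_visit, (S, V, 1))
vol = rng.normal(0, sd_vol, (S, V, N))
y = 1.0 + subj + visit + vol                     # PIMD-Average-like values

# Nested ANOVA mean squares (balanced case)
grand = y.mean()
m_subj = y.mean(axis=(1, 2))                     # subject means
m_visit = y.mean(axis=2)                         # visit means
ms_subj = V * N * np.sum((m_subj - grand) ** 2) / (S - 1)
ms_visit = N * np.sum((m_visit - m_subj[:, None]) ** 2) / (S * (V - 1))
ms_vol = np.sum((y - m_visit[:, :, None]) ** 2) / (S * V * (N - 1))

# Method-of-moments variance components, from the expected mean squares
# E[MS_vol] = s2_vol; E[MS_visit] = s2_vol + N*s2_visit;
# E[MS_subj] = s2_vol + N*s2_visit + V*N*s2_subj
var_vol = ms_vol
var_visit = (ms_visit - ms_vol) / N
var_subj = (ms_subj - ms_visit) / (V * N)
```

With components in hand, the subject-to-visit and visit-to-volume variance ratios the abstract reports follow directly.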

  1. Multi-parallel open technology to enable collaborative volume visualization: how to create global immersive virtual anatomy classrooms.

    PubMed

    Silverstein, Jonathan C; Walsh, Colin; Dech, Fred; Olson, Eric; E, Michael; Parsad, Nigel; Stevens, Rick

    2008-01-01

Many prototype projects aspire to develop a sustainable model of immersive radiological volume visualization for virtual anatomic education. Some have focused on distributed or parallel architectures. However, very few, if any, have combined multi-location, multi-directional, multi-stream sharing of video, audio, desktop applications, and parallel stereo volume rendering to converge on an open, globally scalable, and inexpensive collaborative architecture and implementation method for anatomic teaching using radiological volumes. We have focused our efforts on bringing this all together for several years. We outline here the technology we are making available to the open source community and a suggested system implementation for creating global immersive virtual anatomy classrooms. With the releases of Access Grid 3.1 and our parallel stereo volume rendering code, inexpensive globally scalable technology is available to enable collaborative volume visualization upon an award-winning framework. Based upon these technologies, immersive virtual anatomy classrooms that share educational or clinical principles can be constructed, with moderate technological expertise, using the setup described, and scaled globally.

  2. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 2, part 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An assessment of the status of research using Global Weather Experiment (GWE) data and of the progress in meeting the objectives of the GWE, i.e., better knowledge and understanding of the atmosphere in order to provide more useful weather prediction services. Volume Two consists of a compilation of the papers presented during the workshop. These cover studies that addressed GWE research objectives and utilized GWE information. The titles in Part 2 of this volume include General Circulation Planetary Waves, Interhemispheric, Cross-Equatorial Exchange, Global Aspects of Monsoons, Midlatitude-Tropical Interactions During Monsoons, Stratosphere, Southern Hemisphere, Parameterization, Design of Observations, Oceanography, Future Possibilities, Research Gaps, with an Appendix.

  3. The Global 2000 Report to the President: Entering the Twenty-First Century. Volume One - The Summary Report.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This summary volume presents the conclusions of a United States' Government effort to look at the issues and interdependencies of population, resources, and environment in the long-term global perspective. The report concludes that, if present trends continue, serious stresses of overcrowding, pollution, ecological instability, and vulnerability…

  4. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students.

    PubMed

    Lee, Romeo B; Baring, Rito V; Sta Maria, Madelene A

    2016-02-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students.
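The direct/indirect decomposition used in such a path model is simple arithmetic on standardized path coefficients: an indirect effect is the product of the coefficients along the path, and the total effect is direct plus indirect. All coefficients below are hypothetical illustrations, not the study's estimates:

```python
# Hypothetical standardized path coefficients (not the study's estimates)
b_orgs_gpa = 0.20   # organizational memberships -> GPA
b_sns_gpa = 0.03    # social networking sites -> GPA
b_gpa_gsr = 0.15    # GPA -> GSR
b_orgs_gsr = 0.25   # direct path: organizational memberships -> GSR
b_sns_gsr = 0.10    # direct path: SNS -> GSR

# Indirect effects run through GPA; total effect = direct + indirect
ind_orgs = b_orgs_gpa * b_gpa_gsr
ind_sns = b_sns_gpa * b_gpa_gsr
total_orgs = b_orgs_gsr + ind_orgs
total_sns = b_sns_gsr + ind_sns
```

The pattern the abstract reports (robust indirect effect for memberships, marginal for SNS) corresponds to a much smaller first-leg coefficient on the SNS path.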

  5. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students

    PubMed Central

    Lee, Romeo B.; Baring, Rito V.; Sta. Maria, Madelene A.

    2016-01-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700

  7. Validation of the global distribution of CO2 volume mixing ratio in the mesosphere and lower thermosphere from SABER

    NASA Astrophysics Data System (ADS)

    Rezac, L.; Jian, Y.; Yue, J.; Russell, J. M.; Kutepov, A.; Garcia, R.; Walker, K.; Bernath, P.

    2015-12-01

    The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on board the Thermosphere Ionosphere Mesosphere Energetics and Dynamics satellite has been measuring the limb radiance in 10 broadband infrared channels over the altitude range from ~ 400 km to the Earth's surface since 2002. The kinetic temperatures and CO2 volume mixing ratios (VMRs) in the mesosphere and lower thermosphere have been simultaneously retrieved using SABER limb radiances at 15 and 4.3 µm under nonlocal thermodynamic equilibrium (non-LTE) conditions. This paper presents results of a validation study of the SABER CO2 VMRs obtained with a two-channel, self-consistent temperature/CO2 retrieval algorithm. Results are based on comparisons with coincident CO2 measurements made by the Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) and simulations using the Specified Dynamics version of the Whole Atmosphere Community Climate Model (SD-WACCM). The SABER CO2 VMRs are in agreement with ACE-FTS observations within reported systematic uncertainties from 65 to 110 km. The annual average SABER CO2 VMR falls off from a well-mixed value above ~80 km. Latitudinal and seasonal variations of CO2 VMRs are substantial. SABER observations and the SD-WACCM simulations are in overall agreement for CO2 seasonal variations, as well as global distributions in the mesosphere and lower thermosphere. Not surprisingly, the CO2 seasonal variation is shown to be driven by the general circulation, converging in the summer polar mesopause region and diverging in the winter polar mesopause region.

  8. Variations of the earth's magnetic field and rapid climatic cooling: A possible link through changes in global ice volume

    NASA Technical Reports Server (NTRS)

    Rampino, M. R.

    1979-01-01

    A possible relationship between large scale changes in global ice volume, variations in the earth's magnetic field, and short term climatic cooling is investigated through a study of the geomagnetic and climatic records of the past 300,000 years. The calculations suggest that redistribution of the Earth's water mass can cause rotational instabilities which lead to geomagnetic excursions; these magnetic variations in turn may lead to short-term coolings through upper atmosphere effects. Such double coincidences of magnetic excursions and sudden coolings at times of ice volume changes have occurred at 13,500, 30,000, 110,000, and 135,000 YBP.

  9. Dependence on solar elevation and the daily sunshine fraction of the correlation between monthly-average-hourly diffuse and global radiation

    SciTech Connect

Soler, A.

    1992-01-01

In the present work the authors study, for data from Uccle, Belgium (50°48'N, 4°21'E), the dependence on γ̄ and σ of the correlations between K̄d = Īd/Īo and K̄t = Ī/Īo, where Ī, Īd, and Īo are, respectively, the monthly-average-hourly values of global, diffuse, and extraterrestrial radiation (all of them on a horizontal surface), γ̄ is the solar elevation at midhour, and σ is the daily sunshine fraction. The dependence on σ is studied for different ranges of values, from σ = 0 to σ > 0.9. The dependence on γ̄ is studied for γ̄ = 5°, 10°, 15°, 25°-30°, 35°-40°, and 45°-60° (Δγ̄ = 5°). Regarding the dependence on σ, for increasing values of σ (σ ≥ 0) there is an increase in K̄d with the increase in K̄t. For 0.42 < K̄t < 0.52 a maximum is obtained for K̄d. After the maximum, as the skies become clearer, K̄d decreases as K̄t increases. Regarding the dependence on γ̄, for each range of values of σ (σ > 0.2), the slopes of the linear K̄d = f(K̄t) correlations show a tendency to decrease as γ̄ increases. For each value of γ̄, the slopes of the linear K̄d = f(K̄t) correlations tend to decrease when σ increases.
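Per-bin linear correlations of the form K̄d = a + bK̄t, whose slopes the abstract tracks across solar-elevation and sunshine-fraction bins, reduce to an ordinary least-squares fit. A sketch on hypothetical data for a single (γ̄, σ) bin:

```python
import numpy as np

# Hypothetical monthly-average-hourly data for one (solar elevation,
# sunshine fraction) bin: clearness index Kt = I/Io, diffuse fraction Kd = Id/Io
kt = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60])
kd = np.array([0.21, 0.23, 0.24, 0.25, 0.24, 0.22, 0.20])

# Linear correlation Kd = a + b*Kt fitted per bin
b, a = np.polyfit(kt, kd, 1)
```

Repeating the fit for each γ̄ and σ bin and tabulating `b` reproduces the kind of slope comparison the study performs.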

  10. Citizenship and Citizenship Education in a Global Age: Politics, Policies, and Practices in China. Global Studies in Education. Volume 2

    ERIC Educational Resources Information Center

    Law, Wing-Wah

    2011-01-01

    This book examines issues of citizenship, citizenship education, and social change in China, exploring the complexity of interactions among global forces, the nation-state, local governments, schools, and individuals--including students--in selecting and identifying with elements of citizenship and citizenship education in a multileveled polity.…

  11. No significant brain volume decreases or increases in adults with high-functioning autism spectrum disorder and above average intelligence: a voxel-based morphometric study.

    PubMed

    Riedel, Andreas; Maier, Simon; Ulbrich, Melanie; Biscaldi, Monica; Ebert, Dieter; Fangmeier, Thomas; Perlov, Evgeniy; Tebartz van Elst, Ludger

    2014-08-30

    Autism spectrum disorder (ASD) is increasingly being recognized as an important issue in adult psychiatry and psychotherapy. High intelligence indicates overall good brain functioning and might thus present a particularly good opportunity to study possible cerebral correlates of core autistic features in terms of impaired social cognition, communication skills, the need for routines, and circumscribed interests. Anatomical MRI data sets for 30 highly intelligent patients with high-functioning autism and 30 pairwise-matched control subjects were acquired and analyzed with voxel-based morphometry. The gray matter volume of the pairwise-matched patients and the controls did not differ significantly. When correcting for total brain volume influences, the patients with ASD exhibited smaller left superior frontal volumes on a trend level. Heterogeneous volumetric findings in earlier studies might partly be explained by study samples biased by a high inclusion rate of secondary forms of ASD, which often go along with neuronal abnormalities. Including only patients with high IQ scores might have decreased the influence of secondary forms of ASD and might explain the absence of significant volumetric differences between the patients and the controls in this study. PMID:24953998

  12. Insolation data manual: Long-term monthly averages of solar radiation, temperature, degree-days, and global KT for 248 National Weather Service stations and direct normal solar radiation data manual: Long-term, monthly mean, daily totals for 235 National Weather Service stations

    NASA Astrophysics Data System (ADS)

    1990-07-01

    The Insolation Data Manual presents monthly averaged data which describe the availability of solar radiation at 248 National Weather Service (NWS) stations, principally in the United States. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data, generally from 1952 to 1975, and listed for each location. Insolation values represent monthly average daily totals of global radiation on a horizontal surface and are expressed in three units of measurement: kJ/sq m per day, Btu/sq ft per day, and langleys per day. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 C (65 F). For each station, global KT (cloudiness index) values were calculated on a monthly and annual basis. Global KT is an index of cloudiness and indicates the fractional transmittance of horizontal radiation from the top of the atmosphere to the earth's surface. The second section of this volume presents long-term monthly and annual averages of direct normal solar radiation for 235 NWS stations, including a discussion of the basic derivation process. This effort is in response to a generally recognized need for reliable direct normal data and the recent availability of 23 years of hourly averages for 235 stations. The relative inaccessibility of these data on microfiche further justifies reproducing at least the long-term averages in a useful format. In addition to a definition of terms and an overview of the ADIPA model, a discussion of model validation results is presented.
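    As a minimal illustration of the two derived quantities described above (not code from the manual itself; only the 18.3 C degree-day base and the definition of KT as fractional transmittance are taken from the abstract), degree-days and a KT index could be computed as:

```python
# Illustrative sketch: heating/cooling degree-days against the
# 18.3 C (65 F) base, and a KT (cloudiness) index defined as the ratio
# of surface horizontal insolation to top-of-atmosphere insolation.
BASE_C = 18.3

def degree_days(daily_mean_temps_c):
    """Return (heating, cooling) degree-days for a list of daily mean temps."""
    heating = sum(max(BASE_C - t, 0.0) for t in daily_mean_temps_c)
    cooling = sum(max(t - BASE_C, 0.0) for t in daily_mean_temps_c)
    return heating, cooling

def kt_index(surface_kj_m2_day, extraterrestrial_kj_m2_day):
    """Fractional transmittance of horizontal radiation (0 = opaque, 1 = clear)."""
    return surface_kj_m2_day / extraterrestrial_kj_m2_day

hdd, cdd = degree_days([10.0, 15.0, 20.0, 25.0])
kt = kt_index(15000.0, 30000.0)
```

    A day whose mean temperature sits exactly at the base contributes zero to both totals, which is why the two sums partition the temperature record cleanly.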

  13. Single mammalian cells compensate for differences in cellular volume and DNA copy number through independent global transcriptional mechanisms

    PubMed Central

    Padovan-Merhar, Olivia; Nair, Gautham P.; Biaesch, Andrew; Mayer, Andreas; Scarfone, Steven; Foley, Shawn W.; Wu, Angela R.; Churchman, L. Stirling; Singh, Abhyudai; Raj, Arjun

    2015-01-01

    Individual mammalian cells exhibit large variability in cellular volume, even with the same absolute DNA content, and so must compensate for differences in DNA concentration in order to maintain a constant concentration of gene expression products. Using single-molecule counting and computational image analysis, we show that transcript abundance correlates with cellular volume at the single-cell level due to increased global transcription in larger cells. Cell fusion experiments establish that increased cellular content itself can directly increase transcription. Quantitative analysis shows that this mechanism measures the ratio of cellular volume to DNA content, most likely through sequestration of a transcriptional factor to DNA. Analysis of transcriptional bursts reveals a separate mechanism for gene dosage compensation after DNA replication that enables proper transcriptional output during early and late S-phase. Our results provide a framework for quantitatively understanding the relationships between DNA content, cell size, and gene expression variability in single cells. PMID:25866248

  14. Global Sentry: NASA/USRA high altitude reconnaissance aircraft design, volume 2

    NASA Technical Reports Server (NTRS)

    Alexandru, Mona-Lisa; Martinez, Frank; Tsou, Jim; Do, Henry; Peters, Ashish; Chatsworth, Tom; Yu, YE; Dhillon, Jaskiran

    1990-01-01

    The Global Sentry is a high altitude reconnaissance aircraft design for the NASA/USRA design project. The Global Sentry uses proven technologies and light-weight composites, and meets the R.F.P. requirements. The mission requirements for the Global Sentry are described. The configuration option is discussed and a description of the final design is given. Preliminary sizing analyses and the mass properties of the design are presented. The aerodynamic features of the Global Sentry are described, along with the stability and control characteristics designed into the flight control system. The performance characteristics are discussed, as are the propulsion installation and system layout. The Global Sentry structural design is examined, including a wing structural analysis. The cockpit, controls, and display layouts are covered, as are manufacturing and life-cycle cost estimation. Reliability is discussed. Conclusions about the current Global Sentry design are presented, along with suggested areas for future engineering work.

  15. Bibliography on tropical rain forests and the global carbon cycle: Volume 1, An introduction to the literature

    SciTech Connect

    Hall, C.A.S.; Brown, S.; O'Hara, F.M. Jr.; Bogdonoff, P.B.; Barshaw, D.; Kaufman, E.; Underhill, S.

    1988-05-01

    This bibliography covers the world literature on tropical rain forests, tropical deforestation, land-use change in the tropics, tropical forest conversion, and swidden agriculture as related to the global carbon cycle. Historic papers and books are included, but comprehensive coverage was only sought for 1980 through 1987. This compendium of nearly 2000 entries forms the point of departure for a series of bibliographies on this topic. Other works in this series will be on the global carbon cycle and rain forests in specific geographic areas, whereas this volume includes references to literature about the global carbon cycle and rain forests anywhere in the world. The bibliography is ordered alphabetically by author and is indexed by subject and author.

  16. Excluded volume effect of counterions and water dipoles near a highly charged surface due to a rotationally averaged Boltzmann factor for water dipoles.

    PubMed

    Gongadze, Ekaterina; Iglič, Aleš

    2013-03-01

    Water ordering near a negatively charged electrode is one of the decisive factors determining the interactions of an electrode with the surrounding electrolyte solution or tissue. In this work, the generalized Langevin-Bikerman model (Gongadze-Iglič model), taking into account the cavity field and the excluded volume principle, is used to calculate the spatial dependence of the ion and water number densities in the vicinity of a highly charged surface. It is shown that for high enough surface charge densities the usual trend of increasing counterion number density towards the charged surface may be completely reversed, i.e. a drop in the counterion number density near the charged surface is predicted.

  17. Infusing a Global Perspective into the Study of Agriculture: Student Activities Volume II.

    ERIC Educational Resources Information Center

    Martin, Robert A., Ed.

    These student activities are designed to be used in a variety of places in the curriculum to provide a global perspective for students as they study agriculture. This document is not a unit of instruction; rather, teachers are encouraged to study the materials and decide which will be helpful in adding a global perspective to the learning…

  18. The computational structural mechanics testbed architecture. Volume 4: The global-database manager GAL-DBM

    NASA Technical Reports Server (NTRS)

    Wright, Mary A.; Regelbrugge, Marc E.; Felippa, Carlos A.

    1989-01-01

    This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 4 describes the nominal-record data management component of the NICE software. It is intended for all users.

  19. Local and global volume changes of subcortical brain structures from longitudinally varying neuroimaging data for dementia identification.

    PubMed

    Unay, Devrim

    2012-09-01

    Quantification of structural changes in the human brain is important to elicit resemblances and differences between pathological and normal aging. Identification of dementia, associated with loss of cognitive ability beyond normal aging, and especially of converters--the subgroup of individuals at risk for developing dementia--has recently gained importance. For this purpose, atrophy markers have been explored and their effectiveness has been evaluated both cross-sectionally and longitudinally. However, more research is needed to understand the dynamics of atrophy markers at different disease stages, which requires temporal analysis of local along with global changes. Unfortunately, most of the longitudinal neuroimaging data available in clinical settings are acquired at largely varying time intervals. In light of the above, this study presents a novel methodology to process longitudinal neuroimaging data acquired incompletely and at different time intervals, and explores the complementary nature of local and global brain volume changes in identifying dementia. Results on the OASIS database demonstrate the discriminative power of global atrophy of the hippocampus (as early as two years after the first visit) for identifying demented cases, and of local volume shrinkage of the thalamus proper (as early as three years after the first visit) for differentiating converters.

  20. The Genetic Association Between Neocortical Volume and General Cognitive Ability Is Driven by Global Surface Area Rather Than Thickness

    PubMed Central

    Vuoksimaa, Eero; Panizzon, Matthew S.; Chen, Chi-Hua; Fiecas, Mark; Eyler, Lisa T.; Fennema-Notestine, Christine; Hagler, Donald J.; Fischl, Bruce; Franz, Carol E.; Jak, Amy; Lyons, Michael J.; Neale, Michael C.; Rinker, Daniel A.; Thompson, Wesley K.; Tsuang, Ming T.; Dale, Anders M.; Kremen, William S.

    2015-01-01

    Total gray matter volume is associated with general cognitive ability (GCA), an association mediated by genetic factors. One would therefore expect total neocortical volume to be similarly associated with GCA. Neocortical volume is the product of thickness and surface area, but global thickness and surface area are unrelated phenotypically and genetically in humans. The nature of the genetic association between GCA and either of these 2 cortical dimensions has not been examined. Humans possess greater cognitive capacity than other species, and surface area increases appear to be the primary driver of the increased size of the human cortex. Thus, we expected neocortical surface area to be more strongly associated with cognition than thickness. Using multivariate genetic analysis in 515 middle-aged twins, we demonstrated that both the phenotypic and genetic associations between neocortical volume and GCA are driven primarily by surface area rather than thickness. Results were generally similar for each of 4 specific cognitive abilities that comprised the GCA measure. Our results suggest that emphasis on neocortical surface area, rather than thickness, could be more fruitful for elucidating neocortical–GCA associations and identifying specific genes underlying those associations. PMID:24554725

  1. Global end-diastolic volume an emerging preload marker vis-a-vis other markers - Have we reached our goal?

    PubMed Central

    Kapoor, P. M; Bhardwaj, Vandana; Sharma, Amita; Kiran, Usha

    2016-01-01

    A reliable estimation of cardiac preload is helpful in the management of severe circulatory dysfunction. The estimation of cardiac preload has evolved from nuclear angiography, pulmonary artery catheterization to echocardiography, and transpulmonary thermodilution (TPTD). Global end-diastolic volume (GEDV) is the combined end-diastolic volumes of all the four cardiac chambers. GEDV has been demonstrated to be a reliable preload marker in comparison with traditionally used pulmonary artery catheter-derived pressure preload parameters. Recently, a new TPTD system called EV1000™ has been developed and introduced into the expanding field of advanced hemodynamic monitoring. GEDV has emerged as a better preload marker than its previous conventional counterparts. The advantage of it being measured by minimum invasive methods such as PiCCO™ and newly developed EV1000™ system makes it a promising bedside advanced hemodynamic parameter. PMID:27716702

  2. Cancer Incidence in Five Continents: Inclusion criteria, highlights from Volume X and the global status of cancer registration.

    PubMed

    Bray, F; Ferlay, J; Laversanne, M; Brewster, D H; Gombe Mbalawa, C; Kohler, B; Piñeros, M; Steliarova-Foucher, E; Swaminathan, R; Antoni, S; Soerjomataram, I; Forman, D

    2015-11-01

    Cancer Incidence in Five Continents (CI5), a longstanding collaboration between the International Agency for Research on Cancer and the International Association of Cancer Registries, serves as a unique source of cancer incidence data from high-quality population-based cancer registries around the world. The recent publication of Volume X comprises cancer incidence data from 290 registries covering 424 populations in 68 countries for the registration period 2003-2007. In this article, we assess the status of population-based cancer registries worldwide, describe the techniques used in CI5 to evaluate their quality and highlight the notable variation in the incidence rates of selected cancers contained within Volume X of CI5. We also discuss the Global Initiative for Cancer Registry Development as an international partnership that aims to reduce the disparities in availability of cancer incidence data for cancer control action, particularly in economically transitioning countries, already experiencing a rapid rise in the number of cancer patients annually.

  3. Tectonics, orbital forcing, global climate change, and human evolution in Africa: introduction to the African paleoclimate special volume.

    PubMed

    Maslin, Mark A; Christensen, Beth

    2007-11-01

    The late Cenozoic climate of Africa is a critical component for understanding human evolution. African climate is controlled by major tectonic changes, global climate transitions, and local variations in orbital forcing. We introduce the special African Paleoclimate Issue of the Journal of Human Evolution by providing a background for and synthesis of the latest work relating to the environmental context for human evolution. Records presented in this special issue suggest that the regional tectonics, appearance of C(4) plants in East Africa, and late Cenozoic global cooling combined to produce a long-term drying trend in East Africa. Of particular importance is the uplift associated with the East African Rift Valley formation, which altered wind flow patterns from a more zonal to a more meridional direction. Results in this volume suggest a marked difference in the climate history of southern and eastern Africa, though both are clearly influenced by the major global climate thresholds crossed in the last 3 million years. Papers in this volume present lake, speleothem, and marine paleoclimate records showing that the East African long-term drying trend is punctuated by episodes of short, alternating periods of extreme wetness and aridity. These periods of extreme climate variability are characterized by the precession-forced appearance and disappearance of large, deep lakes in the East African Rift Valley and paralleled by low and high wind-driven dust loads reaching the adjacent ocean basins. Dating of these records shows that over the last 3 million years such periods only occur at the times of major global climatic transitions, such as the intensification of Northern Hemisphere Glaciation (2.7-2.5 Ma), intensification of the Walker Circulation (1.9-1.7 Ma), and the Mid-Pleistocene Revolution (1-0.7 Ma). Authors in this volume suggest this onset occurs as high latitude forcing in both Hemispheres compresses the Intertropical Convergence Zone so that East Africa

  4. Global bifurcation and stability of steady states for a reaction-diffusion-chemotaxis model with volume-filling effect

    NASA Astrophysics Data System (ADS)

    Ma, Manjun; Wang, Zhi-An

    2015-08-01

    This paper is devoted to studying a reaction-diffusion-chemotaxis model with a volume-filling effect in a bounded domain with Neumann boundary conditions. We first establish the global existence of classical solutions bounded uniformly in time. Then, applying asymptotic analysis and bifurcation theory, we obtain both the local and global structure of steady states bifurcating from the homogeneous steady states in one dimension by treating the chemotactic coefficient as a bifurcation parameter. Moreover, we find the stability criterion of the bifurcating steady states and give a sufficient condition for the stability of steady states with small amplitude. The pattern formation of the model is numerically shown and the stability criterion is verified by our numerical simulations.

  5. Global Trends in Educational Policy. International Perspectives on Education and Society. Volume 6

    ERIC Educational Resources Information Center

    Baker, David, Ed.; Wiseman, Alex, Ed.

    2005-01-01

    This volume of International Perspectives on Education and Society highlights the valuable role that educational policy plays in the development of education and society around the world. The role of policy in the development of education is crucial. Much rests on the decisions, support, and most of all resources that policymakers can either give…

  6. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    GPS has been used for spacecraft navigation for many years. In support of this, the US has committed that future GPS satellites will continue to provide signals in the Space Service Volume, and NASA is working with international agencies to obtain similar commitments from other providers. In support of this effort, I simulated multi-constellation navigation in the Space Service Volume. In this presentation, I extend the work to examine the navigational benefits and drawbacks of the new constellations. A major benefit is the reduced geometric dilution of precision (GDOP): there is a substantial reduction in GDOP by using all of the GNSS constellations. The increased number of GNSS satellites broadcasting does produce mutual interference, raising the noise floor, and a near/far signal problem can also occur where a nearby satellite drowns out satellites that are far away; in these simulations, however, no major effect was observed. Typically, the use of multi-constellation GNSS navigation improves GDOP by a factor of two or more over GPS alone. In addition, at the higher altitudes, four-satellite solutions can be obtained much more often. This shows the value of having commitments to provide signals in the Space Service Volume. Besides a commitment to provide a minimum signal in the Space Service Volume, detailed signal gain information is useful for mission planning. Knowledge of group and phase delay over the antenna pattern would also reduce the navigational uncertainty.
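    The GDOP metric discussed above is a standard quantity. The sketch below shows the textbook computation from receiver-to-satellite line-of-sight unit vectors; the satellite geometry here is hypothetical, not the simulation from the presentation:

```python
# GDOP from line-of-sight geometry: build the geometry matrix H with one
# row per satellite, [ux, uy, uz, 1] (direction cosines plus clock term),
# then GDOP = sqrt(trace((H^T H)^-1)).
import numpy as np

def gdop(unit_vectors):
    """Geometric dilution of precision for a set of unit line-of-sight vectors."""
    H = np.hstack([np.asarray(unit_vectors, dtype=float),
                   np.ones((len(unit_vectors), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

s = 1.0 / np.sqrt(3.0)
four_sats = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-s, -s, -s)]   # minimal fix
more_sats = four_sats + [(0, -1, 0), (-1, 0, 0)]              # extra constellations
```

    Because appending rows to the geometry matrix can only add information, GDOP is non-increasing as satellites from additional constellations are included, consistent with the factor-of-two improvement over GPS alone reported above.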

  7. Proceedings of Eco-Informa `96 - global networks for environmental information. Volume 10 and 11

    SciTech Connect

    1996-12-31

    This fourth Eco-Informa forum has been designed to bridge the gap between scientific knowledge and real world applications. Enhancement of international exchange of global environmental technology among scientific, governmental, and commercial communities is the goal. Researchers, policy makers, and information managers presented papers that integrate scientific and technical issues with the global need for expanded networks, effective communication, and responsible decision making. Special emphasis was given to environmental information management and decision support systems, including environmental computing and modeling, data banks, and environmental education. In addition, fields such as waste management and remediation, sustainable food production, life-cycle analysis, and auditing were also addressed.

  8. Mars Global Digital Dune Database (MGD3): North polar region (MC-1) distribution, applications, and volume estimates

    USGS Publications Warehouse

    Hayward, R.K.

    2011-01-01

    The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844 000 km2 of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS- and tabular-format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is discussed and described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented and these methods and estimates (1130 km3 to 3250 km3) are compared with those of previous researchers (1158 km3 to 15 000 km3). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.

  9. Global Journal of Computer Science and Technology. Volume 1.2

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2009-01-01

    Articles in this issue of "Global Journal of Computer Science and Technology" include: (1) Input Data Processing Techniques in Intrusion Detection Systems--Short Review (Suhair H. Amer and John A. Hamilton, Jr.); (2) Semantic Annotation of Stock Photography for CBIR Using MPEG-7 standards (R. Balasubramani and V. Kannan); (3) An Experimental Study…

  10. Global Inventory of Regional and National Qualifications Frameworks. Volume I: Thematic Chapters

    ERIC Educational Resources Information Center

    Deij, Arjen; Graham, Michael; Bjornavold, Jens; Grm, Slava Pevec; Villalba, Ernesto; Christensen, Hanne; Chakroun, Borhene; Daelman, Katrien; Carlsen, Arne; Singh, Madhu

    2015-01-01

    The "Global Inventory of Regional and National Qualifications Frameworks," the result of collaborative work between the European Training Foundation (ETF), the European Centre for the Development of Vocational Training (Cedefop), UNESCO [United Nations Educational, Scientific and Cultural Organization] and UIL [UNESCO Institute for…

  11. Technical Report Series on Global Modeling and Data Assimilation, Volume 41 : GDIS Workshop Report

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Schubert, Siegfried; Pozzi, Will; Mo, Kingtse; Wood, Eric F.; Stahl, Kerstin; Hayes, Mike; Vogt, Juergen; Seneviratne, Sonia; Stewart, Ron; Pulwarty, Roger; Stefanski, Robert

    2015-01-01

    The workshop "An International Global Drought Information System Workshop: Next Steps" was held on 10-13 December 2014 in Pasadena, California. The more than 60 participants from 15 countries spanned the drought research community and included select representatives from applications communities as well as providers of regional and global drought information products. The workshop was sponsored and supported by the US National Integrated Drought Information System (NIDIS) program, the World Climate Research Program (WCRP: GEWEX, CLIVAR), the World Meteorological Organization (WMO), the Group on Earth Observations (GEO), the European Commission Joint Research Centre (JRC), the US Climate Variability and Predictability (CLIVAR) program, and the US National Oceanic and Atmospheric Administration (NOAA) programs on Modeling, Analysis, Predictions and Projections (MAPP) and Climate Variability & Predictability (CVP). NASA/JPL hosted the workshop with logistical support provided by the GEWEX program office. The goal of the workshop was to build on past Global Drought Information System (GDIS) progress toward developing an experimental global drought information system. Specific goals were threefold: (i) to review recent research results focused on understanding drought mechanisms and their predictability on a wide range of time scales and to identify gaps in understanding that could be addressed by coordinated research; (ii) to help ensure that WCRP research priorities mesh with efforts to build capacity to address drought at the regional level; and (iii) to produce an implementation plan for a short duration pilot project to demonstrate current GDIS capabilities. See http://www.wcrp-climate.org/gdis-wkshp-2014-objectives for more information.

  12. Global Journal of Computer Science and Technology. Volume 9, Issue 5 (Ver. 2.0)

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2010-01-01

    This is a special issue published in version 1.0 of "Global Journal of Computer Science and Technology." Articles in this issue include: (1) [Theta] Scheme (Orthogonal Milstein Scheme), a Better Numerical Approximation for Multi-dimensional SDEs (Klaus Schmitz Abe); (2) Input Data Processing Techniques in Intrusion Detection Systems--Short Review…

  13. The deep sea oxygen isotopic record: Significance for tertiary global ice volume history, with emphasis on the latest Miocene/early Pliocene

    SciTech Connect

    Prentice, M.L.

    1988-01-01

    Planktonic and benthic isotopic records as well as carbonate sedimentation records extending from 6.1 to 4.1 Ma for eastern South Atlantic Holes 526A and 525B are presented. These data suggest ice volume variations about a constant mean sufficient to drive sea level between 10 m and 75 m below present. Isotopic records at the deeper (2500 m) site have been enriched by up to 0.5% by dissolution. Carbonate accumulation rates at both sites quadrupled at 4.6 Ma, primarily because of increased production and, secondarily, decreased dissolution. The second part presents a Cenozoic-long composite δ18O curve for tropical shallow-dwelling planktonic foraminifers and the benthic foraminifer Cibicides at 2-4 km depths. Surface δ18O gradients between various low- and mid-latitude sites reflect: (1) widespread SST stability through the Cenozoic and (2) significant change in Tasman Sea SST through the Tertiary. Assuming average SST for tropical non-upwelling areas was constant, the planktonic composite suggests that global ice volume for the last 40 m.y. has not been significantly less than today. Residual benthic δ18O values reflect relatively warm and saline deep water until the early Miocene, after which time deep water progressively cooled. The third part presents δ18O for Recent Orbulina universa from 44 core-tops distributed through the Atlantic and Indian Oceans. The purpose was to test the hypothesis that Orbulina calcifies at constant temperature and so records only ice volume changes. Orbulina commonly calcifies at intermediate depths over a wide range of temperatures, salinities, and densities. These physical factors are not the primary controls on the spatial and vertical distribution of Orbulina.

  14. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  15. Relative Importance of Mass and Volume Changes to Global Sea Level Rise

    NASA Astrophysics Data System (ADS)

    Jevrejeva, S.; Moore, J.; Grinsted, A.

    2008-12-01

    Sea level is an integrated indicator of climate variability, reflecting changes in the dynamics and thermodynamics of the atmosphere, ocean, and cryosphere. The rate of sea level rise and its causes are a topic of active debate. We examine the relationship between 50 year long records of global sea level (GSL) calculated from 1023 tide gauge stations and global ocean heat content (GOHC), glacier and ice sheet melting. The lack of consistent correlation between changes in GOHC and GSL during the period 1955-2003 argues against GOHC being the dominant factor in GSL as is often thought. We provide clear evidence of the substantial and increasing role in GSL of the eustatic component (47 per cent) compared with the contribution from increasing heat content (25 per cent), suggesting that the primary role is being played by the melting glaciers and ice sheets. There remains about 23 per cent of GSL rise unaccounted for by the best estimates of both eustatic and thermosteric effects. This fraction also exhibits large variability that is not readily associated with known causes of sea level variability. The most likely explanation of this unknown fraction is underestimated melting, climate-driven changes in terrestrial storage components, and decadal time scale variability in the global water cycle. This argues for a concerted effort to quantify changes in these reservoirs.

  16. Comparison of average global exposure of population induced by a macro 3G network in different geographical areas in France and Serbia.

    PubMed

    Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe

    2016-09-01

    This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time, that is, the traffic distribution over 24 h was found highly variable, and space, that is, the exposure to 3G networks in France was found to be roughly two times higher than in Serbia. Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc. PMID:27385053

  17. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.
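    The classical Brinkman penalization that this work builds on can be illustrated in one dimension: a drag term -(χ/η)u is added to the momentum equation wherever the solid mask χ = 1, with η small. The sketch below uses hypothetical parameters and the linearized 1-D shallow water equations; the paper's mass- and energy-conserving variant modifies the equations further and is not reproduced here. The drag is treated implicitly, so the scheme stays stable even for very small η, and the closed-boundary flux form conserves total mass exactly.

    ```python
    import math

    # 1-D linearized shallow water with classical Brinkman volume penalization (sketch).
    g, H = 1.0, 1.0            # gravity and depth (nondimensional, illustrative)
    N, dx = 200, 0.01          # domain [0, 2]
    dt, steps = 0.005, 300     # CFL: dt < dx / sqrt(g*H)
    eta = 1.0e-4               # penalization parameter

    x = [(i + 0.5) * dx for i in range(N)]
    chi = [1.0 if xi > 1.5 else 0.0 for xi in x]           # "coastline" at x = 1.5
    h = [math.exp(-((xi - 0.5) / 0.05) ** 2) for xi in x]  # initial free-surface bump
    u = [0.0] * (N - 1)                                    # staggered velocities

    mass0 = sum(h) * dx
    for _ in range(steps):
        for i in range(N - 1):
            u[i] += dt * (-g * (h[i + 1] - h[i]) / dx)
            # implicit Brinkman drag: stable for arbitrarily small eta
            u[i] /= 1.0 + dt * 0.5 * (chi[i] + chi[i + 1]) / eta
        for i in range(N):
            ul = u[i - 1] if i > 0 else 0.0   # closed (wall) boundaries
            ur = u[i] if i < N - 1 else 0.0
            h[i] += dt * (-H * (ur - ul) / dx)

    # The wave should not penetrate deep into the penalized "solid" region,
    # while total mass is conserved by the telescoping flux form.
    deep_solid = max(abs(h[i]) for i in range(N) if x[i] > 1.7)
    mass = sum(h) * dx
    ```

    The incident pulse reflects off the penalized region much as it would off a solid wall, which is the behavior the penalization is designed to enforce without body-fitted boundary conditions.
    
    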

  18. Bibliography on tropical rain forests and the global carbon cycle: Volume 2, South Asia

    SciTech Connect

    Flint, E.P.; Richards, J.F.

    1989-02-01

    This bibliography covers the literature on tropical rain forests, tropical deforestation, land-use change, tropical forest conversion, and shifting cultivation in South Asia (predominantly India, Pakistan, and Bangladesh but also including contributions in Burma, Ceylon, Malaysia, and Sri Lanka). It covers not only rain-forest ecosystems but also other ecosystems that border, derive from, or influence rain forests. The literature included was selected because of its contribution to understanding the global carbon cycle, changes in that cycle, and rain forests' role in that cycle. Journal articles, books, and reports from 1880 to 1988 are included. The more than 4200 entries of this bibliography are ordered alphabetically by author and are indexed by subject and author.

  19. The balanced-force volume tracking algorithm and global embedded interface formulation for droplet dynamics with mass transfer

    SciTech Connect

    Francois, Marianne M; Carlson, Neil N

    2010-01-01

    Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics need to be predicted as part of the solution of the flow equations. In addition, the discontinuities in fluid density, viscosity, and species concentration at the interface present additional numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling the surface tension force (Francois et al., 2006) and propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one and two dimensions. We then present two- and three-dimensional simulations of a single droplet rising by buoyancy with mass transfer.

  20. A planet under siege: Are we changing earth's climate? Global Systems Science, Teacher's guide to Volume 1

    SciTech Connect

    Sneider, C.; Golden, R.

    1993-01-01

    Global Systems Science is an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand problems of global impact. The "big ideas" of science are stressed, such as the concept of an interacting system, co-evolution of the atmosphere and life, and the important role that individuals can play in both affecting and protecting our vulnerable global environment. The target audience for this course encompasses the entire range of high school students from grades nine through twelve. The course involves students actively in learning. Global Systems Science is divided into five volumes. Each volume contains laboratory experiments; home investigations; descriptions of recent scientific work; historical background; and consideration of the political, economic, and ethical issues associated with each problem area. Collectively, these volumes constitute a unique combination of studies in the natural and social sciences through which high school students may view the global environmental problems that they will confront within their lifetimes. The five volumes are: A Planet Under Siege: Are We Changing Earth's Climate?; A History of Fire and Ice: The Earth's Climate System; Energy Paths: Use and Conservation of Energy; Ecological Systems: Evolution and Interdependence of Life; and The Case of the Missing Ozone: Chemistry of the Earth's Atmosphere.

  1. How to achieve synergy between volume replacement and filling products for global facial rejuvenation.

    PubMed

    Raspaldo, Hervé; Aziza, Richard; Belhaouari, Lakhdar; Berros, Philippe; Body, Sylvie; Galatoire, Olivier; Le Louarn, Claude; Michaud, Thierry; Niforos, François; Rousseaux, Isabelle; Runge, Marc; Taieb, Maryna

    2011-04-01

    The objective of this paper is to provide an expert consensus regarding facial rejuvenation using a combination of volume replacement (Juvéderm(®) VOLUMA(®)), filling products (Juvéderm(®) Ultra product line) and botulinum toxin. The Juvéderm product line exploits innovative 3-D technology, producing a range of cohesive, homogeneous gels that give predictable, long-lasting and natural results. The products are easy for practitioners to use and well tolerated by patients, and used in combination they can provide additional benefits not achieved with one product alone. An assessment of facial anatomy and consideration of the aging process, as well as available treatment options, are also addressed in determining the best combination of products to use. Outcomes from a questionnaire and workshop sessions focusing on specific aspects of use of the Juvéderm product line and botulinum toxin in daily clinical practice are discussed, and recommendations for product use following debate amongst the experts are provided.

  2. Age Differences in Big Five Behavior Averages and Variabilities Across the Adult Lifespan: Moving Beyond Retrospective, Global Summary Accounts of Personality

    PubMed Central

    Noftle, Erik E.; Fleeson, William

    2009-01-01

    In three intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the lifespan? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: younger adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In two experience-sampling studies, participants reported their current behavior multiple times per day for one- or two-week spans. In a third study, participants interacted in standardized laboratory activities on eight separate occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with standard deviations ranging from about half a point to well over one point on 6-point scales. Second, older adults were most variable in Openness, whereas younger adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were both more pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between younger adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives. PMID:20230131

  3. Evaluation of the skill of North-American Multi-Model Ensemble (NMME) Global Climate Models in predicting average and extreme precipitation and temperature over the continental USA

    NASA Astrophysics Data System (ADS)

    Slater, Louise J.; Villarini, Gabriele; Bradley, Allen A.

    2016-08-01

    This paper examines the forecasting skill of eight Global Climate Models from the North-American Multi-Model Ensemble project (CCSM3, CCSM4, CanCM3, CanCM4, GFDL2.1, FLORb01, GEOS5, and CFSv2) over seven major regions of the continental United States. The skill of the monthly forecasts is quantified using the mean square error skill score. This score is decomposed to assess the accuracy of the forecast in the absence of biases (potential skill) and in the presence of conditional (slope reliability) and unconditional (standardized mean error) biases. We summarize the forecasting skill of each model according to the initialization month of the forecast and lead time, and test the models' ability to predict extended periods of extreme climate conducive to eight `billion-dollar' historical flood and drought events. Results indicate that the most skillful predictions occur at the shortest lead times and decline rapidly thereafter. Spatially, potential skill varies little, while actual model skill scores exhibit strong spatial and seasonal patterns primarily due to the unconditional biases in the models. The conditional biases vary little by model, lead time, month, or region. Overall, we find that the skill of the ensemble mean is equal to or greater than that of any of the individual models. At the seasonal scale, the drought events are better forecast than the flood events, and are predicted equally well in terms of high temperature and low precipitation. Overall, our findings provide a systematic diagnosis of the strengths and weaknesses of the eight models over a wide range of temporal and spatial scales.
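    The decomposition described here (potential skill minus conditional and unconditional bias terms) matches the standard Murphy-style decomposition of the mean square error skill score against climatology. A minimal sketch on hypothetical forecast/observation pairs (the function name and numbers are illustrative, not from the paper):

    ```python
    import math

    def msess_decomposition(f, x):
        """Murphy-style decomposition of the MSE skill score (climatology reference):
        SS = potential - conditional - unconditional."""
        n = len(f)
        fbar, xbar = sum(f) / n, sum(x) / n
        sf = math.sqrt(sum((v - fbar) ** 2 for v in f) / n)
        sx = math.sqrt(sum((v - xbar) ** 2 for v in x) / n)
        r = sum((a - fbar) * (b - xbar) for a, b in zip(f, x)) / (n * sf * sx)
        potential = r ** 2                          # skill attainable with all biases removed
        conditional = (r - sf / sx) ** 2            # slope reliability (conditional bias)
        unconditional = ((fbar - xbar) / sx) ** 2   # standardized mean error
        return potential - conditional - unconditional, potential, conditional, unconditional

    # Hypothetical monthly forecast/observation pairs
    obs = [2.0, 3.1, 2.5, 4.0, 3.3, 2.8, 3.9, 2.2]
    fcst = [2.4, 3.0, 2.9, 3.8, 3.6, 2.7, 4.1, 2.5]
    ss, pot, cond, uncond = msess_decomposition(fcst, obs)

    # Cross-check against the direct definition SS = 1 - MSE / var(obs)
    n = len(obs)
    obar = sum(obs) / n
    mse = sum((a - b) ** 2 for a, b in zip(fcst, obs)) / n
    ss_direct = 1.0 - mse / (sum((v - obar) ** 2 for v in obs) / n)
    ```

    The decomposition makes explicit why a model with high potential skill (high correlation) can still score poorly: the conditional and unconditional bias terms are subtracted from the potential skill.
    
    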

  4. Seasonal cycle of volume transport through Kerama Gap revealed by a 20-year global HYbrid Coordinate Ocean Model reanalysis

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Metzger, E. Joseph; Thoppil, Prasad; Hurlburt, Harley E.; Zamudio, Luis; Smedstad, Ole Martin; Na, Hanna; Nakamura, Hirohiko; Park, Jae-Hun

    2015-12-01

    The temporal variability of volume transport from the North Pacific Ocean to the East China Sea (ECS) through Kerama Gap (between Okinawa Island and Miyakojima Island - a part of the Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled Ocean Data Assimilation from 1993 to 2012. The HYCOM mean transport is 2.1 Sv (positive into the ECS, 1 Sv = 10⁶ m³/s) from June 2009 to June 2011, in good agreement with the observed 2.0 Sv transport during the same period. This is similar to the 20-year mean Kerama Gap transport of 1.95 ± 4.0 Sv. The 20-year monthly mean volume transport (transport seasonal cycle) is maximum in October (3.0 Sv) and minimum in November (0.5 Sv). The annual variation component (345-400 days), mesoscale eddy component (70-345 days), and Kuroshio meander component (< 70 days) are separated to determine their contributions to the transport seasonal cycle. The annual variation component has a close relation with the local wind field and increases (decreases) transport into the ECS through Kerama Gap in summer (winter). Most of the variations in the transport seasonal cycle come from the mesoscale eddy component. The impinging mesoscale eddies increase the transport into the ECS during January, February, May, and October, and decrease it in March, April, November, and December, but have little effect in summer (June-September). The Kuroshio meander component causes smaller transport variations in summer than in winter.
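    The separation into annual (345-400 days), mesoscale eddy (70-345 days), and meander (< 70 days) components can be illustrated with a naive Fourier band reconstruction on a synthetic daily transport series (illustrative amplitudes and periods; the authors' actual filtering method may differ):

    ```python
    import math

    N = 730  # two years of synthetic daily Kerama Gap transport, in Sv
    # harmonic index k -> amplitude; the period in days is N / k
    modes = {2: 1.0, 5: 0.8, 13: 0.5}   # ~365 d, 146 d, ~56 d components
    x = [2.1 + sum(a * math.cos(2 * math.pi * k * n / N) for k, a in modes.items())
         for n in range(N)]

    def band(x, lo_days, hi_days):
        """Fourier reconstruction of the part of x with period in [lo_days, hi_days]."""
        n_pts = len(x)
        out = [0.0] * n_pts
        k_lo = max(1, math.ceil(n_pts / hi_days))
        k_hi = min(n_pts // 2 - 1, math.floor(n_pts / lo_days))
        for k in range(k_lo, k_hi + 1):
            ph = [2 * math.pi * k * n / n_pts for n in range(n_pts)]
            a = 2.0 / n_pts * sum(xi * math.cos(p) for xi, p in zip(x, ph))
            b = 2.0 / n_pts * sum(xi * math.sin(p) for xi, p in zip(x, ph))
            for n in range(n_pts):
                out[n] += a * math.cos(ph[n]) + b * math.sin(ph[n])
        return out

    annual  = band(x, 345, 400)   # annual variation component
    eddy    = band(x, 70, 345)    # mesoscale eddy component
    meander = band(x, 2, 70)      # Kuroshio meander component
    ```

    On this synthetic series, each band recovers exactly one of the three injected harmonics, which is the sense in which the three components partition the transport variability.
    
    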

  5. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function

    PubMed Central

    Yotti, Raquel; del Villar, Candelas Pérez; del Álamo, Juan C.; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Carlos Antoranz, J.; Mar Desco, M.; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-01-01

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume: intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart. PMID:23743396

  6. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ²(6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
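    A two-class discriminant model of the kind described can be sketched with Fisher's linear discriminant on synthetic data. Everything below is hypothetical: two predictors (global FA and CSF volume) instead of the study's six, and generated numbers rather than patient data.

    ```python
    import random

    random.seed(42)
    # Synthetic subjects: [global FA, CSF volume in ml] (illustrative values only)
    young = [[random.gauss(0.45, 0.02), random.gauss(200, 15)] for _ in range(21)]
    older = [[random.gauss(0.40, 0.02), random.gauss(260, 15)] for _ in range(10)]

    def mean(rows):
        n = len(rows)
        return [sum(r[j] for r in rows) / n for j in range(2)]

    m_y, m_o = mean(young), mean(older)

    # Pooled within-class covariance (2x2)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((young, m_y), (older, m_o)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    n_tot = len(young) + len(older)
    S = [[v / (n_tot - 2) for v in row] for row in S]

    # Fisher direction w = S^-1 (m_y - m_o), using the closed-form 2x2 inverse
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [m_y[0] - m_o[0], m_y[1] - m_o[1]]
    w = [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
         (-S[1][0] * dm[0] + S[0][0] * dm[1]) / det]

    def proj(r):
        return w[0] * r[0] + w[1] * r[1]

    # Classify by projecting onto w, thresholding midway between projected class means
    thr = (proj(m_y) + proj(m_o)) / 2.0
    correct = sum(proj(r) > thr for r in young) + sum(proj(r) < thr for r in older)
    accuracy = correct / n_tot
    ```

    With well-separated synthetic groups the discriminant recovers nearly all labels; the weight vector w plays the role of the standardized discriminant coefficients reported in such studies.
    
    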

  7. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  8. Paradoxes in Averages.

    ERIC Educational Resources Information Center

    Mitchem, John

    1989-01-01

    Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)

  9. Educational Policy Transfer in an Era of Globalization: Theory--History--Comparison. Comparative Studies Series. Volume 23

    ERIC Educational Resources Information Center

    Rappleye, Jeremy

    2012-01-01

    As education becomes increasingly global, the processes and politics of transfer have become a central focus of research. This study provides a comprehensive analysis of contemporary theoretical and analytical work aimed at exploring international educational reform and reveals the myriad ways that globalization is now fundamentally altering our…

  10. Suicide triggers as sex-specific threats in domains of evolutionary import: negative correlation between global male-to-female suicide ratios and average per capita gross national income.

    PubMed

    Saad, Gad

    2007-01-01

    From an evolutionary perspective, suicide is a paradoxical phenomenon given its fatal consequences on one's reproductive fitness. That fact notwithstanding, evolutionists have typically used kin and group selection arguments in proposing that suicide might indeed be viewed as an adaptive behavioral response. The current paper posits that in some instances, suicide might be construed as the ultimate maladaptive response to "crushing defeats" in domains of great evolutionary import (e.g., mating). Specifically, it is hypothesized that numerous sex-specific triggers of suicide are universally consistent because they correspond to dire sex-specific attacks on one's reproductive fitness (e.g., loss of occupational status is much more strongly linked to male suicides). More generally, it is proposed that many epidemiological aspects of suicide are congruent with Darwinian-based frameworks. These include the near-universal finding that men are much more likely to commit suicide (sexual selection theory), the differential motives that drive men and women to commit suicide (evolutionary psychology), and the shifting patterns of suicide across the life span (life-history theory). Using data from the World Health Organization and the World Bank, several evolutionary-informed hypotheses, regarding the correlation between male-to-female suicide ratios and average per capita Gross National Income, are empirically tested. Overall, the findings are congruent with Darwinian-based expectations: namely, as economic conditions worsen, the male-to-female suicide ratio is exacerbated, with the negative correlation being strongest for the "working age" brackets. The hypothesized evolutionary outlook provides a consilient framework for comprehending universal sex-specific triggers of suicide. Furthermore, it allows suicidologists to explore new research avenues that might otherwise remain untapped if one were to restrict one's research interests to the identification of proximate causes.

  11. Drilling and dating New Jersey oligocene-miocene sequences: Ice volume, global sea level, and Exxon records

    SciTech Connect

    Miller, K.G.; Mountain, G.S.

    1996-02-23

    Oligocene to middle Miocene sequence boundaries on the New Jersey coastal plain (Ocean Drilling Project Leg 150X) and continental slope (Ocean Drilling Project Leg 150) were dated by integrating strontium isotopic stratigraphy, magnetostratigraphy, and biostratigraphy (planktonic foraminifera, nannofossils, dinocysts, and diatoms). The ages of coastal plain unconformities and slope seismic reflectors (unconformities or stratal breaks with no discernible hiatuses) match the ages of global δ¹⁸O increases (inferred glacioeustatic lowerings) measured in deep-sea sites. These correlations confirm a causal link between coastal plain and slope sequence boundaries: both formed during global sea-level lowerings. The ages of New Jersey sequence boundaries and global δ¹⁸O increases also correlate well with the Exxon Production Research sea-level records of Haq et al. and Vail et al., validating and refining their compilations. 33 refs., 2 figs., 1 tab.

  12. Technical Report Series on Global Modeling and Data Assimilation. Volume 31; Global Surface Ocean Carbon Estimates in a Model Forced by MERRA

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; Rousseaux, Cecile S.

    2013-01-01

    MERRA products were used to force an established ocean biogeochemical model to estimate surface carbon inventories and fluxes in the global oceans. The results were compared to public archives of in situ carbon data and estimates. The model exhibited skill for ocean dissolved inorganic carbon (DIC), partial pressure of ocean CO2 (pCO2) and air-sea fluxes (FCO2). The MERRA-forced model produced global mean differences of 0.02% (approximately 0.3 µmol/kg) for DIC, -0.3% (about -1.2 µatm; model lower) for pCO2, and -2.3% (-0.003 mol C/m²/yr) for FCO2 compared to in situ estimates. Basin-scale distributions were significantly correlated with observations for all three variables (r=0.97, 0.76, and 0.73, P<0.05, respectively for DIC, pCO2, and FCO2). All major oceanographic basins were represented as sources to the atmosphere or sinks in agreement with in situ estimates. However, there were substantial basin-scale and local departures.

  13. Technical Report Series on Global Modeling and Data Assimilation, Volume 43. MERRA-2; Initial Evaluation of the Climate

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Bosilovich, Michael G.; Akella, Santha; Lawrence, Coy; Cullather, Richard; Draper, Clara; Gelaro, Ronald; Kovach, Robin; Liu, Qing; Molod, Andrea; Norris, Peter; Wargan, Krzysztof; Chao, Winston; Reichle, Rolf; Takacs, Lawrence; Todling, Ricardo; Vikhliaev, Yury; Bloom, Steve; Collow, Allison; Partyka, Gary; Labow, Gordon; Pawson, Steven; Reale, Oreste; Schubert, Siegfried; Suarez, Max

    2015-01-01

    The years since the introduction of MERRA have seen numerous advances in the GEOS-5 Data Assimilation System as well as a substantial decrease in the number of observations that can be assimilated into the MERRA system. To allow continued data processing into the future, and to take advantage of several important innovations that could improve system performance, a decision was made to produce MERRA-2, an updated retrospective analysis of the full modern satellite era. One of the many advances in MERRA-2 is a constraint on the global dry mass balance; this allows the global changes in water by the analysis increment to be near zero, thereby minimizing abrupt global interannual variations due to changes in the observing system. In addition, MERRA-2 includes the assimilation of interactive aerosols into the system, a feature of the Earth system absent from previous reanalyses. Also, in an effort to improve land surface hydrology, observations-corrected precipitation forcing is used instead of model-generated precipitation. Overall, MERRA-2 takes advantage of numerous updates to the global modeling and data assimilation system. In this document, we summarize an initial evaluation of the climate in MERRA-2, from the surface to the stratosphere and from the tropics to the poles. Strengths and weaknesses of the MERRA-2 climate are accordingly emphasized.

  14. Educating American Students for Life in a Global Society. Policy Briefs: Education Reform. Volume 2, Number 4

    ERIC Educational Resources Information Center

    Lansford, Jennifer E.

    2002-01-01

    Progress in travel, technology, and other domains has contributed to the breaking down of barriers between countries and allowed for the development of an increasingly global society. International cooperation and competition are now pervasive in areas as diverse as business, science, arts, politics, and athletics. Educating students to navigate…

  15. Democracy in Schools, Citizenship and Global Concern. Didaktiske Studier. Studies in Educational Theory and Curriculum, Volume 18.

    ERIC Educational Resources Information Center

    Jensen, Knud, Ed.; Larsen, Ole B., Ed.; Walker, Stephen, Ed.

    One way to explore how the relationship between schools and the local culture can be enriched and transformed is a fruitful dialogue among different communities. The conference that is reported on in this collection was concerned with using three "talking points" as a platform: democratization, citizenship, and global concerns. The collection is…

  16. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A summary of the proceedings in which the most important findings stemming from the Global Weather Experiment (GWE) are highlighted, additional key results and recommendations are covered, and the presentations and discussion are summarized. Detailed achievements, unresolved problems, and recommendations are included.

  17. The average enzyme principle.

    PubMed

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-09-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This "average enzyme principle" provides a natural methodology for jointly studying metabolism and its regulation.
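    The average enzyme principle can be checked numerically: two enzyme profiles E(t) with the same time average deplete exactly the same amount of substrate over the interval, because the time rescaling τ = ∫E dt maps both cases onto the same equation. A minimal sketch with illustrative parameter values (not from the paper):

    ```python
    import math

    # Michaelis-Menten depletion dS/dt = -kcat * E(t) * S / (Km + S).
    # Claim: S(T) depends on E(t) only through its time average over [0, T].
    kcat, Km, S0, T = 1.0, 0.5, 2.0, 2.0  # illustrative values

    def deplete(E, steps=2000):
        """RK4 integration of the substrate ODE with time-varying enzyme level E(t)."""
        dt = T / steps
        S, t = S0, 0.0
        def f(t, S):
            return -kcat * E(t) * S / (Km + S)
        for _ in range(steps):
            k1 = f(t, S)
            k2 = f(t + dt / 2, S + dt / 2 * k1)
            k3 = f(t + dt / 2, S + dt / 2 * k2)
            k4 = f(t + dt, S + dt * k3)
            S += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        return S

    S_const = deplete(lambda t: 1.0)                                  # constant enzyme
    S_varying = deplete(lambda t: 1.0 + 0.5 * math.sin(math.pi * t))  # same average (= 1)
    ```

    Both runs end at the same substrate level, and that level satisfies the implicit Michaelis-Menten conservation law Km·ln(S0/S) + (S0 - S) = kcat·∫E dt, which depends only on the integral (hence the average) of E.
    
    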

  18. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  19. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  20. Technical Report Series on Global Modeling and Data Assimilation. Volume 20; The Climate of the FVCCM-3 Model

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chang, Yehui; Schubert, Siegfried D.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2001-01-01

    This document describes the climate of version 1 of the NASA-NCAR model developed at the Data Assimilation Office (DAO). The model consists of a new finite-volume dynamical core and an implementation of the NCAR climate community model (CCM-3) physical parameterizations. The version of the model examined here was integrated at a resolution of 2 degrees latitude by 2.5 degrees longitude and 32 levels. The results are based on a simulation that was forced with observed sea surface temperature and sea ice for the period 1979-1995, and are compared with NCEP/NCAR reanalyses and various other observational data sets. The results include an assessment of seasonal means, subseasonal transients including the Madden Julian Oscillation, and interannual variability. The quantities include zonal and meridional winds, temperature, specific humidity, geopotential height, stream function, velocity potential, precipitation, sea level pressure, and cloud radiative forcing.

  1. Technical Report Series on Global Modeling and Data Assimilation. Volume 13; Interannual Variability and Potential Predictability in Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Min, Wei; Schubert, Siegfried D.; Suarez, Max J. (Editor)

    1997-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center and the National Center for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) have produced multi-year global assimilations of historical data employing fixed analysis systems. These "reanalysis" products are ideally suited for studying short-term climatic variations. The availability of multiple reanalysis products also provides the opportunity to examine the uncertainty in the reanalysis data. The purpose of this document is to provide an updated estimate of seasonal and interannual variability based on the DAO and NCEP/NCAR reanalyses for the 15-year period 1980-1995. Intercomparisons of the seasonal means and their interannual variations are presented for a variety of prognostic and diagnostic fields. In addition, atmospheric potential predictability is re-examined employing selected DAO reanalysis variables.

  2. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP ENTITLED "GLOBAL ANALYSIS OF POLARIZED PARTON DISTRIBUTIONS IN THE RHIC ERA" (VOLUME 86).

    SciTech Connect

    DESHPANDE,A.; VOGELSANG, W.

    2007-10-08

    The determination of the polarized gluon distribution is a central goal of the RHIC spin program. Recent achievements in polarization and luminosity of the proton beams in RHIC have enabled the RHIC experiments to acquire substantial amounts of high-quality data with polarized proton beams at 200 and 62.4 GeV center of mass energy, allowing a first glimpse of the polarized gluon distribution at RHIC. Short test operation at 500 GeV center of mass energy has also been successful, indicating the absence of any fundamental roadblocks for the measurements of polarized quark and anti-quark distributions planned at that energy in a couple of years. With this background, it is now time to consider how all these data sets may be employed most effectively to determine the polarized parton distributions in the nucleon, in general, and the polarized gluon distribution, in particular. A global analysis of the polarized DIS data from the past and present fixed target experiments jointly with the present and anticipated RHIC Spin data is needed.

  3. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
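The averaging sequence described above can be sketched in miniature (this is not the author's code; the fine-scale field below is a synthetic stand-in for a DNS). A running time average is accumulated on the fine grid, the time-averaged fields are volume-averaged onto a coarser grid, and the residue between "average of products" and "product of averages" is extracted as the coupling correlation passed to the coarser computation:

```python
import numpy as np

def block_volume_average(field, factor):
    """Volume-average a 2D field onto a grid coarsened by `factor` per direction."""
    n = field.shape[0] // factor
    return field[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
nt, nx = 200, 64
# Stand-in for a short DNS: a mean shear profile plus random fluctuations.
u_mean_profile = np.linspace(0.0, 1.0, nx)[:, None] * np.ones((1, nx))
time_avg_u = np.zeros((nx, nx))
time_avg_uu = np.zeros((nx, nx))
for step in range(1, nt + 1):
    u = u_mean_profile + 0.1 * rng.standard_normal((nx, nx))
    # Running time averages, accumulated during the fine-scale run.
    time_avg_u += (u - time_avg_u) / step
    time_avg_uu += (u * u - time_avg_uu) / step

# Volume-average the time-averaged fields onto a 4x coarser mesh.
coarse_u = block_volume_average(time_avg_u, 4)
coarse_uu = block_volume_average(time_avg_uu, 4)
# Averaging does not commute with taking products; the mismatch is the
# coupling correlation added as a source term on the coarser mesh.
coupling_correlation = coarse_uu - coarse_u**2
```

Because both averaging operations are computed directly from the resolved field, the correlation term involves no turbulence model, matching the abstract's "no modeling involved" claim for this step.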

  4. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  5. Assessment of Treatment Response by Total Tumor Volume and Global Apparent Diffusion Coefficient Using Diffusion-Weighted MRI in Patients with Metastatic Bone Disease: A Feasibility Study

    PubMed Central

    Blackledge, Matthew D.; Collins, David J.; Tunariu, Nina; Orton, Matthew R.; Padhani, Anwar R.; Leach, Martin O.; Koh, Dow-Mu

    2014-01-01

    We describe our semi-automatic segmentation of whole-body diffusion-weighted MRI (WBDWI) using a Markov random field (MRF) model to derive tumor total diffusion volume (tDV) and associated global apparent diffusion coefficient (gADC); and demonstrate the feasibility of using these indices for assessing tumor burden and response to treatment in patients with bone metastases. WBDWI was performed on eleven patients diagnosed with bone metastases from breast and prostate cancers before and after anti-cancer therapies. Semi-automatic segmentation incorporating a MRF model was performed in all patients below the C4 vertebra by an experienced radiologist with over eight years of clinical experience in body DWI. Changes in tDV and gADC distributions were compared with overall response determined by all imaging, tumor markers and clinical findings at serial follow-up. The segmentation technique was possible in all patients although erroneous volumes of interest were generated in one patient because of poor fat suppression in the pelvis, requiring manual correction. Responding patients showed a larger increase in gADC (median change = +0.18, range = −0.07 to +0.78×10⁻³ mm²/s) after treatment compared to non-responding patients (median change = −0.02, range = −0.10 to +0.05×10⁻³ mm²/s, p = 0.05, Mann-Whitney test), whereas non-responding patients showed a significantly larger increase in tDV (median change = +26%, range = +3 to +284%) compared to responding patients (median change = −50%, range = −85 to +27%, p = 0.02, Mann-Whitney test). Semi-automatic segmentation of WBDWI is feasible for metastatic bone disease in this pilot cohort of 11 patients, and could be used to quantify tumor total diffusion volume and median global ADC for assessing response to treatment. PMID:24710083

  6. Incorporating global warming risks in power sector planning: A case study of the New England region. Volume 1

    SciTech Connect

    Krause, F.; Busch, J.; Koomey, J.

    1992-11-01

    Growing international concern over the threat of global climate change has led to proposals to buy insurance against this threat by reducing emissions of carbon (short for carbon dioxide) and other greenhouse gases below current levels. Concern over these and other, non-climatic environmental effects of electricity generation has led a number of states to adopt or explore new mechanisms for incorporating environmental externalities in utility resource planning. For example, the New York and Massachusetts utility commissions have adopted monetized surcharges (or adders) to induce emission reductions of federally regulated air pollutants (notably, SO{sub 2}, NO{sub x}, and particulates) beyond federally mandated levels. These regulations also include preliminary estimates of the cost of reducing carbon emissions, for which no federal regulations exist at this time. Within New England, regulators and utilities have also held several workshops and meetings to discuss alternative methods of incorporating externalities as well as the feasibility of regional approaches. This study examines the potential for reduced carbon emissions in the New England power sector as well as the cost and rate impacts of two policy approaches: environmental externality surcharges and a target-based approach. We analyze the following questions: Does New England have sufficient low-carbon resources to achieve significant reductions (10% to 20% below current levels) in fossil carbon emissions in its utility sector? What reductions could be achieved at a maximum? What is the expected cost of carbon reductions as a function of the reduction goal? How would carbon reduction strategies affect electricity rates? How effective are environmental externality cost surcharges as an instrument in bringing about carbon reductions? To what extent could the minimization of total electricity costs alone result in carbon reductions relative to conventional resource plans?

  7. Global Distribution of CO2 Volume Mixing Ratio in the Mesosphere and Lower Thermosphere and Long-Term Changes Observed By Saber

    NASA Astrophysics Data System (ADS)

    Russell, J. M., III; Rezac, L.; Yue, J.; Jian, Y.; Kutepov, A. A.; Garcia, R. R.; Walker, K. A.; Bernath, P. F.

    2014-12-01

    The SABER 10-channel limb scanning radiometer has been operating onboard the TIMED satellite nearly continuously since launch on December 7, 2001. Beginning in late January, 2002 and continuing to the present day, SABER has been measuring limb radiance profiles used to retrieve vertical profiles of temperature, volume mixing ratios (VMRs) of O3, CO2, H2O, [O], and [H], and volume emission rates of NO, OH(2.1μm), OH(1.6μm) and O2(singlet delta). The measurements extend from the tropopause to the lower thermosphere, and span from 54S to 84N or 54N to 84S daily with alternating latitude coverage every ~60 days. Currently more than six million profiles of each parameter have been retrieved. The CO2 VMR is a new SABER data product that became available this year. The temperature and CO2 VMRs are simultaneously retrieved in the ~65 km to 110 km range using limb radiances measured at 4.3 and 15 micrometers. Results of CO2 validation studies, based on comparisons with coincident ACE-FTS CO2 data and SD-WACCM model simulations, will be presented. The CO2 VMRs agree with ACE-FTS observations to within reported measurement uncertainties and they are in good agreement with SD-WACCM seasonal and global distributions. The SABER observed CO2 VMR departure from uniform mixing tends to start above ~80 km, which is generally higher than what the model calculates. Variations of CO2 VMR with latitude and season are substantial. Seasonal zonal mean cross sections and CO2 time series for selected latitudes and altitudes over the 12.5-year time period will also be shown. The CO2 VMR increase rate at 100 km is in close agreement with in situ results measured at the Mauna Loa Observatory.

  8. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  9. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  10. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observations. This work was motivated by recent findings that there are models which, despite having {Lambda} = 0, have volume deceleration parameter q{sup vol}<0. This opens the possibility that back-reaction and averaging effects may serve as an alternative explanation of the dark energy phenomenon. We have calculated q{sup vol} in some Lemaitre-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q{sup vol}>0, while those models which we have been able to find which exhibit q{sup vol}<0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
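For orientation, the volume deceleration parameter q{sup vol} referred to above is conventionally defined from the effective volume scale factor of the averaging domain D; the following is the standard Buchert-scheme definition, not a formula taken from this record:

```latex
a_{\mathcal{D}}(t) = \left(\frac{V_{\mathcal{D}}(t)}{V_{\mathcal{D}}(t_0)}\right)^{1/3},
\qquad
q^{\mathrm{vol}} = -\,\frac{\ddot{a}_{\mathcal{D}}\, a_{\mathcal{D}}}{\dot{a}_{\mathcal{D}}^{2}},
```

where V_D(t) is the proper volume of the comoving domain D. Thus q{sup vol}<0 means the averaged volume expansion accelerates even with no {Lambda} term present.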

  11. Volcanoes and global catastrophes

    NASA Technical Reports Server (NTRS)

    Simkin, Tom

    1988-01-01

    The search for a single explanation for global mass extinctions has led to polarization and controversies that are often fueled by widespread media attention. The historic record shows a roughly linear log-log relation between the frequency of explosive volcanic eruptions and the volume of their products. Eruptions such as Mt. St. Helens 1980 produce on the order of 1 cu km of tephra, destroying life over areas in the 10 to 100 sq km range, and take place, on the average, once or twice a decade. Eruptions producing 10 cu km take place several times a century and, like Krakatau 1883, destroy life over 100 to 1000 sq km areas while producing clear global atmospheric effects. Eruptions producing 10,000 cu km are known from the Quaternary record, and extrapolation from the historic record suggests that they occur perhaps once in 20,000 years, but none has occurred in historic time and little is known of their biologic effects. Even larger eruptions must also exist in the geologic record, but documentation of their volume becomes increasingly difficult as their age increases. The conclusion is inescapable that prehistoric eruptions have produced catastrophes on a global scale: only the magnitude of the associated mortality is in question. Differentiation of large magma chambers is on a time scale of thousands to millions of years, and explosive volcanoes are clearly concentrated in narrow belts near converging plate margins. Volcanism cannot be dismissed as a producer of global catastrophes. Its role in major extinctions is likely to be at least contributory and may well be large. More attention should be paid to global effects of the many huge eruptions in the geologic record that dwarf those known in historic time.
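The quoted frequency-volume figures can be used to sketch the log-log extrapolation the abstract describes. The rates below are illustrative values read off the text (once or twice a decade for 1 cu km, several times a century for 10 cu km, once in 20,000 years for 10,000 cu km), not data from the article:

```python
import numpy as np

# (erupted volume in km^3, average eruptions per year), illustrative values.
volumes = np.array([1.0, 10.0, 10_000.0])
rates = np.array([0.15, 0.03, 1.0 / 20_000.0])

# Least-squares line in log-log space: log10(rate) = a + b * log10(volume).
b, a = np.polyfit(np.log10(volumes), np.log10(rates), 1)

def expected_rate(volume_km3):
    """Eruptions per year predicted by the fitted log-log relation."""
    return 10 ** (a + b * np.log10(volume_km3))

# Recurrence interval (years) for a 100 km^3 eruption under the fitted law.
interval_100 = 1.0 / expected_rate(100.0)
```

A negative slope b recovers the abstract's point: frequency falls roughly as a power of eruption volume, so the largest eruptions are absent from the short historic record but expected in the geologic one.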

  12. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  13. Influence of Type 2 Diabetes on Brain Volumes and Changes in Brain Volumes

    PubMed Central

    Espeland, Mark A.; Bryan, R. Nick; Goveas, Joseph S.; Robinson, Jennifer G.; Siddiqui, Mustafa S.; Liu, Simin; Hogan, Patricia E.; Casanova, Ramon; Coker, Laura H.; Yaffe, Kristine; Masaki, Kamal; Rossom, Rebecca; Resnick, Susan M.

    2013-01-01

    OBJECTIVE To study how type 2 diabetes adversely affects brain volumes, changes in volume, and cognitive function. RESEARCH DESIGN AND METHODS Regional brain volumes and ischemic lesion volumes in 1,366 women, aged 72–89 years, were measured with structural brain magnetic resonance imaging (MRI). Repeat scans were collected an average of 4.7 years later in 698 women. Cross-sectional differences and changes with time between women with and without diabetes were compared. Relationships that cognitive function test scores had with these measures and diabetes were examined. RESULTS The 145 women with diabetes (10.6%) at the first MRI had smaller total brain volumes (0.6% less; P = 0.05) and smaller gray matter volumes (1.5% less; P = 0.01) but not white matter volumes, both overall and within major lobes. They also had larger ischemic lesion volumes (21.8% greater; P = 0.02), both overall and in gray matter (27.5% greater; P = 0.06), in white matter (18.8% greater; P = 0.02), and across major lobes. Overall, women with diabetes had slightly (nonsignificant) greater loss of total brain volumes (3.02 cc; P = 0.11) and significant increases in total ischemic lesion volumes (9.7% more; P = 0.05) with time relative to those without diabetes. Diabetes was associated with lower scores in global cognitive function and its subdomains. These relative deficits were only partially accounted for by brain volumes and risk factors for cognitive deficits. CONCLUSIONS Diabetes is associated with smaller brain volumes in gray but not white matter and increasing ischemic lesion volumes throughout the brain. These markers are associated with but do not fully account for diabetes-related deficits in cognitive function. PMID:22933440

  14. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  15. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
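The distinction the article draws can be made concrete: when equal distances are covered at different rates (the fuel-economy case), the correct average rate is the harmonic mean, and unequal distances call for the weighted harmonic mean. A minimal sketch with invented numbers:

```python
def harmonic_mean(rates):
    """Harmonic mean: the correct average rate when each rate applies to an
    equal share of the numerator quantity (e.g., equal distances)."""
    return len(rates) / sum(1.0 / r for r in rates)

def weighted_harmonic_mean(rates, weights):
    """Weighted harmonic mean: weights are the amounts (e.g., miles driven)
    covered at each rate."""
    return sum(weights) / sum(w / r for w, r in zip(weights, rates))

# A car gets 30 mpg on one 120-mile leg and 20 mpg on another 120-mile leg.
# The trip-average fuel economy is the harmonic mean, not the arithmetic mean.
hm = harmonic_mean([30.0, 20.0])   # 24.0 mpg
am = (30.0 + 20.0) / 2             # 25.0 mpg overstates the economy
```

The arithmetic mean is correct only when each rate applies for equal *time*; mixing up the two cases is exactly the error the article's example problems target.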

  16. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
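The align-and-average step of the workflow can be sketched in miniature. The toy below performs translational alignment only, via FFT cross-correlation against a running reference; a real subtomogram averaging pipeline also searches rotations and handles missing-wedge weighting, CTF correction, and classification:

```python
import numpy as np

def align_translation(subtomo, reference):
    """Find the integer 3D shift maximizing circular cross-correlation with
    the reference (computed via FFT) and return the shifted subtomogram."""
    cc = np.fft.ifftn(np.fft.fftn(reference) * np.conj(np.fft.fftn(subtomo))).real
    shift = np.unravel_index(np.argmax(cc), cc.shape)
    return np.roll(subtomo, shift, axis=(0, 1, 2))

def subtomogram_average(subtomos, n_iters=3):
    """Iteratively align all subtomograms to the current average and
    re-average, raising the signal-to-noise ratio of the reconstruction."""
    avg = np.mean(subtomos, axis=0)
    for _ in range(n_iters):
        aligned = [align_translation(s, avg) for s in subtomos]
        avg = np.mean(aligned, axis=0)
    return avg

# Toy data: a cubic "particle" at random offsets, buried in noise.
rng = np.random.default_rng(1)
particle = np.zeros((16, 16, 16))
particle[6:10, 6:10, 6:10] = 1.0
subtomos = [np.roll(particle, tuple(rng.integers(-2, 3, size=3)), axis=(0, 1, 2))
            + 0.5 * rng.standard_normal((16, 16, 16)) for _ in range(20)]
avg = subtomogram_average(subtomos)
```

Averaging 20 aligned copies suppresses the noise by roughly a factor of sqrt(20), which is the same statistical principle that single-particle reconstruction and subtomogram averaging exploit at scale.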

  18. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and thus is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained

  19. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the

  20. High average power Pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  1. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
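The "central" segmentation idea, minimizing the average distance to all segmentations, is a medoid computation. The sketch below uses a hypothetical Hamming distance over label maps purely for illustration; the paper's actual segmentation distance and closed-form segmentation average are not reproduced here:

```python
def central_item(items, distance):
    """Return the item minimizing the total (equivalently, average) distance
    to all items — the medoid, standing in for a 'central' segmentation."""
    return min(items, key=lambda a: sum(distance(a, b) for b in items))

def hamming(a, b):
    """Toy segmentation distance: number of pixels labeled differently."""
    return sum(x != y for x, y in zip(a, b))

# Segmentations encoded as label tuples over 6 pixels.
segs = [
    (0, 0, 1, 1, 1, 2),
    (0, 0, 1, 1, 2, 2),
    (0, 1, 1, 1, 2, 2),
    (0, 0, 1, 1, 1, 2),
    (0, 0, 1, 1, 1, 2),
]
central = central_item(segs, hamming)  # → (0, 0, 1, 1, 1, 2)
```

Picking the medoid rather than a maximum a posteriori labeling makes the result robust to individual unreliable bottom-up segmentations, which is the motivation the abstract gives.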

  2. Volcanic Signatures in Estimates of Stratospheric Aerosol Size, Distribution Width, Surface Area, and Volume Deduced from Global Satellite-Based Observations

    NASA Technical Reports Server (NTRS)

    Bauman, J. J.; Russell, P. B.

    2000-01-01

    Volcanic signatures in the stratospheric aerosol layer are revealed by two independent techniques which retrieve aerosol information from global satellite-based observations of particulate extinction. Both techniques combine the 4-wavelength Stratospheric Aerosol and Gas Experiment (SAGE) II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument. The algorithms use the SAGE II/CLAES composite extinction spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R(sub eff), surface area S, volume V and size distribution width sigma(sub g). The first technique is a multi-wavelength Look-Up-Table (LUT) algorithm which retrieves values and uncertainties of R(sub eff) by comparing ratios of extinctions from SAGE II and CLAES (e.g., E(sub lambda)/E(sub 1.02)) to pre-computed extinction ratios which are based on a range of unimodal lognormal size distributions. The pre-computed ratios are presented as a function of R(sub eff) for a given sigma(sub g); thus the comparisons establish the range of R(sub eff) consistent with the measured spectra for that sigma(sub g). The fact that no solutions are found for certain sigma(sub g) values provides information on the acceptable range of sigma(sub g), which is found to evolve in response to volcanic injections and removal periods. Analogous comparisons using absolute extinction spectra and error bars establish the range of S and V. The second technique is a Parameter Search Technique (PST) which estimates R(sub eff) and sigma(sub g) within a month-latitude-altitude bin by minimizing the chi-squared values obtained by comparing the SAGE II/CLAES extinction spectra and error bars with spectra calculated by varying the lognormal fitting parameters: R(sub eff), sigma(sub g), and the total number of particles N(sub 0).
For both techniques, possible biases in

  3. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
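The latitude-dependent dwell time underlying the Tierra del Fuego example follows from orbit geometry: for a circular orbit of inclination i, the sub-satellite latitude density diverges as the latitude approaches ±i, and GPS orbits are inclined near 55 deg, close to Tierra del Fuego's latitude. A sketch, assuming uniform motion in argument of latitude (not the paper's semi-analytic tool):

```python
import math

def latitude_dwell_density(lat_deg, incl_deg=55.0):
    """Probability density (per radian of latitude) of the sub-satellite
    point for a circular orbit of inclination incl_deg, assuming the
    argument of latitude advances uniformly. Zero outside +/- inclination."""
    phi, i = math.radians(lat_deg), math.radians(incl_deg)
    if abs(phi) >= i:
        return 0.0
    return math.cos(phi) / (math.pi * math.sqrt(math.sin(i)**2 - math.sin(phi)**2))

# Dwell time piles up near +/-55 deg latitude: compare Tierra del Fuego
# (~54 S) with Hawaii (~21 N). The exact ratio depends on the latitudes used.
ratio = latitude_dwell_density(-54.0) / latitude_dwell_density(21.0)
```

This simple density already reproduces the qualitative claim in the abstract: the closer a user sits to the constellation's inclination latitude, the more satellite dwell time passes overhead.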

  4. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners (a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]), with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  5. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence that vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  6. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
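
    A toy illustration of the idea, not the first-principles method of the paper: for a single harmonic mode of thermal width sigma(T), evaluating a property at the symmetrized pair of thermal-line points at plus and minus sigma reproduces the full Gaussian vibrational average exactly for properties quadratic in the mode amplitude. The property `f` and width `sigma` below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.3                      # thermal width sigma(T) of the mode

def f(u):
    # hypothetical property, quadratic in the mode amplitude u
    return 1.0 + 2.0 * u**2

# Brute-force vibrational average: sample the full Gaussian density
avg_full = f(rng.normal(0.0, sigma, 200_000)).mean()

# Thermal-line style estimate: a symmetrized pair of points at +/- sigma
avg_line = 0.5 * (f(sigma) + f(-sigma))

print(avg_full, avg_line)        # both close to 1 + 2*sigma**2 = 1.18
```

    Two sampling points replace two hundred thousand here; for anharmonic properties the agreement becomes approximate rather than exact, which is the trade-off the paper quantifies.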

  7. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  8. Averaging inhomogeneous cosmologies - a dialogue

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  9. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  10. The German Skills Machine: Sustaining Comparative Advantage in a Global Economy. Policies and Institutions: Germany, Europe, and Transatlantic Relations, Volume 3.

    ERIC Educational Resources Information Center

    Culpepper, Pepper D., Ed.; Finegold, David, Ed.

    This book examines the effectiveness and distributive ramifications of the institutions of German skill provision as they functioned at home in the 1990s and as they served as a template for reform in other industrialized countries. The volume relies on multiple sources of data, including in-firm case studies, larger-scale surveys of companies,…

  11. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  12. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866

  13. A Salzburg Global Seminar: "Optimizing Talent: Closing Education and Social Mobility Gaps Worldwide." Policy Notes. Volume 20, Number 3, Fall 2012

    ERIC Educational Resources Information Center

    Schwartz, Robert

    2012-01-01

    This issue of ETS Policy Notes (Vol. 20, No. 3) provides highlights from the Salzburg Global Seminar in December 2011. The seminar focused on bettering the educational and life prospects of students up to age 18 worldwide. [This article was written with the assistance of Beth Brody.]

  14. Higher Education in a Global Society Achieving Diversity, Equity and Excellence (Advances in Education in Diverse Communities: Research Policy and Praxis, Volume 5)

    ERIC Educational Resources Information Center

    Elsevier, 2006

    2006-01-01

    The "problem of the 21st century" is rapidly expanding diversity alongside stubbornly persistent status and power inequities by race, ethnicity, gender, class, language, citizenship and region. Extensive technological, economic, political and social changes, along with immigration, combine to produce a global community of great diversity…

  15. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff,0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  16. OAST Space Theme Workshop. Volume 2: Theme summary. 5: Global service (no. 11). A. Statement. B. 26 April 1976 presentation. C. Summary

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The benefits to be obtained from cost-effective global observation of the earth, its environment, and its natural and man-made features are examined using typical spacecraft and missions which could enhance the benefits of space operations. The technology needs and areas of interest include: (1) a ten-fold increase in the dimensions of deployable and erectable structures to provide booms, antennas, and platforms for global sensor systems; (2) control and stabilization systems capable of pointing accuracies of 1 arc second or less to locate targets of interest and maintain platform or sensor orientation during operations; (3) a factor-of-five improvement in spacecraft power capacity to support payloads and supporting electronics; (4) auxiliary propulsion systems capable of 5 to 10 years of on-orbit operation; (5) multipurpose sensors; and (6) end-to-end data management and an information system configured to accept new components or concepts as they develop.

  17. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-01

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
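
    The equilibrium-start construction described above can be sketched in a few lines: draw the first point exactly from the target by rejection sampling, then let a Metropolis random walk (standing in here for the paper's likelihood-based MCMC over model space) carry it forward; by detailed balance, every subsequent point then also has the target distribution. The Beta-shaped target and step size are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Unnormalized target density on [0, 1] (a Beta(2, 5) shape, for illustration)
def target(x):
    return x * (1.0 - x)**4 if 0.0 < x < 1.0 else 0.0

# 1) First point drawn exactly from the target by rejection sampling;
#    the mode of x*(1-x)^4 is at x = 1/5, which gives the envelope height.
M = target(0.2)
x = rng.uniform()
while rng.uniform() * M > target(x):
    x = rng.uniform()

# 2) Random-walk Metropolis: the target is its equilibrium distribution,
#    so by detailed balance every subsequent point keeps that distribution.
chain = [x]
for _ in range(20_000):
    prop = x + rng.normal(0.0, 0.2)
    if rng.uniform() < target(prop) / target(x):
        x = prop
    chain.append(x)
chain = np.asarray(chain)

print(chain.mean())   # Beta(2, 5) has mean 2/7 ~ 0.286
```

    Because the chain starts in equilibrium, no burn-in period needs to be discarded, which is the practical payoff of the construction.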

  18. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
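
    A minimal sketch of frequency-domain ensemble averaging in the spirit of the program (the tone, sample rate, and segment counts are invented for illustration; this is not the documented PDP-11 code): splitting a noisy record into segments and averaging their periodograms stabilizes the spectrum so that a buried tone stands out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "acoustic" record: a 440 Hz tone buried in heavy noise
fs = 8192                          # sample rate, Hz
n_seg, seg_len = 64, 1024          # ensemble of 64 segments
t = np.arange(n_seg * seg_len) / fs
x = np.sin(2 * np.pi * 440.0 * t) + rng.normal(0.0, 2.0, t.size)

# Ensemble average in the frequency domain: mean periodogram over segments
segs = x.reshape(n_seg, seg_len)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0)
freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)

peak = freqs[np.argmax(psd)]
print(peak)                        # the buried tone frequency
```

    Averaging 64 periodograms reduces the variance of each spectral estimate roughly eightfold, which is why the tone bin dominates despite the noise power exceeding the signal power.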

  19. Average observational quantities in the timescape cosmology

    SciTech Connect

    Wiltshire, David L.

    2009-12-15

    We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

  20. NASA University Research Centers Technical Advances in Aeronautics, Space Sciences and Technology, Earth Systems Sciences, Global Hydrology, and Education. Volumes 2 and 3

    NASA Technical Reports Server (NTRS)

    Coleman, Tommy L. (Editor); White, Bettie (Editor); Goodman, Steven (Editor); Sakimoto, P. (Editor); Randolph, Lynwood (Editor); Rickman, Doug (Editor)

    1998-01-01

    This volume chronicles the proceedings of the 1998 NASA University Research Centers Technical Conference (URC-TC '98), held on February 22-25, 1998, in Huntsville, Alabama. The University Research Centers (URCs) are multidisciplinary research units established by NASA at 11 Historically Black Colleges or Universities (HBCUs) and 3 Other Minority Universities (OMUs) to conduct research work in areas of interest to NASA. The URC Technical Conferences bring together the faculty members and students from the URCs with representatives from other universities, NASA, and the aerospace industry to discuss recent advances in their fields.

  1. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  2. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
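
    Conventional TDA, the baseline the FTDA improves upon, is easy to sketch when the period is an exact integer number of samples (the case with no period cutting error; the signal and noise levels below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

period, n_periods = 200, 400
phase = 2 * np.pi * np.arange(period) / period
clean = np.sin(phase) + 0.5 * np.sin(3 * phase)       # periodic component
signal = np.tile(clean, n_periods) + rng.normal(0.0, 1.0, period * n_periods)

# Synchronous (time domain) average: slice into periods and average them
tda = signal.reshape(n_periods, period).mean(axis=0)

err_before = np.std(signal[:period] - clean)   # noise level in one raw period
err_after = np.std(tda - clean)                # residual noise after TDA
print(err_before / err_after)                  # roughly sqrt(400) = 20
```

    Averaging N periods attenuates uncorrelated noise by about sqrt(N); the FTDA's contribution is to retain this benefit when the period is not an integer number of samples and individual harmonics of the comb filter must be adjusted.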

  3. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High order horizontal diffusion of the form K∇^2m is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high order diffusion in global finite difference models. The method expresses the high order diffusion equation as a sequence of equations involving ∇^2. The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.
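
    The reduction to second-order problems can be illustrated in one dimension (a simplified analogue of the paper's latitude/longitude solver; the grid size and coefficients are arbitrary): an implicit step of fourth-order diffusion, (I + K dt L^2) u_new = u_old with L a discrete Laplacian, splits into two successive second-order solves via I + nu L^2 = (I - i sqrt(nu) L)(I + i sqrt(nu) L), nu = K dt.

```python
import numpy as np

n, K, dt = 128, 5.0, 1.0
nu = K * dt

# Standard 3-point periodic Laplacian (unit grid spacing)
L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1)))
I = np.eye(n)
a = 1j * np.sqrt(nu)    # a**2 = -nu, so (I - a*L)(I + a*L) = I + nu*L@L

def implicit_step(u):
    # two successive second-order solves instead of one fourth-order solve
    w = np.linalg.solve(I - a * L, u.astype(complex))
    return np.linalg.solve(I + a * L, w).real

j = np.arange(n)
u = np.sin(2 * np.pi * j / n) + np.sin(2 * np.pi * 32 * j / n)
u_new = implicit_step(u)

amp0 = np.abs(np.fft.fft(u))
amp1 = np.abs(np.fft.fft(u_new))
print(amp1[1] / amp0[1], amp1[32] / amp0[32])  # low mode kept, high mode damped
```

    Each factor is a Helmholtz-type problem, tridiagonal in one dimension, which is what makes the latitude solves in the paper's scheme cheap; the scale selectivity shows up directly in the two amplitude ratios printed above.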

  4. Technical report series on global modeling and data assimilation. Volume 6: A multiyear assimilation with the GEOS-1 system: Overview and results

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Schubert, Siegfried; Rood, Richard; Park, Chung-Kyu; Wu, Chung-Yu; Kondratyeva, Yelena; Molod, Andrea; Takacs, Lawrence; Seablom, Michael; Higgins, Wayne

    1995-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center has produced a multiyear global assimilated data set with version 1 of the Goddard Earth Observing System Data Assimilation System (GEOS-1 DAS). One of the main goals of this project, in addition to benchmarking the GEOS-1 system, was to produce a research quality data set suitable for the study of short-term climate variability. The output, which is global and gridded, includes all prognostic fields and a large number of diagnostic quantities such as precipitation, latent heating, and surface fluxes. Output is provided four times daily with selected quantities available eight times per day. Information about the observations input to the GEOS-1 DAS is provided in terms of maps of spatial coverage, bar graphs of data counts, and tables of all time periods with significant data gaps. The purpose of this document is to serve as a users' guide to NASA's first multiyear assimilated data set and to provide an early look at the quality of the output. Documentation is provided on all the data archives, including sample read programs and methods of data access. Extensive comparisons are made with the corresponding operational European Center for Medium-Range Weather Forecasts analyses, as well as various in situ and satellite observations. This document is also intended to alert users of the data about potential limitations of assimilated data, in general, and the GEOS-1 data, in particular. Results are presented for the period March 1985-February 1990.

  5. A database of age-appropriate average MRI templates.

    PubMed

    Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze

    2016-01-01

    This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3-month intervals through 1 year, 6-month intervals through 19.5 years, and 5-year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for use as segmentation priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).

  6. The global frequency-wave number spectrum of oceanic variability estimated from TOPEX/POSEIDON altimetric measurements. Volume 100, No. C12; The Journal of Geophysical Research

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1995-01-01

    Two years of altimetric data from the TOPEX/POSEIDON spacecraft have been used to produce preliminary estimates of the space and time spectra of global variability for both sea surface height and slope. The results are expressed both as degree variances from spherical harmonic expansions and in along-track wavenumbers. Simple analytic approximations, in terms of piecewise power laws and Padé fractions, are provided for comparison with independent measurements and for easy use of the results. A number of uses of such spectra exist, including the possibility of combining the altimetric data with other observations, predictions of spatial coherences, and the estimation of the accuracy of apparent secular trends in sea level.

  7. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
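
    A sketch of one common way to write the HVA (an assumed formulation consistent with the abstract, not code from the paper): invert each local velocity through the unit circle (v -> v/|v|^2), take the ordinary vector mean, and invert the result back. For normal components of a single global velocity sampled symmetrically about the motion direction, this recovers the global velocity exactly, while the plain vector average underestimates the speed. The orientations and global velocity below are illustrative.

```python
import numpy as np

def hva(velocities):
    """Harmonic vector average: invert each velocity through the unit
    circle (v -> v/|v|^2), average, then invert the mean back."""
    inv = velocities / np.sum(velocities**2, axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / np.sum(m**2)

v_global = np.array([2.0, 1.0])            # global object velocity

# Local normals at orientations symmetric about the motion direction
angles = np.arctan2(1.0, 2.0) + np.radians([-60, -30, 0, 30, 60])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
local_v = (normals @ v_global)[:, None] * normals   # normal components

print(hva(local_v))                # recovers [2.0, 1.0]
va = local_v.mean(axis=0)          # plain vector average: speed too low
print(np.linalg.norm(va))          # < |v_global| = sqrt(5)
```

    The inversion step weights slow normal components more heavily, exactly compensating the cosine foreshortening of local speeds that the abstract describes.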

  8. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  9. Technical Report Series on Global Modeling and Data Assimilation. Volume 32; Estimates of AOD Trends (2002 - 2012) Over the World's Major Cities Based on the MERRA Aerosol Reanalysis

    NASA Technical Reports Server (NTRS)

    Provencal, Simon; Kishcha, Pavel; Elhacham, Emily; daSilva, Arlindo M.; Alpert, Pinhas; Suarez, Max J.

    2014-01-01

    NASA's Global Modeling and Assimilation Office has extended the Modern-Era Retrospective Analysis for Research and Application (MERRA) tool with five atmospheric aerosol species (sulfates, organic carbon, black carbon, mineral dust and sea salt). The resulting aerosol reanalysis is known as MERRAero. This study analyses a ten-year period (July 2002 - June 2012) of the MERRAero aerosol reanalysis, applied to the study of aerosol optical depth (AOD) and its trends for the aforementioned aerosol species over the world's major cities (those with a population of over 2 million inhabitants). We found that the proportion of each aerosol species in total AOD exhibited a geographical dependence. Cities in industrialized regions (North America, Europe, central and eastern Asia) are characterized by a strong proportion of sulfate aerosols. Organic carbon aerosols are dominant over cities located in regions where biomass burning frequently occurs (South America and southern Africa). Mineral dust dominates other aerosol species in cities located in proximity to the major deserts (northern Africa and western Asia). Sea salt aerosols are prominent in coastal cities but are the dominant aerosol species in very few of them. AOD trends are declining over cities in North America, Europe and Japan as a result of effective air quality regulation. By contrast, the economic boom in China and India has led to increasing AOD trends over most cities in these two highly populated countries. Increasing AOD trends over cities in the Middle East are caused by increasing desert dust.

  10. Advance of East Antarctic outlet glaciers during the Hypsithermal: Implications for the volume state of the Antarctic ice sheet under global warming

    SciTech Connect

    Domack, E.W.; Jull, A.J.T.; Nakao, Seizo

    1991-11-01

    The authors present the first circum-East Antarctic chronology for the Holocene, based on 17 radiocarbon dates generated by the accelerator method. Marine sediments from around East Antarctica contain a consistent, high-resolution record of terrigenous (ice-proximal) and biogenic (open-marine) sedimentation during Holocene time. This record demonstrates that biogenic sedimentation beneath the open-marine environment on the continental shelf has been restricted to approximately the past 4 ka, whereas a period of terrigenous sedimentation related to grounding line advance of ice tongues and ice shelves took place between 7 and 4 ka. An earlier period of open-marine (biogenic sedimentation) conditions following the late Pleistocene glacial maximum is recognized from the Prydz Bay (Ocean Drilling Program) record between 10.7 and 7.3 ka. Clearly, the response of outlet systems along the periphery of the East Antarctic ice sheet during the mid-Holocene was expansion. This may have been a direct consequence of climate warming during an Antarctic Hypsithermal. Temperature-accumulation relations for the Antarctic indicate that warming will cause a significant increase in accumulation rather than in ablation. Models that predict a positive mass balance (growth) of the Antarctic ice sheet under global warming are supported by the mid-Holocene data presented herein.

  11. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the scheme computes clear-sky fluxes and cooling rates very accurately from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  12. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission - Tropics B (PEM-Tropics B). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season, when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation, and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  13. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission-Tropics B (PEM-Tropics B). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season, when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation, and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  14. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  15. Synthesizing average 3D anatomical shapes using deformable templates

    NASA Astrophysics Data System (ADS)

    Christensen, Gary E.; Johnson, Hans J.; Haller, John W.; Melloy, Jenny; Vannier, Michael W.; Marsh, Jeffrey L.

    1999-05-01

    A major task in diagnostic medicine is to determine whether an individual has a normal or abnormal anatomy by examining medical images such as MRI, CT, etc. Unfortunately, there are few quantitative measures that a physician can use to discriminate between normal and abnormal besides a handful of length, width, height, and volume measurements. In fact, there is no definition or picture of what normal anatomical structures, such as the brain, look like, let alone normal anatomical variation. The goal of this work is to synthesize average 3D anatomical shapes using deformable templates. We present a method for empirically estimating the average shape and variation of a set of 3D medical image data sets collected from a homogeneous population of topologically similar anatomies. Results are shown for synthesizing the average brain image volume from a set of six normal adults and the average skull/head image volume from a set of five 3- to 4-month-old infants with sagittal synostosis.

  16. A procedure to average 3D anatomical structures.

    PubMed

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface-delineating wires represent high-curvature crestlines. By adding tile boundaries in flatter areas, the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape-preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that represents the source images well and may be useful clinically as a deformable model or for animation.

  17. Implementation of the NCAR Community Land Model (CLM) in the NASA/NCAR finite-volume Global Climate Model (fvGCM)

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon D.; Wang, Guiling; Chern, Jiundar; Bosilovich, Michael G.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2002-01-01

    In this study, the NCAR CLM version 2.0 land-surface model was integrated into the NASA/NCAR fvGCM. The CLM was developed collaboratively by an open interagency/university group of scientists and is based on well-proven physical parameterizations and numerical schemes that combine the best features of BATS, NCAR-LSM, and IAP94. The CLM design is a one-dimensional point model with one vegetation layer, along with sub-grid scale tiles. The features of the CLM include 10 uneven soil layers with water, ice, and temperature states in each soil layer, and five snow layers with water flow, refreezing, compaction, and aging allowed. In addition, the CLM utilizes two-stream canopy radiative transfer, the Bonan lake model, and topographically enhanced streamflow based on TOPMODEL. The DAO fvGCM uses a genuinely conservative Flux-Form Semi-Lagrangian transport algorithm along with terrain-following Lagrangian control-volume vertical coordinates. The physical parameterizations are based on the NCAR Community Atmosphere Model (CAM-2). For our purposes, the fvGCM was run at 2 deg x 2.5 deg horizontal resolution with 55 vertical levels. The 10-year climate from the fvGCM with CLM2 was intercompared with the climate from the fvGCM with LSM, ECMWF, and NCEP. We concluded that the incorporation of CLM2 did not significantly impact the fvGCM climate from that of LSM. The most striking difference was the warm bias in the CLM2 surface skin temperature over desert regions. We determined that the warm bias can be partially attributed to the value of the drag coefficient for the soil under the canopy, which was too small, resulting in a decoupling between the ground surface and the canopy. We also discovered that the canopy interception was high compared to observations in the Amazon region. A number of experiments were then performed focused on implementing model improvements. In order to correct the warm bias, the drag coefficient for the soil under the canopy was made a function of LAI (Leaf Area Index).

  18. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  19. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  20. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  1. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  2. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta rays and the entry of delta rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values of Q(sub ave)(y) for trapped protons, we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
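The LET-dependent quality factor Q(sub ave)(LET) referenced in this record is conventionally evaluated with the piecewise Q(L) relation of ICRP Publication 60. As an illustrative aside (the two-component spectrum and its dose weights below are made up, not the STS-89 or GCR data), a minimal sketch of a dose-weighted average over an LET spectrum:

```python
def quality_factor(let):
    """ICRP Publication 60 quality factor Q(L); LET in keV/micrometer."""
    if let < 10.0:
        return 1.0
    elif let <= 100.0:
        return 0.32 * let - 2.2
    else:
        return 300.0 / let ** 0.5

def average_quality_factor(lets, dose_fractions):
    """Dose-weighted average quality factor over a discrete LET spectrum."""
    total = sum(dose_fractions)
    return sum(quality_factor(l) * d
               for l, d in zip(lets, dose_fractions)) / total

# Illustrative two-component spectrum (weights are hypothetical):
q_ave = average_quality_factor([1.0, 50.0], [0.8, 0.2])   # roughly 3.6
```

In practice the LET distribution would come from a radiation transport model rather than the two placeholder bins used here.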

  3. NASA Global Hawk Overview

    NASA Technical Reports Server (NTRS)

    Naftel, Chris

    2014-01-01

    The NASA Global Hawk Project is supporting Earth Science research customers. These customers include US Government agencies, civilian organizations, and universities. The combination of the Global Hawk's range, endurance, altitude, payload power, payload volume, and payload weight capabilities separates the Global Hawk platform from all other platforms available to the science community. This presentation includes an overview of the concept of operations and an overview of the completed science campaigns. In addition, the future science plans using the NASA Global Hawk System will be presented.

  4. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been achieved only for a limited number of special fields. Moreover, approximate averaging methods (for example, the convergence behavior and accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. In the common case of a stochastically homogeneous conductivity field, we present a new, exactly averaged, nonlocal basic equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can derive in the same way, for three-dimensional and two-dimensional flow, the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
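As a point of contrast to the general nonlocal result described in this record, one case where exact averaging is elementary is steady one-dimensional Darcy flow through a series of random conductivities: the exact effective conductivity is the harmonic mean, which Jensen's inequality places below the geometric and arithmetic means. A minimal sketch (the lognormal field below is illustrative, not the paper's):

```python
import math
import random

random.seed(0)

# Hypothetical lognormal conductivity field on a 1D series of N cells.
N = 10_000
logK = [random.gauss(0.0, 1.0) for _ in range(N)]
K = [math.exp(x) for x in logK]

# Exact effective conductivity for flow in series: the harmonic mean.
harmonic = N / sum(1.0 / k for k in K)
# Arithmetic mean: exact only for flow in parallel (an upper bound here).
arithmetic = sum(K) / N
# Geometric mean: the classic 2D effective-conductivity result.
geometric = math.exp(sum(logK) / N)
```

The ordering harmonic <= geometric <= arithmetic illustrates why naive (arithmetic) averaging of conductivity overestimates series flow.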

  5. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  6. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  7. Chesapeake Bay Hypoxic Volume Forecasts and Results

    USGS Publications Warehouse

    Evans, Mary Anne; Scavia, Donald

    2013-01-01

    Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxic volume forecast is 6.1 km3, slightly smaller than the average for the period of record and almost the same as in 2012. The late July 2013 measured volume was 6.92 km3.

  8. Chesapeake Bay hypoxic volume forecasts and results

    USGS Publications Warehouse

    Scavia, Donald; Evans, Mary Anne

    2013-01-01

    The 2013 Forecast - Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxic volume forecast is 6.1 km3, slightly smaller than the average for the period of record and almost the same as in 2012. The late July 2013 measured volume was 6.92 km3.
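The published forecast comes from Scavia's calibrated biophysical model, not a simple regression. Purely to illustrate how a January-May nitrogen load translates into a mid-summer volume forecast, here is a sketch using a least-squares fit to made-up calibration pairs (the loads and volumes below are hypothetical, chosen only so the numbers land in a plausible range):

```python
# Hypothetical (load, hypoxic-volume) calibration pairs -- illustrative only,
# not the actual Chesapeake Bay model data.
loads = [120_000.0, 150_000.0, 180_000.0, 210_000.0]   # kg N/day, Jan-May avg
volumes = [4.5, 5.7, 6.8, 8.0]                          # km^3, mid-summer

# Ordinary least-squares line through the calibration points.
n = len(loads)
mean_x = sum(loads) / n
mean_y = sum(volumes) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, volumes)) \
        / sum((x - mean_x) ** 2 for x in loads)
intercept = mean_y - slope * mean_x

# Forecast for the observed 2013 Jan-May average load.
forecast = intercept + slope * 162_028.0   # km^3
```

With these placeholder points the prediction comes out near 6.1 km3, but the agreement is by construction; the real model also accounts for hydrology and stratification.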

  9. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  12. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  13. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  14. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  15. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  16. Natural look in volume restoration.

    PubMed

    Lupo, Mary P

    2008-09-01

    Filling and volumizing injection procedures are currently widely used for facial augmentation and re-establishing a youthful appearance. Aesthetic physicians have advanced from the practice of treating single lines and wrinkles towards filling large facial areas to globally restore natural facial contours and meet patient demand for nonsurgical rejuvenation. This review describes the different categories of fillers and volumizers based on their duration of action and ability to create a natural looking effect; they can be broadly classified as temporary or long-lasting biodegradable agents, or permanent nonbiodegradable agents. Temporary fillers are effective to correct lines and wrinkles, but may not adequately meet the need for global facial rejuvenation and volume replacement in a long-term, cost-efficient manner. Permanent fillers for global restoration pose the issue of long-term safety, and may not be compatible with changes in facial architecture with continued aging. Longer lasting volumizers provide patients with a durable, effective option for the restoration of facial volume and the re-establishment of youthful facial contours. Temporary fillers and volumizers may also be used in combination to provide a wide source of options for the global restoration and rejuvenation of the face.

  17. Contribution of small glaciers to global sea level

    USGS Publications Warehouse

    Meier, M.F.

    1984-01-01

    Observed long-term changes in glacier volume and hydrometeorological mass balance models yield data on the transfer of water from glaciers, excluding those in Greenland and Antarctica, to the oceans. The average observed volume change for the period 1900 to 1961 is scaled to a global average by use of the seasonal amplitude of the mass balance. These data are used to calibrate the models to estimate the changing contribution of glaciers to sea level for the period 1884 to 1975. Although the error band is large, these glaciers appear to account for a third to half of the observed rise in sea level, approximately that fraction not explained by thermal expansion of the ocean.
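The conversion from a glacier volume change to its sea-level equivalent, which underlies estimates like this one, is straightforward arithmetic. A sketch with standard density values and the global ocean area (the 1000 km3 input is illustrative, not Meier's estimate):

```python
RHO_ICE = 917.0          # kg/m^3, glacier ice
RHO_WATER = 1000.0       # kg/m^3, freshwater equivalent
OCEAN_AREA = 3.61e14     # m^2, approximate global ocean surface

def sea_level_rise_mm(ice_volume_km3):
    """Sea-level equivalent (mm) of a given volume of ice lost from land."""
    water_volume_m3 = ice_volume_km3 * 1e9 * RHO_ICE / RHO_WATER
    return water_volume_m3 / OCEAN_AREA * 1000.0

# Illustrative: 1000 km^3 of ice corresponds to roughly 2.5 mm of sea level.
rise = sea_level_rise_mm(1000.0)
```

This omits second-order effects (ocean area change, spatial fingerprints), which is consistent with the global-average framing of the record above.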

  18. Is Global Warming Accelerating?

    NASA Astrophysics Data System (ADS)

    Shukla, J.; Delsole, T. M.; Tippett, M. K.

    2009-12-01

    A global pattern that fluctuates naturally on decadal time scales is identified in climate simulations and observations. This newly discovered component, called the Global Multidecadal Oscillation (GMO), is related to the Atlantic Multidecadal Oscillation and shown to account for a substantial fraction of decadal fluctuations in the observed global average sea surface temperature. IPCC-class climate models generally underestimate the variance of the GMO, and hence underestimate the decadal fluctuations due to this component of natural variability. Decomposing observed sea surface temperature into a component due to anthropogenic and natural radiative forcing plus the GMO reveals that most multidecadal fluctuations in the observed global average sea surface temperature can be accounted for by these two components alone. The fact that the GMO varies naturally on multidecadal time scales implies that it can be predicted with some skill on decadal time scales, which provides a scientific rationale for decadal predictions. Furthermore, the GMO is shown to account for about half of the warming in the last 25 years and hence a substantial fraction of the recent acceleration in the rate of increase in global average sea surface temperature. Nevertheless, in terms of the global average “well-observed” sea surface temperature, the GMO can account for only about 0.1° C in transient, decadal-scale fluctuations, not the century-long 1° C warming that has been observed during the twentieth century.

  19. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
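The averaged quadratic (H2-norm) cost described in this record can be sketched for a single lightly damped structural mode by computing the exact H2 cost from the controllability Gramian and averaging it over a few sampled natural frequencies. The parameter values are illustrative, not the MIT testbed's, and the Lyapunov solver below is a generic Kronecker-product formulation:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 via the Kronecker-product formulation."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A, I) + np.kron(I, A)          # row-major vec convention
    return np.linalg.solve(M, (-Q).reshape(-1)).reshape(n, n)

def h2_cost(wn, zeta=0.02):
    """Squared H2 norm (quadratic cost) of one lightly damped mode."""
    A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    P = lyap(A, B @ B.T)                        # controllability Gramian
    return float(C @ P @ C.T)                   # equals 1/(4*zeta*wn^3)

# Average the cost over a sampled set of uncertain natural frequencies,
# a crude stand-in for the paper's parameterized uncertainty set.
samples = [0.9, 1.0, 1.1]
avg_cost = sum(h2_cost(w) for w in samples) / len(samples)
```

A controller synthesized against avg_cost rather than a single nominal h2_cost is the basic idea behind the average-cost robustness criterion; the synthesis step itself is beyond this sketch.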

  20. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample size (the effective number of gaze directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head rotation and cone rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
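The equivalent noise model used in this record predicts discrimination thresholds from two parameters, internal noise and effective sample size. A minimal sketch of the standard functional form (the parameter values are illustrative, not the paper's fitted estimates):

```python
import math

def predicted_threshold(sigma_ext, sigma_int, n_samples):
    """Standard equivalent-noise prediction: observed threshold as a
    function of external noise, internal noise, and samples pooled."""
    return math.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

# Illustrative parameters: internal noise of 5 deg, 4 elements pooled.
sigma_int, n = 5.0, 4
thresholds = [predicted_threshold(s, sigma_int, n)
              for s in (0.0, 5.0, 20.0)]
```

Fitting this curve to thresholds measured at several external-noise levels recovers the two parameters: the zero-noise asymptote pins down sigma_int/sqrt(n), while the high-noise limit, where thresholds rise as sigma_ext/sqrt(n), pins down the effective sample size.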

  1. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  2. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample size (the effective number of gaze directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head rotation and cone rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  3. Making the Grade? Globalisation and the Training Market in Australia. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    Hall, Richard; Buchanan, John; Bretherton, Tanya; van Barneveld, Kristin; Pickersgill, Richard

    This two-volume document reports on a study of globalization and Australia's training market. Volume 1 begins by examining debate on globalization and industry training in Australia. Discussed next is the study methodology, which involved field studies of the metals and engineering industry in South West Sydney and the Hunter and the information…

  4. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Averaging. 1037.710 Section 1037.710 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW HEAVY-DUTY MOTOR VEHICLES Averaging, Banking, and Trading for Certification §...

  5. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
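The distinction the abstract draws, between averaging the logarithm of the transmission probability and averaging the probability itself, can be illustrated with a toy Monte Carlo model (this is an assumption-laden sketch, not the paper's recurrence relation): model each stack's transmission as a product of independent random per-slab factors and compare the two averages.

```python
import numpy as np

# Toy model: transmission of each random stack is a product of independent
# per-slab factors t_i in (0.5, 1). Not the paper's wave-optics treatment.
rng = np.random.default_rng(1)
n_stacks, n_slabs = 100_000, 20
t = rng.uniform(0.5, 1.0, size=(n_stacks, n_slabs))
T = t.prod(axis=1)                  # transmission probability of each stack

typical = np.exp(np.log(T).mean())  # from averaging the logarithm
mean_T = T.mean()                   # from averaging the probability itself
print(typical, mean_T)
```

By Jensen's inequality the "typical" (log-averaged) transmission is smaller than the direct average, which is why the two quantities must be treated separately.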

  6. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams, and the perceived need for higher and higher grade point averages have raised the anxiety levels felt by many average students. Too much focus is placed on state test scores and college entrance standards, with not enough focus on the true level of the students. The author contends that…

  7. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  8. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  9. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  10. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  11. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  12. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  13. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  14. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  15. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  16. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  17. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  18. Averaged equations for distributed Josephson junction arrays

    NASA Astrophysics Data System (ADS)

    Bennett, Matthew; Wiesenfeld, Kurt

    2004-06-01

    We use an averaging method to study the dynamics of a transmission line studded by Josephson junctions. The averaged system is used as a springboard for studying experimental strategies which rely on spatial non-uniformity to achieve enhanced synchronization. A reduced model for the near resonant case elucidates in physical terms the key to achieving stable synchronized dynamics.

  19. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., when the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated with the non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.

  20. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H{sub 0}, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  1. A thermochemically derived global reaction mechanism for detonation application

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Yang, J.; Sun, M.

    2012-07-01

    A 4-species 4-step global reaction mechanism for detonation calculations is derived from detailed chemistry through a thermochemical approach. Reaction species involved in the mechanism and their corresponding molecular weight and enthalpy data are derived from the real equilibrium properties. By substituting these global species into the results of constant volume explosion and examining the evolution process of these global species under varied conditions, reaction paths and corresponding rates are summarized and formulated. The proposed mechanism is first validated against the original chemistry through calculations of the CJ detonation wave, adiabatic constant volume explosion, and the steady reaction structure after a strong shock wave. Good agreement in both reaction scales and averaged thermodynamic properties has been achieved. Two sets of reaction rates based on different detailed chemistries are then examined and applied to numerical simulations of two-dimensional cellular detonations. Preliminary results and a brief comparison between the two mechanisms are presented. The proposed global mechanism is found to be economical in computation and also competent in describing the overall characteristics of the detonation wave. Though only the stoichiometric acetylene-oxygen mixture is investigated in this study, the method used to derive such a global reaction mechanism possesses a certain generality for premixed reactions of most lean hydrocarbon mixtures.

  2. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
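The biallelic relationship this abstract discusses can be made concrete with a small numerical check (the genotypic values and allele frequencies below are hypothetical). The average excess of an allele is the mean genotypic value of gametes carrying it minus the population mean; the average effect comes from the least-squares regression of genotypic value on allele count. Under random mating (Hardy-Weinberg) the two coincide:

```python
import numpy as np

# Hypothetical biallelic locus under Hardy-Weinberg equilibrium.
p, q = 0.3, 0.7                      # frequencies of alleles A1, A2
G = np.array([1.0, 0.6, 0.0])        # genotypic values of A1A1, A1A2, A2A2
x = np.array([2.0, 1.0, 0.0])        # copies of A1 per genotype
w = np.array([p**2, 2*p*q, q**2])    # HWE genotype frequencies

mean_G = np.sum(w * G)

# Average excess of A1: mean value of A1-bearing gametes minus population mean
excess_A1 = p * G[0] + q * G[1] - mean_G

# Average effect of A1: weighted regression of G on allele count gives the
# substitution effect alpha; then alpha_1 = q * alpha.
dx = x - np.sum(w * x)
alpha = np.sum(w * dx * (G - mean_G)) / np.sum(w * dx**2)
effect_A1 = q * alpha

print(excess_A1, effect_A1)          # equal under random mating
```

Departures from random mating are exactly where the two quantities separate, which is the distinction the generalized contrasts in the paper are built to handle.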

  3. Averaged model for momentum and dispersion in hierarchical porous media

    NASA Astrophysics Data System (ADS)

    Chabanon, Morgan; David, Bertrand; Goyeau, Benoît.

    2015-08-01

    Hierarchical porous media are multiscale systems, where different characteristic pore sizes and structures are encountered at each scale. Focusing the analysis on three pore scales, an upscaling procedure based on the volume-averaging method is applied twice, in order to obtain a macroscopic model for momentum and diffusion-dispersion. The effective transport properties at the macroscopic scale (permeability and dispersion tensors) are found to be explicitly dependent on the mesoscopic ones. Closure problems associated with these averaged properties are numerically solved at the different scales for two types of bidisperse porous media. Results show a strong influence of the lower-scale porous structures and flow intensity on the macroscopic effective transport properties.

  4. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  5. Quantum volume

    NASA Astrophysics Data System (ADS)

    Ryabov, V. A.

    2015-08-01

    Quantum systems in a mechanical embedding, the breathing mode of small particles, optomechanical systems, etc., are far from a complete list of examples in which the volume exhibits quantum behavior. The traditional treatment regards strain in small systems as the result of a collective movement of particles, rather than treating the volume as an independent dynamical variable. The aim of this work is to show that some problems here can be essentially simplified by introducing periodic boundary conditions. In this case, the volume is considered as an independent dynamical variable driven by the internal pressure. For this purpose, the concept of quantum volume based on Schrödinger's equation on the 𝕋³ manifold is proposed. It is used to explore several 1D model systems: an ensemble of free particles under external pressure, a quantum manometer, and a quantum breathing mode. In particular, the influence of free-particle pressure on a quantum oscillator is determined. It is also shown that the correction to the spectrum of the breathing mode due to internal degrees of freedom is determined by the off-diagonal matrix elements of the quantum stress. A new treatment, not relying on the “force” theorem, is proposed for the quantum stress tensor. In the general case of flexible quantum 3D dynamics, quantum deformations of different types might be introduced similarly to the monopole mode.

  6. Global Fluency.

    ERIC Educational Resources Information Center

    Tosti, Donald T.

    1999-01-01

    Defines global fluency as a facility with cultural behaviors that help an organization thrive in an ever-changing global business environment; and discusses business culture, global culture, an example of a change effort at a global company, leadership values, company values, and defining global values and practices. (Author/LRW)

  7. Discrete Averaging Relations for Micro to Macro Transition

    NASA Astrophysics Data System (ADS)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.

  8. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752

  9. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  10. Average Shape of Transport-Limited Aggregates

    NASA Astrophysics Data System (ADS)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  11. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  12. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  13. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  14. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  15. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  16. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  17. Average Passenger Occupancy (APO) in Your Community.

    ERIC Educational Resources Information Center

    Stenstrup, Al

    1995-01-01

    Provides details of an activity in which students in grades 4-10 determine the Average Passenger Occupancy (APO) in their community and develop, administer, and analyze a survey to determine attitudes toward carpooling. (DDR)

  18. Rotational averaging of multiphoton absorption cross sections

    SciTech Connect

    Friese, Daniel H. Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  19. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  20. Averaging Sampled Sensor Outputs To Detect Failures

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.

    1990-01-01

    Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
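The sampling-and-averaging idea described in this record can be sketched in a few lines: smooth each sensor's noisy samples with consecutive block averages, then flag a failure when the smoothed value drifts outside a tolerance band. The nominal value and threshold below are hypothetical, not from the NASA report:

```python
import numpy as np

# Consecutive (block) averages of a sampled sensor signal.
def block_average(samples, block=10):
    n = len(samples) // block * block
    return np.asarray(samples[:n]).reshape(-1, block).mean(axis=1)

# Flag blocks whose smoothed value leaves a tolerance band around nominal.
def failed_flags(samples, nominal=5.0, tol=1.0):
    return np.abs(block_average(samples) - nominal) > tol

rng = np.random.default_rng(2)
healthy = 5.0 + 0.3 * rng.normal(size=200)                     # noisy, nominal
failed = np.concatenate([healthy[:100], healthy[100:] + 2.0])  # step fault

print(failed_flags(healthy).any(), failed_flags(failed).any())
```

Averaging first shrinks the noise by roughly the square root of the block size, so a threshold that would constantly false-alarm on raw samples becomes reliable on the smoothed signal.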

  1. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  2. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
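A software analogue of the curve-averaging operation is straightforward: sample each cycle at 2048 points and average the 100 cycles point-by-point. The synthetic waveform below stands in for an engine parameter and is purely illustrative:

```python
import numpy as np

# Cycle-averaging: 100 cycles, each described by 2048 discrete points.
rng = np.random.default_rng(3)
n_cycles, n_points = 100, 2048
theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)

# Synthetic cyclic waveform with cycle-to-cycle variation (stand-in for a
# real engine parameter such as cylinder pressure).
cycles = np.sin(theta) + 0.2 * rng.normal(size=(n_cycles, n_points))

average_curve = cycles.mean(axis=0)   # one 2048-point average curve
print(average_curve.shape)
```

Point-by-point averaging over 100 cycles reduces the cycle-to-cycle noise by a factor of about 10, which is what makes the averaged curve usable for comparing engine parameters.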

  3. Time-averaging water quality assessment

    SciTech Connect

    Reddy, L.S.; Ormsbee, L.E.; Wood, D.J.

    1995-07-01

    While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.

  4. Globalization and global health.

    PubMed

    Berlinguer, G

    1999-01-01

    Along with the positive or negative consequences of the globalization of health, we can consider global health as a goal, responding to human rights and to common interests. History tells us that after the "microbial unification" of the world, which began in 1492, over three centuries elapsed before the recognition of common risks and attempts to cope with them in a cross-boundary effort. In the 19th and 20th centuries, the struggle against epidemics united countries, world health became a common goal, and considerable results were achieved. However, in recent decades the notion of health as a cornerstone of economic development has been replaced by the idea that public health and health services are an obstacle to the wealth of nations. Meanwhile, new common threats are growing: among them, the exacerbation of old infections and emergence of new ones, the impact of environmental changes, drug traffic on a world scale, and destructive and self-destructive violence. New and stronger empirical motives relate the interests of peoples to universal rights and to global health. The author concludes with some proposals for policies.

  5. Emergency department visit volume variability

    PubMed Central

    Kang, Seung Woo; Park, Hyun Soo

    2015-01-01

    Objective One of the most important and basic variables in emergency department (ED) operations is patient visit volumes. This variable is usually predicted on the basis of the average ED patient visit volume over a certain period. However, ED patient visit variability is poorly understood. Therefore, we evaluated ED patient visit variability in order to determine if the average can be used to operate EDs. Methods Nationwide ED patient visit data were obtained from the standard emergency patient data of the National Emergency Department Information System. The data are transferred automatically by 141 EDs nationwide. The hourly ED visit volumes over 365 days were determined, and the variability was analyzed to evaluate the representativeness of the average. Results A total of 4,672,275 patient visits were collected in 2013. The numbers of daily ED patient visits were widely dispersed and positively skewed rather than symmetric and narrow with a normal distribution. Conclusion The daily variability of ED visits is large and does not follow a normal distribution. The average visit volume does not adequately represent ED operation. PMID:27752589
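The dispersion and positive skew the authors report can be checked with ordinary summary statistics. A minimal sketch on synthetic counts (not the NEDIS data):

```python
import statistics

def daily_visit_summary(daily_visits):
    """Mean, sample standard deviation, and adjusted Fisher-Pearson
    skewness of a series of daily visit counts.  Positive skewness
    indicates a long right tail, as the study reports for ED visits."""
    n = len(daily_visits)
    mean = statistics.fmean(daily_visits)
    sd = statistics.stdev(daily_visits)
    skew = sum(((x - mean) / sd) ** 3 for x in daily_visits) * n / ((n - 1) * (n - 2))
    return mean, sd, skew
```

A series of mostly quiet days punctuated by a few surge days yields a clearly positive skewness, so the mean alone is a poor planning number.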

  6. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
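The "previous decade" smoothing is an ordinary trailing moving average of the misery index (inflation plus unemployment). A sketch with illustrative numbers, not the paper's data:

```python
def trailing_average(series, window):
    """Trailing moving average: element i is the mean of the `window`
    values up to and including position i (None until enough history
    has accumulated)."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = series[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

# Economic misery index = inflation rate + unemployment rate, per year.
# These annual figures are made up for illustration.
inflation = [3.0, 4.5, 6.1, 11.0, 9.1, 5.8, 6.5, 7.6, 11.3, 13.5, 10.3]
unemployment = [4.9, 5.9, 5.6, 4.9, 5.6, 8.5, 7.7, 7.1, 6.1, 5.8, 7.1]
misery = [i + u for i, u in zip(inflation, unemployment)]
decade_avg = trailing_average(misery, 10)
```

Each non-None entry of `decade_avg` is the quantity the literary misery index of that year would be correlated against.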

  7. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  8. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  9. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
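A long-only cross-over rule with a dynamic trailing stop of the general kind described can be sketched as follows. The window length and stop fraction are illustrative choices, not the paper's calibration:

```python
def crossover_with_trailing_stop(prices, window=20, stop_frac=0.05):
    """Long-only price/SMA cross-over with a dynamic trailing stop.

    Enter when price crosses above its simple moving average; while in
    the trade, ratchet a stop level up to (1 - stop_frac) times the
    highest price seen so far, and exit when price falls below it.
    Returns a list of (index, "buy"/"sell") signals.
    """
    position, stop, high = False, None, None
    signals = []
    for i, p in enumerate(prices):
        if i + 1 >= window:
            sma = sum(prices[i + 1 - window : i + 1]) / window
            if not position and p > sma:
                position, high = True, p
                stop = high * (1 - stop_frac)
                signals.append((i, "buy"))
                continue
        if position:
            high = max(high, p)
            stop = max(stop, high * (1 - stop_frac))  # stop never moves down
            if p < stop:
                position = False
                signals.append((i, "sell"))
    return signals
```

On a flat-then-rising-then-falling price path the rule buys on the upward cross and sells when the drop exceeds the trailing threshold, capturing most of the run-up.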

  10. Successive averages of firmly nonexpansive mappings

    SciTech Connect

    Flam, S.

    1994-12-31

    The problem considered here is to find common fixed points of (possibly infinitely) many firmly nonexpansive selfmappings in a Hilbert space. For this purpose we use averaged relaxations of the original mappings, the averages being Bochner integrals with respect to chosen measures. Judicious choices of such measures serve to enhance the convergence towards common fixed points. Since projection operators onto closed convex sets are firmly nonexpansive, the methods explored are applicable for solving convex feasibility problems. In particular, by varying the measures our analysis encompasses recent developments of so-called block-iterative algorithms. We demonstrate convergence theorems which cover and extend many known results.

  11. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; averaging them therefore makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
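The partial standard deviations Cade refers to (following Bring) can be computed directly from the design matrix. A sketch, assuming the usual definition s*_j = s_j · sqrt(1/VIF_j) · sqrt((n-1)/(n-p)):

```python
import numpy as np

def partial_sds(X):
    """Partial standard deviations for the columns of X.

    For each predictor j, VIF_j = 1/(1 - R_j^2), where R_j^2 comes from
    regressing column j on the remaining columns (with intercept), and
    s*_j = s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)).  A sketch of the
    standardization Cade discusses, not his code.
    """
    n, p = X.shape
    out = []
    for j in range(p):
        xj = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, xj, rcond=None)
        resid = xj - A @ coef
        r2 = 1.0 - resid.var() / xj.var()
        vif = 1.0 / max(1.0 - r2, 1e-12)
        sj = xj.std(ddof=1)
        out.append(sj * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p)))
    return np.array(out)
```

For orthogonal predictors the VIFs are 1 and the partial standard deviations reduce to the ordinary ones up to the degrees-of-freedom factor; with multicollinearity they shrink, flagging the changing scales across models.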

  12. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  13. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; averaging them therefore makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  14. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  15. Global health and justice.

    PubMed

    Dwyer, James

    2005-10-01

    In Australia, Japan, Sweden, and Switzerland, the average life expectancy is now greater than 80 years. But in Angola, Malawi, Sierra Leone, and Zimbabwe, the average life expectancy is less than 40 years. The situation is even worse than these statistics suggest because average figures tend to mask inequalities within countries. What are we to make of a world with such unequal health prospects? What does justice demand in terms of global health? To address these problems, I characterize justice at the local level, at the domestic or social level, and at the international or global level. Because social conditions, structures, and institutions have such a profound influence on the health of populations, I begin by focusing attention on the relationship between social justice and health prospects. Then I go on to discuss health prospects and the problem of global justice. Here I distinguish two views: a cosmopolitan view and a political view of global justice. In my account of global justice, I modify and use the political view that John Rawls developed in The Law of Peoples. I try to show why an adequate political account must include three duties: a duty not to harm, a duty to reconstruct international arrangements, and a duty to assist.

  16. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  17. Average configuration of the induced venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  18. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  19. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  20. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  1. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  2. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  3. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. 'Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers"' (Alice in Wonderland, L. Carroll)

  4. Average entanglement for Markovian quantum trajectories

    SciTech Connect

    Vogelsberger, S.; Spehner, D.

    2010-11-15

    We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.

  5. New applications for high average power beams

    NASA Astrophysics Data System (ADS)

    Neau, E. L.; Turman, B. N.; Patterson, E. L.

    1993-06-01

    The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

  6. From cellular doses to average lung dose.

    PubMed

    Hofmann, W; Winkler-Heil, R

    2015-11-01

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions.

  7. High-average-power exciplex laser system

    NASA Astrophysics Data System (ADS)

    Sentis, M.

    The LUX high-average-power high-PRF exciplex laser (EL) system being developed at the Institut de Mecanique des Fluides de Marseille is characterized, and some preliminary results are presented. The fundamental principles and design criteria of ELs are reviewed, and the LUX components are described and illustrated, including a closed-circuit subsonic wind tunnel and a 100-kW-average power 1-kHz-PRF power pulser providing avalanche-discharge preionization by either an electron beam or an X-ray beam. Laser energy of 50 mJ has been obtained at wavelength 308 nm in the electron-beam mode (14.5 kV) using a 5300/190/10 mixture of Ne/Xe/HCl at pressure 1 bar.

  8. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S.

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  9. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

  10. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.
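In finite dimensions, the idea of subtracting a scaled residual from the current approximation reduces to Richardson-type iteration. A minimal linear sketch (the paper's Hilbert-space methods are more general; the relaxation factor here is an illustrative choice):

```python
import numpy as np

def residual_subtraction_solve(A, b, omega=0.1, iters=500):
    """Fixed-point iteration in the spirit of the abstract: each new
    approximation is the previous one minus a scaled residual,
    x <- x - omega * (A x - b).  Converges when the spectrum of
    (I - omega A) lies inside the unit circle."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x - omega * (A @ x - b)
    return x
```

For a well-conditioned symmetric positive-definite system the iterates contract toward the exact solution at a geometric rate.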

  11. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
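The queue model is easy to simulate: with no overtaking, each car's actual speed is the minimum of the preferred speeds of itself and every car ahead of it. A sketch (not the article's derivation):

```python
def average_queue_velocity(max_speeds):
    """Average actual speed in a no-overtaking queue.

    `max_speeds[0]` is the front car; each following car travels at the
    minimum of its own preferred speed and all preferred speeds ahead.
    """
    actual, current_min = [], float("inf")
    for v in max_speeds:
        current_min = min(current_min, v)
        actual.append(current_min)
    return sum(actual) / len(actual)
```

For preferred speeds [60, 80, 50, 70] the actual speeds are [60, 60, 50, 50], so the queue averages 55 even though the mean preferred speed is 65.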

  12. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  13. Geomagnetic effects on the average surface temperature

    NASA Astrophysics Data System (ADS)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to the cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5° × 0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  14. Annual average radon concentrations in California residences.

    PubMed

    Liu, K S; Hayward, S B; Girman, J R; Moed, B A; Huang, F Y

    1991-09-01

    A study was conducted to determine the annual average radon concentrations in California residences, to determine the approximate fraction of the California population regularly exposed to radon concentrations of 4 pCi/l or greater, and to the extent possible, to identify regions of differing risk for high radon concentrations within the state. Annual average indoor radon concentrations were measured with passive (alpha track) samplers sent by mail and deployed by home occupants, who also completed questionnaires on building and occupant characteristics. For the 310 residences surveyed, concentrations ranged from 0.10 to 16 pCi/l, with a geometric mean of whole-house (bedroom and living room) average concentrations of 0.85 pCi/l and a geometric standard deviation of 1.91. A total of 88,000 California residences (0.8 percent) were estimated to have radon concentrations exceeding 4 pCi/l. When the state was divided into six zones based on geology, significant differences in geometric mean radon concentrations were found between several of the zones. Zones with high geometric means were the Sierra Nevada mountains, the valleys east of the Sierra Nevada, the central valley (especially the southern portion), and Ventura and Santa Barbara Counties. Zones with low geometric means included most coastal counties and the portion of the state from Los Angeles and San Bernardino Counties south.
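The geometric mean and geometric standard deviation used to summarize the survey follow from the log-transformed concentrations, since indoor radon levels are typically approximately lognormal. A sketch (not the study's code):

```python
import math

def geometric_stats(concentrations):
    """Geometric mean and geometric standard deviation of a sample:
    GM = exp(mean(ln x)), GSD = exp(sd(ln x)) with the sample
    (ddof = 1) standard deviation."""
    logs = [math.log(c) for c in concentrations]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / (n - 1)
    return math.exp(mu), math.exp(math.sqrt(var))
```

Note that the GSD is a dimensionless multiplicative spread (always at least 1), which is why the survey reports 1.91 with no units.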

  15. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
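The evidence weighting at the core of Bayesian model averaging can be sketched in a few lines: predictions are combined with weights proportional to each model's evidence, computed here as a numerically stable softmax over log evidences (illustrative, not the authors' neuronal implementation):

```python
import math

def model_average(predictions, log_evidences):
    """Bayesian model averaging sketch: weight each model's prediction
    by its normalized evidence.  Subtracting the max log evidence
    before exponentiating avoids overflow."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]
    s = sum(w)
    w = [x / s for x in w]
    return sum(wi * p for wi, p in zip(w, predictions))
```

With equal evidence the result is the plain mean; as one model's evidence grows, the average shifts toward that model's prediction, trading off accuracy against complexity exactly as the evidence does.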

  16. Fast Optimal Transport Averaging of Neuroimaging Data.

    PubMed

    Gramfort, A; Peyré, G; Cuturi, M

    2015-01-01

    Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is a primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex, and between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of an entropic smoothing approach. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679
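
    A minimal one-dimensional sketch of an entropy-regularized Wasserstein barycenter, computed with a standard iterative Bregman projection scheme; this illustrates the idea of averaging histograms with a transportation metric, and is not the authors' implementation:

```python
import numpy as np

def entropic_barycenter(hists, cost, eps=1e-2, weights=None, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of histograms via
    iterative Bregman projections (a standard textbook scheme)."""
    K = np.exp(-cost / eps)                       # Gibbs kernel
    n, m = len(hists), hists[0].shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    v = np.ones((n, m))
    for _ in range(n_iter):
        u = np.array([h / (K @ v[k]) for k, h in enumerate(hists)])
        # Geometric mean of the projected marginals gives the barycenter
        b = np.exp(sum(w[k] * np.log(K.T @ u[k]) for k in range(n)))
        v = np.array([b / (K.T @ u[k]) for k in range(n)])
    return b

x = np.linspace(0.0, 1.0, 100)
cost = (x[:, None] - x[None, :]) ** 2             # squared ground distance
h1 = np.exp(-((x - 0.25) ** 2) / 5e-4); h1 /= h1.sum()
h2 = np.exp(-((x - 0.75) ** 2) / 5e-4); h2 /= h2.sum()
bary = entropic_barycenter([h1, h2], cost)
```

    Unlike a linear average, which keeps two separate bumps at 0.25 and 0.75, the Wasserstein barycenter places a single bump near the midpoint — the behavior that makes transport averaging attractive for non-overlapping effects.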

  17. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
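
    Averaging many per-cycle phase measurements improves resolution, but a naive arithmetic mean fails near the 0/2π wrap; averaging unit phasors handles the wrap correctly. A sketch of that idea with synthetic measurements (not the phasemeter's actual implementation):

```python
import cmath
import math
import random

def average_phase(samples_rad):
    """Average a sequence of phase measurements (radians) as unit phasors,
    which is immune to wrap-around at the 0/2*pi boundary."""
    s = sum(cmath.exp(1j * p) for p in samples_rad)
    return cmath.phase(s) % (2 * math.pi)

# 10,000 noisy per-cycle measurements, as accumulated in one second at a
# 10 kHz heterodyne frequency; the true phase sits near the wrap point.
true_phase = 0.05
random.seed(1)
samples = [(true_phase + random.gauss(0.0, 0.3)) % (2 * math.pi)
           for _ in range(10_000)]
est = average_phase(samples)
```

    With N independent measurements per second, the averaged estimate improves roughly as 1/sqrt(N), which is why the high measurement rate matters.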

  18. A generalization of averaging theorems for porous medium analysis

    NASA Astrophysics Data System (ADS)

    Gray, William G.; Miller, Cass T.

    2013-12-01

    The contributions of Stephen Whitaker to the rigorous analysis of porous medium flow and transport are built on the use of temporal and spatial averaging theorems applied to phases in representative elementary volumes. Here, these theorems are revisited, common point theorems are considered, extensions of existing theorems are developed to include the effects of lower dimensional entities represented as singularities, and a unified form of the theorems for phases, interfaces, common curves, and common points is established for both macroscale and mixed macroscale-megascale systems. The availability of the full set of theorems facilitates detailed analysis of a variety of porous medium systems. Explicit modeling of the physical processes associated with interfaces, common curves, and common points, as well as the kinematics of these entities, can be undertaken at both the macroscale and megascale based on these theorems.

  19. STREMR: Numerical model for depth-averaged incompressible flow

    NASA Astrophysics Data System (ADS)

    Roberts, Bernard

    1993-09-01

    The STREMR computer code is a two-dimensional model for depth-averaged incompressible flow. It accommodates irregular boundaries and nonuniform bathymetry, and it includes empirical corrections for turbulence and secondary flow. Although STREMR uses a rigid-lid surface approximation, the resulting pressure is equivalent to the displacement of a free surface. Thus, the code can be used to model free-surface flow wherever the local Froude number is 0.5 or less. STREMR uses a finite-volume scheme to discretize and solve the governing equations for primary flow, secondary flow, and turbulence energy and dissipation rate. The turbulence equations are taken from the standard k-Epsilon turbulence model, and the equation for secondary flow is developed herein. Appendices to this report summarize the principal equations, as well as the procedures used for their discrete solution.

  20. Using four-phase Eulerian volume averaging approach to model macrosegregation and shrinkage cavity

    NASA Astrophysics Data System (ADS)

    Wu, M.; Kharicha, A.; Ludwig, A.

    2015-06-01

    This work extends a previous three-phase mixed columnar-equiaxed solidification model to treat the formation of shrinkage cavity by including an additional phase. In the previous model, mixed columnar and equiaxed solidification with consideration of multiphase transport phenomena (mass, momentum, species and enthalpy) was proposed to calculate the as-cast structure, including the columnar-to-equiaxed transition (CET) and the formation of macrosegregation. In order to incorporate the formation of shrinkage cavity, an additional phase, i.e. a gas phase or a covering liquid slag phase, must be considered in addition to the three previously introduced phases (parent melt, solidifying columnar dendrite trunks and equiaxed grains). No mass or species transfer between the new phase and the other three phases is necessary, but the treatment of the momentum and energy exchanges between them is crucially important for the formation of the free surface and shrinkage cavity, which in turn influences the flow field and the formation of segregation. A steel ingot is calculated as a preliminary case to examine the functionality of the model.

  1. Taylor-Aris Dispersion: An Explicit Example for Understanding Multiscale Analysis via Volume Averaging

    ERIC Educational Resources Information Center

    Wood, Brian D.

    2009-01-01

    Although the multiscale structure of many important processes in engineering is becoming more widely acknowledged, making this connection in the classroom is a difficult task. This is due in part because the concept of multiscale structure itself is challenging and it requires the students to develop new conceptual pictures of physical systems,…

  2. Global HRD.

    ERIC Educational Resources Information Center

    1997

    This document contains four papers from a symposium on global human resource development (HRD). "Globalization of Human Resource Management (HRM) in Government: A Cross-Cultural Perspective" (Pan Suk Kim) relates HRM to national cultures and addresses its specific functional aspects with a unique dimension in a global organization. "An…

  3. Global Education.

    ERIC Educational Resources Information Center

    Berkley, June, Ed.

    1982-01-01

    The articles in this collection deal with various methods of global education--education to prepare students to function as understanding and informed citizens of the world. Topics discussed in the 26 articles include: (1) the necessity of global education; (2) global education in the elementary school language arts curriculum; (3) science fiction…

  4. Global Education.

    ERIC Educational Resources Information Center

    Longstreet, Wilma S., Ed.

    1988-01-01

    This issue contains an introduction ("The Promise and Perplexity of Globalism," by W. Longstreet) and seven articles dedicated to exploring the meaning of global education for today's schools. "Global Education: An Overview" (J. Becker) develops possible definitions, identifies objectives and skills, and addresses questions and issues in this…

  5. Fluctuations of wavefunctions about their classical average

    NASA Astrophysics Data System (ADS)

    Benet, L.; Flores, J.; Hernández-Saldaña, H.; Izrailev, F. M.; Leyvraz, F.; Seligman, T. H.

    2003-02-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  6. Collimation of average multiplicity in QCD jets

    NASA Astrophysics Data System (ADS)

    Arleo, François; Pérez Ramos, Redamy

    2009-11-01

    The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.

  7. Average characteristics of partially coherent electromagnetic beams.

    PubMed

    Seshadri, S R

    2000-04-01

    Average characteristics of partially coherent electromagnetic beams are treated with the paraxial approximation. Azimuthally or radially polarized, azimuthally symmetric beams and linearly polarized dipolar beams are used as examples. The change in the mean squared width of the beam from its value at the location of the beam waist is found to be proportional to the square of the distance in the propagation direction. The proportionality constant is obtained in terms of the cross-spectral density as well as its spatial spectrum. The use of the cross-spectral density has advantages over the use of its spatial spectrum.

  8. Auto-exploratory average reward reinforcement learning

    SciTech Connect

    Ok, DoKyeong; Tadepalli, P.

    1996-12-31

    We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.

  9. A Green's function quantum average atom model

    SciTech Connect

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage being that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital based code.
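
    The broadening described above can be illustrated with a single-pole toy model: evaluating the Green's function a distance γ off the real axis turns a sharp level into a Lorentzian of half-width γ, while the total spectral weight is preserved. This is an illustration of the mechanism only, not the average-atom code:

```python
import numpy as np

def spectral_density(E, E0, gamma):
    """-Im G / pi for the single-pole Green's function G(E) = 1/(E - E0 + i*gamma):
    a Lorentzian of half-width gamma centered on the level E0."""
    G = 1.0 / (E - E0 + 1j * gamma)
    return -G.imag / np.pi

E = np.linspace(-1.0, 1.0, 2001)
rho = spectral_density(E, E0=0.2, gamma=0.01)
weight = rho.sum() * (E[1] - E[0])   # total spectral weight, ~1 for any gamma
```

    Numerically, the broadened peak is resolvable on a coarse grid, which is the point of the complex-plane deformation: no resonance tracking is needed.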

  10. REVISITING THE SOLAR TACHOCLINE: AVERAGE PROPERTIES AND TEMPORAL VARIATIONS

    SciTech Connect

    Antia, H. M.; Basu, Sarbani E-mail: sarbani.basu@yale.edu

    2011-07-10

    The tachocline is believed to be the region where the solar dynamo operates. With over a solar cycle's worth of data available from the Michelson Doppler Imager and Global Oscillation Network Group instruments, we are in a position to investigate not merely the average structure of the solar tachocline, but also its time variations. We determine the properties of the tachocline as a function of time by fitting a two-dimensional model that takes latitudinal variations of the tachocline properties into account. We confirm that if we consider the central position of the tachocline, it is prolate. Our results show that the tachocline is thicker at high latitudes than at the equator, making the overall shape of the tachocline more complex. Of the tachocline properties examined, the transition of the rotation rate across the tachocline, and to some extent the position of the tachocline, show some temporal variations.

  11. Forecasts of time averages with a numerical weather prediction model

    NASA Technical Reports Server (NTRS)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and in order to anticipate what the results might be if forecasts longer than 10 days were carried out by present day numerical weather prediction models. The data set for this study is described, and the equilibrium spectra and error spectra are documented; then, the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.
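
    The first of the suggested improvements, removing the systematic error, amounts to estimating the mean forecast error over a past period and subtracting it from subsequent forecasts. A synthetic sketch of why this lowers the error (illustrative numbers, not the study's data):

```python
import random

random.seed(0)
truth = [20.0 + random.gauss(0.0, 1.0) for _ in range(200)]
# Model forecasts carrying a constant systematic error (bias) plus random error
forecasts = [t + 2.0 + random.gauss(0.0, 1.0) for t in truth]

# Estimate the systematic error on a training period, then remove it
bias = sum(f - t for f, t in zip(forecasts[:100], truth[:100])) / 100
corrected = [f - bias for f in forecasts[100:]]

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

raw_rmse = rmse(forecasts[100:], truth[100:])
cor_rmse = rmse(corrected, truth[100:])
```

    Only the systematic part of the error is removable this way; the random component sets the floor that statistical filters and truncation of long leads address.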

  12. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    SciTech Connect

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high-current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is a maximum allowable average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe, and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.

  13. Motional averaging in a superconducting qubit.

    PubMed

    Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S

    2013-01-01

    Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

  14. Quantifying Water Stress Using Total Water Volumes and GRACE

    NASA Astrophysics Data System (ADS)

    Richey, A. S.; Famiglietti, J. S.; Druffel-Rodriguez, R.

    2011-12-01

    Water will follow oil as the next critical resource leading to unrest and uprisings globally. To better manage this threat, an improved understanding of the distribution of water stress is required today. This study builds upon previous efforts to characterize water stress by improving both the quantification of human water use and the definition of water availability. Current statistics on human water use are often outdated or inaccurately reported nationally, especially for groundwater. This study improves these estimates by defining human water use in two ways. First, we use NASA's Gravity Recovery and Climate Experiment (GRACE) to isolate the anthropogenic signal in water storage anomalies, which we equate to water use. Second, we quantify an ideal water demand by using average water requirements for the domestic, industrial, and agricultural water use sectors. Water availability has traditionally been limited to "renewable" water, which ignores large, stored water sources that humans use. We compare water stress estimates derived using either renewable water or the total volume of water globally. We use the best-available data to quantify total aquifer and surface water volumes, as compared to groundwater recharge and surface water runoff from land-surface models. The work presented here should provide a more realistic image of water stress by explicitly quantifying groundwater, defining water availability as total water supply, and using GRACE to more accurately quantify water use.

  15. High average power linear induction accelerator development

    SciTech Connect

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.

  16. Average Gait Differential Image Based Human Recognition

    PubMed Central

    Chen, Jinyan; Liu, Jiansheng

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
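
    The accumulation step described above can be sketched with numpy; the frame sequence below is a toy stand-in for real binary silhouettes:

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: accumulate the absolute difference
    between adjacent binary silhouette frames and normalize by the number
    of frame pairs."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(frames[1:] - frames[:-1])
    return diffs.mean(axis=0)

# Toy sequence: a 2-pixel-wide "leg" sweeping across the lower half
# of a 10x10 frame, one column per time step
seq = []
for t in range(8):
    f = np.zeros((10, 10))
    f[5:, t:t + 2] = 1.0
    seq.append(f)
image = agdi(seq)
```

    Pixels that change between frames (the moving leg) light up in the AGDI, while static regions stay dark, which is how the feature keeps both kinetic and static information.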

  17. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171

  18. [Quetelet, the average man and medical knowledge].

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:24141918

  19. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. Removing the next 10% could double constellation sizes. 5 refs., 7 figs.

  20. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, yielding a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.

  1. Scaling crossover for the average avalanche shape

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.

  3. 40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of... toxics value, Ti, of each batch of gasoline is determined using the Phase II Complex Model specified...
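
    The snippet above defines the symbols Vi, Ti, and n; the annual average is the volume-weighted mean of the batch toxics values. A sketch under that assumption, with hypothetical batch numbers:

```python
def annual_average_toxics(volumes, toxics):
    """Volume-weighted annual average toxics value,
    sum(Vi * Ti) / sum(Vi) over all batches i
    (assumed form; the regulation snippet above only defines the symbols)."""
    assert len(volumes) == len(toxics)
    return sum(v * t for v, t in zip(volumes, toxics)) / sum(volumes)

# Three hypothetical batches of gasoline
avg = annual_average_toxics(volumes=[10_000, 25_000, 15_000],
                            toxics=[1.2, 0.9, 1.1])
```

    Weighting by batch volume means a large batch moves the annual average more than a small one with the same toxics value.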

  4. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    SciTech Connect

    Eimerl, D.

    1985-10-28

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 KW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology.

  5. Atmospheric carbon dioxide and the global carbon cycle

    SciTech Connect

    Trabalka, J R

    1985-12-01

    This state-of-the-art volume presents discussions on the global cycle of carbon, the dynamic balance among global atmospheric CO2 sources and sinks. Separate abstracts have been prepared for the individual papers. (ACR)

  6. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
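
    Computing ZC as an elemental ratio is a one-liner. The sketch below uses the standard oxidation-number bookkeeping for neutral CHNOS formulas (each H counts −1; each N +3; each O and S +2, divided by the number of carbons); this convention should be checked against the paper's exact definition:

```python
def z_c(n_C, n_H, n_N=0, n_O=0, n_S=0, charge=0):
    """Average oxidation state of carbon from an elemental (CHNOS) formula:
    ZC = (charge - nH + 3*nN + 2*nO + 2*nS) / nC
    (standard oxidation-number bookkeeping; verify against the paper)."""
    return (charge - n_H + 3 * n_N + 2 * n_O + 2 * n_S) / n_C

# Alanine, C3H7NO2: its three carbons (+3, 0, -3) average to 0
alanine = z_c(3, 7, n_N=1, n_O=2)
# Methane, CH4: the single carbon is fully reduced
methane = z_c(1, 4)
```

    Applied residue-by-residue to a protein's chemical formula, the same ratio gives the whole-protein ZC compared across compartments and genomes in the abstract.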

  7. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
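
    The central relation, that the derivative of the free energy along a coordinate equals minus the average force acting on it, can be sketched as a small thermodynamic-integration example with an analytically known harmonic profile (an illustration of the relation, not the paper's method):

```python
def free_energy_profile(xs, mean_forces):
    """Recover a free energy profile by integrating -<F> along the
    coordinate (trapezoidal rule), since dA/dx = -<F(x)>."""
    A = [0.0]
    for i in range(1, len(xs)):
        dA = -0.5 * (mean_forces[i] + mean_forces[i - 1]) * (xs[i] - xs[i - 1])
        A.append(A[-1] + dA)
    return A

# Toy case: average forces from a harmonic well A(x) = 0.5*k*x^2, so <F> = -k*x
k = 2.0
xs = [i * 0.1 for i in range(-10, 11)]
forces = [-k * x for x in xs]
profile = free_energy_profile(xs, forces)
```

    In a real simulation the `mean_forces` entries would be averages of the instantaneous force collected at each coordinate value, either from constrained runs or from unconstrained sampling as the abstract describes.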

  8. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.

  9. Landslide volumes and landslide mobilization rates in Umbria, central Italy

    NASA Astrophysics Data System (ADS)

    Guzzetti, Fausto; Ardizzone, Francesca; Cardinali, Mauro; Rossi, Mauro; Valigi, Daniela

    2009-03-01

    A catalogue of 677 landslides of the slide type was selected from a global database of geometrical measurements of individual landslides, including landslide area (AL) and volume (VL). The measurements were used to establish an empirical relationship to link AL (in m^2) to VL (in m^3). The relationship takes the form of a power law with a scaling exponent α = 1.450, covers eight orders of magnitude of AL and twelve orders of magnitude of VL, and is in general agreement with existing relationships published in the literature. The reduced scatter of the empirical data around the regression line, and the fact that the considered landslides occurred in multiple physiographic and climatic environments and were caused by different triggers, indicate that the relationship between VL and AL is largely independent of the physiographical setting. The new relationship was used to determine the volume of individual landslides of the slide type in the Collazzone area, central Italy, a 78.9 km^2 area for which a multi-temporal landslide inventory covering the 69-year period from 1937 to 2005 is available. In the observation period, the total volume of landslide material was VLT = 4.78 × 10^7 m^3, corresponding to an average rate of landslide mobilization φL = 8.8 mm yr^-1. Exploiting the temporal information in the landslide inventory, the volume of material produced during different periods by new and reactivated landslides was singled out. The wet period from 1937 to 1941 was recognized as an episode of accelerated landslide production. During this 5-year period, approximately 45% of the total landslide material inventoried in the Collazzone area was produced, corresponding to an average rate of landslide mobilization φL = 54 mm yr^-1, six times higher than the long-term rate. The volume of landslide material in an event or period was used as a proxy for the magnitude of the event or period, defined as the logarithm (base 10) of the total landslide volume produced
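
The power-law form of the relationship can be sketched as a one-line estimator. The abstract fixes the exponent α = 1.450 but does not state the prefactor, so the default below is a placeholder that would need to be calibrated against the catalogue.

```python
def landslide_volume(area_m2, prefactor=0.074, alpha=1.450):
    """Power-law volume estimate V_L = prefactor * A_L**alpha (A_L in m^2, V_L in m^3).

    alpha = 1.450 is the scaling exponent from the abstract; the prefactor is a
    placeholder and must be calibrated against the landslide catalogue."""
    return prefactor * area_m2 ** alpha

# Scaling check: a 100x larger area implies a 100**1.45 (~794x) larger volume,
# regardless of the prefactor.
ratio = landslide_volume(1e4) / landslide_volume(1e2)
```

Summing such per-landslide estimates over an inventory is how a total mobilized volume like VLT is obtained.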

  10. A Systematic Literature Review of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; van der Maas, Han L. J.

    2010-01-01

    On the basis of several reviews of the literature, Lynn [Lynn, R., (2006). Race differences in intelligence: An evolutionary analysis. Augusta, GA: Washington Summit Publishers.] and Lynn and Vanhanen [Lynn, R., & Vanhanen, T., (2006). IQ and global inequality. Augusta, GA: Washington Summit Publishers.] concluded that the average IQ of the Black…

  11. Unbiased Average Age-Appropriate Atlases for Pediatric Studies

    PubMed Central

    Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis

    2010-01-01

    Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The anatomical T1-weighted, T2-weighted, and proton density-weighted templates for specific developmentally important age ranges were created using data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, in which each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036

  12. Global Digital Mapping of the Moon

    NASA Astrophysics Data System (ADS)

    Edwards, K. E.; Colvin, T. R.; Becker, T. L.; Cook, D.; Davies, M. E.; Duxbury, T. C.; Eliason, E. M.; Lee, E. M.; McEwen, A. S.; Morgan, H.; Robinson, M. S.; Sorensen, T.

    1996-03-01

    The Clementine spacecraft imaged more than 99% of the Moon's surface at resolutions of 80-250 m/pixel; we are in the process of assembling a global digital base map. A new geodetic network has been constructed from ~43,000 images and ~265,000 match points. The average relative positional error is about 80 m, less than 1 pixel. The absolute positional accuracy should be ~250 m on average, which will facilitate future lunar exploration. Previous control networks achieved accuracies of ~1-2 km on the near side and ~10-20 km on the far side. The final base map will consist of 750-nm normalized albedo. The processed dataset will be distributed on a 7-volume set of CD-ROMs and made available via the Internet. The imaging node of the Planetary Data System (PDS) has developed a program called MapMaker which facilitates the construction of digital maps with user-specified latitude-longitude limits and scale.

  13. A Microgenetic Analysis of Strategic Variability in Gifted and Average-Ability Children

    ERIC Educational Resources Information Center

    Steiner, Hillary Hettinger

    2006-01-01

    Many researchers have described cognitive differences between gifted and average-performing children. Regarding strategy use, the gifted advantage is often associated with differences such as greater knowledge of strategies, quicker problem solving, and the ability to use strategies more appropriately. The current study used microgenetic methods…

  14. Insular volume reduction in schizophrenia.

    PubMed

    Saze, Teruyasu; Hirao, Kazuyuki; Namiki, Chihiro; Fukuyama, Hidenao; Hayashi, Takuji; Murai, Toshiya

    2007-12-01

    Structural and functional abnormalities of the insular cortex have been reported in patients with schizophrenia. Most studies have shown that the insular volumes of schizophrenia patients are smaller than those of healthy people. As the insular cortex is functionally and anatomically divided into anterior and posterior subdivisions, recent research has focused on uncovering a specific subdivisional abnormality of the insula in patients with schizophrenia. A recent ROI-based volumetric MRI study demonstrated specific left anterior insular volume reduction in chronic schizophrenia patients (Makris N, Goldstein J, Kennedy D, Hodge S, Caviness V, Faraone S, Tsuang M, Seidman L (2006) Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophr Res 83:155-171). On the other hand, our VBM-based volumetric study revealed a reduction in right posterior insular volume (Yamada M, Hirao K, Namiki C, Hanakawa T, Fukuyama H, Hayashi T, Murai T (2007) Social cognition and frontal lobe pathology in schizophrenia: a voxel-based morphometric study. NeuroImage 35:292-298). To address these conflicting results, ROI-based subdivisional volumetry was performed using the MRI images from the same population we analyzed in our previous VBM study. The sample group comprised 20 schizophrenia patients and 20 matched healthy controls. Patients with schizophrenia showed a global reduction in insular gray matter volumes relative to healthy comparison subjects. In a simple comparison of the volumes of each subdivision between the groups, a statistically significant volume reduction in patients with schizophrenia was demonstrated only in the right posterior insula. This study suggests that insular abnormalities in schizophrenia include anterior as well as posterior parts. Each subdivisional abnormality may impact different aspects of the pathophysiology and psychopathology of schizophrenia; these relationships should be the focus of future research.

  15. Global Academe: Engaging Intellectual Discourse

    ERIC Educational Resources Information Center

    Nagy-Zekmi, Silvia, Ed.; Hollis, Karyn, Ed.

    2012-01-01

    The representation of the economic, political, cultural and, more importantly, global interrelations between agents involved in the process of intellectual activity is at the core of the inquiry in this volume that scrutinizes a distinct transformation occurring in the modalities of intellectual production also detectable in the changing role of…

  16. Microwave emission spectrum of the moon: mean global heat flow and average depth of the regolith.

    PubMed

    Keihm, S J; Langseth, M G

    1975-01-10

    Earth-based observations of the lunar microwave brightness temperature spectrum at wavelengths between 5 and 500 centimeters, when reexamined in the light of physical property data derived from the Apollo program, tentatively support the high heat flows measured in situ and indicate that a regolith thickness between 10 and 30 meters may characterize a large portion of the lunar near side. PMID:17844211

  17. The global atmospheric response to low-frequency tropical forcing: Zonally averaged basic states

    NASA Technical Reports Server (NTRS)

    Li, Long; Nathan, Terrence R.

    1994-01-01

    The extratropical response to localized, low-frequency tropical forcing is examined using a linearized, non-divergent barotropic model on a sphere. Zonal-mean basic states characterized by solid-body rotation or critical latitudes are considered. An analytical analysis based on WKB and ray tracing methods shows that, in contrast to stationary Rossby waves, westward moving, low-frequency Rossby waves can propagate through the tropical easterlies into the extratropics. It is shown analytically that the difference between the stationary and low-frequency ray paths is proportional to the forcing frequency and inversely proportional to the zonal wavenumber cubed. An expression for the disturbance amplitude is derived that shows the ability of the forced waves to maintain their strength well into middle latitudes depends on their meridional wave scale and northward group velocity, both of which are functions of the slowly varying background flow. A local energetics analysis shows that the combination of energy dispersion from the forcing region and energy extraction from the equatorward flank of the midlatitude jet produces disturbances that have the greatest impact on the extratropical circulation. Under the assumption that the forcing amplitude is independent of frequency, this impact is largest when the tropical forcing period is in the range 10-20 days.

  18. Fueling global fishing fleets.

    PubMed

    Tyedmers, Peter H; Watson, Reg; Pauly, Daniel

    2005-12-01

    Over the course of the 20th century, fossil fuels became the dominant energy input to most of the world's fisheries. Although various analyses have quantified fuel inputs to individual fisheries, to date, no attempt has been made to quantify the global scale and to map the distribution of fuel consumed by fisheries. By integrating data representing more than 250 fisheries from around the world with spatially resolved catch statistics for 2000, we calculate that globally, fisheries burned almost 50 billion L of fuel in the process of landing just over 80 million t of marine fish and invertebrates for an average rate of 620 L t(-1). Consequently, fisheries account for about 1.2% of global oil consumption, an amount equivalent to that burned by the Netherlands, the 18th-ranked oil consuming country globally, and directly emit more than 130 million t of CO2 into the atmosphere. From an efficiency perspective, the energy content of the fuel burned by global fisheries is 12.5 times greater than the edible-protein energy content of the resulting catch.
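
The quoted average rate follows directly from the two totals. A quick check with the rounded figures from the abstract (the paper's ~620 L per tonne comes from the unrounded totals, so the rounded inputs give a slightly different number):

```python
# Rounded totals from the abstract; the reported rate of ~620 L t(-1)
# is based on the unrounded figures.
fuel_litres = 50e9      # "almost 50 billion L" of fuel burned in 2000
catch_tonnes = 80e6     # "just over 80 million t" of marine catch landed

litres_per_tonne = fuel_litres / catch_tonnes
print(round(litres_per_tonne))  # 625 with these rounded inputs
```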

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... and corporate pool average sulfur level determined? (a) The annual refinery or importer average and corporate pool average gasoline sulfur level is calculated as follows: ER10FE00.007 Where: Sa = The...

  20. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  1. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  2. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  3. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  4. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image-compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  5. Potential of high-average-power solid state lasers

    SciTech Connect

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  6. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    SciTech Connect

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-05

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.

  7. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    NASA Astrophysics Data System (ADS)

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-01

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.

  8. Global Composite

    Atmospheric Science Data Center

    2013-04-19

    article title: MISR Global Images See the Light of Day. Excerpt: ... than its nadir counterpart due to enhanced reflection of light by atmospheric particulates. MISR data are processed at the ...

  9. Global Albedo

    Atmospheric Science Data Center

    2013-04-19

    Excerpt: ... estimation of crop yields and disease outbreaks) and land management. Global MISR DHR maps are also available for all other parts of the ... of Directional Hemispherical Reflectance.

  10. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  11. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
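
The average path length studied here is the mean shortest-path distance over all node pairs. A minimal breadth-first-search sketch follows; the small star graph is only an illustration, not a dual dendrimer.

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path distance over all node pairs of a connected graph.

    adj maps each node to a list of its neighbours (unweighted edges)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())       # distances to all other nodes
        pairs += len(nodes) - 1
    return total / pairs                  # ordered-pair mean equals unordered mean

# 4-node star: the center is at distance 1 from each leaf, leaves mutually at 2,
# so the APL is (3*1 + 3*2) / 6 = 1.5.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
apl = average_path_length(star)
```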

  12. 28W average power hydrocarbon-free rubidium diode pumped alkali laser.

    PubMed

    Zweiback, Jason; Krupke, William F

    2010-01-18

    We present experimental results for a high-power diode pumped hydrocarbon-free rubidium laser with a scalable architecture. The laser consists of a liquid-cooled copper waveguide which serves both to guide the pump light and to provide a thermally conductive surface near the gain volume to remove heat. A laser diode stack, with a linewidth narrowed to approximately 0.35 nm with volume Bragg gratings, is used to pump the cell. We have achieved 24 W average power output using 4 atmospheres of naturally occurring helium ((4)He) as the buffer gas and 28 W using 2.8 atmospheres of (3)He.

  13. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
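
A minimal sketch of EM training for BMA weights with Gaussian kernels and a common kernel variance (the equal-variance form of the standard BMA model). The two-member synthetic ensemble is illustrative; this is not the authors' DREAM implementation.

```python
import math
import random

def bma_em(forecasts, obs, iters=100):
    """EM estimates of BMA weights w_k and a common kernel variance s2.

    forecasts[k][i] is model k's forecast for case i; obs[i] is the verification."""
    K, N = len(forecasts), len(obs)
    w, s2 = [1.0 / K] * K, 1.0
    for _ in range(iters):
        # E-step: responsibility of model k for each observation
        # (the shared 1/sqrt(2*pi*s2) factor cancels in the normalization).
        z = [[0.0] * N for _ in range(K)]
        for i in range(N):
            dens = [w[k] * math.exp(-(obs[i] - forecasts[k][i]) ** 2 / (2 * s2))
                    for k in range(K)]
            tot = sum(dens)
            for k in range(K):
                z[k][i] = dens[k] / tot
        # M-step: re-estimate the weights and the common variance.
        w = [sum(z[k]) / N for k in range(K)]
        s2 = sum(z[k][i] * (obs[i] - forecasts[k][i]) ** 2
                 for k in range(K) for i in range(N)) / N
    return w, s2

# Illustrative two-member ensemble: one accurate model, one biased noisy one.
# EM should shift most of the weight onto the accurate model.
random.seed(1)
truth = [random.gauss(15.0, 3.0) for _ in range(300)]
good = [t + random.gauss(0.0, 0.5) for t in truth]
poor = [t + random.gauss(3.0, 2.0) for t in truth]
weights, variance = bma_em([good, poor], truth)
```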

  14. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
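
The least-squares extraction of a lateral diffusivity from a Gaussian plume slice can be sketched as a linear fit of ln C against y². The steady-plume relation sigma^2 = 2*D*x/u and all numbers below are illustrative assumptions, not the authors' Delft3D setup.

```python
import math

def fit_lateral_diffusivity(ys, cs, x, u):
    """Least-squares fit of ln C = a + b*y**2 for a Gaussian plume cross-section.

    The slope b = -1/(2*sigma^2) gives sigma^2, and the steady-plume relation
    sigma^2 = 2*D*x/u (x downstream distance, u mean velocity) gives D."""
    xs = [y * y for y in ys]
    ls = [math.log(c) for c in cs]
    n = len(ys)
    mx, ml = sum(xs) / n, sum(ls) / n
    b = sum((xi - mx) * (li - ml) for xi, li in zip(xs, ls)) / \
        sum((xi - mx) ** 2 for xi in xs)
    sigma2 = -1.0 / (2.0 * b)
    return sigma2 * u / (2.0 * x)

# Synthetic slice at x = 5 m downstream, u = 0.1 m/s, true D = 1e-3 m^2/s.
x, u, D_true = 5.0, 0.1, 1e-3
sigma2 = 2 * D_true * x / u
ys = [i * 0.05 for i in range(-10, 11)]
cs = [math.exp(-y * y / (2 * sigma2)) for y in ys]
D_fit = fit_lateral_diffusivity(ys, cs, x, u)
```

Repeating the fit at several downstream cross-sections, as the paper does, shows how the apparent diffusivity varies with distance from the injection point.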

  15. Social Class and Education: Global Perspectives

    ERIC Educational Resources Information Center

    Weis, Lois, Ed.; Dolby, Nadine, Ed.

    2012-01-01

    "Social Class and Education: Global Perspectives" is the first empirically grounded volume to explore the intersections of class, social structure, opportunity, and education on a truly global scale. Fifteen essays from contributors representing the US, Europe, China, Latin America and other regions offer an unparalleled examination of how social…

  16. Consuming Globalization, Local Identities, and Common Experiences

    ERIC Educational Resources Information Center

    Filax, Gloria

    2004-01-01

    In articulating global and local forms of sexuality and their impact on how people conceptualise LGBT issues in education, the author explores three timely texts: (1) Dennis Altman's "Global Sex" (2000); (2) Vanessa Baird's "The No-Nonsense Guide to Sexual Diversity" (2001); and (3) an edited volume by Evelyn Blackwood and Saskia…

  17. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  18. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  19. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  20. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  1. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....
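
The radial-averaging procedure excerpted above (eight radials at 45-degree azimuth spacing, terrain averaged per radial, then averaged across radials and subtracted from the antenna height) can be sketched as follows. The elevation samples are illustrative, and the rule's specification of where along each radial elevations are read is omitted here.

```python
def height_above_average_terrain(antenna_amsl_m, radial_elevations):
    """Height above average terrain: average each radial's terrain elevations,
    average those eight values, and subtract from the antenna elevation (AMSL)."""
    assert len(radial_elevations) == 8          # one radial per 45 degrees of azimuth
    radial_avgs = [sum(r) / len(r) for r in radial_elevations]
    return antenna_amsl_m - sum(radial_avgs) / 8

# Illustrative: antenna at 150 m AMSL over eight radials of sampled terrain (m).
radials = [[100, 110, 120], [90, 95, 100], [80, 85, 90], [100, 100, 100],
           [110, 115, 120], [95, 100, 105], [85, 90, 95], [105, 110, 115]]
haat = height_above_average_terrain(150.0, radials)
```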

  2. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  3. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  4. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  5. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  6. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  7. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  8. Vector light shift averaging in paraffin-coated alkali vapor cells

    NASA Astrophysics Data System (ADS)

    Zhivun, Elena; Wickenbrock, Arne; Sudyka, Julia; Patton, Brian; Pustelny, Szymon; Budker, Dmitry

    2016-07-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  9. Vector light shift averaging in paraffin-coated alkali vapor cells

    NASA Astrophysics Data System (ADS)

    Zhivun, Elena; Wickenbrock, Arne; Sudyka, Julia; Patton, Brian; Pustelny, Szymon; Budker, Dmitry

    2016-05-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  10. Technical Report Series on Global Modeling and Data Assimilation. Volume 40; Soil Moisture Active Passive (SMAP) Project Assessment Report for the Beta-Release L4_SM Data Product

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Liu, Qing; Colliander, Andreas; Conaty, Austin; Jackson, Thomas; Kimball, John

    2015-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public beta release scheduled for 30 October 2015. The primary objective of the beta release is to allow users to familiarize themselves with the data product before the validated product becomes available. The beta release also allows users to conduct their own assessment of the data and to provide feedback to the L4_SM science data product team. The assessment of the L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to upscaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 1 validation and supports the beta release of the data. 
The validation against

  11. Kinetic energy equations for the average-passage equation system

    NASA Technical Reports Server (NTRS)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

Important kinetic energy equations derived from the average-passage equation sets are documented, with emphasis on their interrelationships. These kinetic energy equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation are also treated.

  12. Changes in average length of stay and average charges generated following institution of PSRO review.

    PubMed Central

    Westphal, M; Frazier, E; Miller, M C

    1979-01-01

    A five-year review of accounting data at a university hospital shows that immediately following institution of concurrent PSRO admission and length of stay review of Medicare-Medicaid patients, there was a significant decrease in length of stay and a fall in average charges generated per patient against the inflationary trend. Similar changes did not occur for the non-Medicare-Medicaid patients who were not reviewed. The observed changes occurred even though the review procedure rarely resulted in the denial of services to patients, suggesting an indirect effect of review. PMID:393658

  13. Fluctuations of trading volume in a stock market

    NASA Astrophysics Data System (ADS)

    Hong, Byoung Hee; Lee, Kyoung Eun; Hwang, Jun Kyung; Lee, Jae Woo

    2009-03-01

We consider the probability distribution function of the trading volume and the volume changes in the Korean stock market. The probability distribution function of the trading volume shows double peaks and follows a power law, P(V/⟨V⟩) ∼ (V/⟨V⟩)^(-α), at the tail part of the distribution, with α=4.15(4) for the KOSPI (Korea Composite Stock Price Index) and α=4.22(2) for the KOSDAQ (Korea Securities Dealers Automated Quotations), where V is the trading volume and ⟨V⟩ is the monthly average value of the trading volume. The second peaks originate from the increasing trends of the average volume. The probability distribution function of the volume changes also follows a power law, P(V_r) ∼ V_r^(-β), where V_r = V(t) - V(t-T) and T is a time lag. The exponents β depend on the time lag T. We observe that the exponents β for the KOSDAQ are larger than those for the KOSPI.
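
The tail exponent quoted in the abstract above can be estimated with the standard maximum-likelihood (Hill) estimator for a continuous power law. The sketch below applies it to synthetic data drawn with the KOSPI value α = 4.15; the sample size and tail cutoff are arbitrary assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic normalized trading volumes drawn from a Pareto tail with
# exponent alpha = 4.15 (the KOSPI value quoted above); x_min is the
# assumed cutoff above which P(x) ~ x^(-alpha) holds.
alpha_true, x_min = 4.15, 1.0
x = x_min * (1.0 - rng.random(100_000)) ** (-1.0 / (alpha_true - 1.0))

# Hill / maximum-likelihood estimator for a continuous power law:
# alpha_hat = 1 + n / sum(ln(x_i / x_min)) over the tail sample.
tail = x[x >= x_min]
alpha_hat = 1.0 + tail.size / np.log(tail / x_min).sum()

print(f"estimated tail exponent: {alpha_hat:.2f}")
```

With 10^5 tail samples the estimator's standard error is about (α-1)/√n ≈ 0.01, so the recovered exponent sits very close to the input value.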

  14. Local Origin of Global Contact Numbers in Frictional Ellipsoid Packings

    NASA Astrophysics Data System (ADS)

    Schaller, Fabian M.; Neudecker, Max; Saadatfar, Mohammad; Delaney, Gary W.; Schröder-Turk, Gerd E.; Schröter, Matthias

    2015-04-01

In particulate soft matter systems the average number of contacts Z of a particle is an important predictor of the mechanical properties of the system. Using x-ray tomography, we analyze packings of frictional, oblate ellipsoids of various aspect ratios α, prepared at different global volume fractions ϕg. We find that Z is a monotonically increasing function of ϕg for all α. We demonstrate that this functional dependence can be explained by a local analysis where each particle is described by its local volume fraction ϕl computed from a Voronoi tessellation. Z can be expressed as an integral over all values of ϕl: Z(ϕg, α, X) = ∫ Zl(ϕl, α, X) P(ϕl|ϕg) dϕl. The local contact number function Zl(ϕl, α, X) describes the relevant physics in terms of locally defined variables only, including possible higher order terms X. The conditional probability P(ϕl|ϕg) to find a specific value of ϕl given a global packing fraction ϕg is found to be independent of α and X. Our results demonstrate that for frictional particles a local approach is not only a theoretical requirement but also feasible.
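
The integral Z(ϕg, α, X) = ∫ Zl(ϕl, α, X) P(ϕl|ϕg) dϕl in the abstract above can be evaluated numerically once its two ingredients are specified. The sketch below uses a hypothetical linear Zl and an assumed narrow Gaussian for P(ϕl|ϕg); neither functional form nor any coefficient is taken from the paper, which reports only that P(ϕl|ϕg) is independent of α and X.

```python
import numpy as np

# Hypothetical local contact-number function Z_l(phi_l); the linear form
# and its coefficients are illustrative assumptions, not fitted values.
def Z_local(phi_l):
    return 4.0 + 14.0 * (phi_l - 0.55)

phi_l = np.linspace(0.40, 0.70, 601)
phi_g = 0.60

# Assumed conditional probability P(phi_l | phi_g): a narrow Gaussian
# centred on the global packing fraction, normalized on the discrete grid.
w = np.exp(-0.5 * ((phi_l - phi_g) / 0.03) ** 2)
w /= w.sum()

# Global contact number as the weighted average of Z_l over phi_l.
Z_global = float(np.sum(Z_local(phi_l) * w))
print(f"Z(phi_g={phi_g}) = {Z_global:.2f}")
```

Because the assumed Zl is linear and the weight is nearly symmetric about ϕg, the integral collapses to Zl(ϕg) = 4.7 here; with a nonlinear Zl the spread of P(ϕl|ϕg) would matter.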

  15. Technical Report Series on Global Modeling and Data Assimilation. Volume 42; Soil Moisture Active Passive (SMAP) Project Calibration and Validation for the L4_C Beta-Release Data Product

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2015-01-01

    During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.

  16. Optimal Averaging of Seasonal Sea Surface Temperatures and Associated Confidence Intervals (1860-1989).

    NASA Astrophysics Data System (ADS)

    Smith, Thomas M.; Reynolds, Richard W.; Ropelewski, Chester F.

    1994-06-01

    Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. The ability to assign confidence intervals is the main advantage of this method. Confidence intervals reflect how densely and uniformly an area is sampled during the averaging season. For the global average, the early part of the record (1860-1890) and the times of the two world wars have the largest uncertainties. Analysis of OA-based uncertainty estimates shows that before 1930 sampling in the Southern Hemisphere was as good as it was in the Northern Hemisphere. From about 1930 to 1950, uncertainties decreased in both hemispheres, but the Northern Hemisphere uncertainties decreased more and remained smaller. After the early 1950s uncertainties were relatively constant in both hemispheres, indicating that sampling was relatively consistent over the period. During the two world wars, increased uncertainties reflected the sampling decreases over all the oceans, with the biggest decreases south of 40°S. The OA global SST anomalies are virtually identical to estimates of global SST anomalies computed using simpler methods, when the same data corrections are applied. When data are plentiful over an area there is no clear advantage of the OA over simpler methods. The major advantage of the OA over the simpler methods is the accompanying error estimates. The OA analysis suggests that SST anomalies were not significantly different from 0 from 1860 to 1900. This result is heavily influenced by the choice of the data corrections applied before the 1950s. Global anomalies are also near zero from 1940 until the mid-1970s. 
The OA analysis suggests that negative anomalies dominated the period from the early 1900s through the 1930s although the uncertainties are quite large during and immediately following World War

  17. Decomposing global crop yield variability

    NASA Astrophysics Data System (ADS)

    Ben-Ari, Tamara; Makowski, David

    2014-11-01

    Recent food crises have highlighted the need to better understand the between-year variability of agricultural production. Although increasing future production seems necessary, the globalization of commodity markets suggests that the food system would also benefit from enhanced supply stability through a reduction in year-to-year variability. Here, we develop an analytical expression decomposing global crop yield interannual variability into three informative components that quantify how evenly croplands are distributed in the world, the proportion of cultivated areas allocated to regions of above- or below-average variability, and the covariation between yields in distinct world regions. This decomposition is used to identify drivers of interannual yield variations for four major crops (i.e., maize, rice, soybean and wheat) over the period 1961-2012. We show that maize production is fairly spread out but marked by one prominent region with high levels of crop yield interannual variability (which encompasses the North American corn belt in the USA and Canada). In contrast, global rice yields have a small variability because, although spatially concentrated, much of the production is located in regions of below-average variability (i.e., South, Eastern and South Eastern Asia). Because of these contrasting land use allocations, an even cultivated land distribution across regions would reduce global maize yield variance but increase the variance of global rice yield. Intermediate results are obtained for soybean and wheat, for which croplands are mainly located in regions with close-to-average variability. At the scale of large world regions, we find that covariances of regional yields make a negligible contribution to global yield variance. The proposed decomposition can be applied at any spatial and time scale, including the yearly time step. By addressing global crop production stability (or lack thereof) our results contribute to the understanding of a key

  18. A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex.

    PubMed

    Van Essen, David C

    2005-11-15

    This report describes a new electronic atlas of human cerebral cortex that provides a substrate for a wide variety of brain-mapping analyses. The Population-Average, Landmark- and Surface-based (PALS) atlas approach involves surface-based and volume-based representations of cortical shape, each available as population averages and as individual subject data. The specific PALS-B12 atlas introduced here is derived from structural MRI volumes of 12 normal young adults. Accurate cortical surface reconstructions were generated for each hemisphere, and the surfaces were inflated, flattened, and mapped to standard spherical configurations using SureFit and Caret software. A target atlas sphere was generated by averaging selected landmark contours from each of the 24 contributing hemispheres. Each individual hemisphere was deformed to this target using landmark-constrained surface registration. The utility of the resultant PALS-B12 atlas was demonstrated using a variety of analyses. (i) Probabilistic maps of sulcal identity were generated using both surface-based registration (SBR) and conventional volume-based registration (VBR). The SBR approach achieved markedly better consistency of sulcal alignment than did VBR. (ii) A method is introduced for 'multi-fiducial mapping' of volume-averaged group data (e.g., fMRI data, probabilistic architectonic maps) onto each individual hemisphere in the atlas, followed by spatial averaging across the individual maps. This yielded a population-average surface representation that circumvents the biases inherent in choosing any single hemisphere as a target. (iii) Surface-based and volume-based morphometry applied to maps of sulcal depth and sulcal identity demonstrated prominent left-right asymmetries in and near the superior temporal sulcus and Sylvian fissure. Moreover, shape variability in the temporal lobe is significantly greater in the left than the right hemisphere. The PALS-B12 atlas has been registered to other surface

  19. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment... Carbon-Related Exhaust Emissions § 600.510-12 Calculation of average fuel economy and average carbon.... (iv) (2) Average carbon-related exhaust emissions will be calculated to the nearest one gram per...

  20. Normal age-related brain morphometric changes: nonuniformity across cortical thickness, surface area and gray matter volume?

    PubMed

    Lemaitre, Herve; Goldman, Aaron L; Sambataro, Fabio; Verchinski, Beth A; Meyer-Lindenberg, Andreas; Weinberger, Daniel R; Mattay, Venkata S

    2012-03-01

    Normal aging is accompanied by global as well as regional structural changes. While these age-related changes in gray matter volume have been extensively studied, less has been done using newer morphological indexes, such as cortical thickness and surface area. To this end, we analyzed structural images of 216 healthy volunteers, ranging from 18 to 87 years of age, using a surface-based automated parcellation approach. Linear regressions of age revealed a concomitant global age-related reduction in cortical thickness, surface area and volume. Cortical thickness and volume collectively confirmed the vulnerability of the prefrontal cortex, whereas in other cortical regions, such as in the parietal cortex, thickness was the only measure sensitive to the pronounced age-related atrophy. No cortical regions showed more surface area reduction than the global average. The distinction between these morphological measures may provide valuable information to dissect age-related structural changes of the brain, with each of these indexes probably reflecting specific histological changes occurring during aging. PMID:20739099

  1. Environment Abstracts Annual 1988. Volume 18.

    ERIC Educational Resources Information Center

    Yuster, Leigh C., Ed.; And Others

    This publication is a compilation of environmental information and resources for the year 1988. The first section details the coverage and use of this volume. Section 2 contains a review of events in 1988; a chronology of events; a status report produced for Congress; three articles on environmental issues including global change, pesticides, and…

  2. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGES

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  3. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
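
The averaging effect described in the two records above stems from the nonlinearity of the mass transfer velocity in wind speed: the mean of a convex function exceeds the function of the mean (Jensen's inequality). A minimal sketch, assuming a quadratic (Wanninkhof-type) dependence k ∝ U² and synthetic Weibull-distributed winds, shows why fluxes computed from a monthly-mean wind come out lower than the mean of instantaneous fluxes; the wind statistics here are illustrative, not model output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 20-minute wind speeds over one month (m/s): Weibull shape 2
# scaled to a realistic marine mean; purely illustrative statistics.
u = rng.weibull(2.0, size=30 * 72) * 8.0

# Quadratic, Wanninkhof-type gas-transfer velocity, k proportional to U^2
# (constants folded into the prefactor; only the relative bias matters).
def k(u):
    return u ** 2

flux_instant = k(u).mean()   # average the instantaneous fluxes
flux_avgwind = k(u.mean())   # flux computed from the monthly-mean wind

bias = 100 * (flux_instant / flux_avgwind - 1)
print(f"underestimate from monthly averaging: {bias:.0f}%")
```

For this wind distribution the bias lands in the tens of percent, the same order as the 10-60% differences reported for individual marine grid cells.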

  4. Global Education.

    ERIC Educational Resources Information Center

    McCoubrey, Sharon

    1994-01-01

    This theme issue focuses on topics related to global issues. (1) "Recycling for Art Projects" (Wendy Stephenson) gives an argument for recycling in the art classroom; (2) "Winds of Change: Tradition and Innovation in Circumpolar Art" (Bill Zuk and Robert Dalton) includes profiles of Alaskan Yupik artist, Larry Beck, who creates art from recycled…

  5. Global Warming.

    ERIC Educational Resources Information Center

    Hileman, Bette

    1989-01-01

    States the foundations of the theory of global warming. Describes methodologies used to measure the changes in the atmosphere. Discusses steps currently being taken in the United States and the world to slow the warming trend. Recognizes many sources for the warming and the possible effects on the earth. (MVL)

  6. Global Warming?

    ERIC Educational Resources Information Center

    Eichman, Julia Christensen; Brown, Jeff A.

    1994-01-01

    Presents information and data on an experiment designed to test whether different atmosphere compositions are affected by light and temperature during both cooling and heating. Although flawed, the experiment should help students appreciate the difficulties that researchers face when trying to find evidence of global warming. (PR)

  7. Global Change

    USGS Publications Warehouse

    ,

    1993-01-01

    Global change is a relatively new area of scientific study using research from many disciplines to determine how Earth systems change, and to assess the influence of human activity on these changes. This teaching packet consists of a poster and three activity sheets. In teaching these activities four themes are important: time, change, cycles, and Earth as home.

  8. Estimates of the global electric circuit from global thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Hutchins, M. L.; Holzworth, R. H.; Brundell, J. B.

    2013-12-01

    The World Wide Lightning Location Network (WWLLN) has a global detection efficiency around 10%; however, the network has been shown to identify 99% of thunderstorms (Jacobson et al. 2006, using WWLLN data from 2005). To create an estimate of the global electric circuit activity, a clustering algorithm is applied to the WWLLN dataset to identify global thunderstorms from 2009-2013. The annual, seasonal, and regional thunderstorm activity is investigated with this new WWLLN thunderstorm dataset in order to examine the source behavior of the global electric circuit. From the clustering algorithm the total number of active thunderstorms is found every 30 minutes to create a measure of the global electric circuit source function. The clustering algorithm used is shown to be robust over parameter ranges related to real physical storm sizes and times. The thunderstorm groupings are verified with case study comparisons using satellite and radar data. It is found that there are on average 714 ± 81 thunderstorms active at any given time. Similarly, the highest average number of thunderstorms occurs in July (783 ± 69), with the lowest in January (599 ± 76). The annual and diurnal thunderstorm activity seen with the WWLLN thunderstorms is in contrast with the bimodal stroke activity seen by WWLLN. By utilizing the global coverage and high time resolution of WWLLN, it is shown that the total active thunderstorm count is less than previous estimates based on compiled climatologies.

  9. Determination of sediment thickness and volume in Lake Byron, South Dakota, using continuous seismic-reflection methods, May 1992

    USGS Publications Warehouse

    Sando, S.K.; Cates, S.W.

    1994-01-01

    A sediment survey to assess the amount and distribution of lake sediment was made as part of a diagnostic/feasibility study investigating the potential for lake restoration of Lake Byron, South Dakota. A high-frequency, continuous seismic- reflection system was used to estimate thickness of sediment, and a global-positioning system was used to monitor horizontal and vertical position while traversing 15 north-south and two diagonal transects of the lake. The volume of water was 10,645 acre- feet, and the average depth was 5.6 feet. The volume of loose, uncompacted sediment in Lake Byron was estimated to be 3.8 million cubic yards, and the average depth of uncompacted sediment was estimated to be 1.2 feet. The volume of total lake sediment in Lake Byron was estimated to be 34 million cubic yards. The average thickness of total lake sediment in the Western part of Lake Byron was estimated to be 11 feet.
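
The figures in the abstract above can be cross-checked, since average depth equals volume divided by surface area: the water numbers and the uncompacted-sediment numbers should imply roughly the same lake area. A quick arithmetic sketch (unit conversions only; no assumptions beyond the quoted values):

```python
# Cross-check the Lake Byron figures: area = volume / average depth.
ACRE_FT2 = 43_560   # square feet per acre
YD3_FT3 = 27        # cubic feet per cubic yard

# Water: 10,645 acre-feet at an average depth of 5.6 feet.
water_volume_acre_ft = 10_645
water_depth_ft = 5.6
area_from_water = water_volume_acre_ft / water_depth_ft  # acre-ft / ft = acres

# Uncompacted sediment: 3.8 million cubic yards at an average depth of 1.2 feet.
sediment_volume_yd3 = 3.8e6
sediment_depth_ft = 1.2
area_from_sediment = sediment_volume_yd3 * YD3_FT3 / sediment_depth_ft / ACRE_FT2

print(f"lake area implied by water figures:    {area_from_water:.0f} acres")
print(f"lake area implied by sediment figures: {area_from_sediment:.0f} acres")
```

The two implied areas agree to within a few percent, which is consistent with both averages describing the same lake surface.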

  10. Genetic and Environmental Contributions to the Relationships between Brain Structure and Average Lifetime Cigarette Use

    PubMed Central

    Prom-Wormley, Elizabeth; Maes, Hermine H.M.; Schmitt, J. Eric; Panizzon, Matthew S.; Xian, Hong; Eyler, Lisa T.; Franz, Carol E.; Lyons, Michael J.; Tsuang, Ming T.; Dale, Anders M.; Fennema-Notestine, Christine; Kremen, William S.; Neale, Michael C.

    2015-01-01

    Chronic cigarette use has been consistently associated with differences in the neuroanatomy of smokers relative to nonsmokers in case-control studies. However, the etiology underlying the relationships between brain structure and cigarette use is unclear. A community-based sample of male twin pairs ages 51-59 (110 monozygotic pairs, 92 dizygotic pairs) was used to determine the extent to which there are common genetic and environmental influences between brain structure and average lifetime cigarette use. Brain structure was measured by high-resolution structural magnetic resonance imaging, from which subcortical volume and cortical volume, thickness and surface area were derived. Bivariate genetic models were fitted between these measures and average lifetime cigarette use measured as cigarette pack-years. Widespread, negative phenotypic correlations were detected between cigarette pack-years and several cortical as well as subcortical structures. Shared genetic and unique environmental factors contributed to the phenotypic correlations shared between cigarette pack-years and subcortical volume as well as cortical volume and surface area. Brain structures involved in many of the correlations were previously reported to play a role in specific aspects of networks of smoking-related behaviors. These results provide evidence for conducting future research on the etiology of smoking-related behaviors using measures of brain morphology. PMID:25690561

  11. Quantifying the Restorable Water Volume of California's Sierra Nevada Meadows

    NASA Astrophysics Data System (ADS)

    Emmons, J. D.; Yarnell, S. M.; Fryjoff-Hung, A.; Viers, J.

    2013-12-01

    The Sierra Nevada is estimated to provide over 66% of California's water supply, which is largely derived from snowmelt. Global climate warming is expected to result in a decrease in snowpack and an increase in melting rate, making the attenuation of snowmelt, by any means, an important ecosystem service for ensuring water availability. Montane meadows are dispersed throughout the mountain range and can act like natural reservoirs, and also provide wildlife habitat, water filtration, and water storage. Despite the important role of meadows in the Sierra Nevada, a large proportion is degraded by stream incision, which increases volume outflows and reduces overbank flooding, thus reducing infiltration and potential water storage. Restoration of meadow stream channels would therefore improve hydrological functioning, including increased water storage. The potential water holding capacity of restored meadows has yet to be quantified, so this research seeks to address this knowledge gap by estimating the restorable water volume due to stream incision. More than 17,000 meadows were analyzed by categorizing their erosion potential using channel slope and soil texture, ultimately resulting in six general erodibility types. Field measurements of over 100 meadows, stratified by latitude, elevation, and geologic substrate, were then taken and analyzed for each erodibility type to determine the average depth of incision. Restorable water volume was then quantified as a function of the water holding capacity of the soil, meadow area, and incised depth. Total restorable water volume was found to be 120 x 10^6 m3, or approximately 97,000 acre-feet. Using 95% confidence intervals for incised depth, the upper and lower bounds of the total restorable water volume were found to be 107-140 x 10^6 m3. Though this estimate of restorable water volume is small in regards to the storage capacity of typical California reservoirs, restoration of Sierra Nevada meadows remains an important
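
The bookkeeping described above (restorable volume as a function of soil water-holding capacity, meadow area, and incised depth) amounts to a per-meadow product summed over the inventory. The sketch below illustrates that accounting with made-up numbers for three hypothetical meadows; none of the values are the study's data.

```python
# Restorable volume per meadow: area * incised depth * water-holding capacity.
# All per-meadow values below are illustrative placeholders, not survey data.
meadows = [
    # (area_m2, incised_depth_m, water_holding_capacity as a fraction)
    (2.0e5, 0.8, 0.45),
    (1.2e5, 1.1, 0.40),
    (3.5e5, 0.5, 0.50),
]

restorable_m3 = sum(area * depth * capacity for area, depth, capacity in meadows)

ACRE_FOOT_M3 = 1233.48  # cubic metres per acre-foot
print(f"restorable volume: {restorable_m3:.3e} m^3 "
      f"({restorable_m3 / ACRE_FOOT_M3:.0f} acre-feet)")
```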

  12. An investigation of trends in precipitation volume for the last three decades in different regions of Fars province, Iran

    NASA Astrophysics Data System (ADS)

    Ahani, Hossein; Kherad, Mehrzad; Kousari, Mohammad Reza; Rezaeian-Zadeh, Mehdi; Karampour, Mohammad Amin; Ejraee, Faezeh; Kamali, Saeedeh

    2012-08-01

    Under climate change conditions such as global warming, monitoring and detecting trends in precipitation volume is essential and useful for the agricultural sector. Given that little research has addressed precipitation volume, this study aimed to determine monthly and annual trends in precipitation volume in different regions of Fars province over the last three decades (a 33-year period, 1978-2010). Fars province is located in the arid and semi-arid regions of Iran and plays an important role in agricultural production. The inverse distance weighting interpolation method was used to provide precipitation data for all regions. To analyze the trends in precipitation volume, the Mann-Kendall test, Sen's slope estimator, and a 10-year moving average low-pass filter (within the time series) were used. Negative trends were identified by both the Sen's slope estimator and the Mann-Kendall test; however, all trends were insignificant at the surveyed confidence level (95%). When the 10-year moving average low-pass filter was applied, a considerable decreasing trend was observed after around 1994. Since one of the most important restrictions on agricultural development in Fars province is the lack of sufficient water resources, any further decline in precipitation will impose considerable pressure and stress on these valuable resources and, subsequently, on agricultural production.
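The two trend statistics named above have compact definitions that can be sketched directly: the Mann-Kendall S statistic is the sum of signs of all pairwise differences, and Sen's slope is the median of all pairwise slopes. The precipitation series below is made up for illustration.

```python
from itertools import combinations
from statistics import median

def mann_kendall_s(x):
    """Mann-Kendall S: sum of signs of x[j] - x[i] over all pairs i < j.
    Negative S indicates a decreasing trend."""
    return sum((xj > xi) - (xj < xi) for xi, xj in combinations(x, 2))

def sens_slope(t, x):
    """Sen's slope: median of slopes over all pairs of observations."""
    return median((xj - xi) / (tj - ti)
                  for (ti, xi), (tj, xj) in combinations(zip(t, x), 2))

years = list(range(1978, 1988))
precip = [420, 415, 430, 400, 395, 405, 390, 385, 380, 370]  # mm, illustrative
print(mann_kendall_s(precip), sens_slope(years, precip))
```

A significance test would compare S against its variance under the null hypothesis of no trend; the sketch stops at the raw statistics.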

  13. Global Mental Health: An Introduction.

    PubMed

    Verdeli, Helen

    2016-08-01

    In this introductory paper to the Global Mental Health volume, the inception and development of the field over the last 15 years are reviewed, with an emphasis on a series of pivotal turning points. A critical delivery strategy, task-shifting, is briefly described, as well as the fundamental principles of Interpersonal Psychotherapy (IPT), an evidence-based psychotherapy being adapted and delivered in low-resource settings. Nine case studies by trainees, supervisors, or local providers from India, the United States, Haiti, Israel, Colombia, and Kenya, presented in this volume, illustrate the prevention and treatment processes or in-depth assessment of "psychological distress" as locally defined and expressed. PMID:27532521

  14. High-volume centers.

    PubMed

    Vespa, P; Diringer, Michael N

    2011-09-01

    Outcome from trauma, surgery, and a variety of other medical conditions has been shown to be positively affected by providing treatment at facilities experiencing a high volume of patients with those conditions. An electronic literature search was made to identify English-language articles available through March 2011, addressing the effect of patient treatment volume on outcome for patients with subarachnoid hemorrhage. Limited data were identified, with 16 citations included in the current review. Over 60% of hospitals fall into the lowest case-volume quartile. Outcome is influenced by patient volume, with better outcome occurring in high-volume centers treating >60 cases per year. Patients treated at low-volume hospitals are less likely to experience definitive treatment. Furthermore, transfer to high-volume centers may be inadequately arranged. Several factors may influence the better outcome at high-volume centers, including the availability of neurointensivists and interventional neuroradiologists. PMID:21792754

  15. Transfer factor, lung volumes, resistance and ventilation distribution in healthy adults.

    PubMed

    Verbanck, Sylvia; Van Muylem, Alain; Schuermans, Daniel; Bautmans, Ivan; Thompson, Bruce; Vincken, Walter

    2016-01-01

    Monitoring of chronic lung disease requires reference values of lung function indices, including putative markers of small airway function, spanning a wide age range. We measured spirometry, transfer factor of the lung for carbon monoxide (TLCO), static lung volume, resistance and ventilation distribution in a healthy population, studying at least 20 subjects per sex and per decade between the ages of 20 and 80 years. With respect to the Global Lung Function Initiative reference data, our subjects had average z-scores for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC of -0.12, 0.04 and -0.32, respectively. Reference equations were obtained which could account for a potential dependence of index variability on age and height. This was done for (but not limited to) indices that are pertinent to asthma and chronic obstructive pulmonary disease studies: forced expired volume in 6 s, forced expiratory flow, TLCO, specific airway conductance, residual volume (RV)/total lung capacity (TLC), and ventilation heterogeneity in acinar and conductive lung zones. Deterioration in acinar ventilation heterogeneity and lung clearance index with age was more marked beyond 60 years, and conductive ventilation heterogeneity showed the greatest increase in variability with age. The most clinically relevant deviation from published reference values concerned RV/TLC values, which were considerably smaller than American Thoracic Society/European Respiratory Society-endorsed reference values.
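The average z-scores quoted above express each measured index relative to its reference prediction, in units of the reference between-subject standard deviation. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def z_score(measured, predicted, sd):
    """Standardized deviation of a measurement from its reference prediction."""
    return (measured - predicted) / sd

# e.g. an FEV1 of 3.45 L against a hypothetical prediction of 3.60 L (SD 0.45 L)
print(round(z_score(3.45, 3.60, 0.45), 2))  # prints -0.33
```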

  16. Cost averaging techniques for robust control of flexible structural systems

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  17. Average American 15 Pounds Heavier Than 20 Years Ago

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_160233.html Average American 15 Pounds Heavier Than 20 Years Ago ... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...

  18. A Global Climate Model for Instruction.

    ERIC Educational Resources Information Center

    Burt, James E.

    This paper describes a simple global climate model useful in a freshman or sophomore level course in climatology. There are three parts to the paper. The first part describes the model, which is a global model of surface air temperature averaged over latitude and longitude. Samples of the types of calculations performed in the model are provided.…

  19. Panwapa: Global Kids, Global Connections

    ERIC Educational Resources Information Center

    Berson, Ilene R.; Berson, Michael J.

    2009-01-01

    Panwapa, created by the Sesame Street Workshop of PBS, is an example of an initiative on the Internet designed to enhance students' learning by exposing them to global communities. Panwapa means "Here on Earth" in Tshiluba, a Bantu language spoken in the Democratic Republic of Congo. At the Panwapa website, www.panwapa.org, children aged four to…

  20. Disc Volume Reduction with Percutaneous Nucleoplasty in an Animal Model

    PubMed Central

    Kasch, Richard; Mensel, Birger; Schmidt, Florian; Ruetten, Sebastian; Barz, Thomas; Froehlich, Susanne; Seipel, Rebecca; Merk, Harry R.; Kayser, Ralph

    2012-01-01

    Study Design We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). Purpose To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. Methods We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. Results The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). Conclusions Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs. PMID:23209677

  1. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    SciTech Connect

    Neau, F.L.

    1994-12-31

    Short-pulse accelerator technology developed from the early 1960s through the late 1980s is now being extended to high average power systems capable of being used in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput may require systems with beam power levels from several hundred kilowatts to megawatts. Processes may include chemical waste mitigation, flue gas cleanup, food pasteurization, and new forms of materials preparation and treatment. This paper will address the present status of high average power systems now in operation that use combinations of semiconductor and saturable-core magnetic switches with inductive voltage adders to achieve MeV beams of electrons or x-rays over areas of 10,000 cm² or more. Similar high average power technology is also being used below 1 MeV to drive repetitive ion beam sources for treatment of material surfaces.

  2. Navajo History. Volume 1.

    ERIC Educational Resources Information Center

    Yazzie, Ethelou, Ed.

    This volume, an account of the prerecorded history of the Navajos, is the first of a series of two volumes. (Volume 2 will take up recorded history.) From the knowledge of verbal literature supplied by Navajos themselves, this composite was completed to help alleviate the lack of materials on Navajo culture. Consensus, the authors point out, was…

  3. Variable volume maser techniques

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.

    1977-01-01

    The frequency stability of hydrogen masers in variable volume storage bulbs is discussed in terms of wall shift. Variable volume devices discussed include: Brenner flexible bulb, Debely device, and the concertina hydrogen maser. A flexible cone variable volume element outside the cavity is described.

  4. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  5. 20 CFR 404.220 - Average-monthly-wage method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  6. 27 CFR 19.37 - Average effective tax rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  7. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  8. 7 CFR 1410.44 - Average adjusted gross income.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  9. 34 CFR 668.196 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  10. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  11. 34 CFR 668.215 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  12. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  13. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  14. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Computation of average terrain elevation. 1.959... of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30 second point or...

  15. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  16. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959... of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30 second point or...

  17. 78 FR 16711 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...

  18. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an inmate in a Community Corrections...

  19. 76 FR 57081 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...

  20. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  1. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  2. Going Global

    ERIC Educational Resources Information Center

    Boulard, Garry

    2010-01-01

    In a move to increase its out-of-state and international student enrollment, officials at the University of Iowa are stepping up their global recruitment efforts--even in the face of criticism that the school may be losing sight of its mission. The goal is to increase enrollment across the board, with both in-state as well as out-of-state and…

  3. Global Arrays

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  4. Size and average density spectra of macromolecules obtained from hydrodynamic data.

    PubMed

    Pavlov, G M

    2007-02-01

    It is proposed to normalize the Mark-Kuhn-Houwink-Sakurada type of equation relating the hydrodynamic characteristics, such as intrinsic viscosity, velocity sedimentation coefficient and translational diffusion coefficient of linear macromolecules to their molecular masses for the values of linear density M(L) and the statistical segment length A. When the set of data covering virtually all known experimental information is normalized for M(L), it is presented as a size spectrum of linear polymer molecules. Further normalization for the A value reduces all data to two regions: namely the region exhibiting volume interactions and that showing hydrodynamic draining. For chains without intrachain excluded volume effects these results may be reproduced using the Yamakawa-Fujii theory of wormlike cylinders. Data analyzed here cover a range of contour lengths of linear chains varying by three orders of magnitude, with the range of statistical segment lengths varying approximately 500 times. The plot of the dependence of [eta]M on M represents the spectrum of average specific volumes occupied by linear and branched macromolecules. Dendrimers and globular proteins, for which the volume occupied by the molecule in solution is directly proportional to M, have the lowest specific volume. The homologous series of macromolecules in these plots are arranged following their fractal dimensionality. PMID:17377754

  5. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
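A toy version of the idea, under simplifying assumptions (a small dense matrix, and the consensus round replaced by an exact network-wide mean), might look like the following; it is a generic consensus-flavored power iteration, not the paper's exact GPM.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
R = A @ A.T                  # symmetric PSD stand-in for the sample covariance
n = R.shape[0]

v = np.ones(n) / np.sqrt(n)
for _ in range(100):
    # local step: node k computes its own component w_k = r_k . v
    w = np.array([R[k] @ v for k in range(n)])
    # global step: ||w||^2 = n * mean_k(w_k^2); the mean is exactly what one
    # round of average consensus would return to every node
    norm = np.sqrt(n * np.mean(w ** 2))
    v = w / norm

eig_est = float(v @ R @ v)   # Rayleigh quotient at the converged vector
print(round(eig_est, 4))
```

The point of the rewrite is that the only global quantities needed per iteration are network-wide averages, which is precisely what an average consensus protocol supplies.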

  6. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... 40 Protection of Environment 8 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic... convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...

  7. Depressive Symptoms, Brain Volumes and Subclinical Cerebrovascular Disease in Postmenopausal Women: The Women’s Health Initiative MRI Study

    PubMed Central

    Goveas, Joseph S.; Espeland, Mark A.; Hogan, Patricia; Dotson, Vonetta; Tarima, Sergey; Coker, Laura H.; Ockene, Judith; Brunner, Robert; Woods, Nancy F.; Wassertheil-Smoller, Sylvia; Kotchen, Jane M.; Resnick, Susan

    2011-01-01

    Objective Late-life depressive symptoms (DS) increase the risk of incident mild cognitive impairment and probable dementia in the elderly. Our objectives were to examine the relationship between elevated DS and regional brain volumes including frontal lobe subregions, hippocampus and amygdala, and to determine whether elevated DS were associated with increased subclinical cerebrovascular disease in postmenopausal women. Methods DS were assessed an average of 8 years prior to structural brain MRI in 1372 women. The 8-item Burnam regression algorithm was used to define DS with a cut-point of 0.009. Adjusting for potential confounders, mean differences in total brain, frontal lobe subregions, hippocampus and amygdala volumes and total ischemic lesion volumes in the basal ganglia and the cerebral white and gray matter outside the basal ganglia were compared between women with and without DS. Results Depressed women had lower baseline global cognition and were more likely to have prior hormone therapy history. After full adjustment, DS at baseline were associated with smaller superior and middle frontal gyral volumes. Hippocampal and amygdala volumes, and ischemic lesion volumes were similar in depressed and non-depressed women. Limitations Depression was not assessed by semi-structured interview, and we were unable to determine the temporal relationships between DS and frontal lobe volume differences due to the availability of only one MRI scan. Conclusions Elevated DS were associated with lower volumes in certain frontal lobe subregions but not in the medial temporal lobe structures. Our findings support the role of frontal lobe structures in late-life DS among women. PMID:21349587

  8. Global sea level rise

    SciTech Connect

    Douglas, B.C.

    1991-04-15

    Published values for the long-term, global mean sea level rise determined from tide gauge records exhibit considerable scatter, from about 1 mm to 3 mm/yr. This disparity is not attributable to instrument error; long-term trends computed at adjacent sites often agree to within a few tenths of a millimeter per year. Instead, the differing estimates of global sea level rise appear to be in large part due to authors' using data from gauges located at convergent tectonic plate boundaries, where changes of land elevation give fictitious sea level trends. In addition, virtually all gauges undergo subsidence or uplift due to postglacial rebound (PGR) from the last deglaciation at a rate comparable to or greater than the secular rise of sea level. Modeling PGR by the ICE-3G model of Tushingham and Peltier (1991) and avoiding tide gauge records in areas of converging tectonic plates produces a highly consistent set of long sea level records. The value for mean sea level rise obtained from a global set of 21 such stations in nine oceanic regions with an average record length of 76 years during the period 1880-1980 is 1.8 ± 0.1 mm/yr. This result provides confidence that carefully selected long tide gauge records measure the same underlying trend of sea level and that many old tide gauge records are of very high quality.
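The secular trend quoted above is, in essence, a least-squares slope fitted to a long annual-mean gauge record. A sketch with synthetic data (the 1.8 mm/yr slope and the noise level below are illustrative, not the study's station records):

```python
import random

def trend_mm_per_yr(years, levels_mm):
    """Ordinary least-squares slope of sea level (mm) on year."""
    n = len(years)
    my = sum(years) / n
    ml = sum(levels_mm) / n
    num = sum((y - my) * (l - ml) for y, l in zip(years, levels_mm))
    den = sum((y - my) ** 2 for y in years)
    return num / den

random.seed(1)
years = list(range(1880, 1981))
# synthetic gauge record: 1.8 mm/yr secular rise plus interannual noise
levels = [1.8 * (y - 1880) + random.gauss(0, 20) for y in years]
print(round(trend_mm_per_yr(years, levels), 2))
```

With a century-long record, even sizeable interannual noise leaves the fitted slope close to the underlying trend, which is why record length matters so much in the paper's station selection.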

  9. Global protected area impacts.

    PubMed

    Joppa, Lucas N; Pfaff, Alexander

    2011-06-01

    Protected areas (PAs) dominate conservation efforts. They will probably play a role in future climate policies too, as global payments may reward local reductions of loss of natural land cover. We estimate the impact of PAs on natural land cover within each of 147 countries by comparing outcomes inside PAs with outcomes outside. We use 'matching' (or 'apples to apples') for land characteristics to control for the fact that PAs very often are non-randomly distributed across their national landscapes. Protection tends towards land that, if unprotected, is less likely than average to be cleared. For 75 per cent of countries, we find protection does reduce conversion of natural land cover. However, for approximately 80 per cent of countries, our global results also confirm (following smaller-scale studies) that controlling for land characteristics reduces estimated impact by half or more. This shows the importance of controlling for at least a few key land characteristics. Further, we show that impacts vary considerably within a country (i.e. across a landscape): protection achieves less on lands far from roads, far from cities and on steeper slopes. Thus, while planners are, of course, constrained by other conservation priorities and costs, they could target higher impacts to earn more global payments for reduced deforestation.
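The "apples to apples" matching described above can be sketched as nearest-neighbour matching on land characteristics: each protected parcel is paired with the most similar unprotected parcel, and the impact estimate is the mean outcome difference over matched pairs. The parcels and covariates below are invented for illustration, not the study's data.

```python
def match_impact(protected, unprotected):
    """protected/unprotected: lists of (covariates, cleared_flag) records.
    Returns the mean difference in clearing between each protected parcel
    and its nearest unprotected match (negative = protection reduces loss)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    diffs = []
    for cov_p, out_p in protected:
        _, out_u = min(unprotected, key=lambda r: dist(cov_p, r[0]))
        diffs.append(out_p - out_u)
    return sum(diffs) / len(diffs)

# toy parcels: covariates = (slope in degrees, km to nearest road)
prot = [((30, 12), 0), ((25, 9), 0), ((5, 2), 0)]
unprot = [((28, 11), 0), ((24, 10), 1), ((6, 2), 1), ((4, 1), 1)]
print(match_impact(prot, unprot))
```

Comparing this matched estimate with a naive inside-versus-outside difference is exactly the kind of correction the paper reports as halving the estimated impact in most countries.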

  10. Australopithecine endocast (Taung specimen, 1924): a new volume determination.

    PubMed

    Holloway, R L

    1970-05-22

    A redetermination of endocranial volume of the original 1924 Taung australopithecine described by Dart indicates a volume of 405 cubic centimeters, rather than the 525 cubic centimeters published earlier. The adult volume is estimated to have been 440 cubic centimeters. This value, plus other redeterminations of australopithecine endocasts, lowers the average to 442 cubic centimeters and increases the likelihood of statistically significant differences from both robust australopithecines and the Olduvai Gorge hominid No. 7. PMID:5441027

  11. Global Arrays

    SciTech Connect

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav; Palmer, Bruce J.

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, to create a set of tools that combine high performance with ease-of-use. GA exposes a relatively straightforward programming abstraction, while supporting fully-distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990’s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.

  12. Revision of the Branch Technical Position on Concentration Averaging and Encapsulation - 12510

    SciTech Connect

    Heath, Maurice; Kennedy, James E.; Ridge, Christianne; Lowman, Donald; Cochran, John

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) regulation governing low-level waste (LLW) disposal, 'Licensing Requirements for Land Disposal of Radioactive Waste', 10 CFR Part 61, establishes a waste classification system based on the concentration of specific radionuclides contained in the waste. The regulation also states, at 10 CFR 61.55(a)(8), that 'the concentration of a radionuclide (in waste) may be averaged over the volume of the waste, or weight of the waste if the units are expressed as nanocuries per gram'. The NRC's Branch Technical Position on Concentration Averaging and Encapsulation provides guidance on averaging radionuclide concentrations in waste under 10 CFR 61.55(a)(8) when classifying waste for disposal. In 2007, the NRC staff proposed to revise the Branch Technical Position on Concentration Averaging and Encapsulation, an NRC guidance document for averaging and classifying wastes under 10 CFR 61. It is used by nuclear power plant (NPP) licensees and sealed source users, among others. In addition, three of the four U.S. LLW disposal facility operators are required to honor the Branch Technical Position on Concentration Averaging and Encapsulation as a licensing condition. In 2010, the Commission directed the staff to develop guidance regarding large-scale blending of similar homogeneous waste types, as described in SECY-10-0043, as part of its Branch Technical Position on Concentration Averaging and Encapsulation revision. The Commission is improving the regulatory approach used in the Branch Technical Position on Concentration Averaging and Encapsulation by making it more risk-informed and performance-based, which is more consistent with the agency's regulatory policies.
Among the improvements to the Branch Technical Position on Concentration Averaging and Encapsulation

  13. Infinite-time average of local fields in an integrable quantum field theory after a quantum quench.

    PubMed

    Mussardo, G

    2013-09-01

    The infinite-time averages of the expectation values of local fields of any interacting quantum theory after a global quench process are key quantities for matching theoretical and experimental results. For quantum integrable field theories, we show that they can be obtained by an ensemble average that employs a particular limit of the form factors of local fields and quantities extracted by the generalized Bethe ansatz.

  14. MODEL AVERAGING BASED ON KULLBACK-LEIBLER DISTANCE

    PubMed Central

    Zhang, Xinyu; Zou, Guohua; Carroll, Raymond J.

    2016-01-01

    This paper proposes a model averaging method based on Kullback-Leibler distance under a homoscedastic normal error term. The resulting model average estimator is proved to be asymptotically optimal. When combining least squares estimators, the model average estimator is shown to have the same large sample properties as the Mallows model average (MMA) estimator developed by Hansen (2007). We show via simulations that, in terms of mean squared prediction error and mean squared parameter estimation error, the proposed model average estimator is more efficient than the MMA estimator and the estimator based on model selection using the corrected Akaike information criterion in small sample situations. A modified version of the new model average estimator is further suggested for the case of heteroscedastic random errors. The method is applied to a data set from the Hong Kong real estate market.
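The Mallows model average (MMA), used above as the benchmark, can be sketched in a few lines. The following is an illustrative toy (synthetic data, two nested least-squares candidate models, a simple grid search over the weight; Hansen's estimator solves a quadratic program over many models), not the paper's Kullback-Leibler-based estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the second regressor matters only weakly.
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=n)

def ols_fitted(Xm, y):
    """Fitted values from an ordinary least-squares regression."""
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    return Xm @ beta

# Candidate models: intercept + x1, and intercept + x1 + x2.
X1 = np.column_stack([np.ones(n), X[:, 0]])
X2 = np.column_stack([np.ones(n), X])
fits = [ols_fitted(X1, y), ols_fitted(X2, y)]
ks = [X1.shape[1], X2.shape[1]]

# Error variance estimated from the largest candidate model.
sigma2 = np.sum((y - fits[1]) ** 2) / (n - ks[1])

# Mallows criterion over a weight grid: w on model 1, 1 - w on model 2.
grid = np.linspace(0.0, 1.0, 101)
crit = []
for w in grid:
    avg_fit = w * fits[0] + (1 - w) * fits[1]
    k_bar = w * ks[0] + (1 - w) * ks[1]      # effective parameter count
    crit.append(np.sum((y - avg_fit) ** 2) + 2 * sigma2 * k_bar)
w_star = grid[int(np.argmin(crit))]
print("weight on the smaller model:", w_star)
```

The criterion trades squared error against an effective model size, so the minimizing weight shrinks toward the smaller model when the extra regressor contributes little.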

  15. Random time averaged diffusivities for Lévy walks

    NASA Astrophysics Data System (ADS)

    Froemberg, D.; Barkai, E.

    2013-07-01

    We investigate a Lévy walk alternating between the velocities ±v0. The sojourn-time probability distribution at large times is a power law lacking either its mean or its second moment. The first case corresponds to a ballistic regime where the ensemble-averaged mean squared displacement (MSD) at large times is ⟨x2⟩ ∝ t2; the latter corresponds to enhanced diffusion with ⟨x2⟩ ∝ tν, 1 < ν < 2. The correlation function and the time-averaged MSD are calculated. In the ballistic case, the deviations of the time-averaged MSD from purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced-diffusion regime, the fluctuations of the time-averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time-averaged and ensemble-averaged MSDs.
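The objects in this abstract are straightforward to reproduce numerically. A minimal sketch, with all parameter values illustrative: simulate a Lévy walk alternating between ±v0 with Pareto-distributed sojourn times, then compute the time-averaged MSD of individual trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_walk(total_time, v0=1.0, alpha=1.5):
    """Sample one Levy-walk trajectory on a unit time grid.

    Sojourn times are Pareto distributed with tail exponent alpha;
    for 1 < alpha < 2 the mean is finite but the variance diverges,
    corresponding to the enhanced-diffusion regime.
    """
    t_grid = np.arange(total_time)
    x = np.zeros(total_time)
    t, pos = 0.0, 0.0
    v = v0 if rng.random() < 0.5 else -v0
    idx = 0
    while t < total_time:
        tau = rng.pareto(alpha) + 1.0      # sojourn time, tau >= 1
        t_end = t + tau
        while idx < total_time and t_grid[idx] < t_end:
            x[idx] = pos + v * (t_grid[idx] - t)
            idx += 1
        pos += v * tau
        v = -v                             # velocity alternates between +/- v0
        t = t_end
    return x

def time_averaged_msd(x, lag):
    """Time average of [x(t + lag) - x(t)]^2 along a single trajectory."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d ** 2))

# Scatter of the time-averaged MSD across independent trajectories:
paths = [levy_walk(5_000) for _ in range(100)]
ta_msd = np.array([time_averaged_msd(x, lag=100) for x in paths])
print(ta_msd.mean(), ta_msd.std())
```

Comparing the trajectory-to-trajectory scatter of `ta_msd` at increasing measurement times is the kind of diagnostic the abstract's fluctuation statements refer to.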

  16. Deep water temperature, carbonate ion, and ice volume changes across the Eocene-Oligocene climate transition

    NASA Astrophysics Data System (ADS)

    Pusz, A. E.; Thunell, R. C.; Miller, K. G.

    2011-06-01

    Paired benthic foraminiferal stable isotope and Mg/Ca data are used to estimate bottom water temperature (BWT) and ice volume changes associated with the Eocene-Oligocene Transition (EOT), the largest global climate event of the past 50 Myr. We utilized ODP Sites 1090 and 1265 in the South Atlantic to assess seawater δ18O (δw), Antarctic ice volume, and sea level changes across the EOT (˜33.8-33.54 Ma). We also use benthic δ13C data to reconstruct the sources of the deep water masses in this region during the EOT. Our data, together with previously published records, indicate that a pulse of Northern Component Water influenced the South Atlantic immediately prior to and following the EOT. Benthic δ18O records show a 0.5‰ increase at ˜33.8 Ma (EOT-1) that represents a ˜2°C cooling and a small (˜10 m) eustatic fall that is followed by a 1.0‰ increase associated with Oi-1. The expected cooling of deep waters at Oi-1 (˜33.54 Ma) is not apparent in our Mg/Ca records. We suggest the cooling is masked by coeval changes in the carbonate saturation state (Δ[CO32-]) which affect the Mg/Ca data. To account for this, the BWT, ice volume, and δw estimates are corrected for a change in the Δ[CO32-] of deep waters on the basis of recently published work. Corrected BWT at Sites 1090 and 1265 show a ˜1.5°C cooling coincident with Oi-1 and an average δw increase of ˜0.75‰. The increase in ice volume during Oi-1 resulted in a ˜70 m drop in global sea level and the development of an Antarctic ice sheet that was near modern size or slightly larger.

  17. Fiber-optic large area average temperature sensor

    SciTech Connect

    Looney, L.L.; Forman, P.R.

    1994-05-01

    In many instances the desired temperature measurement is only the spatial average over a large area; e.g., ground-truth calibration for a satellite imaging system, or the average temperature of a farm field. By making an accurate measurement of the optical length of a long fiber-optic cable, we can determine the absolute temperature averaged over its length and hence the temperature of the material in contact with it.
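The conversion behind such a sensor is linear: if the fiber's optical path length responds linearly to its length-averaged temperature, that average follows directly from one length measurement. A minimal sketch; the sensitivity value and calibration numbers are illustrative, not from the abstract:

```python
def average_temperature(opl_measured, opl_ref, t_ref, sensitivity):
    """Length-averaged fiber temperature from its optical path length (OPL).

    Assumes the OPL varies linearly with the length-averaged temperature:
        OPL(T) = opl_ref * (1 + sensitivity * (T - t_ref))
    where sensitivity lumps thermal expansion and the thermo-optic effect
    (order 1e-5 per kelvin for silica fiber; treat it as a calibration input).
    """
    return t_ref + (opl_measured / opl_ref - 1.0) / sensitivity

# 1 km of fiber calibrated at 20 C with sensitivity 1e-5 / K:
# a 10 cm increase in OPL corresponds to a 10 K rise in average temperature.
print(average_temperature(1000.10, 1000.00, 20.0, 1e-5))
```

Because the OPL integrates the refractive index along the whole fiber, hot and cold spots average out exactly as the abstract describes.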

  18. Global Hail Model

    NASA Astrophysics Data System (ADS)

    Werner, A.; Sanderson, M.; Hand, W.; Blyth, A.; Groenemeijer, P.; Kunz, M.; Puskeiler, M.; Saville, G.; Michel, G.

    2012-04-01

    Hail risk models are rare in the insurance industry, despite the fact that average annual hail losses can be large and that hail dominates losses for many motor portfolios worldwide. Insufficient observational data, high spatio-temporal variability, and data inhomogeneity have so far hindered the creation of credible models. In January 2012, a selected group of hail experts met at Willis in London to discuss ways to model hail risk at various scales. Discussions aimed at improving our understanding of hail occurrence and severity, and covered recent progress in the understanding of microphysical processes, climatological behaviour, and hail vulnerability. The outcome of the meeting was the formation of a global hail risk model initiative and the launch of a realistic global hail model to assess hail loss occurrence and severity across the globe. The following projects will be tackled: Microphysics of hail and hail severity measures: understand the physical drivers of hail and hailstone size development in different regions of the globe; proposed factors include updraft and supercooled liquid water content in the troposphere. What are the threshold drivers of hail formation around the globe? Hail climatology: consider ways to build a realistic global climatological set of hail events based on physical parameters, including spatial variations in the total availability of moisture and aerosols, among others, and using neural networks. Vulnerability, exposure, and financial model: use historical losses and event footprints available in the insurance market to approximate fragility distributions and damage potential for various hail sizes for property, motor, and agricultural business; propagate uncertainty distributions and consider effects of policy conditions along with aggregating and disaggregating exposure and losses. 
This presentation provides an overview of ideas and tasks that lead towards a comprehensive global understanding of hail risk for

  19. Automatic volume calibration system

    SciTech Connect

    Gates, A.J.; Aaron, C.C.

    1985-05-06

    The Automatic Volume Calibration System presently consists of three independent volume-measurement subsystems and can be expanded to five. When completed, the system will manually or automatically perform the sequence of valve-control and data-acquisition operations required to measure given volumes. An LSI-11 minicomputer controls the vacuum and pressure sources and operates solenoid valves to open and close various volumes. The input data are obtained from numerous displacement, temperature, and pressure sensors read by the LSI-11. The LSI-11 calculates the unknown volume from the data acquired during the sequence of valve operations. The results, based on the ideal gas law, also provide information for feedback and control. This paper describes the volume calibration system, its subsystems, and the integration of the various instrumentation used in the system's design and development. 11 refs., 13 figs., 4 tabs.

  20. Averaging of viral envelope glycoprotein spikes from electron cryotomography reconstructions using Jsubtomo.

    PubMed

    Huiskonen, Juha T; Parsy, Marie-Laure; Li, Sai; Bitto, David; Renner, Max; Bowden, Thomas A

    2014-01-01

    Enveloped viruses utilize membrane glycoproteins on their surface to mediate entry into host cells. Three-dimensional structural analysis of these glycoprotein 'spikes' is often technically challenging but important for understanding viral pathogenesis and in drug design. Here, a protocol is presented for viral spike structure determination through computational averaging of electron cryo-tomography data. Electron cryo-tomography is a technique in electron microscopy used to derive three-dimensional tomographic volume reconstructions, or tomograms, of pleomorphic biological specimens such as membrane viruses in a near-native, frozen-hydrated state. These tomograms reveal structures of interest in three dimensions, albeit at low resolution. Computational averaging of sub-volumes, or sub-tomograms, is necessary to obtain higher resolution detail of repeating structural motifs, such as viral glycoprotein spikes. A detailed computational approach for aligning and averaging sub-tomograms using the Jsubtomo software package is outlined. This approach enables visualization of the structure of viral glycoprotein spikes to a resolution in the range of 20-40 Å and study of higher-order spike-to-spike interactions on the virion membrane. Typical results are presented for Bunyamwera virus, an enveloped virus from the family Bunyaviridae. This family is a structurally diverse group of pathogens posing a threat to human and animal health. PMID:25350719

  2. Global teaching of global seismology

    NASA Astrophysics Data System (ADS)

    Stein, S.; Wysession, M.

    2005-12-01

    Our recent textbook, Introduction to Seismology, Earthquakes, & Earth Structure (Blackwell, 2003), is used in many countries. Part of the reason for this may be our deliberate attempt to write the book for an international audience. This effort appears in several ways. We stress seismology's long tradition of global data interchange. Our brief discussions of the science's history illustrate the contributions of scientists around the world. Perhaps most importantly, our discussions of earthquakes, tectonics, and seismic hazards take a global view. Many examples are from North America; many others are drawn from elsewhere. Our view is that non-North American students should be exposed to North American examples that are type examples, and that North American students should be similarly exposed to examples elsewhere. For example, we illustrate how the Euler vector geometry changes a plate boundary from spreading, to strike-slip, to convergence using both the Pacific-North America boundary from the Gulf of California to Alaska and the Eurasia-Africa boundary from the Azores to the Mediterranean. We illustrate diffuse plate boundary zones using western North America, the Andes, the Himalayas, the Mediterranean, and the East Africa Rift. The subduction zone discussions examine Japan, Tonga, and Chile. We discuss significant earthquakes both in the U.S. and elsewhere, and explore hazard mitigation issues in different contexts. Both comments from foreign colleagues and our experience lecturing overseas indicate that this approach works well. Beyond the specifics of our text, we believe that such a global approach is facilitated by the international traditions of the earth sciences and by the world youth culture that gives students worldwide a common culture. For example, a video of the scene in New Madrid, Missouri that arose from a nonsensical earthquake prediction in 1990 elicits similar responses from American and European students.

  3. Thermodynamic properties of average-atom interatomic potentials for alloys

    NASA Astrophysics Data System (ADS)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method (EAM) interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential, and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine whether it can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average-atom and true random alloy are found in the zero-temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.
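The thermodynamic integration step mentioned in the abstract can be illustrated in miniature. The sketch below applies the standard identity ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ to a toy one-dimensional switch between two harmonic potentials, sampled with a simple Metropolis scheme. Nothing here reproduces the paper's EAM setup; the analytic result serves only as a self-check:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D system: U_lambda(x) = (1 - lam) * 0.5*k_a*x^2 + lam * 0.5*k_b*x^2
k_a, k_b, beta = 1.0, 4.0, 1.0

def sample_dU(lam, n_steps=20_000):
    """Metropolis estimate of <dU/dlambda> = <0.5*(k_b - k_a)*x^2> at fixed lambda."""
    k_eff = (1.0 - lam) * k_a + lam * k_b
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=0.5)
        dU = 0.5 * k_eff * (x_new**2 - x**2)
        if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
            x = x_new
        samples.append(0.5 * (k_b - k_a) * x**2)
    return float(np.mean(samples[n_steps // 2:]))   # discard burn-in half

lams = np.linspace(0.0, 1.0, 9)
means = np.array([sample_dU(lam) for lam in lams])
# Trapezoid rule for the integral over lambda:
dF = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams)))
exact = 0.5 / beta * np.log(k_b / k_a)              # analytic check: 0.5*ln 4
print(dF, exact)
```

The same pattern (a coupling parameter interpolating between two Hamiltonians, with the derivative averaged at fixed coupling) underlies free-energy comparisons between the average-atom and true random alloy systems.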

  4. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique for estimating the effective wavefront aberration after scanning. Our averaging method is tested using specified wavefront aberrations, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.

  5. Front Matter: Volume 8454

    NASA Astrophysics Data System (ADS)

    SPIE, Proceedings of

    2012-05-01

    This PDF file contains the front matter associated with SPIE Proceedings Volume 8454, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.

  6. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
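The ratio method described here rests directly on the ideal gas law. A minimal sketch, assuming isothermal expansion of low-pressure helium from a known reference volume into an evacuated unknown volume (variable names and numbers are illustrative):

```python
def unknown_volume(v_ref, p_initial, p_final):
    """Infer an unknown volume by isothermal gas expansion.

    A reference volume v_ref is charged with helium to p_initial and then
    opened into the evacuated unknown volume; the gas redistributes so that,
    at constant temperature, the ideal gas law gives
        p_initial * v_ref = p_final * (v_ref + v_unknown).
    Pressures may be in any consistent units.
    """
    if not (0.0 < p_final < p_initial):
        raise ValueError("expansion requires 0 < p_final < p_initial")
    return v_ref * (p_initial - p_final) / p_final

# Example: a 1.000 L reference whose pressure drops from 100.0 to 40.0 kPa
# implies an unknown volume of 1.5 L.
print(unknown_volume(1.000, 100.0, 40.0))
```

In practice the accuracy claim hinges on the temperature control and pressure-gauge precision the abstract emphasizes, since both enter the ratio directly.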

  7. Global Geomorphology

    NASA Technical Reports Server (NTRS)

    Douglas, I.

    1985-01-01

    Any global view of landforms must include an evaluation of the link between plate tectonics and geomorphology. To explain the broad features of the continents and ocean floors, a basic distinction between the tectogene and cratogene part of the Earth's surface must be made. The tectogene areas are those that are dominated by crustal movements, earthquakes and volcanicity at the present time and are essentially those of the great mountain belts and mid ocean ridges. Cratogene areas comprise the plate interiors, especially the old lands of Gondwanaland and Laurasia. Fundamental as this division between plate margin areas and plate interiors is, it cannot be said to be a simple case of a distinction between tectonically active and stable areas. Indeed, in terms of megageomorphology, former plate margins and tectonic activity up to 600 million years ago have to be considered.

  8. Global warming

    NASA Astrophysics Data System (ADS)

    Houghton, John

    2005-06-01

    'Global warming' is a phrase that refers to the effect on the climate of human activities, in particular the burning of fossil fuels (coal, oil and gas) and large-scale deforestation, which cause emissions to the atmosphere of large amounts of 'greenhouse gases', of which the most important is carbon dioxide. Such gases absorb infrared radiation emitted by the Earth's surface and act as blankets over the surface keeping it warmer than it would otherwise be. Associated with this warming are changes of climate. The basic science of the 'greenhouse effect' that leads to the warming is well understood. More detailed understanding relies on numerical models of the climate that integrate the basic dynamical and physical equations describing the complete climate system. Many of the likely characteristics of the resulting changes in climate (such as more frequent heat waves, increases in rainfall, increase in frequency and intensity of many extreme climate events) can be identified. Substantial uncertainties remain in knowledge of some of the feedbacks within the climate system (that affect the overall magnitude of change) and in much of the detail of likely regional change. Because of its negative impacts on human communities (including for instance substantial sea-level rise) and on ecosystems, global warming is the most important environmental problem the world faces. Adaptation to the inevitable impacts and mitigation to reduce their magnitude are both necessary. International action is being taken by the world's scientific and political communities. Because of the need for urgent action, the greatest challenge is to move rapidly to much increased energy efficiency and to non-fossil-fuel energy sources.

  9. Global gamesmanship.

    PubMed

    MacMillan, Ian C; van Putten, Alexander B; McGrath, Rita Gunther

    2003-05-01

    Competition among multinationals these days is likely to be a three-dimensional game of global chess: The moves an organization makes in one market are designed to achieve goals in another in ways that aren't immediately apparent to its rivals. The authors, all management professors, call this approach "competing under strategic interdependence," or CSI. And where this interdependence exists, the complexity of the situation can quickly overwhelm ordinary analysis. Indeed, most business strategists are terrible at anticipating the consequences of interdependent choices, and they're even worse at using interdependency to their advantage. In this article, the authors offer a process for mapping the competitive landscape and anticipating how your company's moves in one market can influence its competitive interactions in others. They outline the six types of CSI campaigns, onslaughts, contests, guerrilla campaigns, feints, gambits, and harvesting, available to any multiproduct or multimarket corporation that wants to compete skillfully. They cite real-world examples such as the U.S. pricing battle Philip Morris waged with R.J. Reynolds, fought not to gain market share in the domestic cigarette market but to divert R.J. Reynolds's resources and attention from the opportunities Philip Morris was pursuing in Eastern Europe. And, using data they collected from their studies of consumer-products companies Procter & Gamble and Unilever, the authors describe how to create CSI tables and bubble charts that present a graphical look at the competitive landscape and that may uncover previously hidden opportunities. The CSI mapping process isn't just for global corporations, the authors explain. Smaller organizations that compete with a portfolio of products in just one national or regional market may find it just as useful for planning their next business moves.

  10. Rapid growth in agricultural trade: effects on global area efficiency and the role of management

    NASA Astrophysics Data System (ADS)

    Kastner, Thomas; Erb, Karl-Heinz; Haberl, Helmut

    2014-03-01

    Cropland is crucial for supplying humans with biomass products, above all, food. Globalization has led to soaring volumes of international trade, resulting in strongly increasing distances between the locations where land use takes place and where the products are consumed. Based on a dataset that allows tracing the flows of almost 450 crop and livestock products and consistently allocating them to cropland areas in over 200 nations, we analyze this rapidly growing spatial disconnect between production and consumption for the period from 1986 to 2009. At the global level, land for export production grew rapidly (by about 100 Mha), while land supplying crops for direct domestic use remained virtually unchanged. We show that international trade on average flows from high-yield to low-yield regions: compared to a hypothetical no-trade counterfactual that assumes equal consumption and yield levels, trade lowered global cropland demand by almost 90 Mha in 2008 (3-year mean). An analysis using yield gap data (which quantify the distance of prevailing yields to those attainable through the best currently available production techniques) revealed that differences in land management and in natural endowments contribute almost equally to the yield differences between exporting and importing nations. A comparison of the effect of yield differences between exporting and importing regions with the potential of closing yield gaps suggests that increasing yields holds greater potentials for reducing future cropland demand than increasing and adjusting trade volumes based on differences in current land productivity.

  11. Exploring Students' Conceptual Understanding of the Averaging Algorithm.

    ERIC Educational Resources Information Center

    Cai, Jinfa

    1998-01-01

    Examines 250 sixth-grade students' understanding of arithmetic average by assessing their understanding of the computational algorithm. Results indicate that the majority of the students knew the "add-them-all-up-and-divide" averaging algorithm, but only half of the students were able to correctly apply the algorithm to solve a…

  12. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  13. 7 CFR 701.17 - Average adjusted gross income limitation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  14. 27 CFR 19.613 - Average effective tax rate records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Average effective tax rate records. 19.613 Section 19.613 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Records and Reports Tax Records § 19.613 Average effective tax rate...

  15. Path-averaged differential meter of atmospheric turbulence parameters

    NASA Astrophysics Data System (ADS)

    Antoshkin, L. V.; Botygina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Lukin, V. P.

    2010-10-01

    A path-averaged differential meter of the structure constant of the atmospheric refractive index, C_n^2, has been developed and tested. The results of a model numerical experiment on measuring C_n^2 and the horizontal component of the average wind velocity transverse to the path are reported.

  16. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  17. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  18. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  19. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  20. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
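The snippet does not show the note's constructions themselves, but one standard way to obtain these means from a regression framework is to regress on a constant: ordinary least squares gives the arithmetic mean, OLS on log-transformed data gives the geometric mean, and weighted least squares with weights 1/y_i gives the harmonic mean. A sketch under those assumptions:

```python
import numpy as np

def reg_mean(y, weights=None):
    """Weighted least-squares fit of y on a constant regressor.

    The fitted coefficient is argmin_b sum_i w_i * (y_i - b)^2,
    which solves to the weighted average sum(w*y) / sum(w).
    """
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * y) / np.sum(w))

y = np.array([1.0, 2.0, 4.0, 8.0])

arithmetic = reg_mean(y)                     # plain OLS on a constant
geometric = np.exp(reg_mean(np.log(y)))      # OLS on log(y), then exponentiate
harmonic = reg_mean(y, weights=1.0 / y)      # WLS with weights 1/y_i

print(arithmetic, geometric, harmonic)
```

The weights-1/y case works because sum((1/y)*y) = n, so the WLS coefficient reduces to n / sum(1/y), the textbook harmonic mean.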

  1. 78 FR 49770 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...

  2. Hadley circulations for zonally averaged heating centered off the equator

    NASA Technical Reports Server (NTRS)

    Lindzen, Richard S.; Hou, Arthur Y.

    1988-01-01

    Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.

  3. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548... moisture content determination. (a) Determining average moisture content of the lot is not a requirement of... drawn composite sample. Official certification shall be based on the air-oven method or other...

  4. Interculturalism, Education and Dialogue. Global Studies in Education. Volume 13

    ERIC Educational Resources Information Center

    Besley, Tina, Ed.; Peters, Michael A., Ed.

    2012-01-01

    Intercultural dialogue is a concept and discourse that dates back to the 1980s. It is the major means for managing diversity and strengthening democracy within Europe and beyond. It has been adopted by the United Nations, UNESCO and the Council of Europe as the basis for interreligious and interfaith initiatives and has become increasingly…

  5. On various definitions of shadowing with average error in tracing

    NASA Astrophysics Data System (ADS)

    Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong

    2016-07-01

    When computing a trajectory of a dynamical system, the influence of noise can lead to large perturbations, which appear, however, only with small probability. When calculating approximate trajectories it therefore makes sense to consider errors that are small on average, since controlling them in each iteration may be impossible. The demand to relate approximate trajectories to genuine orbits leads to the various notions of shadowing (on average) that we consider in this paper. As the main tools in our studies we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center, which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity and (average) shadowing.

  6. Average cross-responses in correlated financial markets

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
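
The kind of average price response studied here can be sketched as the time average of signed log-price changes at a given lag. The estimator below is a common textbook form and an illustration only; the function name, the use of log-prices, and the sign convention are my assumptions, not necessarily the paper's exact definition:

```python
import math

def cross_response(prices_i, signs_j, tau):
    """Average response of stock i's log-price to stock j's trade signs at lag tau.

    R_ij(tau) = < sign_j(t) * (log p_i(t + tau) - log p_i(t)) >_t
    (signs are +1 for buyer-initiated, -1 for seller-initiated trades)
    """
    n = len(prices_i) - tau
    if n <= 0:
        raise ValueError("lag tau too large for the series length")
    total = 0.0
    for t in range(n):
        total += signs_j[t] * (math.log(prices_i[t + tau]) - math.log(prices_i[t]))
    return total / n
```

Averaging such responses over many stocks j for a fixed i (or over i for a fixed j) then yields the passive and active average cross-responses the abstract distinguishes.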

  7. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
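
The sampling effect described here can be illustrated with a toy calculation. Assuming, for illustration only, that the forcing scales roughly with aerosol optical depth (AOD), a sinusoidal ~20% diurnal cycle cancels out of a full 24-h average, while a few mid-morning samples bias the daily mean (all numbers below are synthetic, not TCAP data):

```python
import math

def daily_average(values):
    """Plain arithmetic mean over the supplied samples."""
    return sum(values) / len(values)

# Synthetic hourly aerosol optical depth (AOD) with a ~20% diurnal cycle.
aod_hourly = [0.15 * (1.0 + 0.2 * math.sin(2.0 * math.pi * h / 24.0))
              for h in range(24)]

full_day_avg = daily_average(aod_hourly)      # full diurnal sampling
sparse_avg = daily_average(aod_hourly[9:12])  # only mid-morning samples

relative_error = abs(sparse_avg - full_day_avg) / full_day_avg
```

With full sampling the sinusoid averages out and the daily mean recovers the baseline 0.15 exactly, while the three mid-morning samples overestimate it by roughly 10%, mirroring the paper's point that sparse temporal sampling can produce large errors in the computed forcing.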

  8. Some series of intuitionistic fuzzy interactive averaging aggregation operators.

    PubMed

    Garg, Harish

    2016-01-01

In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and then a new operational law, which takes the hesitation degree between the membership functions into account, is proposed to overcome them. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators is presented for selecting the best alternative. A comparative study between the proposed operators and the existing operators is carried out in detail. PMID:27441128
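
The flavor of such operators can be illustrated with the classical (non-Hamacher) intuitionistic fuzzy weighted averaging operator, IFWA; the interactive Hamacher variants proposed in the paper build on the same scheme with modified operational laws. A minimal sketch, with the function name and tuple representation being my assumptions:

```python
def ifwa(ifns, weights):
    """Classical intuitionistic fuzzy weighted averaging (IFWA) operator.

    Each element of `ifns` is an intuitionistic fuzzy number (mu, nu)
    with 0 <= mu + nu <= 1; `weights` are nonnegative and sum to 1.

    IFWA = (1 - prod (1 - mu_i)^w_i,  prod nu_i^w_i)
    """
    mu_term = 1.0
    nu_term = 1.0
    for (mu, nu), w in zip(ifns, weights):
        mu_term *= (1.0 - mu) ** w
        nu_term *= nu ** w
    return (1.0 - mu_term, nu_term)
```

Idempotency, one of the properties the abstract mentions, is easy to verify with this form: aggregating identical copies of an intuitionistic fuzzy number returns that number.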

  9. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

Averages taken over full-orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full-orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals, each segment roughly 24 hours in length. The variation of the full-orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full-orbit average for that segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.
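
The per-segment statistics described above (mean, peak-to-peak, and standard deviation of the full-orbit averages) can be sketched as follows; the function name and dictionary layout are my own choices:

```python
from statistics import mean, pstdev

def segment_statistics(orbit_averages):
    """Summary statistics for the full-orbit averages in one ~24-hour segment."""
    return {
        "mean": mean(orbit_averages),
        "peak_to_peak": max(orbit_averages) - min(orbit_averages),
        "std_dev": pstdev(orbit_averages),  # population standard deviation
    }
```

Applying this to each of the 23 segments and plotting the results against the segment start dates would reproduce the day-of-year analysis the abstract describes.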

  10. Gyrokinetic simulations of electrostatic microinstabilities with bounce-averaged kinetic electrons for shaped tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Qi, Lei; Kwon, Jaemin; Hahm, T. S.; Jo, Gahyung

    2016-06-01

Nonlinear bounce-averaged kinetic theory [B. H. Fong and T. S. Hahm, Phys. Plasmas 6, 188 (1999)] is used for magnetically trapped electron dynamics for the purpose of achieving efficient gyrokinetic simulations of Trapped Electron Mode (TEM) and Ion Temperature Gradient mode with trapped electrons (ITG-TEM) in shaped tokamak plasmas. The bounce-averaged kinetic equations are explicitly extended to shaped plasma equilibria from the previous ones for concentric circular plasmas, and implemented to a global nonlinear gyrokinetic code, Gyro-Kinetic Plasma Simulation Program (gKPSP) [J. M. Kwon et al., Nucl. Fusion 52, 013004 (2012)]. Verification of gKPSP with the bounce-averaged kinetic trapped electrons in shaped plasmas is successfully carried out for linear properties of the ITG-TEM mode and Rosenbluth-Hinton residual zonal flow [M. N. Rosenbluth and F. L. Hinton, Phys. Rev. Lett. 80, 724 (1998)]. Physics responsible for stabilizing effects of elongation on both ITG mode and TEM is identified using global gKPSP simulations. These can be understood in terms of magnetic flux expansion, leading to the effective temperature gradient R/L_T(1 − E′) [P. Angelino et al., Phys. Rev. Lett. 102, 195002 (2009)] and poloidal wavelength contraction at the low-field side, resulting in the effective poloidal wavenumber k_θρ_i/κ.

  11. Cognitive Capitalism: Economic Freedom Moderates the Effects of Intellectual and Average Classes on Economic Productivity.

    PubMed

    Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale

    2016-10-01

    Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. PMID:27458006

  12. Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.

    PubMed

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-03-01

Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of the global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor ⟨B⟩ and ⟨RMSF²⟩^(1/2). We show this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying the global structural diversity of macromolecules in crystals directly from x-ray experiments, and we demonstrate this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability.
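
The conversion chain can be sketched using the standard isotropic-harmonic interpretation of B-factors, B = (8π²/3)·RMSF², together with a homogeneous-ensemble relation ⟨RMSD²⟩ = 2⟨RMSF²⟩. This is a simplified illustration of the kind of relationship the derivation establishes, not the paper's exact result:

```python
import math

def rmsf_from_bfactor(b):
    """Isotropic RMSF (in Angstrom) from a B-factor (in Angstrom^2),
    using the standard crystallographic relation B = (8 pi^2 / 3) * RMSF^2."""
    return math.sqrt(3.0 * b / (8.0 * math.pi ** 2))

def ensemble_average_rmsd(mean_b):
    """Ensemble-average pairwise RMSD from a mean B-factor, using the
    homogeneous-ensemble relation <RMSD^2> = 2 <RMSF^2>.  This assumes
    conformational variability dominates the experimental B-factors."""
    return math.sqrt(2.0) * rmsf_from_bfactor(mean_b)
```

For typical protein B-factors of a few tens of Å², this yields pairwise backbone RMSD values of the order of 1 Å, consistent with the abstract's estimate.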

  14. Non-chain pulsed DF laser with an average power of the order of 100 W

    NASA Astrophysics Data System (ADS)

    Pan, Qikun; Xie, Jijiang; Wang, Chunrui; Shao, Chunlei; Shao, Mingzhen; Chen, Fei; Guo, Jin

    2016-07-01

The design and performance of a closed-cycle repetitively pulsed DF laser are described. The Fitch circuit and a thyratron switch are introduced to realize self-sustained volume discharge in SF6-D2 mixtures. The influences of gas parameters and charging voltage on the output characteristics of the non-chain pulsed DF laser are experimentally investigated. In order to improve the laser power stability over a long period of working time, zeolites with different apertures are used to scrub out the de-excitation particles produced in the electric discharge. An average output power of the order of 100 W was obtained at an operating repetition rate of 50 Hz, with an amplitude difference between laser pulses of <8 %. With micropore alkaline zeolites, the average power fell by 20 % after the laser had been working continuously for 100 s at a repetition rate of 50 Hz.

  15. Global trends

    NASA Technical Reports Server (NTRS)

    Megie, G.; Chanin, M.-L.; Ehhalt, D.; Fraser, P.; Frederick, J. F.; Gille, J. C.; Mccormick, M. P.; Schoebert, M.; Bishop, L.; Bojkov, R. D.

    1990-01-01

Measuring trends in ozone, and in most other geophysical variables, requires that a small systematic change with time be determined from signals that have large periodic and aperiodic variations. Their time scales range from the day-to-day changes due to atmospheric motions, through seasonal and annual variations, to 11-year cycles resulting from changes in the sun's UV output. Because the magnitude of all of these variations is not well known and highly variable, it is necessary to measure over more than one period of the variations to remove their effects; this means a record spanning at least two 11-year sunspot cycles. Thus, the first requirement is for a long-term data record. The second, related requirement is that the record be consistent. A third requirement is for reasonable global sampling, to ensure that the effects are representative of the entire Earth. The various observational methods relevant to trend detection are reviewed to characterize their quality and their coverage in time and space. Available data are then examined for long-term trends or recent changes in ozone total content and vertical distribution, as well as related parameters such as stratospheric temperature, source gases and aerosols.

  16. Global cooling?

    PubMed

    Damon, P E; Kunen, S M

    1976-08-01

The world's inhabitants, including scientists, live primarily in the Northern Hemisphere. It is quite natural to be concerned about events that occur close to home and to neglect faraway events; hence, it is not surprising that so little attention has been given to the Southern Hemisphere. Evidence for global cooling has been based, in large part, on a severe cooling trend at high northern latitudes. This article points out that the Northern Hemisphere cooling trend appears to be out of phase with a warming trend at high latitudes in the Southern Hemisphere. The data are scanty, and we cannot be sure that these temperature fluctuations are not the result of natural causes. Yet it seems most likely that human activity has already significantly perturbed the atmospheric weather system. The effect of particulate matter pollution should be most severe in the highly populated and industrialized Northern Hemisphere. Because of the rapid diffusion of CO(2) molecules within the atmosphere, both hemispheres will be subject to warming due to the atmospheric (greenhouse) effect as the CO(2) content of the atmosphere builds up from the combustion of fossil fuels. Because of the differential effects of the two major sources of atmospheric pollution, the CO(2) greenhouse-effect warming trend should first become evident in the Southern Hemisphere. The socioeconomic and political consequences of climate change are profound. We need an early warning system, such as would be provided by a more intensive international world weather watch, particularly at high northern and southern latitudes.

  17. The causal meaning of Fisher’s average effect

    PubMed Central

    LEE, JAMES J.; CHOW, CARSON C.

    2013-01-01

In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113

  18. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
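
A minimal sketch of the decimation idea, using uniform (boxcar) weights rather than the stored FIR coefficients the patent describes (function name is my own):

```python
def pre_average(samples, ratio):
    """Decimating pre-averager: collapse each block of `ratio` input samples
    into one output sample (their mean).  Output rate = input rate / ratio,
    so the predetection noise bandwidth drops by the same factor.
    Uniform (boxcar) weights; a weighted FIR variant would multiply each
    sample in the block by a stored coefficient before summing."""
    n_out = len(samples) // ratio
    return [sum(samples[i * ratio:(i + 1) * ratio]) / ratio
            for i in range(n_out)]
```

Because uncorrelated noise averages down as 1/sqrt(ratio) while a slowly varying signal is preserved, the detector downstream sees an improved signal-to-noise ratio and can run at the reduced output rate.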

  19. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    PubMed

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
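
The core complex-weighted average can be sketched as follows. Working on analytic-signal (complex) epochs, each epoch is rotated by minus the trigger channel's instantaneous phase at the trigger sample before averaging, so that activity phase-locked to the trigger adds coherently instead of cancelling as in stimulus-locked averaging. The function name and data layout are my assumptions:

```python
import cmath

def phase_compensated_average(epochs, trigger_phases):
    """Complex-weighted (phase-compensated) average of analytic-signal epochs.

    epochs: list of epochs, each a list of complex samples from one channel.
    trigger_phases: the trigger channel's instantaneous phase (radians) at
    the trigger sample of each epoch.  Rotating epoch k by exp(-1j * phi_k)
    sets that phase to 0 before the epochs are averaged.
    """
    n_samples = len(epochs[0])
    acc = [0j] * n_samples
    for epoch, phi in zip(epochs, trigger_phases):
        rotation = cmath.exp(-1j * phi)
        for k in range(n_samples):
            acc[k] += epoch[k] * rotation
    return [v / len(epochs) for v in acc]
```

In practice the analytic signal (and hence the instantaneous phase) would be obtained from the real EEG/MEG recording, e.g. via a Hilbert transform; here the complex epochs are assumed given.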

  20. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into appropriate averaging times and units? 60.1755 Section 60.1755 Protection of Environment... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my...