Science.gov

Sample records for global volume averaged

  1. A volume averaged global model for inductively coupled HBr/Ar plasma discharge

    NASA Astrophysics Data System (ADS)

    Chung, Sang-Young; Kwon, Deuk-Chul; Choi, Heechol; Song, Mi-Young

    2015-09-01

    A global model for inductively coupled HBr/Ar plasma was developed. The model is based on a self-consistent global model previously developed by Kwon et al., and a set of chemical reactions in the HBr/Ar plasma was compiled by surveying theoretical, experimental, and evaluative studies. In this model, vibrational excitations of diatomic molecules and electronic excitations of the hydrogen atom were taken into account. Neutralizations by collisions between positive and negative ions were treated with Hakman's approximate formula, obtained by fitting theoretical results. For reactions whose parameters were not available in the literature, the corresponding parameters of Cl2 and HCl were adopted for Br2 and HBr, respectively. For validation, results calculated with this model were compared with experimental results from the literature over a range of plasma discharge parameters and showed overall good agreement.

  2. A volume averaged global model study of the influence of the electron energy distribution and the wall material on an oxygen discharge

    NASA Astrophysics Data System (ADS)

    Toneli, D. A.; Pessoa, R. S.; Roberto, M.; Gudmundsson, J. T.

    2015-12-01

    A low pressure, high density oxygen discharge is studied through a global (volume averaged) model in the pressure range 0.5-100 mTorr. The goal of this work is to evaluate the dependence of the collisional energy loss per electron-ion pair created, the effective electron temperature, the mean density of species, and the mean electronegativity on the electron energy distribution function. Differences in the results for Maxwellian and non-Maxwellian distributions show the importance of using a proper electron energy distribution function in discharge modelling. We also explore the differences due to different reactor wall materials by comparing the results for an anodized aluminium reactor with a stainless steel reactor. Due to the low recombination coefficient for oxygen atoms on anodized aluminium walls, the yield of atomic oxygen in anodized aluminium reactors increases significantly as compared to stainless steel reactors. However, the difference in the yield of atomic oxygen between these reactors decreases as pressure increases. Thus, anodized aluminium reactors may be preferred for applications where a high concentration of atomic oxygen is required. Finally, the importance of the quenching coefficient for plasma modelling is stressed through the wall quenching coefficient for O2(b1Σg+). Low quenching coefficients result in high densities of O2(b1Σg+), which affect the mean electronegativity of the plasma through the decrease in the density of O2-.

  3. Global bioconversions. Volume 2

    SciTech Connect

    Wise, D.L.

    1987-01-01

    These volumes present the most active bioconversion-based research and development projects worldwide, with an emphasis on the important practical aspects of this work. A major focus of the text is the bioconversion of organic residues to useful products, which also encompasses the field of anaerobic methane fermentation. Chapters from an international perspective are also included, which further address the global importance of bioconversion.

  4. Global bioconversions. Volume 4

    SciTech Connect

    Wise, D.L.

    1987-01-01

    These volumes present the most active bioconversion-based research and development projects worldwide, with an emphasis on the important practical aspects of this work. A major focus of the text is the bioconversion of organic residues to useful products, which also encompasses the field of anaerobic methane fermentation. Chapters from an international perspective are also included, which further address the global importance of bioconversion.

  5. Climatology of globally averaged thermospheric mass density

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Picone, J. M.

    2010-09-01

    We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967-2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986-2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We discuss in detail the dependence of the data on each of the input variables, and demonstrate that all of the terms in the model represent consistent variations in both the 1986-2007 data (on which the model is based) and the independent 1967-1985 data. We also analyze the uncertainty in the results, and quantify how the variance in the data is apportioned among the model terms. We investigate the annual and semiannual variations of the data and quantify the amplitude, height dependence, solar cycle dependence, and interannual variability of these oscillatory modes. The auxiliary material includes Fortran 90 code for evaluating GAMDM.

  6. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
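The geostrophic construction mentioned in the abstract can be sketched for the zonal component, u_g = -(g/f) ∂Z/∂y, using centered finite differences on a column of geopotential heights. The function name and the 500 mb height values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

G = 9.80665          # gravitational acceleration, m s^-2
OMEGA = 7.292e-5     # Earth's rotation rate, s^-1
R_EARTH = 6.371e6    # mean Earth radius, m

def zonal_geostrophic_wind(z, lats_deg):
    """Zonal geostrophic wind u_g = -(g/f) dZ/dy from geopotential
    height Z (m) given on a column of latitudes (degrees).
    Centered differences in the interior; endpoints are left as None."""
    u = [None] * len(z)
    for i in range(1, len(z) - 1):
        f = 2.0 * OMEGA * math.sin(math.radians(lats_deg[i]))  # Coriolis parameter
        dy = R_EARTH * math.radians(lats_deg[i + 1] - lats_deg[i - 1])
        dzdy = (z[i + 1] - z[i - 1]) / dy
        u[i] = -G / f * dzdy
    return u

# Hypothetical 500 mb heights decreasing poleward -> westerly (positive) u_g
lats = [30.0, 40.0, 50.0, 60.0]
z500 = [5850.0, 5700.0, 5550.0, 5400.0]
u = zonal_geostrophic_wind(z500, lats)
```

With heights falling toward the pole, the interior winds come out westerly (around 10-15 m/s here), as expected for midlatitude flow.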

  7. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
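The averaging objective described here (pool all Januaries across the 10-year series, with cells lacking valid data skipped rather than poisoning the mean) can be sketched as follows. The grids and function name are hypothetical; `None` stands in for missing cells:

```python
def monthly_climatology(images, months):
    """Average a time series of gridded images by calendar month.
    images: list of 2-D grids (None marks cells lacking valid data);
    months: calendar month (1-12) of each image.
    Returns {month: averaged grid}; cells with no valid data stay None."""
    out = {}
    for m in sorted(set(months)):
        stack = [img for img, mm in zip(images, months) if mm == m]
        rows, cols = len(stack[0]), len(stack[0][0])
        avg = [[None] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                vals = [img[r][c] for img in stack if img[r][c] is not None]
                if vals:  # averaging across years fills gaps left in any one year
                    avg[r][c] = sum(vals) / len(vals)
        out[m] = avg
    return out

# Two hypothetical January SST grids (deg C); each year has one missing cell
jan_1982 = [[20.0, None], [18.0, 17.0]]
jan_1983 = [[22.0, 21.0], [None, 19.0]]
clim = monthly_climatology([jan_1982, jan_1983], [1, 1])
```

A cell missing in one year but valid in another still receives a value, which is exactly why this style of averaging "tends to reduce the number of grid cells that lack valid data."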

  8. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    NASA Astrophysics Data System (ADS)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
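The first-type (constant inlet concentration) solution referred to here is the classical Ogata-Banks form, which per this abstract is interpreted as a flux-averaged concentration. A minimal sketch, with hypothetical column parameters:

```python
import math

def cde_flux_conc(x, t, v, D, c0=1.0):
    """Flux-averaged concentration for the 1-D convection-dispersion
    equation in a semi-infinite column under steady flow with a
    first-type inlet condition C(0, t) = c0 (Ogata-Banks solution).
    v: pore-water velocity, D: dispersion coefficient."""
    s = 2.0 * math.sqrt(D * t)
    return c0 * 0.5 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))

# Hypothetical column: v = 1 cm/h, D = 0.5 cm^2/h, profile at t = 5 h
profile = [cde_flux_conc(x, 5.0, 1.0, 0.5) for x in range(0, 11)]
```

The profile equals c0 at the inlet (the first-type condition) and decreases monotonically downstream; contrasting it with the third-type solution would expose the flux- vs volume-averaged distinction the paper analyzes.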

  9. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
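The minimum mean-square-error weighting idea can be illustrated with a deliberately tiny toy model (not the paper's EOF machinery): two stations observing a common mean signal T plus independent noise, x_i = T + n_i. The optimal weights solve the normal equations C w = c with C the station covariance matrix and c the covariance of each station with T; all names and variances below are hypothetical:

```python
def optimal_weights_2station(var_t, var_n):
    """MMSE weights for estimating a mean signal T (variance var_t)
    from two stations x_i = T + n_i with independent noise of
    variance var_n. Solves C w = c by hand for the 2x2 case, where
    C = var_t*J + var_n*I and c = (var_t, var_t)."""
    a = var_t + var_n   # diagonal of C
    b = var_t           # off-diagonal of C
    det = a * a - b * b
    w_each = (a - b) * var_t / det        # symmetric, so both weights equal
    w = (w_each, w_each)
    mse = var_t - (w[0] + w[1]) * var_t   # E[(T - w.x)^2] at the optimum
    return w, mse

w, mse = optimal_weights_2station(var_t=1.0, var_n=0.5)
```

For var_t = 1, var_n = 0.5 the optimal weights shrink to 0.4 each and the MSE is 0.2, beating uniform weights (w = 0.5 each, MSE 0.25). This mirrors the abstract's finding that optimal weighting explains more variance than uniform weighting of the same network.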

  10. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  11. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from relative rotations between pairs of images for a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results is required apart from the lossless projection back onto the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations, even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with recent state-of-the-art approaches to global image orientation, our approach achieves the best results on the examined datasets.

  12. Compensation of vector and volume averaging bias in lidar wind speed measurements

    NASA Astrophysics Data System (ADS)

    Clive, P. J. M.

    2008-05-01

    A number of vector and volume averaging considerations arise in relation to remote sensing, and in particular, Lidar. 1) Remote sensing devices obtain vector averages. These values are often compared to the scalar averages associated with cup anemometry. The magnitude of a vector average is less than or equal to the scalar average obtained over the same period. The use of Lidars in wind power applications has entailed the estimation of scalar averages by vector averages and vice versa. The relationship between the two kinds of average must therefore be understood. It is found that the ratio of the averages depends upon wind direction variability according to a Bessel function of the standard deviation of the wind direction during the averaging interval. 2) The finite probe length of remote sensing devices also incurs a volume averaging bias when wind shear is non-linear. The sensitivity of the devices to signals from a range of heights produces volume averages which will be representative of wind speeds at heights within that range. One can distinguish between the effective or apparent height the measured wind speeds represent as a result of volume averaging bias, and the configuration height at which the device has been set to measure wind speeds. If the wind shear is described by a logarithmic wind profile the apparent height is found to depend mainly on simple geometrical arguments concerning configuration height and probe length and is largely independent of the degree of wind shear. 3) The restriction of the locus of points at which radial velocity measurements are made to the circumference of a horizontally oriented disc at a particular height is seen to introduce ambiguity into results when dealing with wind vector fields which are not irrotational.
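The inequality in point 1 (|vector mean| ≤ scalar mean) is easy to confirm numerically. The sketch below spreads a constant wind speed over a Gaussian range of directions and compares the two averages; for this Gaussian case the ratio approaches exp(-σ²/2), a convenient closed form related to the Bessel-function dependence the abstract describes. The function name and parameters are hypothetical:

```python
import math
import random

def vector_vs_scalar_ratio(speed, sigma_deg, n=200_000, seed=1):
    """Ratio |vector mean| / scalar mean for wind of constant speed
    whose direction is normally distributed with std sigma_deg (degrees).
    The scalar mean is just `speed`, since the speed is constant."""
    rng = random.Random(seed)
    sigma = math.radians(sigma_deg)
    sx = sy = 0.0
    for _ in range(n):
        theta = rng.gauss(0.0, sigma)   # direction fluctuation about the mean
        sx += speed * math.cos(theta)
        sy += speed * math.sin(theta)
    vec_mean = math.hypot(sx / n, sy / n)
    return vec_mean / speed

r = vector_vs_scalar_ratio(8.0, 20.0)
# For a Gaussian direction spread the ratio approaches exp(-sigma^2 / 2)
expected = math.exp(-math.radians(20.0) ** 2 / 2.0)
```

With a 20 degree direction spread the vector average underestimates the scalar average by roughly 6%, which is the kind of bias that must be compensated when comparing lidar and cup-anemometer statistics.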

  13. Compensating for volume and vector averaging biases in lidar wind speed measurements

    NASA Astrophysics Data System (ADS)

    Clive, Peter J. M.

    2008-10-01

    A number of vector and volume averaging considerations arise in relation to remote sensing, and in particular, Lidar. 1) Remote sensing devices obtain vector averages. These values are often compared to the scalar averages associated with cup anemometry. The magnitude of a vector average is less than or equal to the scalar average obtained over the same period. The use of Lidars in wind power applications has entailed the estimation of scalar averages by vector averages and vice versa. The relationship between the two kinds of average must therefore be understood. It is found that the ratio of the averages depends upon wind direction variability according to a Bessel function of the standard deviation of the wind direction during the averaging interval. 2) The finite probe length of remote sensing devices also incurs a volume averaging bias when wind shear is non-linear. The sensitivity of the devices to signals from a range of heights produces volume averages which will be representative of wind speeds at heights within that range. One can distinguish between the effective or apparent height the measured wind speeds represent as a result of volume averaging bias, and the configuration height at which the device has been set to measure wind speeds. If the wind shear is described by a logarithmic wind profile the apparent height is found to depend mainly on simple geometrical arguments concerning configuration height and probe length and is largely independent of the degree of wind shear. 3) The restriction of the locus of points at which radial velocity measurements are made to the circumference of a horizontally oriented disc at a particular height is seen to introduce ambiguity into results when dealing with wind vector fields which are not irrotational.

  14. Volume Averaging of Spectral-Domain Optical Coherence Tomography Impacts Retinal Segmentation in Children

    PubMed Central

    Trimboli-Heidler, Carmelina; Vogt, Kelly; Avery, Robert A.

    2016-01-01

    Purpose To determine the influence of volume averaging on retinal layer thickness measures acquired with spectral-domain optical coherence tomography (SD-OCT) in children. Methods Macular SD-OCT images were acquired using three different volume settings (i.e., 1, 3, and 9 volumes) in children enrolled in a prospective OCT study. Total retinal thickness and five inner layers were measured around an Early Treatment Diabetic Retinopathy Scale (ETDRS) grid using beta version automated segmentation software for the Spectralis. The magnitude of manual segmentation required to correct the automated segmentation was classified as minor (<12 lines adjusted), moderate (12-25 lines adjusted), severe (26-48 lines adjusted), or fail (>48 lines adjusted, or could not be adjusted due to poor image quality). The frequency of each edit classification was assessed for each volume setting. Thickness, paired difference, and 95% limits of agreement of each anatomic quadrant were compared across volume density. Results Seventy-five subjects (median age 11.8 years, range 4.3-18.5 years) contributed 75 eyes. Less than 5% of the 9- and 3-volume scans required more than minor manual segmentation corrections, compared with 71% of 1-volume scans. The inner (3 mm) region demonstrated similar measures across all layers, regardless of volume number. The 1-volume scans demonstrated greater variability of the retinal nerve fiber layer (RNFL) thickness, compared with the other volumes in the outer (6 mm) region. Conclusions In children, volume averaging of SD-OCT acquisitions reduces retinal layer segmentation errors. Translational Relevance This study highlights the importance of volume averaging when acquiring macula volumes intended for multilayer segmentation. PMID:27570711
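The edit-severity grading used in the Methods maps directly to a small classifier. A minimal sketch using the thresholds from the abstract (the function name is hypothetical):

```python
def grade_segmentation_edits(lines_adjusted, image_ok=True):
    """Classify the manual correction burden of an automated OCT layer
    segmentation, using the severity thresholds from the abstract."""
    if not image_ok:
        return "fail"          # could not adjust due to poor image quality
    if lines_adjusted < 12:
        return "minor"
    if lines_adjusted <= 25:
        return "moderate"
    if lines_adjusted <= 48:
        return "severe"
    return "fail"

grades = [grade_segmentation_edits(n) for n in (3, 20, 40, 60)]
```

Counting these grades per volume setting reproduces the comparison in the Results (e.g., the share of scans needing more than minor correction).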

  15. Local volume-time averaged equations of motion for dispersed, turbulent, multiphase flows

    SciTech Connect

    Sha, W.T.; Slattery, J.C.

    1980-11-01

    In most flows of liquids and their vapors, the phases are dispersed randomly in both space and time. These dispersed flows can be described only statistically or in terms of averages. Local volume-time averaging is used here to derive a self-consistent set of equations governing momentum and energy transfer in dispersed, turbulent, multiphase flows. The empiricisms required for use with these equations are the subject of current research.

  16. Surface-Based Display of Volume-Averaged Cerebellar Imaging Data

    PubMed Central

    Diedrichsen, Jörn; Zotow, Ewa

    2015-01-01

    The paper presents a flat representation of the human cerebellum, useful for visualizing functional imaging data after volume-based normalization and averaging across subjects. Instead of reconstructing individual cerebellar surfaces, the method uses a white- and grey-matter surface defined on volume-averaged anatomical data. Functional data can be projected along the lines of corresponding vertices on the two surfaces. The flat representation is optimized to yield a roughly proportional relationship between the surface area of the 2D-representation and the volume of the underlying cerebellar grey matter. The map allows users to visualize the activation state of the complete cerebellar grey matter in one concise view, equally revealing both the anterior-posterior (lobular) and medial-lateral organization. As examples, published data on resting-state networks and task-related activity are presented on the flatmap. The software and maps are freely available and compatible with most major neuroimaging packages. PMID:26230510

  17. Atmospheric trends in methylchloroform and the global average for the hydroxyl radical

    NASA Technical Reports Server (NTRS)

    Prinn, R.; Cunnold, D.; Alyea, F.; Rasmussen, R.; Simmonds, P.

    1987-01-01

    ALE-GAGE (Atmospheric Lifetime Experiment-Global Atmospheric Gases Experiment) data obtained over the seven-year period from July 1978 to June 1985 are presented and interpreted. The data, combined with knowledge of industrial emissions, are used in an optimal estimation inversion scheme to deduce a globally averaged methylchloroform atmospheric lifetime of 6.3 (+1.2, -0.9) years (1 sigma uncertainty) and a globally averaged tropospheric hydroxyl radical concentration of (7.7 ± 1.4) × 10^5 radicals cm^-3 (1 sigma uncertainty). These results provide the most accurate estimates yet of the trends and lifetime of methylchloroform and of the global average for tropospheric hydroxyl radical levels.
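The lifetime and OH concentration quoted here are linked, to leading order, by τ ≈ 1/(k⟨[OH]⟩), where k is the effective CH3CCl3 + OH rate constant. A simple consistency check (not the paper's inversion scheme) backs out the implied k from the abstract's own numbers:

```python
SECONDS_PER_YEAR = 3.156e7

def implied_rate_constant(lifetime_years, oh_conc):
    """Effective rate constant implied by tau = 1 / (k * [OH]),
    i.e., k = 1 / (tau * [OH]). oh_conc in radicals cm^-3."""
    return 1.0 / (lifetime_years * SECONDS_PER_YEAR * oh_conc)

# Values from the abstract: tau = 6.3 yr, [OH] = 7.7e5 radicals cm^-3
k = implied_rate_constant(6.3, 7.7e5)   # cm^3 molecule^-1 s^-1
```

The result is of order 10^-15 cm^3 s^-1, consistent with tropospheric CH3CCl3 + OH kinetics, which is why methylchloroform trends are such a useful proxy for global OH.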

  18. Bioremediation in Porous Media: Upscaling From the Pore to the Continuum Scales Via Volume Averaging

    NASA Astrophysics Data System (ADS)

    Wood, B. D.; Quintard, M.; Minard, K. R.; Whitaker, S.

    2001-12-01

    Biofilms are involved in many important porous media systems, such as packed-bed bioreactors and in situ bioremediation. Recently, we have completed a study of upscaling mass transfer and reactions in biofilms, with the biofilm itself being treated as a two-phase medium (Wood and Whitaker, 2000, Chem. Engng. Sci., 55, 3397-3418). In that work, we used volume averaging as an upscaling method to predict the effective diffusivity of a biofilm on the basis of its geometric structure, its membrane transport properties, and the diffusivities of the intercellular and extracellular phases. In continuing work, we have proceeded up the sequence of length scales to consider the process of mass transport and reaction in porous media containing biofilms. Our current research involves (1) development of the Darcy-scale transport equations via upscaling of the sub-pore-scale description of flow, transport, and reaction within a volume of porous media, (2) prediction of effective parameters (effective dispersion tensor, effective reaction rate coefficients, effectiveness factor) from closure problems defined over a representative volume of media, and (3) correspondence between theory and experiment via direct observation of the Darcy-scale and sub-pore-scale processes and structures that describe transport and reaction. We will present the macroscopic equations developed via volume averaging, predictions for the effective parameters on the basis of sub-pore-scale phenomena, and preliminary experimental results. Additionally, we will discuss the definition of the concentrations that apply to the biological and aqueous phases in the upscaled representation.

  19. The influence of different El Niño flavours on global average temperature

    NASA Astrophysics Data System (ADS)

    Donner, S. D.; Banholzer, S. P.

    2014-12-01

    The El Niño-Southern Oscillation is known to influence surface temperatures worldwide. El Niño conditions are thought to lead to anomalously warm global average surface temperature, absent other forcings. Recent research has identified distinct possible types or flavours of El Niño events, based on the location of peak sea surface temperature anomalies and other variables. Here we analyze the relationship between the type of El Niño event and the global average surface temperature anomaly, using three historical temperature data sets. Separating El Niño events into types or flavours reveals that global average surface temperatures are anomalously warm during and after canonical eastern Pacific El Niño events or "super" El Niños. However, global average surface temperatures during and after central Pacific or "mixed" events, like the 2002-03 event, are not statistically distinct from those of neutral or other years. Historical analysis indicated that slowdowns in the rate of global surface warming since the late 1800s may be related to decadal variability in the frequency of different types of El Niño events.

  20. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES Beta

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  1. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    SciTech Connect

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  2. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the predictions to each parameter was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e., MABE and RMSE) were less than 20%. The approach we propose here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, with the use of readily available data (i.e., latitude and longitude of the location) as inputs.
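The four goodness-of-fit measures used here have standard definitions; a minimal sketch follows, with MABE and RMSE normalized by the mean observed value to give the percentage errors quoted in the abstract (an assumption about their normalization). The data values are hypothetical:

```python
import math

def fit_metrics(obs, pred):
    """Correlation r, mean absolute bias error (% of mean observed),
    RMSE (% of mean observed), and coefficient of determination R^2."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)
    mabe = sum(abs(p - o) for o, p in zip(obs, pred)) / n / mo * 100.0
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n) / mo * 100.0
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    r2 = 1.0 - ss_res / (so ** 2)
    return r, mabe, rmse, r2

# Hypothetical hourly radiation (W m^-2): observed vs modelled
obs = [100.0, 300.0, 500.0, 400.0, 200.0]
pred = [110.0, 280.0, 520.0, 390.0, 210.0]
r, mabe, rmse, r2 = fit_metrics(obs, pred)
```

For this toy series the metrics land in the ranges the abstract reports for the real model (r and R2 near 1, percentage errors well under 20%).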

  3. A global average model of atmospheric aerosols for radiative transfer calculations

    NASA Technical Reports Server (NTRS)

    Toon, O. B.; Pollack, J. B.

    1976-01-01

    A global average model is proposed for the size distribution, chemical composition, and optical thickness of stratospheric and tropospheric aerosols. This aerosol model is designed to specify the input parameters to global average radiative transfer calculations which assume the atmosphere is horizontally homogeneous. The model subdivides the atmosphere at multiples of 3 km, where the surface layer extends from the ground to 3 km, the upper troposphere from 3 to 12 km, and the stratosphere from 12 to 45 km. A list of assumptions made in construction of the model is presented and discussed along with major model uncertainties. The stratospheric aerosol is modeled as a liquid mixture of 75% H2SO4 and 25% H2O, while the tropospheric aerosol consists of 60% sulfate and 40% soil particles above 3 km and of 50% sulfate, 35% soil particles, and 15% sea salt below 3 km. Implications and consistency of the model are discussed.

  4. The Global 2000 Report to the President. Volume Three. Documentation on the Government's Global Sectoral Models: The Government's "Global Model."

    ERIC Educational Resources Information Center

    Barney, Gerald O., Ed.

    The third volume of the Global 2000 study presents basic information ("documentation") on the long-term sectoral models used by the U.S. government to project global trends in population, resources, and the environment. Its threefold purposes are: (1) to present all this basic information in a single volume, (2) to provide an explanation, in the…

  5. The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin

    2016-06-01

    On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by irreversible magnetic domain-wall displacement under an incentive magnetic field applied to ferromagnetic materials, the functional relationship between saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating AVBJ v̄, stress σ, incentive magnetic field H, and temperature T. The dependence of AVBJ v̄ on temperature T is then studied using this expression. Moreover, tensile and compressive stress experiments are carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature compensation problem of the Barkhausen testing method.

  6. Measurement of average density and relative volumes in a dispersed two-phase fluid

    DOEpatents

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density, and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  7. Volume average technique for turbulent flow simulation and its application to room airflow prediction

    NASA Astrophysics Data System (ADS)

    Huang, Xianmin

    Fluid motion turbulence is one of the most important transport phenomena in engineering applications. Although turbulent flow is governed by a set of conservation equations for momentum, mass, and energy, a Direct Numerical Simulation (DNS) of the flow that solves these equations down to the finest scale motions is impossible due to the extremely large computer resources required. On the other hand, the Reynolds Averaged Modelling (RAM) method has many limitations that hinder its application to turbulent flows of practical significance. Room airflow, featuring co-existence of laminar and turbulent regimes, is a typical example of a flow that is difficult to handle with the RAM method. A promising way to avoid the difficulty of the DNS method and the limitations of the RAM method is the Large Eddy Simulation (LES) method. In the present thesis, the drawbacks of previously developed techniques for the LES method, particularly those associated with SGS modelling, are identified. A new, so-called Volume Average Technique (VAT) for turbulent flow simulation is then proposed. The main features of the VAT are as follows: (1) The volume averaging approach, instead of the more common filtering approach, is employed to define solvable-scale fields, so that coarse-graining in the LES and space discretization of the numerical scheme are achieved in a single procedure. (2) All components of the SGS Reynolds stress and SGS turbulent heat flux are modelled dynamically using the newly proposed Functional Scale Similarity (FSS) SGS model. The model is superior to many previously developed SGS models in that it can be applied to highly inhomogeneous and/or anisotropic, weak, or multi-regime turbulent flows using a relatively coarse grid. (3) The so-called SGS turbulent diffusion is identified and modelled as a mechanism separate from the SGS turbulent flux represented by the SGS Reynolds stress and SGS turbulent heat flux. 
The SGS turbulent diffusion is

  8. Average synaptic activity and neural networks topology: a global inverse problem

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Casartelli, Mario; di Volo, Matteo; Livi, Roberto; Vezzani, Alessandro

    2014-03-01

    The dynamics of neural networks is often characterized by collective behavior and quasi-synchronous events, where a large fraction of neurons fire in short time intervals, separated by uncorrelated firing activity. These global temporal signals are crucial for brain functioning. They strongly depend on the topology of the network and on the fluctuations of the connectivity. We propose a heterogeneous mean-field approach to neural dynamics on random networks, that explicitly preserves the disorder in the topology at growing network sizes, and leads to a set of self-consistent equations. Within this approach, we provide an effective description of microscopic and large scale temporal signals in a leaky integrate-and-fire model with short term plasticity, where quasi-synchronous events arise. Our equations provide a clear analytical picture of the dynamics, evidencing the contributions of both periodic (locked) and aperiodic (unlocked) neurons to the measurable average signal. In particular, we formulate and solve a global inverse problem of reconstructing the in-degree distribution from the knowledge of the average activity field. Our method is very general and applies to a large class of dynamical models on dense random networks.

  9. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    The use of individual Global Navigation Satellite Systems (GPS, GLONASS, Galileo, and Beidou/COMPASS) for position, navigation, and timing in the Space Service Volume at altitudes of 300 km, 3000 km, 8000 km, 15000 km, 25000 km, 36500 km, and 70000 km is examined, and the percent availability of at least one and at least four satellites is presented.

  10. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) to terrestrial users, GPS is currently used to provide precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Beidou, and Galileo), it will be possible to provide these services by using other GNSS constellations. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume and analyzed the performance of GPS out to 70,000 km. This paper reports a similar analysis of the performance of each of the additional GNSS and compares them with GPS alone. The Space Service Volume is defined as the volume between 3,000 km altitude and geosynchronous altitude, as compared with the Terrestrial Service Volume between the surface and 3,000 km. In the Terrestrial Service Volume, GNSS performance will be similar to performance on the Earth's surface. The GPS system has established signal requirements for the Space Service Volume. A separate paper presented at the conference covers the use of multiple GNSS in the Space Service Volume.

  11. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
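
The Hargreaves and Samani relationship that the new model builds on estimates global radiation from the diurnal temperature range. A minimal sketch, with an illustrative coefficient `k_rs` (the paper's new model and its modifications refit this relationship, so the value here is only a common default, not the authors' calibration):

```python
import math

def hargreaves_samani(h0, tmax, tmin, k_rs=0.17):
    """Hargreaves-Samani estimate of global solar radiation.

    h0: extraterrestrial radiation (MJ m^-2 day^-1)
    tmax, tmin: monthly mean daily max/min air temperature (deg C)
    k_rs: empirical coefficient; 0.16-0.19 is a typical range,
          and 0.17 here is only an illustrative default.
    """
    return k_rs * math.sqrt(tmax - tmin) * h0
```

For example, with h0 = 30 MJ m^-2 day^-1 and a 16 °C diurnal range, the estimate is 0.17 × 4 × 30 = 20.4 MJ m^-2 day^-1.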

  12. Local and Global Illumination in the Volume Rendering Integral

    SciTech Connect

    Max, N; Chen, M

    2005-10-21

    This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1] and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models, and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is treated as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of spectral power distribution. This survey can provide a basis for, and encourage, new efforts to develop and use complex illumination models to achieve better realism and perception through optical correctness.
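
The emission-absorption form of the volume rendering integral discussed in the survey is commonly discretized by front-to-back compositing along each ray. A minimal wavelength-independent sketch (the sampled `color` and `extinction` fields are hypothetical, and no local or global illumination term is included):

```python
import numpy as np

def ray_march(color, extinction, dt):
    """Front-to-back compositing of the emission-absorption
    volume rendering integral along one ray.

    color[i], extinction[i]: emitted intensity and extinction
    coefficient at sample i (hypothetical sampled fields);
    dt: step length between samples.
    """
    radiance = 0.0
    transmittance = 1.0  # accumulated transparency T(s)
    for c, sigma in zip(color, extinction):
        alpha = 1.0 - np.exp(-sigma * dt)   # opacity of this slab
        radiance += transmittance * alpha * c
        transmittance *= 1.0 - alpha        # attenuate what lies behind
    return radiance
```

A fully opaque first sample returns its own color; a fully transparent volume contributes nothing.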

  13. Predicting Climate Change Using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Ragone, Francesco; Lunkeit, Frank

    2016-04-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source general circulation model of the atmosphere featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Response theory allows one to practically compute the time-dependent measure supported on the pullback attractor of the climate system, whose dynamics is non-autonomous as a result of time-dependent forcings. We propose a simple yet efficient method for predicting, at any lead time and in an ensemble sense, the change in climate properties resulting from an increase in the concentration of CO_2, using test perturbation model runs. We assess the strengths and limitations of response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as in their spatial patterns. The quality of the predictions obtained for the surface temperature fields is rather good, while in the case of precipitation good skill is observed only for the global average. We also show how it is possible to define accurately concepts like the inertia of the climate system, or to predict when climate change is detectable given a forcing scenario. Our analysis can be extended to deal with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of subsequent reports of the Intergovernmental Panel on Climate Change.

  14. Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.

    2012-12-01

    We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over the past 5 Ma, in particular during the Brunhes and Matuyama chrons. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized) and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78°S to 71°N; of these, ~40% are from the Brunhes chron and ~20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high-latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling and the limited age information available for many sites. Results from Hawaii indicate that resampling the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high-latitude field behavior and low-latitude differences between the Pacific and Atlantic hemispheres.

  15. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    ERIC Educational Resources Information Center

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes, and other mathematical concepts usually treated in calculus. The approach is based on the idea of an average, and this concept is utilized throughout the report. In the beginning, the average (arithmetic mean) of a set of numbers is considered, along with two properties of the average which often simplify…

  16. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) services to traditional terrestrial and airborne users, GPS is also increasingly used as a tool to enable precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite System (GNSS) constellations being replenished and coming into service (GLONASS, Beidou, and Galileo), it will become possible to benefit from greater signal availability and robustness by using evolving multi-constellation receivers. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume and analyzed the performance of GPS out to seventy thousand kilometers. This paper reports a similar analysis of the signal coverage of GPS in the space domain; however, the analysis also considers signal coverage from each of the additional GNSS constellations noted earlier, to specifically demonstrate the expected benefits of using GPS in conjunction with other foreign systems. The Space Service Volume is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude (circa 36,000 km), as compared with the Terrestrial Service Volume between 3,000 km and the surface of the Earth. In the Terrestrial Service Volume, GNSS performance is the same as on or near the Earth's surface due to similarities in satellite vehicle availability and geometry. The core GPS system has thereby established signal requirements for the Space Service Volume as part of technical Capability Development Documentation (CDD) that specifies system performance. 
Besides the technical discussion, we also present diplomatic efforts to extend the GPS Space Service Volume concept to other PNT service providers in an effort to assure that all space users will benefit from the enhanced

  17. The intrinsic dependence structure of peak, volume, duration, and average intensity of hyetographs and hydrographs

    PubMed Central

    Serinaldi, Francesco; Kilsby, Chris G

    2013-01-01

    The information contained in hyetographs and hydrographs is often synthesized using key properties such as the peak or maximum value Xp, volume V, duration D, and average intensity I. These variables play a fundamental role in hydrologic engineering, as they are used, for instance, to define design hyetographs and hydrographs as well as to model and simulate the rainfall and streamflow processes. Given their inherent variability and the empirical evidence of a significant degree of association, such quantities have been studied as correlated random variables suitable for modeling by multivariate joint distribution functions. The advent of copulas in the geosciences simplified the inference procedures by allowing the analysis of the marginal distributions to be split from the study of the so-called dependence structure, or copula. However, the attention paid to the modeling task has overlooked a more thorough study of the true nature and origin of the relationships that link Xp, V, D, and I. In this study, we apply a set of ad hoc bootstrap algorithms to investigate these aspects by analyzing the hyetographs and hydrographs extracted from 282 daily rainfall series from central eastern Europe, three 5 min rainfall series from central Italy, 80 daily streamflow series from the continental United States, and two sets of 200 simulated universal multifractal time series. Our results show that all the pairwise dependence structures between Xp, V, D, and I exhibit some key properties that can be reproduced by simple bootstrap algorithms relying on standard univariate resampling, without resort to multivariate techniques. 
Therefore, the strong similarities between the observed dependence structures and the agreement between the observed and bootstrap samples suggest the existence of a numerical generating mechanism based on the superposition of the effects of sampling data at finite time steps and the process of summing realizations of independent random variables over
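
The univariate-resampling idea described in the abstract can be illustrated with a toy sketch: resampling durations and intensities independently and rebuilding volumes as V = I × D still induces a V-D correlation, purely from the multiplicative construction (the distributions used below are illustrative, not the paper's data or its actual algorithms):

```python
import numpy as np

def bootstrap_dependence(duration, intensity, n_boot=200, seed=0):
    """Sketch of a univariate-resampling bootstrap for V-D dependence.

    Durations and intensities are resampled independently (no
    multivariate technique), volumes are rebuilt as V = I * D, and
    the correlation induced by the construction alone is returned.
    """
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_boot):
        d = rng.choice(duration, size=len(duration), replace=True)
        i = rng.choice(intensity, size=len(intensity), replace=True)
        v = i * d  # volume rebuilt from independent resamples
        corrs.append(np.corrcoef(v, d)[0, 1])
    return float(np.mean(corrs))
```

Even with independent resamples, the V-D correlation is strongly positive, showing how a dependence structure can arise from the generating mechanism rather than from any physical link.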

  18. Average Spatial Distribution of Cosmic Rays behind the Interplanetary Shock—Global Muon Detector Network Observations

    NASA Astrophysics Data System (ADS)

    Kozai, M.; Munakata, K.; Kato, C.; Kuwabara, T.; Rockenbach, M.; Dal Lago, A.; Schuch, N. J.; Braga, C. R.; Mendonça, R. R. S.; Jassar, H. K. Al; Sharma, M. M.; Duldig, M. L.; Humble, J. E.; Evenson, P.; Sabbah, I.; Tokumaru, M.

    2016-07-01

    We analyze the galactic cosmic ray (GCR) density and its spatial gradient in Forbush Decreases (FDs) observed with the Global Muon Detector Network (GMDN) and neutron monitors (NMs). By superposing the GCR density and density gradient observed in FDs following 45 interplanetary shocks (IP-shocks), each associated with an identified eruption on the Sun, we infer the average spatial distribution of GCRs behind IP-shocks. We find two distinct modulations of GCR density in FDs, one in the magnetic sheath and the other in the coronal mass ejection (CME) ejecta behind the sheath. The density modulation in the sheath is dominant in the western flank of the shock, while the modulation in the CME ejecta stands out in the eastern flank. This east-west asymmetry is more prominent in GMDN data responding to ~60 GV GCRs than in NM data responding to ~10 GV GCRs, because of the softer rigidity spectrum of the modulation in the CME ejecta than in the sheath. The geocentric solar ecliptic y component of the density gradient, Gy, shows a negative (positive) enhancement in FDs caused by eastern (western) eruptions, while Gz shows a negative (positive) enhancement in FDs caused by northern (southern) eruptions. This implies that the GCR density minimum is located behind the central flank of IP-shocks, propagating radially outward from the location of the solar eruption. We also confirmed that the average Gz changes its sign above and below the heliospheric current sheet, in accord with the prediction of the drift model for large-scale GCR transport in the heliosphere.

  19. Predicting Climate Change using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Lunkeit, Frank; Ragone, Francesco

    2016-04-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source climate model featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Using the theoretical framework of the pullback attractor and the tools of response theory, we propose a simple yet efficient method for predicting, at any lead time and in an ensemble sense, the change in climate properties resulting from an increase in the concentration of CO2, using test perturbation model runs. We assess the strengths and limitations of response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as their spatial patterns. We also show how it is possible to define accurately concepts like the inertia of the climate system, or to predict when climate change is detectable given a forcing scenario. Our analysis can be extended to deal with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of subsequent reports of the Intergovernmental Panel on Climate Change.

  20. Exploring Granger causality between global average observed time series of carbon dioxide and temperature

    SciTech Connect

    Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R

    2010-01-01

    Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
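
The classic Granger test that the paper extends compares a restricted autoregression of one series against a regression augmented with lags of the other. A minimal single-lag sketch (this is the standard test only, not the paper's reverse cumulative variant, and the variable names are illustrative):

```python
import numpy as np

def granger_f(y, x, lag=1):
    """Minimal F-statistic for 'x Granger-causes y' with one lag.

    Compares an AR model of y (restricted) against the same model
    augmented with lagged x (unrestricted). Classic test only; the
    paper's reverse cumulative extension is not reproduced here.
    """
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    y_t, y_lag, x_lag = y[lag:], y[:-lag], x[:-lag]
    n = len(y_t)

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, y_t, rcond=None)
        resid = y_t - design @ beta
        return resid @ resid

    ones = np.ones(n)
    rss_r = rss(np.column_stack([ones, y_lag]))          # restricted
    rss_u = rss(np.column_stack([ones, y_lag, x_lag]))   # unrestricted
    # F statistic with 1 restriction and n - 3 residual dof
    return (rss_r - rss_u) / (rss_u / (n - 3))
```

When y is driven by lagged x, the F-statistic is large in the x-to-y direction and small in the reverse direction.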

  1. A model ensemble for explaining the seasonal cycle of globally averaged atmospheric carbon dioxide concentration

    NASA Astrophysics Data System (ADS)

    Alexandrov, Georgii; Eliseev, Alexey

    2015-04-01

    The seasonal cycle of the globally averaged atmospheric carbon dioxide concentration results from seasonal changes in the gas exchange between the atmosphere and other carbon pools, of which the terrestrial pools are the most important. Boreal and temperate ecosystems provide a sink for carbon dioxide only during the warm period of the year, and, therefore, the summertime reduction in the atmospheric carbon dioxide concentration is usually explained by seasonal changes in the magnitude of the terrestrial carbon sink. Although this explanation seems almost obvious, it is surprisingly difficult to support it by calculations of the seasonal changes in the strength of the sink provided by boreal and temperate ecosystems. The traditional conceptual framework for modelling net ecosystem exchange (NEE) leads to estimates of the NEE seasonal cycle amplitude that are too low to explain the amplitude of the seasonal cycle of the atmospheric carbon dioxide concentration. To propose a more suitable conceptual framework, we develop a model ensemble that consists of nine structurally different models and covers various approaches to modelling gross primary production and heterotrophic respiration, including the effects of light saturation, limited light use efficiency, limited water use efficiency, substrate limitation, and microbiological priming. The use of model ensembles is a well-recognized methodology for evaluating the structural uncertainty of model-based predictions. In this study we use this methodology for exploratory modelling analysis, that is, to identify the mechanisms that cause the observed amplitude of the seasonal cycle of the atmospheric carbon dioxide concentration and its slow but steady growth.

  2. The global volume and distribution of modern groundwater

    NASA Astrophysics Data System (ADS)

    Gleeson, Tom; Befus, Kevin M.; Jasechko, Scott; Luijendijk, Elco; Cardenas, M. Bayani

    2016-02-01

    Groundwater is important for energy and food security, human health and ecosystems. The time since groundwater was recharged--or groundwater age--can be important for diverse geologic processes, such as chemical weathering, ocean eutrophication and climate change. However, measured groundwater ages range from months to millions of years. The global volume and distribution of groundwater less than 50 years old--modern groundwater that is the most recently recharged and also the most vulnerable to global change--are unknown. Here we combine geochemical, geologic, hydrologic and geospatial data sets with numerical simulations of groundwater and analyse tritium ages to show that less than 6% of the groundwater in the uppermost portion of Earth’s landmass is modern. We find that the total groundwater volume in the upper 2 km of continental crust is approximately 22.6 million km3, of which 0.1-5.0 million km3 is less than 50 years old. Although modern groundwater represents a small percentage of the total groundwater on Earth, the volume of modern groundwater is equivalent to a body of water with a depth of about 3 m spread over the continents. This water resource dwarfs all other components of the active hydrologic cycle.
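
The ~3 m equivalent-depth figure quoted above can be checked with back-of-envelope arithmetic, assuming a continental area of about 1.5 × 10^8 km² and taking a representative modern-groundwater volume within the reported 0.1-5.0 million km³ range (both numbers are assumptions for illustration, not the paper's exact land mask or best estimate):

```python
# Back-of-envelope check of the ~3 m equivalent depth reported above.
# Assumptions (not the paper's exact values): continental area of
# ~1.5e8 km^2 and a representative modern-groundwater volume of
# 0.45 million km^3, inside the reported 0.1-5.0 million km^3 range.
continental_area_km2 = 1.5e8
modern_volume_km3 = 0.45e6

# volume / area gives an equivalent depth in km; convert to metres
depth_m = modern_volume_km3 / continental_area_km2 * 1000.0
```

With these round numbers the equivalent depth comes out to 3 m, consistent with the abstract.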

  3. Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) Cost-Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2008-01-01

    Lightning Launch Commit Criteria (LLCC) are designed to prevent space launch vehicles from flying through environments conducive to natural or triggered lightning and are used for all U.S. government and commercial launches at government and civilian ranges. They are maintained by a committee known as the NASA/USAF Lightning Advisory Panel (LAP). The previous LLCC for anvil clouds, meant to avoid triggered lightning, had been shown to be overly restrictive. Some of these rules had such high safety margins that they prohibited flight under conditions now thought to be safe 90% of the time, leading to costly launch delays and scrubs. The LLCC for anvil clouds were upgraded in the summer of 2005 to incorporate results from the Airborne Field Mill (ABFM) experiment at the Eastern Range (ER). Numerous combinations of parameters were considered to develop the best correlation of operational weather observations to in-cloud electric fields capable of rocket-triggered lightning in anvil clouds. The Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) was the best metric found. Dr. Harry Koons of Aerospace Corporation conducted a risk analysis of the VAHIRR product. The results indicated that LLCC based on the VAHIRR product would pose a negligible risk of flying through hazardous electric fields. Based on these findings, the Kennedy Space Center Weather Office is considering seeking funding for development of an automated VAHIRR algorithm for the new ER 45th Weather Squadron (45 WS) RadTec 431250 weather radar and Weather Surveillance Radar-1988 Doppler (WSR-88D) radars. Before developing an automated algorithm, the Applied Meteorology Unit (AMU) was tasked to determine the frequency with which VAHIRR would have allowed a launch to safely proceed during weather conditions otherwise deemed "red" by the Launch Weather Officer. To do this, the AMU manually calculated VAHIRR values based on candidate cases from past launches with known anvil cloud

  4. Grade Point Average and Student Outcomes. Data Notes. Volume 5, Number 1, January/February 2010

    ERIC Educational Resources Information Center

    Clery, Sue; Topper, Amy

    2010-01-01

    Using data from Achieving the Dream: Community Colleges Count, this issue of Data Notes investigates the academic achievement patterns of students attending Achieving the Dream colleges. The data show that 21 percent of students at Achieving the Dream colleges had grade point averages (GPAs) of 3.50 or higher at the end of their first year. At…

  5. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2013-01-01

    Besides providing position, velocity, and timing (PVT) for terrestrial users, the Global Positioning System (GPS) is also being used to provide PVT information for Earth orbiting satellites. In 2006, F. H. Bauer, et al., defined the Space Service Volume in the paper "GPS in the Space Service Volume," presented at ION's 19th International Technical Meeting of the Satellite Division, and looked at GPS coverage for orbiting satellites. With GLONASS already operational, and the first satellites of the Galileo and Beidou/COMPASS constellations already in orbit, it is time to look at the use of the new Global Navigation Satellite Systems (GNSS) coming into service to provide PVT information for Earth orbiting satellites. This presentation extends "GPS in the Space Service Volume" by examining the coverage capability of combinations of the new constellations with GPS. GPS was first explored as a system for refining the position, velocity, and timing of other spacecraft equipped with GPS receivers in the early eighties. Because of this, a new GPS utility developed beyond the original purpose of providing position, velocity, and timing services for land, maritime, and aerial applications. GPS signals are now received and processed by spacecraft both above and below the GPS constellation, including signals that spill over the limb of the Earth. Support of GPS space applications is now part of the system plan for GPS, and support of the Space Service Volume by other GNSS providers has been proposed to the UN International Committee on GNSS (ICG). GPS has been demonstrated to provide decimeter-level position accuracy in real time for satellites in low Earth orbit (centimeter-level in non-real-time applications). GPS has been proven useful for satellites in geosynchronous orbit, and also for satellites in highly elliptical orbits. 
Depending on how many satellites are in view, one can keep time locked to the GNSS standard, and through that to Universal Time as long as at least one

  6. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to
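The reoptimization idea above can be illustrated numerically: convolve each candidate model profile with the same detector response function used to blur the "measurement," and pick the model parameter whose convolved profile best fits the data. The following is a minimal sketch with an assumed Gaussian detector response and a toy error-function beam model; it is not the paper's beam model, only the optimization pattern it describes.

```python
import numpy as np
from math import erf

def gaussian_kernel(x, sigma):
    """Detector response modeled as a normalized Gaussian (an assumption)."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def model_profile(x, penumbra):
    """Toy beam model: error-function edges whose slope is set by `penumbra`."""
    edge = np.array([erf(v) for v in (2.0 - np.abs(x)) / penumbra])
    return 0.5 * (1.0 + edge)

# Synthetic 'measurement': a sharp true profile blurred by the chamber response
x = np.linspace(-5, 5, 501)                 # off-axis distance, cm
true_profile = model_profile(x, 0.10)       # narrow 'real' penumbra
kernel = gaussian_kernel(np.linspace(-1, 1, 101), 0.3)
measured = np.convolve(true_profile, kernel, mode="same")

# Reoptimization: convolve each candidate model with the SAME kernel and
# pick the penumbra parameter whose convolved profile matches the data.
candidates = np.linspace(0.05, 1.0, 96)
errors = [np.sum((np.convolve(model_profile(x, p), kernel, mode="same")
                  - measured) ** 2) for p in candidates]
best = candidates[int(np.argmin(errors))]
print(best)   # recovers a penumbra near the true 0.10, not the blurred one
```

Because model and measurement carry the identical blur, the optimum lands on the sharp underlying parameter, which is exactly the argument the abstract makes.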

  7. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  8. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition), as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and to slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are finally refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with another few seconds of user interaction.
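The descriptor-then-classify pipeline can be sketched schematically: each supervoxel is summarized by a small regional descriptor (here just the mean and standard deviation of its voxel intensities), and a classifier trained from a few user-labelled supervoxels predicts the rest. A nearest-centroid rule stands in for the paper's Contextual Extremely Random Forests, and the data are synthetic; this only illustrates why classifying a few thousand supervoxels is cheaper than classifying millions of voxels.

```python
import numpy as np

rng = np.random.default_rng(2)

def descriptor(voxels):
    """Regional descriptor for one supervoxel: mean and std of intensity."""
    return np.array([voxels.mean(), voxels.std()])

# Two synthetic 'tissue' classes with different intensity statistics
supervoxels = [rng.normal(0.3, 0.05, 500) for _ in range(20)] + \
              [rng.normal(0.7, 0.10, 500) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
X = np.array([descriptor(s) for s in supervoxels])

# 'User scribbles': one labelled supervoxel per class acts as training data
centroids = np.array([X[0], X[20]])
pred = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
print((pred == labels).mean())   # high accuracy on well-separated classes
```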

  9. The determination of the global average OH concentration using a deuteroethane tracer

    NASA Technical Reports Server (NTRS)

    Stevens, Charles M.; Cicerone, Ralph

    1986-01-01

    It is proposed to measure the decreasing global concentration of an OH-reactive isotopic tracer, C2D6, after its introduction into the troposphere in a manner to facilitate uniform global mixing. Analyses at the level of a 2 x 10^-19 mole fraction, corresponding to one kg uniformly distributed globally, should be possible by a combination of cryogenic absorption techniques to separate ethane from air and high-sensitivity isotopic analysis of ethane by mass spectrometry. Aliquots of C2D6 totaling one kg would be introduced at numerous southern and northern latitudes over a 10 day period in order to achieve a uniform global concentration within 3 to 6 months by the normal atmospheric circulation. Then samples of air of 1000 l (STP) would be collected periodically at a tropical and a temperate zone location in each hemisphere and spiked with a known amount of another isotopic species of ethane, 13C2H6, at the level of a 10^-11 mole fraction. After separation of the ethanes from air, the absolute concentration of C2D6 would be analyzed using the Argonne 100-inch radius mass spectrometer.
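The tracer method rests on first-order loss to OH: if the tracer decays as C(t) = C0·exp(-k·[OH]·t), the measured decline gives [OH] = ln(C0/C)/(k·t). A minimal worked example, using the room-temperature OH + ethane rate constant as a stand-in (the deuterated species C2D6 reacts somewhat more slowly, so all numbers here are illustrative):

```python
import math

# Illustrative numbers only: k is the OH + C2H6 rate constant used as a
# stand-in for C2D6 + OH (an assumption, not the paper's value).
k = 2.4e-13          # cm^3 molecule^-1 s^-1
c0 = 2.0e-19         # tracer mole fraction after global mixing
c1 = 0.6e-19         # mole fraction measured 60 days later (synthetic)
t = 60 * 86400.0     # elapsed time in seconds

# First-order loss C(t) = C0*exp(-k*[OH]*t)  =>  [OH] = ln(C0/C1)/(k*t)
oh = math.log(c0 / c1) / (k * t)
print(f"global average [OH] ~ {oh:.2e} molecules/cm^3")
```

With these illustrative inputs the inferred [OH] comes out near 10^6 molecules/cm^3, the order of magnitude usually quoted for the tropospheric mean.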

  10. Effects of volume averaging on the line spectra of vertical velocity from multiple-Doppler radar observations

    NASA Technical Reports Server (NTRS)

    Gal-Chen, T.; Wyngaard, J. C.

    1982-01-01

    Calculations of the ratio of the true one-dimensional spectrum of vertical velocity to that measured with multiple-Doppler radar beams are presented. It was assumed that the effect of pulse volume averaging and objective analysis routines is the replacement of a point measurement with a volume integral. A u and v estimate was assumed to be feasible when orthogonal radars are not available. Also, the target fluid was configured as having an infinite vertical dimension, zero vertical velocity at the top and bottom, and homogeneous and isotropic turbulence with a Kolmogorov energy spectrum. The ratio obtained indicated that equal resolutions among radars yield a monotonically decreasing, wavenumber-dependent response function. A gain of 0.95 was demonstrated in an experimental situation with 40 levels. Possible errors introduced when using unequal-resolution radars were discussed. Finally, it was found that, for some flows, the extent of attenuation depends on the number of vertical levels resolvable by the radars.
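The wavenumber-dependent attenuation described above can be pictured with the simplest case: averaging over a pulse volume of length L is a running boxcar mean, which in wavenumber space multiplies the true spectrum by a sinc-squared response. This is only a one-dimensional caricature of the full radar geometry, not the paper's calculation.

```python
import numpy as np

# Boxcar averaging over length L attenuates the spectrum by sinc^2(k*L/2):
# ratio of measured to true spectrum as a function of wavenumber k.
L = 0.5                          # averaging length (arbitrary units)
k = np.linspace(1e-3, 20.0, 400) # wavenumber

response = (np.sin(k * L / 2) / (k * L / 2)) ** 2

# Near unity at scales much larger than L, decaying (with sidelobes) at
# scales comparable to L -- the wavenumber-dependent response in the text.
print(response[0], response[-1])
```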

  11. Calculation of area-averaged vertical profiles of the horizontal wind velocity from volume-imaging lidar data

    NASA Technical Reports Server (NTRS)

    Schols, J. L.; Eloranta, E. W.

    1992-01-01

    Area-averaged horizontal wind measurements are derived from the motion of spatial inhomogeneities in aerosol backscattering observed with a volume-imaging lidar. Spatial averaging provides high precision, reducing sample variations of wind measurements well below the level of turbulent fluctuations, even under conditions of very light mean winds and strong convection or under the difficult conditions represented by roll convection. Wind velocities are measured using the two-dimensional spatial cross correlation computed between successive horizontal plane maps of aerosol backscattering, assembled from three-dimensional lidar scans. Prior to calculation of the correlation function, three crucial steps are performed: (1) the scans are corrected for image distortion by the wind during a finite scan time; (2) a temporal high pass median filtering is applied to eliminate structures that do not move with the wind; and (3) a histogram equalization is employed to reduce biases to the brightest features.
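The core operation, locating the peak of the two-dimensional cross correlation between successive backscatter maps, can be sketched with an FFT-based correlation on a synthetic field; dividing the recovered pixel shift by the scan interval gives the wind vector. The distortion correction, median filtering, and histogram equalization steps of the paper are omitted here.

```python
import numpy as np

def displacement_2d(img0, img1):
    """Peak of the 2D cross correlation between two successive maps,
    computed via FFT; returns the (dy, dx) shift of img1 relative to img0."""
    f0 = np.fft.fft2(img0 - img0.mean())
    f1 = np.fft.fft2(img1 - img1.mean())
    corr = np.fft.ifft2(f0.conj() * f1).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular FFT indices to signed shifts
    if dy > img0.shape[0] // 2: dy -= img0.shape[0]
    if dx > img0.shape[1] // 2: dx -= img0.shape[1]
    return dy, dx

# Synthetic aerosol backscatter field advected by a known 'wind'
rng = np.random.default_rng(0)
field = rng.random((64, 64))
shifted = np.roll(field, shift=(3, -5), axis=(0, 1))   # known displacement

dy, dx = displacement_2d(field, shifted)
print(dy, dx)   # → 3 -5
```

Averaging the correlation over a large horizontal area is what drives the sampling variance of the wind estimate below the turbulence level, as the abstract notes.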

  12. 40 CFR 63.2854 - How do I determine the weighted average volume fraction of HAP in the actual solvent loss?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... weighted average in Equation 2 of § 63.2840 to determine the compliance ratio. (b) To determine the volume... determine chemical properties of the solvent and the volume percentage of all HAP components present in the... by the total volume of all deliveries as expressed in Equation 1 of this section. Record the...
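The volume-weighting described in this section reduces to a standard weighted mean: each delivery's HAP volume fraction is weighted by its delivered volume and the sum is divided by the total volume delivered. A sketch with illustrative numbers (not values from the rule):

```python
# Delivery-weighted average HAP volume fraction, in the spirit of the
# volume-weighting this section describes; all figures are illustrative.
deliveries = [
    # (volume delivered, HAP volume fraction from the supplier's data)
    (10_000, 0.64),
    (4_000, 0.58),
    (6_000, 0.70),
]

total_volume = sum(v for v, _ in deliveries)
weighted_fraction = sum(v * f for v, f in deliveries) / total_volume
print(round(weighted_fraction, 4))   # → 0.646
```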

  13. A global approach for image orientation using Lie algebraic rotation averaging and convex L∞ minimisation

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2014-08-01

    In this paper we present a new global image orientation approach for a set of multiple overlapping images with given homologous point tuples which is based on a two-step procedure. The approach is independent of initial values, robust with respect to outliers and yields the global minimum solution under relatively mild constraints. The first step of the approach consists of the estimation of global rotation parameters by averaging relative rotation estimates for image pairs (these are determined from the homologous points via the essential matrix in a pre-processing step). For the averaging we make use of algebraic group theory in which rotations, as part of the special orthogonal group SO(3), form a Lie group with a Riemannian manifold structure. This allows for a mapping to the local Euclidean tangent space of SO(3), the Lie algebra. In this space the redundancy of relative orientations is used to compute an average of the absolute rotation for each image and furthermore to detect and eliminate outliers. In the second step translation parameters and the object coordinates of the homologous points are estimated within a convex L∞ optimisation, in which the rotation parameters are kept fixed. As an optional third step the results can be used as initial values for a final bundle adjustment that does not suffer from bad initialisation and quickly converges to a globally optimal solution. We investigate our approach for global image orientation based on synthetic data. The results are compared to a robust least squares bundle adjustment. In this way we show that our approach is independent of initial values and more robust against outliers than a conventional bundle adjustment.
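The tangent-space averaging idea can be sketched with a Karcher-mean iteration: map rotations into the Lie algebra at the current estimate via the logarithm map, average the axis-angle vectors there, and map back with the exponential map. This is a generic rotation-averaging sketch, not the authors' full pairwise scheme with outlier elimination.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def log_so3(R):
    """Rotation matrix -> axis-angle vector (the Lie algebra element)."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if th < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th / (2 * np.sin(th)) * w

def rotation_average(rotations, iters=20):
    """Karcher mean: average in the tangent space at the current estimate,
    then map back; mirrors the Lie-algebraic averaging described above."""
    mean = rotations[0]
    for _ in range(iters):
        delta = np.mean([log_so3(mean.T @ R) for R in rotations], axis=0)
        mean = mean @ exp_so3(delta)
    return mean

# Noisy copies of one true rotation
rng = np.random.default_rng(1)
true = exp_so3(np.array([0.4, -0.2, 0.7]))
noisy = [true @ exp_so3(0.05 * rng.standard_normal(3)) for _ in range(50)]

avg = rotation_average(noisy)
err = np.linalg.norm(log_so3(true.T @ avg))   # angular error, radians
print(err)   # small compared with the 0.05 rad noise level
```

Averaging in the tangent space avoids the fact that rotation matrices themselves do not form a vector space: the naive entrywise mean of rotation matrices is generally not a rotation.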

  14. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes.

    PubMed

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H; Fonov, Vladimir S; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; ewes, however, showed significantly more GM per unit BW than neutered rams. The created framework including spatial brain template and TPM represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species. PMID:26089780

  15. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes

    PubMed Central

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D.; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H.; Fonov, Vladimir S.; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation of GM volume and BW explained about 15% of the variance of GM, while a positive correlation between WM and age was found. Absolute tissue volume differences were not detected; ewes, however, showed significantly more GM per unit BW than neutered rams. The created framework including spatial brain template and TPM represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species. PMID:26089780

  16. Attributing Rise in Global Average Temperature to Emissions Traceable to Major Industrial Carbon Producers

    NASA Astrophysics Data System (ADS)

    Mera, R. J.; Allen, M. R.; Dalton, M.; Ekwurzel, B.; Frumhoff, P. C.; Heede, R.

    2013-12-01

    The role of human activity in global climate change has been explored in attribution studies based on the total amount of greenhouse gases in the atmosphere. Until now, however, a direct link between observed warming and emissions traceable to the major carbon producers has not been addressed. The carbon majors dataset developed by Heede (in review) accounts for more than 60 percent of the cumulative worldwide emissions of industrial carbon dioxide and methane through 2010. We use a conventional energy balance model coupled to a diffusive ocean, based on Allen et al. 2009, to evaluate the global temperature response to forcing from cumulative emissions traced to these producers. The base case for comparison is the Representative Concentration Pathway 4.5 [RCP4.5 (Moss et al. 2010)] simulation. Sensitivity tests varying climate sensitivity, ocean thermal diffusivity, ocean/atmosphere carbon uptake diffusivity, deep ocean carbon advection, and the carbon cycle temperature-dependent feedback are used to assess whether the fractional attribution for these sources surpasses the uncertainty limits calculated from these parameters. The results suggest this dataset can be utilized for an expanded field of climate change impacts. Allen, M. R., D. J. Frame, C. Huntingford, C. D. Jones, J. A. Lowe, M. Meinshausen and N. Meinshausen (2009), Warming caused by cumulative carbon emissions towards the trillionth tonne, Nature, 458, 1163-1166, doi:10.1038/nature08019. Heede, R. (2013), Tracing anthropogenic carbon dioxide and methane emissions to fossil fuel and cement producers, 1854-2010, in review. Moss, R. H., et al. (2010), The next generation of scenarios for climate change research and assessment, Nature, 463, 747-756.
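A much-simplified, zero-dimensional analogue of the energy balance approach can make the attribution logic concrete: integrate C·dT/dt = F(t) - λ·T for a ramping forcing, then scale by a producer's share of cumulative emissions. The diffusive deep ocean of the actual study is omitted, and all parameter values below are illustrative, not those of Allen et al.

```python
# Zero-dimensional energy balance model: C dT/dt = F(t) - lambda*T.
# Parameters are illustrative placeholders (mixed-layer-only, no deep ocean).
C = 8.36e8        # effective heat capacity, J m^-2 K^-1 (~200 m mixed layer)
lam = 1.25        # climate feedback parameter, W m^-2 K^-1
years = 150
dt = 86400.0 * 365.25

T = 0.0
temps = []
for yr in range(years):
    F = 0.03 * yr                 # linearly ramping forcing, W m^-2
    T += dt * (F - lam * T) / C   # explicit Euler step
    temps.append(T)

# In this linear model, scaling the forcing by a producer group's share of
# cumulative emissions scales the attributable warming by the same factor.
share = 0.6
print(round(temps[-1], 2), round(share * temps[-1], 2))
```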

  17. Estimation of the diffuse radiation fraction for hourly, daily and monthly-average global radiation

    NASA Astrophysics Data System (ADS)

    Erbs, D. G.; Klein, S. A.; Duffie, J. A.

    1982-01-01

    Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index. This relationship is compared to the relationship established by Orgill and Hollands (1977) and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.
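The resulting hourly correlation relates the diffuse fraction to the clearness index kT in a piecewise form. The sketch below uses the coefficients in which the Erbs correlation is commonly quoted in the solar engineering literature; they should be verified against the paper itself before use.

```python
def erbs_diffuse_fraction(kt):
    """Hourly diffuse fraction Id/I as a function of clearness index kT,
    in the piecewise polynomial form commonly attributed to Erbs et al.
    (coefficients as quoted in the literature; verify against the paper)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

# Overcast, intermediate, and clear-sky hours
for kt in (0.1, 0.5, 0.9):
    print(kt, round(erbs_diffuse_fraction(kt), 3))
```

Overcast hours (low kT) are almost entirely diffuse, while clear hours (high kT) settle near a small constant diffuse fraction, which is the qualitative behavior the correlation captures.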

  18. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  19. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    PubMed

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  20. A multi-moment finite volume method for incompressible Navier-Stokes equations on unstructured grids: Volume-average/point-value formulation

    NASA Astrophysics Data System (ADS)

    Xie, Bin; Ii, Satoshi; Ikebata, Akio; Xiao, Feng

    2014-11-01

    A robust and accurate finite volume method (FVM) is proposed for incompressible viscous fluid dynamics on triangular and tetrahedral unstructured grids. Unlike conventional FVM, where the volume integrated average (VIA) value is the only computational variable, the present formulation treats both the VIA and the point value (PV) as computational variables, which are updated separately at each time step. The VIA is computed from a finite volume scheme of flux form, and is thus numerically conservative. The PV is updated from the differential form of the governing equation, which does not have to be conservative but can be solved in a very efficient way. Including the PV as an additional variable enables us to make higher-order reconstructions over a compact mesh stencil to improve the accuracy; moreover, the resulting numerical model is more robust on unstructured grids. We present the numerical formulations in both two and three dimensions on triangular and tetrahedral mesh elements. Numerical results of several benchmark tests are also presented to verify the proposed numerical method as an accurate and robust solver for incompressible flows on unstructured grids.
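The two-variable bookkeeping can be illustrated on the simplest possible problem, 1D linear advection: point values at cell boundaries evolve via the differential form (here a plain upwind update), while the cell averages evolve in flux form from the boundary point values and are therefore exactly conservative. This toy scheme only illustrates the VIA/PV split, not the higher-order reconstructions or the incompressible solver of the paper.

```python
import numpy as np

# 1D analogue of the VIA/PV idea for u_t + a u_x = 0 on a periodic grid.
N, a, cfl = 100, 1.0, 0.5
dx = 1.0 / N
dt = cfl * dx / a

x_b = np.arange(N) * dx                          # boundary positions x_{i-1/2}
pv = np.exp(-200 * (x_b - 0.5) ** 2)             # point values at boundaries
via = np.exp(-200 * (x_b + dx / 2 - 0.5) ** 2)   # cell averages (sampled)

mass0 = via.sum() * dx
for _ in range(int(1.0 / (a * dt))):             # advect for one full period
    # Flux-form, conservative update of the averages from boundary PVs
    via -= (a * dt / dx) * (np.roll(pv, -1) - pv)
    # Differential-form (upwind) update of the point values
    pv -= (a * dt / dx) * (pv - np.roll(pv, 1))

print(abs(via.sum() * dx - mass0))               # conservation to round-off
```

Because the VIA update telescopes over the periodic grid, total "mass" is conserved to machine precision regardless of how the PVs are updated, which is the structural point of the flux-form/differential-form split.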

  1. Monthly Averages of Aerosol Properties: A Global Comparison Among Models, Satellite Data, and AERONET Ground Data

    SciTech Connect

    Kinne, S.; Lohmann, U; Feichter, J; Schulz, M.; Timmreck, C.; Ghan, Steven J.; Easter, Richard C.; Chin, M; Ginoux, P.; Takemura, T.; Tegen, I.; Koch, D; Herzog, M.; Penner, J.; Pitari, G.; Holben, B. N.; Eck, T.; Smirnov, A.; Dubovik, O.; Slutsker, I.; Tanre, D.; Torres, O.; Mishchenko, M.; Geogdzhayev, I.; Chu, D. A.; Kaufman, Yoram J.

    2003-10-21

    Aerosol introduces the largest uncertainties in model-based estimates of the effects of anthropogenic sources on the Earth's climate. A better representation of aerosol in climate models can be expected from individual processing of aerosol types, and new aerosol modules have been developed that distinguish among at least five aerosol types: sulfate, organic carbon, black carbon, sea-salt and dust. In this study, intermediate results of aerosol mass and aerosol optical depth from the new aerosol modules of seven global models are evaluated. Among models, differences in predicted mass fields are expected owing to differences in initialization and processing. Nonetheless, unusual discrepancies in source strength and in removal rates for particular aerosol types were identified. With simultaneous data for mass and optical depth, type conversion factors were compared. Differences among the tested models cover a factor of 2 for each aerosol type, even the hydrophobic ones. This is alarming and suggests that efforts at good mass simulations could be wasted or that conversions are misused to cover for poor mass simulations. An individual assessment, however, is difficult, as only part of the conversion-determining factors (size assumption, permitted humidification and prescribed ambient relative humidity) were revealed. These differences need to be understood and minimized if conclusions on aerosol processing in models are to be drawn from comparisons to aerosol optical depth measurements.
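The mass-to-optical-depth conversion being compared across models is, at its core, a multiplication: optical depth = column mass burden × mass extinction efficiency (MEE) for each aerosol type. A sketch with assumed, illustrative numbers (not values from any of the seven models):

```python
# Aerosol optical depth per type = column burden * mass extinction efficiency.
# All numbers below are illustrative placeholders, not modelled values.
mee = {            # m^2 per g (assumed, nominally dry at 550 nm)
    "sulfate": 8.0,
    "black_carbon": 9.0,
    "dust": 0.6,
}
burden = {         # column burden, g per m^2 (assumed)
    "sulfate": 0.005,
    "black_carbon": 0.0003,
    "dust": 0.05,
}

aod = {k: burden[k] * mee[k] for k in mee}
total = sum(aod.values())
print({k: round(v, 4) for k, v in aod.items()}, round(total, 4))
# A factor-of-2 spread in MEE, as reported among the models, maps directly
# into a factor-of-2 spread in the optical depth inferred from the same mass.
```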

  2. A conservative lattice Boltzmann model for the volume-averaged Navier-Stokes equations based on a novel collision operator

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Tucny, Jean-Michel; Vidal, David; Bertrand, François

    2015-08-01

    The volume-averaged Navier-Stokes (VANS) equations are at the basis of numerous models used to investigate flows in porous media or systems containing multiple phases, one of which is made of solid particles. Although they are traditionally solved using the finite volume, finite difference or finite element method, the lattice Boltzmann method is an interesting alternative solver for these equations since it is explicit and highly parallelizable. In this work, we first show that the most common implementation of the VANS equations in the LBM, based on a redefined collision operator, is not valid in the case of spatially varying void fractions. This is illustrated through five test cases designed using the so-called method of manufactured solutions. We then present an LBM scheme for these equations based on a novel collision operator. Using the Chapman-Enskog expansion and the same five test cases, we show that this scheme is second-order accurate, explicit and stable for large void fraction gradients.

  3. Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume

    NASA Astrophysics Data System (ADS)

    Berto, Filippo; Lazzarin, Paolo

    2014-01-01

    When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessments of welded structures under high-cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not at all necessary. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The procedure based on the SED has also been proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work, an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.
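Numerically, averaging the SED over a control volume amounts to summing element strain energies inside a control radius R around the notch tip and dividing by the enclosed volume, which is why a coarse mesh suffices: the integral smooths the singular field. The sketch below uses synthetic element data with a singular-looking energy density, purely to illustrate the averaging step, not any real FE solution.

```python
import numpy as np

# Generic sketch of the averaged-SED evaluation: sum the strain energy of
# elements whose centroids fall inside the control radius R and divide by
# their total volume. All element data here are synthetic.
rng = np.random.default_rng(3)
n = 5000
centroids = rng.uniform(-1, 1, (n, 2))              # element centroids (mm)
volumes = np.full(n, (2.0 * 2.0) / n)               # equal element areas (mm^2)
r = np.linalg.norm(centroids, axis=1)
energy_density = 1.0 / (r + 0.05)                   # singular-looking field
energies = energy_density * volumes                 # per-element strain energy

R = 0.28                                            # control radius (mm)
inside = r <= R
sed_avg = energies[inside].sum() / volumes[inside].sum()
print(round(sed_avg, 3))
```

The average is dominated by the integral over the control volume rather than by the peak value at the tip, so it is insensitive to how finely the singularity itself is resolved.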

  4. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    NASA Astrophysics Data System (ADS)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell
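The first-order mass exchange at the heart of the mobile-immobile conceptualization can be sketched in its simplest reduction, a single cell with advection and dispersion omitted: the immobile-zone concentration relaxes toward the mobile one at a rate set by the exchange coefficient, and total mass is conserved. Parameter values are assumed for illustration.

```python
# Single-cell sketch of first-order mobile-immobile mass exchange:
# theta_m dCm/dt = -alpha (Cm - Cim),  theta_im dCim/dt = +alpha (Cm - Cim).
theta_m, theta_im = 0.3, 0.1   # mobile / immobile water contents (assumed)
alpha = 0.05                   # first-order exchange coefficient, 1/h (assumed)

cm, cim = 1.0, 0.0             # initial concentrations
dt, steps = 0.1, 2000          # explicit Euler over 200 hours
for _ in range(steps):
    ex = alpha * (cm - cim)    # mass flux from mobile to immobile zone
    cm -= dt * ex / theta_m
    cim += dt * ex / theta_im

total0 = theta_m * 1.0 + theta_im * 0.0
total = theta_m * cm + theta_im * cim
print(round(cm, 3), round(cim, 3), round(total - total0, 12))
# Both zones equilibrate at the mass-conserving value 0.75
```

The early-time trapping of solute in the low-velocity zone and its slow release back into the mobile zone is what produces the long breakthrough tails that Fickian single-continuum models fail to reproduce.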

  5. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene.

    PubMed

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-10-28

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet's dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to -134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10^6 km^3 greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka^-1 punctuated by periods of greater, particularly at 14.5-14.0 ka BP at ≥40 mm⋅y^-1 (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100-150 y ago, with no evidence of oscillations exceeding ∼15-20 cm in time intervals ≥200 y from 6 to 0.15 ka BP. PMID:25313072

  6. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene

    NASA Astrophysics Data System (ADS)

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-10-01

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet's dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to -134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10^6 km^3 greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka^-1 punctuated by periods of greater, particularly at 14.5-14.0 ka BP at ≥40 mm⋅y^-1 (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100-150 y ago, with no evidence of oscillations exceeding ∼15-20 cm in time intervals ≥200 y from 6 to 0.15 ka BP.

  7. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene

    PubMed Central

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-01-01

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet’s dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to −134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5–14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100–150 y ago, with no evidence of oscillations exceeding ∼15–20 cm in time intervals ≥200 y from 6 to 0.15 ka BP. PMID:25313072

  8. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. Current PSV models Model G and TK03 do not fit observed PSV or TAF latitudinal behavior in PSV10, or subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 showed slight statistical improvements but still exceeded acceptable errors. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10, compared to reverse polarity data. Two-thirds of data in PSV10 are normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.

  9. The effect of stress and incentive magnetic field on the average volume of magnetic Barkhausen jump in iron

    NASA Astrophysics Data System (ADS)

    Shu, Di; Guo, Lei; Yin, Liang; Chen, Zhaoyang; Chen, Juan; Qi, Xin

    2015-11-01

    The average volume of magnetic Barkhausen jump (AVMBJ) v̄ is generated by irreversible magnetic domain-wall displacement under an incentive magnetic field H in ferromagnetic materials. Starting from the relationship between the irreversible magnetic susceptibility χirr and the stress σ, this paper derives the theoretical relationship between the AVMBJ v̄ (magneto-elastic noise) and the incentive magnetic field H, and then the numerical relationship among the AVMBJ v̄, the stress σ, and the incentive magnetic field H. Using this numerical relationship, the domain-wall displacement process in a single crystal is analyzed, and the effects of the incentive magnetic field H and the stress σ on the AVMBJ v̄ (magneto-elastic noise) are explained from experimental and theoretical perspectives. The saturation velocity of the Barkhausen-jump characteristic-value curve differs when tensile or compressive stress is applied to ferromagnetic materials, because the resistance to domain-wall displacement differs. The idea of a critical magnetic field in the domain-wall displacement process is introduced, which solves the supersaturated calibration problem of the AVMBJ-σ calibration curve.

  10. Global Education: What the Research Shows. Information Capsule. Volume 0604

    ERIC Educational Resources Information Center

    Blazer, Christie

    2006-01-01

    Teaching from a global perspective is important because the lives of people around the world are increasingly interconnected through politics, economics, technology, and the environment. Global education teaches students to understand and appreciate people from different cultural backgrounds; view events from a variety of perspectives; recognize…

  11. Computation and use of volume-weighted-average concentrations to determine long-term variations of selected water-quality constituents in lakes and reservoirs

    USGS Publications Warehouse

    Wells, Frank C.; Schertz, Terry L.

    1984-01-01

    A computer program using the Statistical Analysis System has been developed to perform the arithmetic calculations and regression analyses to determine volume-weighted-average concentrations of selected water-quality constituents in lakes and reservoirs. The program has been used in Texas to show decreasing trends in dissolved-solids and total-phosphorus concentrations in Lake Arlington after the discharge of sewage effluent into the reservoir was stopped. The program also was used to show that the August 1978 and October 1981 floods on the Brazos River greatly decreased the volume-weighted-average concentrations of selected constituents in Hubbard Creek Reservoir and Possum Kingdom Lake.
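    The volume-weighted averaging the program performs reduces to a few lines; here is a minimal sketch in Python rather than SAS, with invented layer volumes and concentrations purely for illustration:

```python
# Volume-weighted-average concentration of a constituent in a reservoir:
# each sampled layer contributes in proportion to the volume of water it
# represents. The layer data below are illustrative only.

def volume_weighted_average(volumes, concentrations):
    """Return sum(V_i * C_i) / sum(V_i)."""
    if len(volumes) != len(concentrations):
        raise ValueError("mismatched inputs")
    total_volume = sum(volumes)
    if total_volume <= 0:
        raise ValueError("total volume must be positive")
    return sum(v * c for v, c in zip(volumes, concentrations)) / total_volume

# Hypothetical reservoir layers: volume (10^6 m^3), dissolved solids (mg/L)
layer_volumes = [120.0, 80.0, 40.0]
layer_conc = [300.0, 350.0, 500.0]
print(volume_weighted_average(layer_volumes, layer_conc))  # -> 350.0
```

Because deep, small-volume layers get proportionally little weight, a flood that replaces most of the epilimnion can shift this average sharply, which is the effect the Brazos River analysis exploited.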

  12. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve aerosol amounts only summed over the entire atmospheric column, rather than just the near-surface component in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
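    The central scaling step can be sketched as below; the function name and every number are illustrative stand-ins, not values from the study, which derives the surface-to-column ratio cell by cell from a chemical-transport model:

```python
# Sketch: a model supplies, for each grid cell, the ratio of near-surface
# PM2.5 to total-column AOD; the satellite-retrieved AOD is scaled by that
# ratio to estimate ground-level concentration.

def surface_pm25(aod_satellite, pm25_model, aod_model):
    """Scale satellite AOD (unitless) by the modelled surface/column ratio."""
    eta = pm25_model / aod_model      # ug/m^3 per unit AOD (model-derived)
    return eta * aod_satellite        # estimated surface PM2.5, ug/m^3

# Invented cell: modelled 18 ug/m^3 surface PM2.5, modelled AOD 0.24,
# satellite retrieval 0.30
print(surface_pm25(aod_satellite=0.30, pm25_model=18.0, aod_model=0.24))
```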

  13. Potential Impact of Dietary Choices on Phosphorus Recycling and Global Phosphorus Footprints: The Case of the Average Australian City.

    PubMed

    Metson, Geneviève S; Cordell, Dana; Ridoutt, Brad

    2016-01-01

    Changes in human diets, population increases, farming practices, and globalized food chains have led to dramatic increases in the demand for phosphorus fertilizers. Long-term food security and water quality are, however, threatened by such increased phosphorus consumption, because the world's main source, phosphate rock, is an increasingly scarce resource. At the same time, losses of phosphorus from farms and cities have caused widespread water pollution. As one of the major factors contributing to increased phosphorus demand, dietary choices can play a key role in changing our resource consumption pathway. Importantly, the effects of dietary choices on phosphorus management are twofold: First, dietary choices affect a person or region's "phosphorus footprint" - the magnitude of mined phosphate required to meet food demand. Second, dietary choices affect the magnitude of phosphorus content in human excreta and hence the recycling- and pollution-potential of phosphorus in sanitation systems. When considering options and impacts of interventions at the city scale (e.g., potential for recycling), dietary changes may be undervalued as a solution toward phosphorus sustainability. For example, in an average Australian city, a vegetable-based diet could marginally increase phosphorus in human excreta (an 8% increase). However, such a shift could simultaneously dramatically decrease the mined phosphate required to meet the city resident's annual food demand by 72%. Taking a multi-scalar perspective is therefore key to fully exploring dietary choices as one of the tools for sustainable phosphorus management. PMID:27617261

  14. Potential Impact of Dietary Choices on Phosphorus Recycling and Global Phosphorus Footprints: The Case of the Average Australian City

    PubMed Central

    Metson, Geneviève S.; Cordell, Dana; Ridoutt, Brad

    2016-01-01

    Changes in human diets, population increases, farming practices, and globalized food chains have led to dramatic increases in the demand for phosphorus fertilizers. Long-term food security and water quality are, however, threatened by such increased phosphorus consumption, because the world’s main source, phosphate rock, is an increasingly scarce resource. At the same time, losses of phosphorus from farms and cities have caused widespread water pollution. As one of the major factors contributing to increased phosphorus demand, dietary choices can play a key role in changing our resource consumption pathway. Importantly, the effects of dietary choices on phosphorus management are twofold: First, dietary choices affect a person or region’s “phosphorus footprint” – the magnitude of mined phosphate required to meet food demand. Second, dietary choices affect the magnitude of phosphorus content in human excreta and hence the recycling- and pollution-potential of phosphorus in sanitation systems. When considering options and impacts of interventions at the city scale (e.g., potential for recycling), dietary changes may be undervalued as a solution toward phosphorus sustainability. For example, in an average Australian city, a vegetable-based diet could marginally increase phosphorus in human excreta (an 8% increase). However, such a shift could simultaneously dramatically decrease the mined phosphate required to meet the city resident’s annual food demand by 72%. Taking a multi-scalar perspective is therefore key to fully exploring dietary choices as one of the tools for sustainable phosphorus management. PMID:27617261

  15. Nocturnal respiratory failure in a child with congenital myopathy – management using average volume-assured pressure support (AVAPS)

    PubMed Central

    Gentin, Natalie; Williamson, Bruce; Thambipillay, Ganesh; Teng, Arthur

    2015-01-01

    This is a case report of the effective use of bi-level positive airway pressure support (BPAP) using the volume-assured pressure support feature in a pediatric patient with a congenital myopathy and significant nocturnal hypoventilation. Our patient was started on nocturnal nasal mask BPAP but required high pressures to improve her oxygen saturations and CO2 baseline. She was then trialed on a BPAP machine with the volume-assured pressure support feature on. The ability of this machine to adjust inspiratory pressures to give a targeted tidal volume allowed the patient to be on lower pressure settings for periods of the night, with the higher pressures only when required. She tolerated the ventilation well and her saturations, CO2 profiles, and clinical condition improved. This case report highlights the benefits of the volume-assured pressure support feature on a BPAP machine in a child with a neuromuscular disorder. PMID:26392861

  16. Distributed volume rendering of global models of seismic wave propagation

    NASA Astrophysics Data System (ADS)

    Schwarz, N.; van Keken, P.; Renambot, L.; Tromp, J.; Komatitsch, D.; Johnson, A.; Leigh, J.

    2004-12-01

    Modeling the dynamics and structure of the Earth's interior now routinely involves massively distributed computational techniques, which makes it feasible to study time-dependent processes in the 3D Earth. Accurate, high-resolution models require the use of distributed simulations that run on, at least, moderately large PC clusters and produce large amounts of data on the order of terabytes distributed across the cluster. Visualizing such large data sets efficiently necessitates the use of the same type and magnitude of resources employed by the simulation. Generic, distributed volumetric rendering methods that produce high-quality monoscopic and stereoscopic visualizations currently exist, but rely on a different distributed data layout than is produced during simulation. This presents a challenge during the visualization process because an expensive data gather and redistribution stage is required before the distributed volume visualization algorithm can operate. We will compare different general purpose techniques and tools for visualizing volumetric data sets that are widely used in the field of scientific visualization, and propose a new approach that eliminates the data gather and redistribution stage by working directly on the data as distributed by, e.g., a seismic wave propagation simulation.

  17. An investigation into the sensitivity of the atmospheric chlorine and bromine loading using a globally averaged mass balance model

    NASA Astrophysics Data System (ADS)

    Dowdell, David C.; Matthews, G. Peter; Wells, Ian

    Two globally averaged mass balance models have been developed to investigate the sensitivity and future level of atmospheric chlorine and bromine as a result of the emission of 14 chloro- and 3 bromo-carbons. The models use production, growth, lifetime and concentration data for each of the halocarbons and divide the production into one of eight uses, these being aerosol propellants, cleaning agents, blowing agents in open and closed cell foams, non-hermetic and hermetic refrigeration, fire retardants and a residual "other" category. Each use category has an associated emission profile which is built into the models to take into account the proportion of halocarbon retained in equipment for a characteristic period of time before its release. Under the Montreal Protocol 3 requirements, a peak chlorine loading of 3.8 ppb is attained in 1994, which does not reduce to 2.0 ppb (the approximate level of atmospheric chlorine when the ozone hole formed) until 2053. The peak bromine loading is 22 ppt, also in 1994, which decays to 12 ppt by the end of next century. The models have been used to (i) compare the effectiveness of Montreal Protocols 1, 2 and 3 in removing chlorine from the atmosphere, (ii) assess the influence of the delayed emission assumptions used in these models compared to immediate emission assumptions used in previous models, (iii) assess the relative effect on the chlorine loading of a tightening of the Montreal Protocol 3 restrictions, and (iv) calculate the influence of chlorine and bromine chemistry as well as the faster phase out of man-made methyl bromide on the bromine loading.
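    A one-box version of such a globally averaged mass-balance model can be sketched as follows. The emission history and 10-year lifetime are invented; the real models track 17 halocarbons, each with a use-specific delayed-emission profile:

```python
# One-box, globally averaged mass balance: the atmospheric burden B of a
# halocarbon grows with emission E(t) and decays with its lifetime tau,
#   dB/dt = E(t) - B/tau,
# here integrated with simple 1-year Euler steps.

def burden_series(emissions, lifetime_yr, dt_yr=1.0):
    b, out = 0.0, []
    for e in emissions:
        b += (e - b / lifetime_yr) * dt_yr
        out.append(b)
    return out

# 20 years of constant emission, then an abrupt phase-out (illustrative):
series = burden_series([100.0] * 20 + [0.0] * 20, lifetime_yr=10.0)
print(round(series[19], 1), round(series[-1], 1))
```

The long tail after the phase-out is why the abstract's chlorine loading does not return to 2.0 ppb until 2053 despite peaking in 1994.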

  18. Plantation Pedagogy: A Postcolonial and Global Perspective. Global Studies in Education. Volume 16

    ERIC Educational Resources Information Center

    Bristol, Laurette S. M.

    2012-01-01

    "Plantation Pedagogy" originates from an Afro-Caribbean primary school teacher's experience. It provides a discourse which extends and illuminates the limitations of current neo-liberal and global rationalizations of the challenges posed to a teacher's practice. Plantation pedagogy is distinguished from critical pedagogy by its historical presence…

  19. Global Inventory of Regional and National Qualifications Frameworks. Volume II: National and Regional Cases

    ERIC Educational Resources Information Center

    UNESCO Institute for Lifelong Learning, 2015

    2015-01-01

    This second volume of the "Global Inventory of Regional and National Qualifications Frameworks" focuses on national and regional cases of national qualifications frameworks for eighty-six countries from Afghanistan to Uzbekistan and seven regional qualifications frameworks. Each country profile provides a thorough review of the main…

  20. Transforming Education: Global Perspectives, Experiences and Implications. Educational Psychology: Critical Pedagogical Perspectives. Volume 24

    ERIC Educational Resources Information Center

    DeVillar, Robert A., Ed.; Jiang, Binbin, Ed.; Cummins, Jim, Ed.

    2013-01-01

    This research-based volume presents a substantive, panoramic view of ways in which Australia and countries in Africa, Asia, Europe, and North and South America engage in educational programs and practices to transform the learning processes and outcomes of their students. It reveals and analyzes national and global trajectories in key areas of…

  1. Mass and volume contributions to twentieth-century global sea level rise.

    PubMed

    Miller, Laury; Douglas, Bruce C

    2004-03-25

    The rate of twentieth-century global sea level rise and its causes are the subjects of intense controversy. Most direct estimates from tide gauges give 1.5-2.0 mm yr⁻¹, whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, fall far below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm yr⁻¹ (ref. 8) and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. Therefore, either the tide gauge estimates are too high, as has been suggested recently, or one (or both) of the mass and volume estimates is too low. Here we present an analysis of sea level measurements at tide gauges combined with observations of temperature and salinity in the Pacific and Atlantic oceans close to the gauges. We find that gauge-determined rates of sea level rise, which encompass both mass and volume changes, are two to three times higher than the rates due to volume change derived from temperature and salinity data. Our analysis supports earlier studies that put the twentieth-century rate in the 1.5-2.0 mm yr⁻¹ range, but more importantly it suggests that mass increase plays a larger role than ocean warming in twentieth-century global sea level rise. PMID:15042085

  2. Insolation data manual: Long-term monthly averages of solar radiation, temperature, degree-days and global KT for 248 National Weather Service stations

    NASA Astrophysics Data System (ADS)

    Knapp, C. L.; Stoffel, T. L.; Whitaker, S. D.

    1980-10-01

    Monthly averaged data are presented describing the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 °C (65 °F). For each station, global K̄T (cloudiness index) values were calculated on a monthly and annual basis.
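    The degree-day computation described here is simple enough to sketch; the daily mean temperatures below are made up for illustration:

```python
# Heating and cooling degree-days relative to the 18.3 C (65 F) base used
# in the manual: each day contributes the gap between the base temperature
# and that day's mean temperature, on the appropriate side.

BASE_C = 18.3

def degree_days(daily_mean_temps_c, base=BASE_C):
    hdd = sum(max(base - t, 0.0) for t in daily_mean_temps_c)  # heating
    cdd = sum(max(t - base, 0.0) for t in daily_mean_temps_c)  # cooling
    return hdd, cdd

# Three illustrative days: one cold, one at the base, one warm
hdd, cdd = degree_days([10.3, 18.3, 25.3])
print(hdd, cdd)
```

Summing daily contributions over a month gives the monthly values tabulated in the manual.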

  3. Insolation data manual: long-term monthly averages of solar radiation, temperature, degree-days and global K̄T for 248 national weather service stations

    SciTech Connect

    Knapp, C L; Stoffel, T L; Whitaker, S D

    1980-10-01

    Monthly averaged data are presented describing the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 °C (65 °F). For each station, global K̄T (cloudiness index) values were calculated on a monthly and annual basis.

  4. Drive-Response Analysis of Global Ice Volume, CO2, and Insolation using Information Transfer

    NASA Astrophysics Data System (ADS)

    Brendryen, J.; Hannisdal, B.

    2014-12-01

    The processes and interactions that drive global ice volume variability and deglaciations are a topic of considerable debate. Here we analyze the drive-response relationships between data sets representing global ice volume, CO2 and insolation over the past 800 000 years using an information theoretic approach. Specifically, we use a non-parametric measure of directional information transfer (IT) based on the construct of transfer entropy to detect the relative strength and directionality of interactions in the potentially chaotic and non-linear glacial-interglacial climate system. Analyses of unfiltered data suggest a tight coupling between CO2 and ice volume, detected as strong, symmetric information flow consistent with a two-way interaction. In contrast, IT from Northern Hemisphere (NH) summer insolation to CO2 is highly asymmetric, suggesting that insolation is an important driver of CO2. Conditional analysis further suggests that CO2 is a dominant influence on ice volume, with the effect of insolation also being significant but limited to smaller-scale variability. However, the strong correlation between CO2 and ice volume renders them information redundant with respect to insolation, confounding further drive-response attribution. We expect this information redundancy to be partly explained by the shared glacial-interglacial "sawtooth" pattern and its overwhelming influence on the transition probability distributions over the target interval. To test this, we filtered out the abrupt glacial terminations from the ice volume and CO2 records to focus on the residual variability. Preliminary results from this analysis confirm insolation as a driver of CO2 and two-way interactions between CO2 and ice volume. However, insolation is reduced to a weak influence on ice volume. Conditional analyses support CO2 as a dominant driver of ice volume, while ice volume and insolation both have a strong influence on CO2. These findings suggest that the effect of orbital
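    The transfer-entropy construct underlying the directional IT measure can be illustrated with a minimal plug-in estimator on symbolised (binned) series. The authors' non-parametric estimator is more elaborate; this sketch, with synthetic data, only demonstrates how directionality is detected:

```python
# Plug-in transfer entropy TE(X -> Y) for discrete sequences:
#   TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ],
# i.e. how much knowing x_t improves prediction of y_{t+1} beyond y_t.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """TE(X->Y) in bits for equal-length discrete sequences x, y."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Synthetic drive-response pair: y copies x with a one-step lag, so the
# information flow X -> Y should dominate the reverse direction.
random.seed(1)
x = [random.randint(0, 1) for _ in range(4000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

The asymmetry of the two estimates is exactly the kind of signature the abstract uses to argue that insolation drives CO2 rather than the reverse.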

  5. SU-D-213-04: Accounting for Volume Averaging and Material Composition Effects in An Ionization Chamber Array for Patient Specific QA

    SciTech Connect

    Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K

    2015-06-15

    Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly
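    The in-house volume-averaging step (assigning the TPS dose, averaged over each chamber's footprint, to a 2-D matrix) might look like the following; this is written in Python/NumPy rather than MATLAB, and the grid size and chamber footprint are invented, not the Matrixx geometry:

```python
# Sketch of volume averaging for comparison with an ionization chamber
# array: reduce a 2-D dose grid to one value per chamber by averaging the
# dose pixels each (non-overlapping) chamber footprint covers.
import numpy as np

def chamber_averages(dose, chamber_px=4):
    """Average a 2-D dose grid over chamber_px x chamber_px footprints."""
    ny, nx = dose.shape
    assert ny % chamber_px == 0 and nx % chamber_px == 0
    blocks = dose.reshape(ny // chamber_px, chamber_px,
                          nx // chamber_px, chamber_px)
    return blocks.mean(axis=(1, 3))   # one value per chamber

dose = np.arange(64, dtype=float).reshape(8, 8)   # toy dose plane
avg = chamber_averages(dose, chamber_px=4)
print(avg.shape)   # (2, 2)
```

Comparing the measured chamber readings against this averaged matrix, rather than against point doses, removes the volume-averaging component from the gamma analysis.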

  6. Shifting Tides in Global Higher Education: Agency, Autonomy, and Governance in the Global Network. Global Studies in Education, Volume 9

    ERIC Educational Resources Information Center

    Witt, Mary Allison

    2011-01-01

    The increasing connection among higher education institutions worldwide is well documented. What is less understood is how this connectivity is enacted and manifested on specific levels of the global education network. This book details the planning process of a multi-institutional program in engineering between institutions in the US and…

  7. The Global 2000 Report to the President: Entering the Twenty-First Century. Volume Two--The Technical Report.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This second volume of the Global 2000 study presents a technical report of detailed projections and analyses. It is a U.S. government effort to present a long-term global perspective on population, resources, and environment. The volume has four parts. Approximately half of the report, part one, deals with projections for the future in the areas…

  8. International conference on the role of the polar regions in global change: Proceedings. Volume 2

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

    The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with the polar regions which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier, and permafrost responses and feedbacks; (6) paleoenvironmental studies; and (7) aerosols and trace gases.

  9. International conference on the role of the polar regions in global change: Proceedings. Volume 1

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

    The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with polar regions which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier and permafrost responses and feedbacks; (6) paleoenvironmental studies; and (7) aerosols and trace gases.

  10. Automated segmentation and measurement of global white matter lesion volume in patients with multiple sclerosis.

    PubMed

    Alfano, B; Brunetti, A; Larobina, M; Quarantelli, M; Tedeschi, E; Ciarmiello, A; Covelli, E M; Salvatore, M

    2000-12-01

    A fully automated magnetic resonance (MR) segmentation method for identification and volume measurement of demyelinated white matter has been developed. Spin-echo MR brain scans were performed in 38 patients with multiple sclerosis (MS) and in 46 healthy subjects. Segmentation of normal tissues and white matter lesions (WML) was obtained, based on their relaxation rates and proton density maps. For WML identification, additional criteria included three-dimensional (3D) lesion shape and surrounding tissue composition. Segmented images were generated, and normal brain tissues and WML volumes were obtained. Sensitivity, specificity, and reproducibility of the method were calculated, using the WML identified by two neuroradiologists as the gold standard. The average volume of "abnormal" white matter in normal subjects (false positive) was 0.11 ml (range 0-0.59 ml). In MS patients the average WML volume was 31.0 ml (range 1.1-132.5 ml), with a sensitivity of 87.3%. In the reproducibility study, the mean SD of WML volumes was 2.9 ml. The procedure appears suitable for monitoring disease changes over time. J. Magn. Reson. Imaging 2000;12:799-807. PMID:11105017
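    The volume measurement itself reduces to counting labelled voxels; a toy sketch (the mask and voxel dimensions here are invented, not from the study):

```python
# White matter lesion volume from a binary segmentation: count the voxels
# labelled as lesion and multiply by the physical voxel volume.

def lesion_volume_ml(mask, voxel_mm3):
    """mask: nested lists (slices of rows of 0/1 labels); returns ml."""
    n_voxels = sum(v for slice_ in mask for row in slice_ for v in row)
    return n_voxels * voxel_mm3 / 1000.0   # 1 ml = 1000 mm^3

mask = [[[0, 1, 1], [0, 0, 1]],            # two tiny illustrative slices
        [[1, 1, 0], [0, 0, 0]]]
print(lesion_volume_ml(mask, voxel_mm3=1.0 * 1.0 * 5.0))   # 5 mm^3 voxels
```

The reported 2.9 ml reproducibility SD is then just the spread of this count, in physical units, across repeated segmentations.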

  11. Development of the average lattice phase-strain and global elastic macro-strain in Al/TiC composites

    SciTech Connect

    Shi, N.; Bourke, M.A.M.; Goldstone, J.A.; Allison, J.E.

    1994-02-01

    The development of elastic lattice phase strains and global elastic macro-strain in a 15 vol% TiC particle reinforced 2219-T6 Al composite was modeled by the finite element method (FEM) as a function of tensile uniaxial loading. The numerical predictions are in excellent agreement with strain measurements at a spallation neutron source. Results from the measurements and modeling indicate that the lattice phase-strains go through a "zigzag" increase with the applied load in the direction perpendicular to the load, while the changes of slope in the parallel direction are monotonic. FEM results further showed that it is essential to consider the effect of thermal residual stresses (TRS) in understanding this anomalous behavior. It was demonstrated that, due to TRS, the site of matrix plastic flow initiation changed. On the other hand, the changes of slope of the elastic global macro-strain are solely determined by the phase-stress partition in the composite. An analytical calculation showed that both experimental and numerical slope changes during the elastic global strain response under loading could be accurately reproduced by accounting for the changes of the phase-stress ratio between the matrix and the reinforcement.

  12. Mars: Crustal pore volume, cryospheric depth, and the global occurrence of groundwater

    NASA Technical Reports Server (NTRS)

    Clifford, Stephen M.

    1987-01-01

    It is argued that most of the Martian hydrosphere resides in a porous outer layer of crust that, based on a lunar analogy, appears to extend to a depth of about 10 km. The total pore volume of this layer is sufficient to store the equivalent of a global ocean of water some 500 to 1500 m deep. Thermal modeling suggests that about 300 to 500 m of water could be stored as ice within the crust. Any excess must exist as groundwater.
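The storage estimate above follows from integrating a crustal porosity profile over the porous column. A minimal sketch, assuming an exponentially decaying lunar-analogy profile with an illustrative surface porosity of 20% and decay constant of about 2.8 km (both values are assumptions for illustration; the abstract itself gives only the ~10 km depth and the 500-1500 m range):

```python
import math

def equivalent_ocean_depth_km(phi0=0.20, K_km=2.82, crust_km=10.0):
    """Integrate a porosity profile phi(z) = phi0 * exp(-z / K)
    from the surface down to crust_km.  The integral is the column
    of pore space per unit area, i.e. the depth of a global water
    layer that could fill that pore space."""
    return phi0 * K_km * (1.0 - math.exp(-crust_km / K_km))

depth_m = 1000.0 * equivalent_ocean_depth_km()
```

With these illustrative parameters the pore column works out to roughly 550 m, inside the 500 to 1500 m range quoted above.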

  13. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

This paper extends the results I reported at this year's ION International Technical Meeting on multi-constellation GNSS coverage by showing how the use of multi-constellation GNSS improves Geometric Dilution of Precision (GDOP). Originally developed to provide position, navigation, and timing for terrestrial users, GPS has found increasing use in space for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Galileo, and Beidou) and the development of Satellite Based Augmentation Services, it is possible to obtain improved precision by using evolving multi-constellation receivers. The Space Service Volume (SSV) is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude (approximately 36,500 km), with the volume below three thousand kilometers defined as the Terrestrial Service Volume (TSV). The USA has established signal requirements for the SSV as part of the GPS Capability Development Documentation (CDD). Diplomatic efforts are underway to extend Space Service Volume commitments to the other Position, Navigation, and Timing (PNT) service providers in an effort to assure that all space users will benefit from the enhanced capabilities of interoperating GNSS services in the space domain.
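GDOP, mentioned above, is computed from the geometry matrix G of unit receiver-to-satellite line-of-sight vectors as sqrt(trace((G'G)^-1)). A minimal sketch (the azimuth/elevation geometry below is illustrative, not taken from the paper), showing why visible satellites from an additional constellation can only improve GDOP:

```python
import math
import numpy as np

def gdop(los_units):
    """Geometric Dilution of Precision from unit line-of-sight
    vectors (receiver -> satellite), one row per satellite.
    G has rows [u_x, u_y, u_z, 1]; GDOP = sqrt(trace((G'G)^-1))."""
    u = np.asarray(los_units, dtype=float)
    G = np.hstack([u, np.ones((len(u), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

def los(az_deg, el_deg):
    """Unit vector (east, north, up) toward a satellite at the
    given azimuth/elevation as seen from the receiver."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    return [math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el)]

# Four-satellite geometry: one near zenith, three low and spread out.
four = [los(0, 85)] + [los(a, 20) for a in (0, 120, 240)]
# A fifth satellite (e.g. from a second constellation) adds a row to G.
five = four + [los(60, 45)]
gdop4, gdop5 = gdop(four), gdop(five)
```

Adding a satellite adds a positive semidefinite rank-one update to G'G, so the trace of its inverse, and hence GDOP, never increases.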

  14. Size and distribution of the global volume of surgery in 2012

    PubMed Central

    Haynes, Alex B; Molina, George; Lipsitz, Stuart R; Esquivel, Micaela M; Uribe-Leitz, Tarsicio; Fu, Rui; Azad, Tej; Chao, Tiffany E; Berry, William R; Gawande, Atul A

    2016-01-01

Objective To estimate global surgical volume in 2012 and compare it with estimates from 2004. Methods For the 194 Member States of the World Health Organization, we searched PubMed for studies and contacted key informants for reports on surgical volumes between 2005 and 2012. We obtained data on population and total health expenditure per capita for 2012 and categorized Member States as very-low, low, middle and high expenditure. Data on caesarean delivery were obtained from validated statistical reports. For Member States without recorded surgical data, we estimated volumes by multiple imputation using data on total health expenditure. We estimated caesarean deliveries as a proportion of all surgery. Findings We identified 66 Member States reporting surgical data. We estimated that 312.9 million operations (95% confidence interval, CI: 266.2–359.5) took place in 2012, an increase from the 2004 estimate of 226.4 million operations. Only 6.3% (95% CI: 1.7–22.9) and 23.1% (95% CI: 14.8–36.7) of operations took place in very-low- and low-expenditure Member States representing 36.8% (2573 million people) and 34.2% (2393 million people) of the global population of 7001 million people, respectively. Caesarean deliveries comprised 29.6% (5.8/19.6 million operations; 95% CI: 9.7–91.7) of the total surgical volume in very-low-expenditure Member States, but only 2.7% (5.1/187.0 million operations; 95% CI: 2.2–3.4) in high-expenditure Member States. Conclusion Surgical volume is large and growing, with caesarean delivery comprising nearly a third of operations in most resource-poor settings. Nonetheless, there remains disparity in the provision of surgical services globally. PMID:26966331

  15. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  16. Global average concentration and trend for hydroxyl radicals deduced from ALE/GAGE trichloroethane (methyl chloroform) data for 1978-1990

    NASA Technical Reports Server (NTRS)

    Prinn, R.; Cunnold, D.; Simmonds, P.; Alyea, F.; Boldi, R.; Crawford, A.; Fraser, P.; Gutzler, D.; Hartley, D.; Rosen, R.

    1992-01-01

An optimal estimation inversion scheme is utilized with atmospheric data and emission estimates to determine the globally averaged CH3CCl3 tropospheric lifetime and OH concentration. The data are taken from surface-station atmospheric measurements of 1,1,1-trichloroethane and show an annual increase of 4.4 +/- 0.2 percent. Industrial emission estimates and a small oceanic loss rate are included, and a positive trend in OH concentration of 1.0 +/- 0.8 percent/yr is deduced for the same period (1978-1990). The positive OH trend is consistent with theories regarding OH and ozone trends with respect to land use and global warming. Attention is given to the effects of the ENSO on the CH3CCl3 data and the assumption of continuing current industrial anthropogenic emissions. A novel tropical atmospheric tracer-transport mechanism is noted with respect to the CH3CCl3 data.
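The inversion described above is far more elaborate (optimal estimation over a multi-box atmosphere), but the underlying budget relation can be sketched with a one-box model: d(burden)/dt = emissions - burden/tau, so the lifetime, and from it a mean OH level, follows from the burden, the emission rate, and the observed growth rate. All numbers and the rate coefficient `k` below are illustrative assumptions, not values from the paper:

```python
def lifetime_years(burden_tg, emissions_tg_yr, growth_tg_yr):
    """One-box CH3CCl3 budget:
    d(burden)/dt = E - burden / tau  =>  tau = burden / (E - d(burden)/dt)."""
    return burden_tg / (emissions_tg_yr - growth_tg_yr)

def mean_oh(tau_years, k_cm3_per_s):
    """Mean OH number density implied by lifetime tau, assuming the
    loss is dominated by reaction with OH: 1/tau = k * [OH]."""
    seconds_per_year = 3.156e7
    return 1.0 / (k_cm3_per_s * tau_years * seconds_per_year)

# Illustrative magnitudes only (roughly CH3CCl3-like).
tau = lifetime_years(burden_tg=3.0, emissions_tg_yr=0.6, growth_tg_yr=0.12)
oh = mean_oh(tau, k_cm3_per_s=1.0e-14)
```

With these toy inputs the lifetime is a few years and the implied OH density is of order 10^5 to 10^6 molecules/cm^3, the right ballpark for tropospheric OH.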

  17. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    SciTech Connect

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-04-01

The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A₁) was set in the range of 0.0-12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of γ index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within ±0.7%. From the dose area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59-103.49%). By contrast, the irradiated areas receiving more than 95% dose for A₁ = 12 mm were 1.46 and 1.33 times larger than those for A₁ = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.

  18. Transports and budgets of volume, heat, and salt from a global eddy-resolving ocean model

    SciTech Connect

    McCann, M.P.; Semtner, A.J. Jr.; Chervin, R.M.

    1994-07-01

The results from an integration of a global ocean circulation model have been condensed into an analysis of the volume, heat, and salt transports among the major ocean basins. Transports are also broken down between the model's Ekman, thermocline, and deep layers. Overall, the model does well. Horizontal exchanges of mass, heat, and salt between ocean basins have reasonable values, and the volume of North Atlantic Deep Water (NADW) transport is in general agreement with what limited observations exist. On a global basis the zonally integrated meridional heat transport is poleward at all latitudes except for the latitude band 30°S to 45°S. This anomalous transport is most likely a signature of the model's inability to form Antarctic Intermediate Water (AAIW) and Antarctic Bottom Water (AABW) properly. Eddy heat transport is strong at the equator where its convergence heats the equatorial Pacific about twice as much as it heats the equatorial Atlantic. The greater heating in the Pacific suggests that mesoscale eddies may be a vital mechanism for warming and maintaining an upwelling portion of the global conveyor-belt circulation. The model's fresh water transport compares well with observations. However, in the Atlantic there is an excessive southward transport of fresh water due to the absence of the Mediterranean outflow and weak northward flow of AAIW. Perhaps the model's greatest weakness is the lack of strong AAIW and AABW circulation cells. Accurate thermohaline forcing in the North Atlantic (based on numerous hydrographic observations) helps the model adequately produce NADW. In contrast, the southern ocean is an area of sparse observation. Better thermohaline observations in this area may be needed if models such as this are to produce the deep convection that will achieve more accurate simulations of the global 3-dimensional circulation. 41 refs., 18 figs., 1 tab.

  19. A finite-volume module for simulating global all-scale atmospheric flows

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Deconinck, Willem; Hamrud, Mats; Kühnlein, Christian; Mozdzynski, George; Szmelter, Joanna; Wedi, Nils P.

    2016-06-01

    The paper documents the development of a global nonhydrostatic finite-volume module designed to enhance an established spectral-transform based numerical weather prediction (NWP) model. The module adheres to NWP standards, with formulation of the governing equations based on the classical meteorological latitude-longitude spherical framework. In the horizontal, a bespoke unstructured mesh with finite-volumes built about the reduced Gaussian grid of the existing NWP model circumvents the notorious stiffness in the polar regions of the spherical framework. All dependent variables are co-located, accommodating both spectral-transform and grid-point solutions at the same physical locations. In the vertical, a uniform finite-difference discretisation facilitates the solution of intricate elliptic problems in thin spherical shells, while the pliancy of the physical vertical coordinate is delegated to generalised continuous transformations between computational and physical space. The newly developed module assumes the compressible Euler equations as default, but includes reduced soundproof PDEs as an option. Furthermore, it employs semi-implicit forward-in-time integrators of the governing PDE systems, akin to but more general than those used in the NWP model. The module shares the equal regions parallelisation scheme with the NWP model, with multiple layers of parallelism hybridising MPI tasks and OpenMP threads. The efficacy of the developed nonhydrostatic module is illustrated with benchmarks of idealised global weather.

  20. An analysis of the global spatial variability of column-averaged CO2 from SCIAMACHY and its implications for CO2 sources and sinks

    USGS Publications Warehouse

    Zhang, Zhen; Jiang, Hong; Liu, Jinxun; Zhang, Xiuying; Huang, Chunlin; Lu, Xuehe; Jin, Jiaxin; Zhou, Guomo

    2014-01-01

Satellite observations of carbon dioxide (CO2) are important because of their potential for improving the scientific understanding of global carbon cycle processes and budgets. We present an analysis of the column-averaged dry air mole fractions of CO2 (denoted XCO2) of the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) retrievals, which were derived from a satellite instrument with relatively long-term records (2003–2009) and with measurements sensitive to the near surface. The spatial-temporal distributions of remotely sensed XCO2 have significant spatial heterogeneity with about 6–8% variations (367–397 ppm) during 2003–2009, challenging the traditional view that the spatial heterogeneity of atmospheric CO2 is not significant. Relationships between XCO2 and surface CO2 were found for major ecosystems, with the exception of tropical forest. In addition, when compared with a simulated terrestrial carbon uptake from the Integrated Biosphere Simulator (IBIS) and the Emissions Database for Global Atmospheric Research (EDGAR) carbon emission inventory, the latitudinal gradient of XCO2 seasonal amplitude was influenced by the combined effect of terrestrial carbon uptake, carbon emission, and atmospheric transport, suggesting no direct implications for terrestrial carbon sinks. From the investigation of the growth rate of XCO2 we found that the increase of CO2 concentration was dominated by temperature in the northern hemisphere (20–90°N) and by precipitation in the southern hemisphere (20–90°S), with the major contribution to global average occurring in the northern hemisphere. These findings indicated that the satellite measurements of atmospheric CO2 improve not only the estimations of atmospheric inversion, but also the understanding of the terrestrial ecosystem carbon dynamics and its feedback to atmospheric CO2.

  1. SU-C-304-01: Investigation of Various Detector Response Functions and Their Geometry Dependence in a Novel Method to Address Ion Chamber Volume Averaging Effect

    SciTech Connect

    Barraclough, B; Lebron, S; Li, J; Fan, Qiyong; Liu, C; Yan, G

    2015-06-15

Purpose: A novel convolution-based approach has been proposed to address ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and their impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of the respective IC. The beam model parameters were reoptimized using the Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the dependence of the kernel on field geometry (depth and field size). The profiles calculated with the beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. The Gaussian function had the best performance, with mean penumbra width difference within 0.5 mm. The use of geometry-dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. Significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and parabolic functions.

  2. Modelling the flow of a second order fluid through and over a porous medium using the volume averages. II. The stress boundary condition

    NASA Astrophysics Data System (ADS)

    Minale, Mario

    2016-02-01

In this paper, a stress boundary condition at the interface between a porous medium saturated by a viscoelastic fluid and the free viscoelastic fluid is derived. Volume averages are used to upscale the problem. The boundary condition is obtained on the assumption that the free fluid stress is transferred partially to the fluid within the porous medium and partially to the solid skeleton. To this end, the momentum balance on the solid skeleton saturated by the viscoelastic fluid is derived and a generalised Biot's equation is obtained, which is coupled with the generalised Brinkman's equation derived in Part I of the paper. Together, they state that the whole stress carried by the porous medium, the sum of the fluid stress and the solid-skeleton stress, is not dissipated. The boundary condition derived here shows no stress jump and, as in Part I, a second order fluid of Coleman and Noll is considered as the viscoelastic fluid to emphasize the effect of elasticity. The stress boundary condition at the interface between a homogeneous solid and the porous medium saturated by the viscoelastic fluid is also obtained.

  3. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    PubMed

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the head of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends strongly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant roles of head anatomy, phone position, frequency, local tissue inhomogeneity and tissue composition in the exposed area were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found. PMID:22005524

  4. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
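The scalar-weighted case of the averaging problem described above reduces to an eigenvalue problem: the average quaternion is the unit eigenvector, associated with the largest eigenvalue, of the weighted sum of outer products M = sum_i w_i q_i q_i', which handles the q vs. -q sign ambiguity automatically. A sketch of that scalar-weighted case (function and variable names are mine, not from the Note):

```python
import numpy as np

def average_quaternion(quats, weights=None):
    """Weighted quaternion average via the dominant eigenvector of
    M = sum_i w_i * outer(q_i, q_i).  Because M is quadratic in each
    q_i, the result is invariant to the sign ambiguity q ~ -q."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    M = sum(wi * np.outer(qi, qi) for wi, qi in zip(w, q))
    vals, vecs = np.linalg.eigh(M)          # ascending eigenvalues
    avg = vecs[:, np.argmax(vals)]          # dominant eigenvector
    return avg / np.linalg.norm(avg)

# Demo (convention [scalar, x, y, z]): two rotations about z by
# +0.6 and -0.6 rad should average to the identity quaternion.
qs = [[np.cos(0.3), 0.0, 0.0, np.sin(0.3)],
      [np.cos(0.3), 0.0, 0.0, -np.sin(0.3)]]
q_avg = average_quaternion(qs)
```

Note that the returned quaternion is defined only up to sign, as expected for an attitude representation.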

  5. The Global Classroom: A Thematic Multicultural Model for the K-6 and ESL Classroom. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    De Cou-Landberg, Michelle

    This two-volume resource guide is designed to help K-6 and ESL teachers implement multicultural whole language learning through thematic social studies units. The four chapters in Volume 1 address universal themes: (1) "Climates and Seasons: Watching the Weather"; (2) "Trees and Plants: Our Rich, Green World"; (3) "Animals around the World: Tame,…

  6. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    SciTech Connect

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-06-15

Purpose: Fourier-based deconvolution approaches used to eliminate ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1mm) as an objective function for optimization. TPS beam model parameters for focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, which were the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3mm criterion and from 82.6% to 97.2% with a 2%/2mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
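The central operation in the convolution-based VAE method described above is a 1-D convolution of a calculated profile with a Gaussian whose σ equals the chamber radius. A minimal sketch on a synthetic step-edge profile (the field size, grid spacing, and σ below are illustrative, not from the abstract), showing the penumbra broadening the method is designed to capture:

```python
import numpy as np

def convolve_with_chamber(profile, dx_mm, sigma_mm):
    """Simulate ion chamber volume averaging by convolving a beam
    profile with a normalized Gaussian detector response function
    (sigma ~ chamber radius)."""
    half = int(np.ceil(4.0 * sigma_mm / dx_mm))
    xk = np.arange(-half, half + 1) * dx_mm
    kernel = np.exp(-0.5 * (xk / sigma_mm) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

def right_penumbra_mm(x_mm, profile):
    """Width of the region on the right edge where the normalized
    profile falls between 80% and 20% of its maximum."""
    p = profile / profile.max()
    mask = (x_mm > 0) & (p > 0.2) & (p < 0.8)
    return mask.sum() * (x_mm[1] - x_mm[0])

dx = 0.1                                   # grid spacing, mm
x = np.arange(-1000, 1001) * dx            # -100 mm .. +100 mm
ideal = (np.abs(x) < 50.0).astype(float)   # sharp-edged 10 cm field
blurred = convolve_with_chamber(ideal, dx, sigma_mm=3.0)
pen_ideal = right_penumbra_mm(x, ideal)
pen_blurred = right_penumbra_mm(x, blurred)
```

For a Gaussian-blurred step the 80-20% distance is about 1.68σ, so a 3 mm detector σ alone contributes roughly 5 mm of apparent penumbra.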

  7. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 2, part 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An assessment of the status of research using Global Weather Experiment (GWE) data and of the progress in meeting the objectives of the GWE, i.e., better knowledge and understanding of the atmosphere in order to provide more useful weather prediction services. Volume Two consists of a compilation of the papers presented during the workshop. These cover studies that addressed GWE research objectives and utilized GWE information. The titles in Part 2 of this volume include General Circulation Planetary Waves, Interhemispheric, Cross-Equatorial Exchange, Global Aspects of Monsoons, Midlatitude-Tropical Interactions During Monsoons, Stratosphere, Southern Hemisphere, Parameterization, Design of Observations, Oceanography, Future Possibilities, Research Gaps, with an Appendix.

  8. Analysis of the variation in OCT measurements of a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced 1-4 weeks apart, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semi-automatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated at 1.00 +/- 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of within-subject differences between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up are required before a significant change of PIMD-Average [0;2π] can be detected.
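The sample-size argument in the abstract above follows directly from the nested variance components: a single cross-sectional estimate carries the large between-subject component, whereas a within-subject baseline/follow-up difference cancels it. A small sketch using the roughly 25:5:1 subject:visit:volume variance ratios reported above (the absolute numbers are illustrative):

```python
def var_single_estimate(s2_subject, s2_visit, s2_volume, k_volumes=1):
    """Variance of one PIMD-Average estimate from a single visit,
    averaging k_volumes repeated volume acquisitions."""
    return s2_subject + s2_visit + s2_volume / k_volumes

def var_within_subject_change(s2_visit, s2_volume, k_volumes=1):
    """Variance of (follow-up - baseline) in the same subject:
    the between-subject component cancels in the difference."""
    return 2.0 * (s2_visit + s2_volume / k_volumes)

# Illustrative components in the ~25:5:1 ratio from the abstract,
# with 3 volumes averaged per visit as in the study design.
v_cross = var_single_estimate(25.0, 5.0, 1.0, k_volumes=3)
v_change = var_within_subject_change(5.0, 1.0, k_volumes=3)
```

Under these ratios the within-subject change has roughly a third of the variance of a cross-sectional estimate, which is why group comparisons of longitudinal differences need far smaller samples than comparisons of one-time measurements.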

  9. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students

    PubMed Central

    Lee, Romeo B.; Baring, Rito V.; Sta. Maria, Madelene A.

    2016-01-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700

  10. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students.

    PubMed

    Lee, Romeo B; Baring, Rito V; Sta Maria, Madelene A

    2016-02-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700

  11. Validation of the global distribution of CO2 volume mixing ratio in the mesosphere and lower thermosphere from SABER

    NASA Astrophysics Data System (ADS)

    Rezac, L.; Jian, Y.; Yue, J.; Russell, J. M.; Kutepov, A.; Garcia, R.; Walker, K.; Bernath, P.

    2015-12-01

    The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument on board the Thermosphere Ionosphere Mesosphere Energetics and Dynamics satellite has been measuring the limb radiance in 10 broadband infrared channels over the altitude range from ~ 400 km to the Earth's surface since 2002. The kinetic temperatures and CO2 volume mixing ratios (VMRs) in the mesosphere and lower thermosphere have been simultaneously retrieved using SABER limb radiances at 15 and 4.3 µm under nonlocal thermodynamic equilibrium (non-LTE) conditions. This paper presents results of a validation study of the SABER CO2 VMRs obtained with a two-channel, self-consistent temperature/CO2 retrieval algorithm. Results are based on comparisons with coincident CO2 measurements made by the Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) and simulations using the Specified Dynamics version of the Whole Atmosphere Community Climate Model (SD-WACCM). The SABER CO2 VMRs are in agreement with ACE-FTS observations within reported systematic uncertainties from 65 to 110 km. The annual average SABER CO2 VMR falls off from a well-mixed value above ~80 km. Latitudinal and seasonal variations of CO2 VMRs are substantial. SABER observations and the SD-WACCM simulations are in overall agreement for CO2 seasonal variations, as well as global distributions in the mesosphere and lower thermosphere. Not surprisingly, the CO2 seasonal variation is shown to be driven by the general circulation, converging in the summer polar mesopause region and diverging in the winter polar mesopause region.

  12. Handbook of solar energy data for south-facing surfaces in the United States. Volume 2: Average hourly and total daily insolation data for 235 localities. Alaska - Montana

    NASA Technical Reports Server (NTRS)

    Smith, J. H.

    1980-01-01

    Average hourly and daily total insolation estimates for 235 United States locations are presented. Values are presented for a selected number of array tilt angles on a monthly basis. All units are in kilowatt hours per square meter.

  13. Insolation data manual: Long-term monthly averages of solar radiation, temperature, degree-days, and global KT for 248 National Weather Service stations and direct normal solar radiation data manual: Long-term, monthly mean, daily totals for 235 National Weather Service stations

    NASA Astrophysics Data System (ADS)

    1990-07-01

    The Insolation Data Manual presents monthly averaged data which describes the availability of solar radiation at 248 National Weather Service (NWS) stations, principally in the United States. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data, generally from 1952 to 1975, and listed for each location. Insolation values represent monthly average daily totals of global radiation on a horizontal surface and are depicted using the three units of measurement: kJ/sq m per day, Btu/sq ft per day and langleys per day. Average daily maximum, minimum and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 C (65 F). For each station, global KT (cloudiness index) values were calculated on a monthly and annual basis. Global KT is an index of cloudiness and indicates fractional transmittance of horizontal radiation, from the top of the atmosphere to the earth's surface. The second section of this volume presents long-term monthly and annual averages of direct normal solar radiation for 235 NWS stations, including a discussion of the basic derivation process. This effort is in response to a generally recognized need for reliable direct normal data and the recent availability of 23 years of hourly averages for 235 stations. The relative inaccessibility of these data on microfiche further justifies reproducing at least the long-term averages in a useful format. In addition to a definition of terms and an overview of the ADIPA model, a discussion of model validation results is presented.
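    The global KT index described above is, by definition, the ratio of measured horizontal insolation to the extraterrestrial insolation on the same surface. A minimal sketch of that calculation, using the standard daily extraterrestrial-radiation formula and an illustrative station value (not data from the manual):

```python
import math

def extraterrestrial_daily(lat_deg, day_of_year):
    """Daily extraterrestrial insolation on a horizontal surface (kJ/m^2/day),
    from the standard solar-geometry formula (Cooper declination,
    eccentricity-corrected solar constant G_sc = 1367 W/m^2)."""
    G_sc = 1367.0
    phi = math.radians(lat_deg)
    delta = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    omega_s = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta))))
    E0 = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)
    # 86.4 converts W/m^2 * day to kJ/m^2/day (86 400 s/day / 1000 J/kJ)
    return (86.4 / math.pi) * G_sc * E0 * (
        math.cos(phi) * math.cos(delta) * math.sin(omega_s)
        + omega_s * math.sin(phi) * math.sin(delta)
    )

H_measured = 15000.0                    # hypothetical station value, kJ/m^2/day
H0 = extraterrestrial_daily(40.0, 172)  # ~June 21 at 40 N
KT = H_measured / H0                    # fractional transmittance (cloudiness index)
print(f"H0 = {H0:.0f} kJ/m2/day, KT = {KT:.2f}")
```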

  14. Variations of the earth's magnetic field and rapid climatic cooling: A possible link through changes in global ice volume

    NASA Technical Reports Server (NTRS)

    Rampino, M. R.

    1979-01-01

    A possible relationship between large scale changes in global ice volume, variations in the earth's magnetic field, and short term climatic cooling is investigated through a study of the geomagnetic and climatic records of the past 300,000 years. The calculations suggest that redistribution of the Earth's water mass can cause rotational instabilities which lead to geomagnetic excursions; these magnetic variations in turn may lead to short-term coolings through upper atmosphere effects. Such double coincidences of magnetic excursions and sudden coolings at times of ice volume changes have occurred at 13,500, 30,000, 110,000, and 135,000 YBP.

  15. Citizenship and Citizenship Education in a Global Age: Politics, Policies, and Practices in China. Global Studies in Education. Volume 2

    ERIC Educational Resources Information Center

    Law, Wing-Wah

    2011-01-01

    This book examines issues of citizenship, citizenship education, and social change in China, exploring the complexity of interactions among global forces, the nation-state, local governments, schools, and individuals--including students--in selecting and identifying with elements of citizenship and citizenship education in a multileveled polity.…

  16. Single mammalian cells compensate for differences in cellular volume and DNA copy number through independent global transcriptional mechanisms

    PubMed Central

    Padovan-Merhar, Olivia; Nair, Gautham P.; Biaesch, Andrew; Mayer, Andreas; Scarfone, Steven; Foley, Shawn W.; Wu, Angela R.; Churchman, L. Stirling; Singh, Abhyudai; Raj, Arjun

    2015-01-01

    Individual mammalian cells exhibit large variability in cellular volume, even with the same absolute DNA content, and so must compensate for differences in DNA concentration in order to maintain a constant concentration of gene expression products. Using single-molecule counting and computational image analysis, we show that transcript abundance correlates with cellular volume at the single-cell level due to increased global transcription in larger cells. Cell fusion experiments establish that increased cellular content itself can directly increase transcription. Quantitative analysis shows that this mechanism measures the ratio of cellular volume to DNA content, most likely through sequestration of a transcription factor to DNA. Analysis of transcriptional bursts reveals a separate mechanism for gene dosage compensation after DNA replication that enables proper transcriptional output during early and late S phase. Our results provide a framework for quantitatively understanding the relationships between DNA content, cell size, and gene expression variability in single cells. PMID:25866248

  17. No significant brain volume decreases or increases in adults with high-functioning autism spectrum disorder and above average intelligence: a voxel-based morphometric study.

    PubMed

    Riedel, Andreas; Maier, Simon; Ulbrich, Melanie; Biscaldi, Monica; Ebert, Dieter; Fangmeier, Thomas; Perlov, Evgeniy; Tebartz van Elst, Ludger

    2014-08-30

    Autism spectrum disorder (ASD) is increasingly being recognized as an important issue in adult psychiatry and psychotherapy. High intelligence indicates overall good brain functioning and might thus present a particularly good opportunity to study possible cerebral correlates of core autistic features in terms of impaired social cognition, communication skills, the need for routines, and circumscribed interests. Anatomical MRI data sets for 30 highly intelligent patients with high-functioning autism and 30 pairwise-matched control subjects were acquired and analyzed with voxel-based morphometry. The gray matter volume of the pairwise-matched patients and the controls did not differ significantly. When correcting for total brain volume influences, the patients with ASD exhibited smaller left superior frontal volumes on a trend level. Heterogeneous volumetric findings in earlier studies might partly be explained by study samples biased by a high inclusion rate of secondary forms of ASD, which often go along with neuronal abnormalities. Including only patients with high IQ scores might have decreased the influence of secondary forms of ASD and might explain the absence of significant volumetric differences between the patients and the controls in this study. PMID:24953998

  18. Global Sentry: NASA/USRA high altitude reconnaissance aircraft design, volume 2

    NASA Technical Reports Server (NTRS)

    Alexandru, Mona-Lisa; Martinez, Frank; Tsou, Jim; Do, Henry; Peters, Ashish; Chatsworth, Tom; Yu, YE; Dhillon, Jaskiran

    1990-01-01

    The Global Sentry is a high-altitude reconnaissance aircraft design for the NASA/USRA design project. The Global Sentry uses proven technologies and lightweight composites, and meets the R.F.P. requirements. The mission requirements for the Global Sentry are described. The configuration options are discussed and a description of the final design is given. Preliminary sizing analyses and the mass properties of the design are presented. The aerodynamic features of the Global Sentry are described, along with the stability and control characteristics designed into the flight control system. The performance characteristics are discussed, as are the propulsion installation and system layout. The Global Sentry structural design is examined, including a wing structural analysis. The cockpit, controls, and display layouts are covered, as are manufacturing and life-cycle cost estimation. Reliability is discussed. Conclusions about the current Global Sentry design are presented, along with suggested areas for future engineering work.

  19. Infusing a Global Perspective into the Study of Agriculture: Student Activities Volume II.

    ERIC Educational Resources Information Center

    Martin, Robert A., Ed.

    These student activities are designed to be used in a variety of places in the curriculum to provide a global perspective for students as they study agriculture. This document is not a unit of instruction; rather, teachers are encouraged to study the materials and decide which will be helpful in adding a global perspective to the learning…

  20. The computational structural mechanics testbed architecture. Volume 4: The global-database manager GAL-DBM

    NASA Technical Reports Server (NTRS)

    Wright, Mary A.; Regelbrugge, Marc E.; Felippa, Carlos A.

    1989-01-01

    This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 4 describes the nominal-record data management component of the NICE software. It is intended for all users.

  1. Global and regional brain volumes normalization in weight-recovered adolescents with anorexia nervosa: preliminary findings of a longitudinal voxel-based morphometry study

    PubMed Central

    Bomba, Monica; Riva, Anna; Morzenti, Sabrina; Grimaldi, Marco; Neri, Francesca; Nacinovich, Renata

    2015-01-01

    The recent literature on anorexia nervosa (AN) suggests that functional and structural abnormalities of cortico-limbic areas might play a role in the evolution of the disease. We explored global and regional brain volumes in a cross-sectional and follow-up study on adolescents affected by AN. Eleven adolescents with AN underwent a voxel-based morphometry study at time of diagnosis and immediately after weight recovery. Data were compared to volumes carried out in eight healthy, age and sex matched controls. Subjects with AN showed increased cerebrospinal fluid volumes and decreased white and gray matter volumes, when compared to controls. Moreover, significant regional gray matter decrease in insular cortex and cerebellum was found at time of diagnosis. No regional white matter decrease was found between samples and controls. Correlations between psychological evaluation and insular volumes were explored. After weight recovery gray matter volumes normalized while reduced global white matter volumes persisted. PMID:25834442

  2. Ocean basin volume constraints on global sea level since the Jurassic

    NASA Astrophysics Data System (ADS)

    Seton, M.; Müller, R. D.

    2011-12-01

    Changes in the volume of the ocean basins, predominantly via changes in the age-area distribution of oceanic lithosphere, have been suggested as the main driver for long-term eustatic sea-level change. As ocean lithosphere cools and thickens, ocean depth increases. The balance between the abundance of hot and buoyant crust along mid-ocean ridges relative to abyssal plains is the primary driving force of long-term sea-level changes. The emplacement of volcanic plateaus and chains as well as sedimentation contribute to raising eustatic sea level. Quantifying the average ocean basin depth through time relies primarily on the present-day preserved seafloor spreading record, an analysis of the spatio-temporal record of plate boundary processes recorded on the continental margins adjacent to ocean basins, and a consideration of the rules of plate tectonics, to reconstruct the history of seafloor spreading in the oceanic basins through time. This approach has been successfully applied to predict the magnitude and pattern of eustatic sea-level change since the Cretaceous (Müller et al. 2008), but uncertainties in reconstructing mid-ocean ridges and flanks increase back through time, given that we mainly depend on information preserved in ocean crust. We have reconstructed the age-area distribution of oceanic lithosphere and the plate boundary configurations back to the Jurassic (200 Ma) in order to assess long-term sea-level change from amalgamation to dispersal of Pangaea. We follow the methodology presented in Müller et al. (2008) but incorporate a new absolute plate motion model derived from Steinberger and Torsvik (2008) prior to 100 Ma, a merged Wessel et al. (2006) and Wessel and Kroenke (2008) fixed Pacific hotspot reference frame, and a revised model for the formation of Panthalassa and the Cretaceous Pacific. Importantly, we incorporate a model for the break-up of the Ontong Java-Manihiki-Hikurangi plateaus between 120-86 Ma. We extend a

  3. Global bifurcation and stability of steady states for a reaction-diffusion-chemotaxis model with volume-filling effect

    NASA Astrophysics Data System (ADS)

    Ma, Manjun; Wang, Zhi-An

    2015-08-01

    This paper is devoted to studying a reaction-diffusion-chemotaxis model with a volume-filling effect in a bounded domain with Neumann boundary conditions. We first establish the global existence of classical solutions bounded uniformly in time. Then, applying asymptotic analysis and bifurcation theory, we obtain both the local and global structure of steady states bifurcating from the homogeneous steady states in one dimension by treating the chemotactic coefficient as a bifurcation parameter. Moreover, we find the stability criterion of the bifurcating steady states and give a sufficient condition for the stability of steady states with small amplitude. The pattern formation of the model is numerically shown and the stability criterion is verified by our numerical simulations.

  4. Computational fluid dynamic studies of certain ducted bluff-body flowfields relevant to turbojet combustors. Volume 2: Time-averaged flowfield predictions for a proposed centerbody combustor

    NASA Astrophysics Data System (ADS)

    Raju, M. S.; Krishnamurthy, L.

    1986-07-01

    The near-wake region in a ducted bluff-body combustor was investigated by finite-difference computations. The numerical predictions are based upon the time-independent, Reynolds-averaged Navier-Stokes equations and the k-epsilon turbulence model. The steady-state calculations address both nonreacting and reacting flowfields in a novel configuration to more realistically simulate some of the essential features of the primary zone of a gas turbine combustion chamber. This configuration is characterized by turbulent mixing and combustion in the recirculating near-wake region downstream of an axisymmetric bluff body due to two annular air streams--an outer swirl-free flow and an inner swirling flow--and a central fuel jet. The latter contains propane for reacting flows and carbon dioxide for nonreacting flows. In view of the large number of geometrical and flow parameters involved, the reported results are concerned with only a limited parametric examination with the major emphasis being on nonreacting flows. Questions addressed for a particular set of geometric parameters include the effects of variation of mass flow rates in all three streams and the influence of swirl in the middle stream. Reacting computations investigate the influence of swirl on combustion, as well as that of combustion on the flowfield.

  5. Global energy and water balance: Characteristics from finite-volume atmospheric model of the IAP/LASG (FAMIL1)

    SciTech Connect

    Zhou, Linjiong; Bao, Qing; Liu, Yimin; Wu, Guoxiong; Wang, Wei-Chyung; Wang, Xiaocong; He, Bian; Yu, Haiyang; Li, Jiandong

    2015-03-01

    This paper documents version 1 of the Finite-volume Atmospheric Model of the IAP/LASG (FAMIL1), which has a flexible horizontal resolution up to a quarter of 1°. The model, currently running on the ‘‘Tianhe 1A’’ supercomputer, is the atmospheric component of the third-generation Flexible Global Ocean-Atmosphere-Land climate System model (FGOALS3) which will participate in the Coupled Model Intercomparison Project Phase 6 (CMIP6). In addition to describing the dynamical core and physical parameterizations of FAMIL1, this paper describes the simulated characteristics of energy and water balances and compares them with observational/reanalysis data. The comparisons indicate that the model simulates well the seasonal and geographical distributions of radiative fluxes at the top of the atmosphere and at the surface, as well as the surface latent and sensible heat fluxes. A major weakness in the energy balance is identified in the regions where extensive and persistent marine stratocumulus is present. Analysis of the global water balance also indicates realistic seasonal and geographical distributions with the global annual mean of evaporation minus precipitation being approximately 10⁻⁵ mm d⁻¹. We also examine the connections between the global energy and water balance and discuss the possible link between the two within the context of the findings from the reanalysis data. Finally, the model biases as well as possible solutions are discussed.

  6. Global energy and water balance: Characteristics from finite-volume atmospheric model of the IAP/LASG (FAMIL1)

    DOE PAGESBeta

    Zhou, Linjiong; Bao, Qing; Liu, Yimin; Wu, Guoxiong; Wang, Wei-Chyung; Wang, Xiaocong; He, Bian; Yu, Haiyang; Li, Jiandong

    2015-03-01

    This paper documents version 1 of the Finite-volume Atmospheric Model of the IAP/LASG (FAMIL1), which has a flexible horizontal resolution up to a quarter of 1°. The model, currently running on the ‘‘Tianhe 1A’’ supercomputer, is the atmospheric component of the third-generation Flexible Global Ocean-Atmosphere-Land climate System model (FGOALS3) which will participate in the Coupled Model Intercomparison Project Phase 6 (CMIP6). In addition to describing the dynamical core and physical parameterizations of FAMIL1, this paper describes the simulated characteristics of energy and water balances and compares them with observational/reanalysis data. The comparisons indicate that the model simulates well the seasonal and geographical distributions of radiative fluxes at the top of the atmosphere and at the surface, as well as the surface latent and sensible heat fluxes. A major weakness in the energy balance is identified in the regions where extensive and persistent marine stratocumulus is present. Analysis of the global water balance also indicates realistic seasonal and geographical distributions with the global annual mean of evaporation minus precipitation being approximately 10⁻⁵ mm d⁻¹. We also examine the connections between the global energy and water balance and discuss the possible link between the two within the context of the findings from the reanalysis data. Finally, the model biases as well as possible solutions are discussed.

  7. Mars Global Digital Dune Database (MGD3): North polar region (MC-1) distribution, applications, and volume estimates

    USGS Publications Warehouse

    Hayward, R.K.

    2011-01-01

    The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844 000 km² of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS and tabular format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is discussed and described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented, and these methods and estimates (1130 km³ to 3250 km³) are compared with those of previous researchers (1158 km³ to 15 000 km³). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.

  8. Global Trends in Educational Policy. International Perspectives on Education and Society. Volume 6

    ERIC Educational Resources Information Center

    Baker, David, Ed.; Wiseman, Alex, Ed.

    2005-01-01

    This volume of International Perspectives on Education and Society highlights the valuable role that educational policy plays in the development of education and society around the world. The role of policy in the development of education is crucial. Much rests on the decisions, support, and most of all resources that policymakers can either give…

  9. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    GPS has been used for spacecraft navigation for many years. In support of this, the US has committed that future GPS satellites will continue to provide signals in the Space Service Volume, and NASA is working with international agencies to obtain similar commitments from other providers. In support of this effort, I simulated multi-constellation navigation in the Space Service Volume. In this presentation, I extend the work to examine the navigational benefits and drawbacks of the new constellations. A major benefit is the reduced geometric dilution of precision (GDOP): I show that there is a substantial reduction in GDOP by using all of the GNSS constellations. The increased number of GNSS satellites broadcasting does produce mutual interference, raising the noise floor, and a near/far signal problem can also occur where a nearby satellite drowns out satellites that are far away; in these simulations, no major effect was observed. Typically, the use of multi-constellation GNSS navigation improves GDOP by a factor of two or more over GPS alone. In addition, at the higher altitudes, four-satellite solutions can be obtained much more often. This shows the value of having commitments to provide signals in the Space Service Volume. Besides a commitment to provide a minimum signal in the Space Service Volume, detailed signal gain information is useful for mission planning, and knowledge of group and phase delay over the pattern would also reduce the navigational uncertainty.
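    GDOP, the figure of merit discussed in this abstract, can be computed directly from the satellite-receiver geometry: each row of the geometry matrix is the negated line-of-sight unit vector augmented with 1 for the receiver clock term, and GDOP is the square root of the trace of (GᵀG)⁻¹. A minimal sketch with hypothetical positions (not the presentation's simulation setup):

```python
import numpy as np

def gdop(sat_positions, receiver):
    """Geometric dilution of precision for a pseudorange solution.

    Each row of the geometry matrix G is the negated line-of-sight
    unit vector to one satellite, augmented with 1 for the receiver
    clock term; GDOP = sqrt(trace((G^T G)^-1))."""
    los = sat_positions - receiver
    units = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([-units, np.ones((len(sat_positions), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# Hypothetical geometry in km: receiver on the surface, four MEO-range satellites.
rx = np.array([6371.0, 0.0, 0.0])
sats = np.array([
    [26560.0,      0.0,      0.0],
    [13280.0,  23000.0,     0.0],
    [13280.0, -11500.0,  19900.0],
    [13280.0, -11500.0, -19900.0],
])
g4 = gdop(sats, rx)
print(f"GDOP for this 4-satellite geometry: {g4:.2f}")
```

Adding more well-spread satellites (e.g. from additional constellations) generally lowers the GDOP returned by the same function.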

  10. Proceedings of Eco-Informa `96 - global networks for environmental information. Volume 10 and 11

    SciTech Connect

    1996-12-31

    This fourth Eco-Informa forum was designed to bridge the gap between scientific knowledge and real-world applications. Its goal is the enhancement of international exchange of global environmental technology among scientific, governmental, and commercial communities. Researchers, policy makers, and information managers presented papers that integrate scientific and technical issues with the global need for expanded networks, effective communication, and responsible decision making. Special emphasis was given to environmental information management and decision support systems, including environmental computing and modeling, data banks, and environmental education. In addition, fields such as waste management and remediation, sustainable food production, life-cycle analysis, and auditing were also addressed.

  11. The deep sea oxygen isotopic record: Significance for tertiary global ice volume history, with emphasis on the latest Miocene/early Pliocene

    SciTech Connect

    Prentice, M.L.

    1988-01-01

    Planktonic and benthic isotopic records, as well as carbonate sedimentation records, extending from 6.1 to 4.1 Ma for eastern South Atlantic Holes 526A and 525B are presented. These data suggest ice volume variations about a constant mean sufficient to drive sea level between 10 m and 75 m below present. Isotopic records at the deeper (2500 m) site have been enriched by up to 0.5‰ by dissolution. Carbonate accumulation rates at both sites quadrupled at 4.6 Ma, primarily because of increased production and, secondarily, decreased dissolution. The second part presents a Cenozoic-long composite δ¹⁸O curve for tropical shallow-dwelling planktonic foraminifers and the benthic foraminifer Cibicides at 2-4 km depths. Surface δ¹⁸O gradients between various low- and mid-latitude sites reflect (1) widespread SST stability through the Cenozoic and (2) significant change in Tasman Sea SST through the Tertiary. Assuming average SST for tropical non-upwelling areas was constant, the planktonic composite suggests that global ice volume for the last 40 m.y. has not been significantly less than today. Residual benthic δ¹⁸O reflects relatively warm and saline deep water until the early Miocene, after which time deep water progressively cooled. The third part presents δ¹⁸O for Recent Orbulina universa from 44 core-tops distributed through the Atlantic and Indian Oceans. The purpose was to test the hypothesis that Orbulina calcifies at constant temperature and so records only ice volume changes. Orbulina commonly calcifies at intermediate depths over a wide range of temperatures, salinities, and densities. These physical factors are not the primary controls on the spatial and vertical distribution of Orbulina.

  12. Global Journal of Computer Science and Technology. Volume 1.2

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2009-01-01

    Articles in this issue of "Global Journal of Computer Science and Technology" include: (1) Input Data Processing Techniques in Intrusion Detection Systems--Short Review (Suhair H. Amer and John A. Hamilton, Jr.); (2) Semantic Annotation of Stock Photography for CBIR Using MPEG-7 standards (R. Balasubramani and V. Kannan); (3) An Experimental Study…

  13. Technical Report Series on Global Modeling and Data Assimilation, Volume 41 : GDIS Workshop Report

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Schubert, Siegfried; Pozzi, Will; Mo, Kingtse; Wood, Eric F.; Stahl, Kerstin; Hayes, Mike; Vogt, Juergen; Seneviratne, Sonia; Stewart, Ron; Pulwarty, Roger; Stefanski, Robert

    2015-01-01

    The workshop "An International Global Drought Information System Workshop: Next Steps" was held on 10-13 December 2014 in Pasadena, California. The more than 60 participants from 15 countries spanned the drought research community and included select representatives from applications communities as well as providers of regional and global drought information products. The workshop was sponsored and supported by the US National Integrated Drought Information System (NIDIS) program, the World Climate Research Program (WCRP: GEWEX, CLIVAR), the World Meteorological Organization (WMO), the Group on Earth Observations (GEO), the European Commission Joint Research Centre (JRC), the US Climate Variability and Predictability (CLIVAR) program, and the US National Oceanic and Atmospheric Administration (NOAA) programs on Modeling, Analysis, Predictions and Projections (MAPP) and Climate Variability & Predictability (CVP). NASA/JPL hosted the workshop with logistical support provided by the GEWEX program office. The goal of the workshop was to build on past Global Drought Information System (GDIS) progress toward developing an experimental global drought information system. Specific goals were threefold: (i) to review recent research results focused on understanding drought mechanisms and their predictability on a wide range of time scales and to identify gaps in understanding that could be addressed by coordinated research; (ii) to help ensure that WRCP research priorities mesh with efforts to build capacity to address drought at the regional level; and (iii) to produce an implementation plan for a short duration pilot project to demonstrate current GDIS capabilities. See http://www.wcrp-climate.org/gdis-wkshp-2014-objectives for more information.

  14. Transforming America: Cultural Cohesion, Educational Achievement, and Global Competitiveness. Educational Psychology. Volume 7

    ERIC Educational Resources Information Center

    DeVillar, Robert A.; Jiang, Binbin

    2011-01-01

    Creatively and rigorously blending historical research and contemporary data from various disciplines, this book cogently and comprehensively illustrates the problems and opportunities the American nation faces in education, economics, and the global arena. The authors propose a framework of transformation that would render American culture no…

  15. Global Inventory of Regional and National Qualifications Frameworks. Volume I: Thematic Chapters

    ERIC Educational Resources Information Center

    Deij, Arjen; Graham, Michael; Bjornavold, Jens; Grm, Slava Pevec; Villalba, Ernesto; Christensen, Hanne; Chakroun, Borhene; Daelman, Katrien; Carlsen, Arne; Singh, Madhu

    2015-01-01

    The "Global Inventory of Regional and National Qualifications Frameworks," the result of collaborative work between the European Training Foundation (ETF), the European Centre for the Development of Vocational Training (Cedefop), UNESCO [United Nations Educational, Scientific and Cultural Organization] and UIL [UNESCO Institute for…

  16. Global Journal of Computer Science and Technology. Volume 9, Issue 5 (Ver. 2.0)

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2010-01-01

    This is a special issue published in version 1.0 of "Global Journal of Computer Science and Technology." Articles in this issue include: (1) [Theta] Scheme (Orthogonal Milstein Scheme), a Better Numerical Approximation for Multi-dimensional SDEs (Klaus Schmitz Abe); (2) Input Data Processing Techniques in Intrusion Detection Systems--Short Review…

  17. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. The impact of the monotonicity constraint is discussed.
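
    The periodic conservative remapping step described above can be illustrated with a first-order (piecewise-constant) overlap remap between two sets of layer edges. This is a minimal sketch of the idea only, not the higher-order scheme of the paper; all names are illustrative:

```python
import numpy as np

def remap_piecewise_constant(src_edges, src_vals, dst_edges):
    """Conservatively remap layer-mean values from source layers
    (edges src_edges, means src_vals) to destination layers: each
    destination layer receives the overlap-weighted average of the
    source layers, so the column integral is exactly conserved."""
    src_edges = np.asarray(src_edges, float)
    dst_edges = np.asarray(dst_edges, float)
    dst_vals = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_vals)):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        acc = 0.0
        for i, v in enumerate(src_vals):
            # thickness of the overlap between source layer i and dest layer j
            overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
            acc += overlap * v
        dst_vals[j] = acc / (hi - lo)
    return dst_vals
```

Because each destination value is an overlap-weighted mean, summing value times thickness over either grid gives the same column total, which is the conservation property the abstract emphasizes.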

  18. Comparison of average global exposure of population induced by a macro 3G network in different geographical areas in France and Serbia.

    PubMed

    Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe

    2016-09-01

    This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure in both time (the traffic distribution over 24 h was found to be highly variable) and space (the exposure to 3G networks in France was found to be roughly twice as high as in Serbia). Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc. PMID:27385053

  19. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.
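
    For orientation, the basic (non-conserving) Brinkman volume penalization of the rotating shallow water momentum equation adds a damping term that is active only inside the solid mask; the mass- and energy-conserving variant proposed in the paper modifies this generic form, which is sketched here only to fix ideas:

```latex
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u}
  = -g\,\nabla h \;-\; \frac{\chi(\mathbf{x})}{\varepsilon}\,\mathbf{u},
\qquad
\chi(\mathbf{x}) =
\begin{cases}
1 & \text{in the solid (land) region,}\\
0 & \text{in the fluid,}
\end{cases}
\qquad \varepsilon \ll 1,
```

so that in the limit of small penalization parameter $\varepsilon$ the velocity is driven to zero inside the masked coastline without meshing the boundary explicitly.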

  20. Age Differences in Big Five Behavior Averages and Variabilities Across the Adult Lifespan: Moving Beyond Retrospective, Global Summary Accounts of Personality

    PubMed Central

    Noftle, Erik E.; Fleeson, William

    2009-01-01

    In three intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the lifespan? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: younger adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In two experience-sampling studies, participants reported their current behavior multiple times per day for one- or two-week spans. In a third study, participants interacted in standardized laboratory activities on eight separate occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with standard deviations ranging from about half a point to well over one point on 6-point scales. Second, older adults were most variable in Openness whereas younger adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were both more greatly pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between younger adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives. PMID:20230131

  1. The balanced-force volume tracking algorithm and global embedded interface formulation for droplet dynamics with mass transfer

    SciTech Connect

    Francois, Marianne M; Carlson, Neil N

    2010-01-01

    Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics need to be predicted as part of the solution of the flow equations. In addition, the discontinuity in fluid density, viscosity and species concentration at the interface present additional numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling surface tension force (Francois et al., 2006) and we propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one- and two-dimensions. Then we present two and three-dimensional simulations of a single droplet dynamics rising by buoyancy with mass transfer.

  2. Temperature minima in the average thermal structure of the middle mesosphere (70 - 80 km) from analysis of 40- to 92-km SME global temperature profiles

    NASA Technical Reports Server (NTRS)

    Clancy, R. Todd; Rusch, David W.; Callan, Michael T.

    1994-01-01

    Global temperatures have been derived for the upper stratosphere and mesosphere from analysis of Solar Mesosphere Explorer (SME) limb radiance profiles. The SME temperature represent fixed local time observations at 1400 - 1500 LT, with partial zonal coverage of 3 - 5 longitudes per day over the 1982-1986 period. These new SME temperatures are compared to the COSPAR International Ionosphere Reference Atmosphere 86 (CIRA 86) climatology (Fleming et al., 1990) as well as stratospheric and mesospheric sounder (SAMS); Barnett and Corney, 1984), National Meteorological Center (NMC); (Gelman et al., 1986), and individual lidar and rocket observations. Significant areas of disagreement between the SME and CIRA 86 mesospheric temperatures are 10 K warmer SME temperatures at altitudes above 80 km. The 1981-1982 SAMS temperatures are in much closer agreement with the SME temperatures between 40 and 75 km. Although much of the SME-CIRA 86 disagreement probably stems from the poor vertical resolution of the observations comprising the CIRA 86 modelm, some portion of the differences may reflect 5- to 10-year temporal variations in mesospheric temperatures. The CIRA 86 climatology is based on 1973-1978 measurements. Relatively large (1 K/yr) 5- to 10-year trends in temperatures as functions of longitude, latitude, and altitude have been observed for both the upper stratosphere (Clancy and Rusch, 1989a) and mesosphere (Clancy and Rusch, 1989b; Hauchecorne et al., 1991). The SME temperatures also exhibit enhanced amplitudes for the semiannual oscillation (SAO) of upper mesospheric temperatures at low latitudes, which are not evident in the CIRA 86 climatology. The so-called mesospheric `temperature inversions' at wintertime midlatitudes, which have been observed by ground-based lidar (Hauschecorne et al., 1987) and rocket in situ measurements (Schmidlin, 1976), are shown to be a climatological aspect of the mesosphere, based on the SME observations.

  3. Evaluation of the skill of North-American Multi-Model Ensemble (NMME) Global Climate Models in predicting average and extreme precipitation and temperature over the continental USA

    NASA Astrophysics Data System (ADS)

    Slater, Louise J.; Villarini, Gabriele; Bradley, Allen A.

    2016-08-01

    This paper examines the forecasting skill of eight Global Climate Models from the North-American Multi-Model Ensemble project (CCSM3, CCSM4, CanCM3, CanCM4, GFDL2.1, FLORb01, GEOS5, and CFSv2) over seven major regions of the continental United States. The skill of the monthly forecasts is quantified using the mean square error skill score. This score is decomposed to assess the accuracy of the forecast in the absence of biases (potential skill) and in the presence of conditional (slope reliability) and unconditional (standardized mean error) biases. We summarize the forecasting skill of each model according to the initialization month of the forecast and lead time, and test the models' ability to predict extended periods of extreme climate conducive to eight `billion-dollar' historical flood and drought events. Results indicate that the most skillful predictions occur at the shortest lead times and decline rapidly thereafter. Spatially, potential skill varies little, while actual model skill scores exhibit strong spatial and seasonal patterns primarily due to the unconditional biases in the models. The conditional biases vary little by model, lead time, month, or region. Overall, we find that the skill of the ensemble mean is equal to or greater than that of any of the individual models. At the seasonal scale, the drought events are better forecast than the flood events, and are predicted equally well in terms of high temperature and low precipitation. Overall, our findings provide a systematic diagnosis of the strengths and weaknesses of the eight models over a wide range of temporal and spatial scales.
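
    The skill-score decomposition referred to above is commonly written in a standard Murphy-style form, in which the mean square error skill score against climatology splits into potential skill minus conditional and unconditional bias terms. A minimal sketch, with illustrative function names:

```python
import numpy as np

def mse_skill_decomposition(forecast, obs):
    """Murphy-style decomposition of the MSE skill score versus a
    climatological (observed-mean) reference:
        MSESS = r^2 - (r - sf/so)^2 - ((mf - mo)/so)^2
    where r^2 is the potential skill, the second term the conditional
    (slope-reliability) bias, and the third the unconditional
    (standardized-mean-error) bias."""
    f = np.asarray(forecast, float)
    o = np.asarray(obs, float)
    r = np.corrcoef(f, o)[0, 1]
    sf, so = f.std(), o.std()          # population std (ddof=0)
    cond_bias = (r - sf / so) ** 2
    uncond_bias = ((f.mean() - o.mean()) / so) ** 2
    return {"potential": r ** 2,
            "conditional_bias": cond_bias,
            "unconditional_bias": uncond_bias,
            "msess": r ** 2 - cond_bias - uncond_bias}
```

With population statistics the three terms recombine exactly to 1 - MSE/var(obs), which is why the decomposition isolates where skill is lost: a perfect forecast has potential skill 1 and both biases 0.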

  4. Seasonal cycle of volume transport through Kerama Gap revealed by a 20-year global HYbrid Coordinate Ocean Model reanalysis

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Metzger, E. Joseph; Thoppil, Prasad; Hurlburt, Harley E.; Zamudio, Luis; Smedstad, Ole Martin; Na, Hanna; Nakamura, Hirohiko; Park, Jae-Hun

    2015-12-01

    The temporal variability of volume transport from the North Pacific Ocean to the East China Sea (ECS) through Kerama Gap (between Okinawa Island and Miyakojima Island - a part of Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled Ocean Data Assimilation from 1993 to 2012. The HYCOM mean transport is 2.1 Sv (positive into the ECS, 1 Sv = 10⁶ m³/s) from June 2009 to June 2011, in good agreement with the observed 2.0 Sv transport during the same period. This is similar to the 20-year mean Kerama Gap transport of 1.95 ± 4.0 Sv. The 20-year monthly mean volume transport (transport seasonal cycle) is maximum in October (3.0 Sv) and minimum in November (0.5 Sv). The annual variation component (345-400 days), mesoscale eddy component (70-345 days), and Kuroshio meander component (< 70 days) are separated to determine their contributions to the transport seasonal cycle. The annual variation component has a close relation with the local wind field and increases (decreases) transport into the ECS through Kerama Gap in summer (winter). Most of the variations in the transport seasonal cycle come from the mesoscale eddy component. The impinging mesoscale eddies increase the transport into the ECS during January, February, May, and October, and decrease it in March, April, November, and December, but have little effect in summer (June-September). The Kuroshio meander component causes smaller transport variations in summer than in winter.
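
    The separation into annual (345-400 day), mesoscale eddy (70-345 day), and Kuroshio meander (< 70 day) components amounts to band-pass filtering the transport time series by period. A crude FFT-mask filter sketches the idea (the paper's actual filter design is not specified here; names are illustrative):

```python
import numpy as np

def band_component(x, dt_days, period_lo, period_hi):
    """Return the part of an evenly sampled series whose Fourier
    periods (in days) fall in [period_lo, period_hi], by zeroing all
    other bins of the real FFT and transforming back."""
    x = np.asarray(x, float)
    spec = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(x.size, d=dt_days)   # cycles per day
    periods = np.empty_like(freqs)
    periods[0] = np.inf                          # zero frequency
    periods[1:] = 1.0 / freqs[1:]
    keep = (periods >= period_lo) & (periods <= period_hi)
    return np.fft.irfft(spec * keep, n=x.size)
```

Applying it with the three period bands above to a daily transport record would yield the annual, eddy, and meander contributions whose sum (plus the mean) reconstructs the original series.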

  5. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function

    PubMed Central

    Yotti, Raquel; del Villar, Candelas Pérez; del Álamo, Juan C.; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Carlos Antoranz, J.; Mar Desco, M.; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-01-01

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume; intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart. PMID:23743396

  6. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ²(6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
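
    The two-group discriminant model itself can be illustrated with a plain Fisher linear discriminant on synthetic data. This is a generic sketch, not the study's exact analysis; the feature columns (e.g. FA, WM and CSF volumes) are placeholders:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-group Fisher linear discriminant.
    X0, X1: (n_samples, n_features) arrays for the two groups
    (e.g. young vs. older brains). Returns the weight vector w and a
    midpoint decision threshold on the projected scores."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-group scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, threshold

def predict(X, w, threshold):
    """Label 1 if a sample projects past the midpoint toward group 1."""
    return (X @ w > threshold).astype(int)
```

The relative magnitudes of the components of w play the role of the standardized discriminant loadings that identify which variables (here, FA and the volumes) drive group membership.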

  7. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  8. Educational Policy Transfer in an Era of Globalization: Theory--History--Comparison. Comparative Studies Series. Volume 23

    ERIC Educational Resources Information Center

    Rappleye, Jeremy

    2012-01-01

    As education becomes increasingly global, the processes and politics of transfer have become a central focus of research. This study provides a comprehensive analysis of contemporary theoretical and analytical work aimed at exploring international educational reform and reveals the myriad ways that globalization is now fundamentally altering our…

  9. Suicide triggers as sex-specific threats in domains of evolutionary import: negative correlation between global male-to-female suicide ratios and average per capita gross national income.

    PubMed

    Saad, Gad

    2007-01-01

    From an evolutionary perspective, suicide is a paradoxical phenomenon given its fatal consequences on one's reproductive fitness. That fact notwithstanding, evolutionists have typically used kin and group selection arguments in proposing that suicide might indeed be viewed as an adaptive behavioral response. The current paper posits that in some instances, suicide might be construed as the ultimate maladaptive response to "crushing defeats" in domains of great evolutionary import (e.g., mating). Specifically, it is hypothesized that numerous sex-specific triggers of suicide are universally consistent because they correspond to dire sex-specific attacks on one's reproductive fitness (e.g., loss of occupational status is much more strongly linked to male suicides). More generally, it is proposed that many epidemiological aspects of suicide are congruent with Darwinian-based frameworks. These include the near-universal finding that men are much more likely to commit suicide (sexual selection theory), the differential motives that drive men and women to commit suicide (evolutionary psychology), and the shifting patterns of suicide across the life span (life-history theory). Using data from the World Health Organization and the World Bank, several evolutionary-informed hypotheses, regarding the correlation between male-to-female suicide ratios and average per capita Gross National Income, are empirically tested. Overall, the findings are congruent with Darwinian-based expectations: namely, as economic conditions worsen, the male-to-female suicide ratio is exacerbated, with the negative correlation being strongest for the "working age" brackets. The hypothesized evolutionary outlook provides a consilient framework for comprehending universal sex-specific triggers of suicide. Furthermore, it allows suicidologists to explore new research avenues that might otherwise remain untapped if one were to restrict one's research interests to the identification of proximate causes.

  10. Technical Report Series on Global Modeling and Data Assimilation. Volume 31; Global Surface Ocean Carbon Estimates in a Model Forced by MERRA

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; Rousseaux, Cecile S.

    2013-01-01

    MERRA products were used to force an established ocean biogeochemical model to estimate surface carbon inventories and fluxes in the global oceans. The results were compared to public archives of in situ carbon data and estimates. The model exhibited skill for ocean dissolved inorganic carbon (DIC), partial pressure of ocean CO2 (pCO2) and air-sea fluxes (FCO2). The MERRA-forced model produced global mean differences of 0.02% (approximately 0.3 µmol/kg) for DIC, -0.3% (about -1.2 µatm; model lower) for pCO2, and -2.3% (-0.003 mol C/sq m/y) for FCO2 compared to in situ estimates. Basin-scale distributions were significantly correlated with observations for all three variables (r=0.97, 0.76, and 0.73, P<0.05, respectively for DIC, pCO2, and FCO2). All major oceanographic basins were represented as sources to the atmosphere or sinks in agreement with in situ estimates. However, there were substantial basin-scale and local departures.

  11. Drilling and dating New Jersey oligocene-miocene sequences: Ice volume, global sea level, and Exxon records

    SciTech Connect

    Miller, K.G.; Mountain, G.S.

    1996-02-23

    Oligocene to middle Miocene sequence boundaries on the New Jersey coastal plain (Ocean Drilling Project Leg 150X) and continental slope (Ocean Drilling Project Leg 150) were dated by integrating strontium isotopic stratigraphy, magnetostratigraphy, and biostratigraphy (planktonic foraminifera, nannofossils, dinocysts, and diatoms). The ages of coastal plain unconformities and slope seismic reflectors (unconformities or stratal breaks with no discernible hiatuses) match the ages of global δ¹⁸O increases (inferred glacioeustatic lowerings) measured in deep-sea sites. These correlations confirm a causal link between coastal plain and slope sequence boundaries: both formed during global sea-level lowerings. The ages of New Jersey sequence boundaries and global δ¹⁸O increases also correlate well with the Exxon Production Research sea-level records of Haq et al. and Vail et al., validating and refining their compilations. 33 refs., 2 figs., 1 tab.

  12. Educating American Students for Life in a Global Society. Policy Briefs: Education Reform. Volume 2, Number 4

    ERIC Educational Resources Information Center

    Lansford, Jennifer E.

    2002-01-01

    Progress in travel, technology, and other domains has contributed to the breaking down of barriers between countries and allowed for the development of an increasingly global society. International cooperation and competition are now pervasive in areas as diverse as business, science, arts, politics, and athletics. Educating students to navigate…

  13. Technical Report Series on Global Modeling and Data Assimilation, Volume 43. MERRA-2; Initial Evaluation of the Climate

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Bosilovich, Michael G.; Akella, Santha; Coy, Lawrence; Cullather, Richard; Draper, Clara; Gelaro, Ronald; Kovach, Robin; Liu, Qing; Molod, Andrea; Norris, Peter; Wargan, Krzysztof; Chao, Winston; Reichle, Rolf; Takacs, Lawrence; Todling, Ricardo; Vikhliaev, Yury; Bloom, Steve; Collow, Allison; Partyka, Gary; Labow, Gordon; Pawson, Steven; Reale, Oreste; Schubert, Siegfried; Suarez, Max

    2015-01-01

    The years since the introduction of MERRA have seen numerous advances in the GEOS-5 Data Assimilation System as well as a substantial decrease in the number of observations that can be assimilated into the MERRA system. To allow continued data processing into the future, and to take advantage of several important innovations that could improve system performance, a decision was made to produce MERRA-2, an updated retrospective analysis of the full modern satellite era. One of the many advances in MERRA-2 is a constraint on the global dry mass balance; this allows the global changes in water by the analysis increment to be near zero, thereby minimizing abrupt global interannual variations due to changes in the observing system. In addition, MERRA-2 includes the assimilation of interactive aerosols into the system, a feature of the Earth system absent from previous reanalyses. Also, in an effort to improve land surface hydrology, observations-corrected precipitation forcing is used instead of model-generated precipitation. Overall, MERRA-2 takes advantage of numerous updates to the global modeling and data assimilation system. In this document, we summarize an initial evaluation of the climate in MERRA-2, from the surface to the stratosphere and from the tropics to the poles. Strengths and weaknesses of the MERRA-2 climate are accordingly emphasized.

  14. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A summary of the proceedings in which the most important findings stemming from the Global Weather Experiment (GWE) are highlighted, additional key results and recommendations are covered, and the presentations and discussion are summarized. Detailed achievements, unresolved problems, and recommendations are included.

  15. On the Berdichevsky average

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi

    2016-04-01

    Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, which is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimation of a regional mean 1-D model is useful, especially in recent years, to be used as a priori (or a starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance and its Berdichevsky average is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
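
    For a 2×2 complex MT impedance tensor, the determinant and ssq invariants discussed above, together with a geometric (Berdichevsky-style) average over sites, can be sketched as follows. In practice the averaging is carried out per period on the complex responses; averaging magnitudes here is a simplification for illustration:

```python
import numpy as np

def det_invariant(Z):
    """Determinant invariant of a 2x2 complex impedance tensor."""
    return np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])

def ssq_invariant(Z):
    """Sum-of-squared-elements (ssq) invariant."""
    return np.sqrt((Z[0, 0] ** 2 + Z[0, 1] ** 2
                    + Z[1, 0] ** 2 + Z[1, 1] ** 2) / 2)

def geometric_average(invariants):
    """Geometric mean of site invariant magnitudes (Berdichevsky-style)."""
    vals = np.abs(np.asarray(invariants))
    return np.exp(np.log(vals).mean())
```

For an undistorted 1-D impedance, Z = [[0, z], [-z, 0]], both invariants reduce to z itself; galvanic distortion biases the det invariant downward relative to ssq, which is the paper's argument for preferring the ssq average.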

  16. Averaging the inhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2012-03-01

    A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue, however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.

  17. Assessment of treatment response by total tumor volume and global apparent diffusion coefficient using diffusion-weighted MRI in patients with metastatic bone disease: a feasibility study.

    PubMed

    Blackledge, Matthew D; Collins, David J; Tunariu, Nina; Orton, Matthew R; Padhani, Anwar R; Leach, Martin O; Koh, Dow-Mu

    2014-01-01

    We describe our semi-automatic segmentation of whole-body diffusion-weighted MRI (WBDWI) using a Markov random field (MRF) model to derive tumor total diffusion volume (tDV) and associated global apparent diffusion coefficient (gADC); and demonstrate the feasibility of using these indices for assessing tumor burden and response to treatment in patients with bone metastases. WBDWI was performed on eleven patients diagnosed with bone metastases from breast and prostate cancers before and after anti-cancer therapies. Semi-automatic segmentation incorporating a MRF model was performed in all patients below the C4 vertebra by an experienced radiologist with over eight years of clinical experience in body DWI. Changes in tDV and gADC distributions were compared with overall response determined by all imaging, tumor markers and clinical findings at serial follow up. The segmentation technique was possible in all patients although erroneous volumes of interest were generated in one patient because of poor fat suppression in the pelvis, requiring manual correction. Responding patients showed a larger increase in gADC (median change = +0.18, range = -0.07 to +0.78 × 10⁻³ mm²/s) after treatment compared to non-responding patients (median change = -0.02, range = -0.10 to +0.05 × 10⁻³ mm²/s, p = 0.05, Mann-Whitney test), whereas non-responding patients showed a significantly larger increase in tDV (median change = +26%, range = +3 to +284%) compared to responding patients (median change = -50%, range = -85 to +27%, p = 0.02, Mann-Whitney test). Semi-automatic segmentation of WBDWI is feasible for metastatic bone disease in this pilot cohort of 11 patients, and could be used to quantify tumor total diffusion volume and median global ADC for assessing response to treatment. PMID:24710083
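
    Once a segmentation mask is available, the two response indices (tDV and median gADC) reduce to simple statistics over the segmented voxels. A schematic sketch (the MRF segmentation itself is not reproduced; names and units are illustrative):

```python
import numpy as np

def tumour_dwi_indices(adc_map, mask, voxel_volume_ml):
    """Given an ADC map (mm^2/s), a binary tumour segmentation of the
    same shape, and the per-voxel volume in ml, return the total
    diffusion volume (tDV, ml) and the median global ADC (gADC) over
    the segmented voxels."""
    voxels = adc_map[mask.astype(bool)]
    tdv_ml = voxels.size * voxel_volume_ml
    gadc_median = float(np.median(voxels))
    return tdv_ml, gadc_median
```

Tracking these two numbers before and after therapy mirrors the abstract's comparison: responders tend to show rising gADC and falling tDV, non-responders the reverse.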

  18. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos: spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  19. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

  20. Technical Report Series on Global Modeling and Data Assimilation. Volume 20; The Climate of the FVCCM-3 Model

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chang, Yehui; Schubert, Siegfried D.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2001-01-01

    This document describes the climate of version 1 of the NASA-NCAR model developed at the Data Assimilation Office (DAO). The model consists of a new finite-volume dynamical core and an implementation of the NCAR community climate model (CCM-3) physical parameterizations. The version of the model examined here was integrated at a resolution of 2 degrees latitude by 2.5 degrees longitude and 32 levels. The results are based on an integration forced with observed sea surface temperatures and sea ice for the period 1979-1995, and are compared with NCEP/NCAR reanalyses and various other observational data sets. The results include an assessment of seasonal means, subseasonal transients including the Madden Julian Oscillation, and interannual variability. The quantities include zonal and meridional winds, temperature, specific humidity, geopotential height, stream function, velocity potential, precipitation, sea level pressure, and cloud radiative forcing.

  1. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  2. Globecom '83 - Global Telecommunications Conference, San Diego, CA, November 28-December 1, 1983, Conference Record. Volumes 1, 2 & 3

    NASA Astrophysics Data System (ADS)

    A collection of papers on global telecommunications is presented. The general topics addressed include: baseband equalization techniques in digital radio systems; packet-switched networks for voice and data communications; users' view of telecommunications needs; network evolution of lightwave systems; image processing; speech processing for communications; spectrum management and orbital efficiency; new rural applications of satellite communication; high-capacity digital radio relay systems in Europe; design tools and methods for telecommunications systems. Also considered are: convergence of integrated services and local area networks; optical communication theory; data network performance; communication over distribution power lines; selected topics on satellite communications; digital techniques and systems for teleconferencing; evolution plans and perspectives for local digital switches; performance modelling issues in distributed switching; distributed protocols; standard on telecommunications network transmission performance; coherent optical fiber systems; modulation and coding techniques; and digital electronic message services.

  3. Technical Report Series on Global Modeling and Data Assimilation. Volume 13; Interannual Variability and Potential Predictability in Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Min, Wei; Schubert, Siegfried D.; Suarez, Max J. (Editor)

    1997-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center and the National Centers for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) have produced multi-year global assimilations of historical data employing fixed analysis systems. These "reanalysis" products are ideally suited for studying short-term climatic variations. The availability of multiple reanalysis products also provides the opportunity to examine the uncertainty in the reanalysis data. The purpose of this document is to provide an updated estimate of seasonal and interannual variability based on the DAO and NCEP/NCAR reanalyses for the 15-year period 1980-1995. Intercomparisons of the seasonal means and their interannual variations are presented for a variety of prognostic and diagnostic fields. In addition, atmospheric potential predictability is re-examined employing selected DAO reanalysis variables.

  4. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP ENTITLED "GLOBAL ANALYSIS OF POLARIZED PARTON DISTRIBUTIONS IN THE RHIC ERA" (VOLUME 86).

    SciTech Connect

    DESHPANDE,A.; VOGELSANG, W.

    2007-10-08

    The determination of the polarized gluon distribution is a central goal of the RHIC spin program. Recent achievements in polarization and luminosity of the proton beams in RHIC have enabled the RHIC experiments to acquire substantial amounts of high-quality data with polarized proton beams at 200 and 62.4 GeV center-of-mass energy, allowing a first glimpse of the polarized gluon distribution at RHIC. Short test operation at 500 GeV center-of-mass energy has also been successful, indicating the absence of any fundamental roadblocks for measurements of polarized quark and anti-quark distributions planned at that energy in a couple of years. With this background, it is now time to consider how all these data sets may be employed most effectively to determine the polarized parton distributions in the nucleon, in general, and the polarized gluon distribution, in particular. A global analysis of the polarized DIS data from past and present fixed-target experiments jointly with the present and anticipated RHIC Spin data is needed.

  5. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
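
    The two core operations of DMA, running time averaging during the DNS followed by volume averaging onto a coarser mesh, can be sketched as follows (an illustrative NumPy fragment, not the author's code; the grid sizes, the 2:1 coarsening, and the use of random data as a stand-in for a DNS field are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx, ny = 100, 8, 8                 # fine-grid time steps and spatial points
u = rng.normal(size=(nt, nx, ny))      # stand-in for a DNS velocity component

# Running time average accumulated over the (short) DNS.
u_tavg = u.mean(axis=0)

# Volume-average a time-averaged field onto a 2x coarser mesh.
def volume_average(field, factor=2):
    nx, ny = field.shape
    return field.reshape(nx // factor, factor, ny // factor, factor).mean(axis=(1, 3))

u_coarse = volume_average(u_tavg)

# The averaging generates a correlation, here <uu> - <u><u>, computed
# directly from the fine-scale field; DMA adds such terms as source
# terms on the next coarser mesh to couple the adjacent scales.
uu_corr = (u * u).mean(axis=0) - u_tavg**2
uu_corr_coarse = volume_average(uu_corr)

print(u_coarse.shape, uu_corr_coarse.shape)   # (4, 4) (4, 4)
```

    Repeating the pair of operations with larger time steps on successively coarser meshes gives the multiscale cascade described above.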

  6. Technical Report Series on Global Modeling and Data Assimilation. Volume 14; A Comparison of GEOS Assimilated Data with FIFE Observations

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Suarez, Max J. (Editor); Schubert, Siegfried D.

    1998-01-01

    First ISLSCP Field Experiment (FIFE) observations have been used to validate the near-surface properties of various versions of the Goddard Earth Observing System (GEOS) Data Assimilation System. The site-averaged FIFE data set extends from May 1987 through November 1989, allowing the investigation of several time scales, including the annual cycle, daily means and diurnal cycles. Furthermore, the development of the daytime convective planetary boundary layer is presented for several days. Monthly variations of the surface energy budget during the summer of 1988 demonstrate the effect of the prescribed surface soil wetness boundary conditions. GEOS data come from the first frozen version of the assimilation system (GEOS-1 DAS) and two experimental versions of GEOS (v. 2.0 and 2.1) with substantially greater vertical resolution and other changes that influence the boundary layer. This report provides a baseline for future versions of the GEOS data assimilation system that will incorporate a state-of-the-art land surface parameterization. Several suggestions are proposed to improve the generality of future comparisons. These include the use of more diverse field experiment observations and an estimate of gridpoint heterogeneity from the new land surface parameterization.

  7. Changes in Global Ocean Bottom Properties and Volume Transports in CMIP5 Models under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Heuzé, C.; Heywood, K. J.; Stevens, D. P.; Ridley, J. K.

    2014-12-01

    Changes in bottom temperature, salinity and density in the global ocean by 2100 for 24 CMIP5 climate models are investigated for the climate change scenarios RCP4.5 and RCP8.5. The multimodel mean shows a decrease in density in all deep basins except for the North Atlantic which becomes denser. The individual model responses to climate change forcing are more complex: regarding temperature, only one model predicts a cooling of the bottom waters while the 23 others predict a warming; in salinity, there is less agreement regarding the sign of the change, especially in the Southern Ocean. The magnitude and equatorward extent of these changes also vary strongly among models. The changes in properties can be linked with the changes in transport of key water masses. The Atlantic Meridional Overturning Circulation weakens in most models and is directly linked to changes in bottom density in the North Atlantic. These changes are due to the intrusion of modified Antarctic Bottom Water, made possible by the decrease in North Atlantic Deep Water formation. In the Indian, Pacific and South Atlantic basins, changes in bottom density are congruent with the weakening in Antarctic Bottom Water transport through these basins. We argue that the greater the meridional transport, the more the change is propagated towards the equator. Then strong decreases in density over 100 years of climate change cause a weakening of the transports. The speed at which these property changes reach the deep basins is critical for a correct assessment of the heat storage capacity of the oceans as well as for predictions of future sea level rise.

  8. Incorporating global warming risks in power sector planning: A case study of the New England region. Volume 1

    SciTech Connect

    Krause, F.; Busch, J.; Koomey, J.

    1992-11-01

    Growing international concern over the threat of global climate change has led to proposals to buy insurance against this threat by reducing emissions of carbon (short for carbon dioxide) and other greenhouse gases below current levels. Concern over these and other, non-climatic environmental effects of electricity generation has led a number of states to adopt or explore new mechanisms for incorporating environmental externalities in utility resource planning. For example, the New York and Massachusetts utility commissions have adopted monetized surcharges (or adders) to induce emission reductions of federally regulated air pollutants (notably, SO₂, NOₓ, and particulates) beyond federally mandated levels. These regulations also include preliminary estimates of the cost of reducing carbon emissions, for which no federal regulations exist at this time. Within New England, regulators and utilities have also held several workshops and meetings to discuss alternative methods of incorporating externalities as well as the feasibility of regional approaches. This study examines the potential for reduced carbon emissions in the New England power sector as well as the cost and rate impacts of two policy approaches: environmental externality surcharges and a target-based approach. We analyze the following questions: Does New England have sufficient low-carbon resources to achieve significant reductions (10% to 20% below current levels) in fossil carbon emissions in its utility sector? What reductions could be achieved at a maximum? What is the expected cost of carbon reductions as a function of the reduction goal? How would carbon reduction strategies affect electricity rates? How effective are environmental externality cost surcharges as an instrument in bringing about carbon reductions? To what extent could the minimization of total electricity costs alone result in carbon reductions relative to conventional resource plans?

  9. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  10. Impact of the seasonal cycle on the decadal predictability of the North Atlantic volume and heat transport under global warming

    NASA Astrophysics Data System (ADS)

    Fischer, Matthias; Müller, Wolfgang A.; Domeisen, Daniela I. V.; Baehr, Johanna

    2016-04-01

    latitude dependence is similar to changes in the seasonal cycle, which shows continuous and more robust changes until the 23rd century in RCP8.5. Long-term changes in the seasonal cycle can be related to changes in the surface wind stress and the associated Ekman transport, which is the main driver of the AMOC's and OHT's seasonal variability. Overall, the results show an impact of changes in the seasonal cycle on the decadal predictability of the AMOC and the OHT under global warming.

  11. Volume, heat, and freshwater transports of the global ocean circulation 1993-2000, estimated from a general circulation model constrained by World Ocean Circulation Experiment (WOCE) data

    NASA Astrophysics Data System (ADS)

    Stammer, D.; Wunsch, C.; Giering, R.; Eckert, C.; Heimbach, P.; Marotzke, J.; Adcroft, A.; Hill, C. N.; Marshall, J.

    2003-01-01

    An analysis of ocean volume, heat, and freshwater transports from a fully constrained general circulation model (GCM) is described. Output from a data synthesis, or state estimation, method is used by which the model was constrained by large-scale, time-varying global ocean data sets over 1993 through 2000. Time-mean horizontal transports, estimated from this fully time-dependent circulation, have converged with independent time-independent estimates from box inversions over most parts of the world ocean, but especially in the southern hemisphere. However, heat transport estimates differ substantially in the North Atlantic, where our estimates are only about half of previous results. The model's drift over the estimation period is consistent with observations from TOPEX/Poseidon in its spatial pattern, but smaller in amplitude by about a factor of 2. Associated temperature and salinity changes are complex, and both point toward air-sea interaction over water mass formation regions as the primary source for changes in the deep ocean. The estimated mean circulation around Australia involves a net volume transport of 11 Sv through the Indonesian Throughflow and the Mozambique Channel. In addition, we show that this flow regime exists on all timescales above 1 month, rendering the variability in the South Pacific strongly coupled to the Indian Ocean. Moreover, the dynamically consistent variations in the model show temporal variability of oceanic heat transports, heat storage, and atmospheric exchanges that are complex and with a strong dependence upon location, depth, and timescale. Our results demonstrate the great potential of an ocean state estimation system to provide a dynamical description of the time-dependent observed heat transport and heat content changes and their relation to air-sea interactions.

  12. Global Distribution of CO2 Volume Mixing Ratio in the Mesosphere and Lower Thermosphere and Long-Term Changes Observed By Saber

    NASA Astrophysics Data System (ADS)

    Russell, J. M., III; Rezac, L.; Yue, J.; Jian, Y.; Kutepov, A. A.; Garcia, R. R.; Walker, K. A.; Bernath, P. F.

    2014-12-01

    The SABER 10-channel limb scanning radiometer has been operating onboard the TIMED satellite nearly continuously since launch on December 7, 2001. Beginning in late January, 2002 and continuing to the present day, SABER has been measuring limb radiance profiles used to retrieve vertical profiles of temperature, volume mixing ratios (VMRs) of O3, CO2, H2O, [O], and [H], and volume emission rates of NO, OH(2.1μm), OH(1.6μm) and O2(singlet delta). The measurements extend from the tropopause to the lower thermosphere, and span from 54S to 84N or 54N to 84S daily with alternating latitude coverage every ~60 days. Currently more than six million profiles of each parameter have been retrieved. The CO2 VMR is a new SABER data product that just became available this year. The temperature and CO2 VMRs are simultaneously retrieved in the ~65 km to 110 km range using limb radiances measured at 4.3 and 15 micrometers. Results of CO2 validation studies, based on comparisons with coincident ACE-FTS CO2 data and SD-WACCM model simulations, will be presented. The CO2 VMRs agree with ACE-FTS observations to within reported measurement uncertainties and they are in good agreement with SD-WACCM seasonal and global distributions. The SABER observed CO2 VMR departure from uniform mixing tends to start above ~80 km, which is generally higher than what the model calculates. Variations of CO2 VMR with latitude and season are substantial. Seasonal zonal mean cross sections and CO2 time series for selected latitudes and altitudes over the 12.5-year time period will also be shown. The CO2 VMR increase rate at 100 km is in close agreement with in situ results measured at the Mauna Loa Observatory.

  13. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  14. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  15. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which, despite having Λ = 0, have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

  16. Volcanoes and global catastrophes

    NASA Technical Reports Server (NTRS)

    Simkin, Tom

    1988-01-01

    The search for a single explanation for global mass extinctions has led to polarization and the controversies that are often fueled by widespread media attention. The historic record shows a roughly linear log-log relation between the frequency of explosive volcanic eruptions and the volume of their products. Eruptions such as Mt. St. Helens 1980 produce on the order of 1 cu km of tephra, destroying life over areas in the 10 to 100 sq km range, and take place, on average, once or twice a decade. Eruptions producing 10 cu km take place several times a century and, like Krakatau 1883, destroy life over 100 to 1000 sq km areas while producing clear global atmospheric effects. Eruptions producing 10,000 cu km are known from the Quaternary record, and extrapolation from the historic record suggests that they occur perhaps once in 20,000 years, but none has occurred in historic time and little is known of their biologic effects. Even larger eruptions must also exist in the geologic record, but documentation of their volume becomes increasingly difficult as their age increases. The conclusion is inescapable that prehistoric eruptions have produced catastrophes on a global scale: only the magnitude of the associated mortality is in question. Differentiation of large magma chambers is on a time scale of thousands to millions of years, and explosive volcanoes are clearly concentrated in narrow belts near converging plate margins. Volcanism cannot be dismissed as a producer of global catastrophes. Its role in major extinctions is likely to be at least contributory and may well be large. More attention should be paid to global effects of the many huge eruptions in the geologic record that dwarf those known in historic time.
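
    The "roughly linear log-log relation" can be checked directly against the figures quoted above (a sketch in Python; recurrence intervals of ~7, ~30, and ~20,000 years are my rough numerical readings of "once or twice a decade", "several times a century", and "once in 20,000 years"):

```python
import numpy as np

volume_km3 = np.array([1.0, 10.0, 1.0e4])    # tephra volume of the eruption
interval_yr = np.array([7.0, 30.0, 2.0e4])   # approximate recurrence interval

# Fit log10(interval) = slope * log10(volume) + intercept.
slope, intercept = np.polyfit(np.log10(volume_km3), np.log10(interval_yr), 1)
print(round(slope, 2))   # 0.88: recurrence interval grows roughly as volume^0.9
```

    On these rough readings the three points do fall close to a single power law, which is what makes extrapolation to the very largest, unobserved eruptions tempting.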

  17. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180
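
    For context, SNR and CNR figures like those quoted above are commonly computed along the following lines (a sketch using one common amplitude-ratio definition; the region statistics are hypothetical and the exact formula used in the paper may differ):

```python
import math

def snr_db(mean_signal, sd_noise):
    """Signal-to-noise ratio in dB (amplitude convention)."""
    return 20 * math.log10(mean_signal / sd_noise)

def cnr_db(mean_a, mean_b, sd_noise):
    """Contrast-to-noise ratio in dB between two regions."""
    return 20 * math.log10(abs(mean_a - mean_b) / sd_noise)

# Hypothetical region statistics from an OCT B-scan:
print(round(snr_db(120.0, 3.5), 1))        # 30.7
print(round(cnr_db(120.0, 80.0, 3.5), 1))  # 21.2
```
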

  18. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  19. Influence of Type 2 Diabetes on Brain Volumes and Changes in Brain Volumes

    PubMed Central

    Espeland, Mark A.; Bryan, R. Nick; Goveas, Joseph S.; Robinson, Jennifer G.; Siddiqui, Mustafa S.; Liu, Simin; Hogan, Patricia E.; Casanova, Ramon; Coker, Laura H.; Yaffe, Kristine; Masaki, Kamal; Rossom, Rebecca; Resnick, Susan M.

    2013-01-01

    OBJECTIVE To study how type 2 diabetes adversely affects brain volumes, changes in volume, and cognitive function. RESEARCH DESIGN AND METHODS Regional brain volumes and ischemic lesion volumes in 1,366 women, aged 72–89 years, were measured with structural brain magnetic resonance imaging (MRI). Repeat scans were collected an average of 4.7 years later in 698 women. Cross-sectional differences and changes with time between women with and without diabetes were compared. Relationships that cognitive function test scores had with these measures and diabetes were examined. RESULTS The 145 women with diabetes (10.6%) at the first MRI had smaller total brain volumes (0.6% less; P = 0.05) and smaller gray matter volumes (1.5% less; P = 0.01) but not white matter volumes, both overall and within major lobes. They also had larger ischemic lesion volumes (21.8% greater; P = 0.02), both overall and in gray matter (27.5% greater; P = 0.06), in white matter (18.8% greater; P = 0.02), and across major lobes. Overall, women with diabetes had slightly (nonsignificant) greater loss of total brain volumes (3.02 cc; P = 0.11) and significant increases in total ischemic lesion volumes (9.7% more; P = 0.05) with time relative to those without diabetes. Diabetes was associated with lower scores in global cognitive function and its subdomains. These relative deficits were only partially accounted for by brain volumes and risk factors for cognitive deficits. CONCLUSIONS Diabetes is associated with smaller brain volumes in gray but not white matter and increasing ischemic lesion volumes throughout the brain. These markers are associated with but do not fully account for diabetes-related deficits in cognitive function. PMID:22933440

  20. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
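
    The general point, that the "average" of a set of reliability coefficients depends on which definition of average is adopted, can be sketched as follows. This is my own toy illustration, not necessarily among the seven approaches the article enumerates:

    ```python
    import math

    rs = [0.70, 0.80, 0.90]  # hypothetical reliability coefficients

    # Definition A: simple arithmetic mean of the coefficients.
    mean_r = sum(rs) / len(rs)

    # Definition B: average on the Fisher-z scale, then transform back.
    mean_z = sum(math.atanh(r) for r in rs) / len(rs)
    fisher_r = math.tanh(mean_z)

    # The two definitions give different "averages" (0.8 vs ≈ 0.816).
    print(mean_r, fisher_r)
    ```

    Neither value is "more correct" than the other; as the article notes, each definition of average is legitimate on its own terms.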

  1. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
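
    The distinction the article teaches can be sketched directly; the speeds and fuel-economy weights below are illustrative assumptions:

    ```python
    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def harmonic_mean(xs):
        return len(xs) / sum(1.0 / x for x in xs)

    def weighted_harmonic_mean(xs, weights):
        # e.g. fuel economies xs (mpg) weighted by the miles driven at each
        return sum(weights) / sum(w / x for x, w in zip(xs, weights))

    # Driving the same distance out at 30 mph and back at 60 mph:
    # the average rate is the harmonic mean (≈ 40 mph), not 45 mph.
    speeds = [30.0, 60.0]
    print(arithmetic_mean(speeds), harmonic_mean(speeds))
    ```

    The harmonic mean is the right "average rate" here because equal distances, not equal times, are spent at each speed.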

  2. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
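
    The core averaging step can be sketched in a few lines (a toy illustration, not the authors' software): noisy, pre-aligned copies of the same 3D density are averaged, and the residual noise falls roughly as 1/√N:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A toy "true" 3D density and 100 noisy aligned subtomograms of it.
    truth = np.zeros((8, 8, 8))
    truth[2:6, 2:6, 2:6] = 1.0
    subtomograms = [truth + rng.normal(0.0, 1.0, truth.shape) for _ in range(100)]

    # Averaging the aligned copies suppresses the noise (~1/sqrt(N)).
    avg = np.mean(subtomograms, axis=0)
    print(np.std(subtomograms[0] - truth))  # noise in one copy, ~1.0
    print(np.std(avg - truth))              # noise after averaging, ~0.1
    ```

    Real subtomogram averaging must first solve the alignment (and missing-wedge) problem that this sketch assumes away.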

  3. The Averaging Problem in Cosmology

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2009-06-01

This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years and concerns the correction terms (arising from averaging over inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy, which causes the late-time universe to accelerate. We investigate the nature of these terms using averaging schemes available in the literature, further developed to be applicable to the problem at hand. We show that the effect of these terms, when calculated carefully, remains negligible and cannot explain the late-time acceleration.

  4. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interiors and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained

  5. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the globally optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119

  6. Volcanic Signatures in Estimates of Stratospheric Aerosol Size, Distribution Width, Surface Area, and Volume Deduced from Global Satellite-Based Observations

    NASA Technical Reports Server (NTRS)

    Bauman, J. J.; Russell, P. B.

    2000-01-01

Volcanic signatures in the stratospheric aerosol layer are revealed by two independent techniques which retrieve aerosol information from global satellite-based observations of particulate extinction. Both techniques combine the 4-wavelength Stratospheric Aerosol and Gas Experiment (SAGE) II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument. The algorithms use the SAGE II/CLAES composite extinction spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R(sub eff), surface area S, volume V, and size distribution width sigma(sub g). The first technique is a multi-wavelength Look-Up-Table (LUT) algorithm which retrieves values and uncertainties of R(sub eff) by comparing ratios of extinctions from SAGE II and CLAES (e.g., E(sub lambda)/E(sub 1.02)) to pre-computed extinction ratios which are based on a range of unimodal lognormal size distributions. The pre-computed ratios are presented as a function of R(sub eff) for a given sigma(sub g); thus the comparisons establish the range of R(sub eff) consistent with the measured spectra for that sigma(sub g). The fact that no solutions are found for certain sigma(sub g) values provides information on the acceptable range of sigma(sub g), which is found to evolve in response to volcanic injections and removal periods. Analogous comparisons using absolute extinction spectra and error bars establish the range of S and V. The second technique is a Parameter Search Technique (PST) which estimates R(sub eff) and sigma(sub g) within a month-latitude-altitude bin by minimizing the chi-squared values obtained by comparing the SAGE II/CLAES extinction spectra and error bars with spectra calculated by varying the lognormal fitting parameters: R(sub eff), sigma(sub g), and the total number of particles N(sub 0).
For both techniques, possible biases in

  7. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation, eliminating shear strains.

  8. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  9. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air, and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, in-track, and cross-track (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location, and understand performance trends among various users.
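
    The latitude non-uniformity behind these results can be sketched with the standard dwell-time density for a circular orbit (my illustration; the paper's semi-analytic tool is more elaborate). A satellite at inclination i spends time near latitude φ with density proportional to cos φ / √(sin²i − sin²φ), which peaks as |φ| approaches i:

    ```python
    import numpy as np

    def dwell_density(lat_deg, incl_deg=55.0):
        # Relative time a circular-orbit satellite spends near a given latitude.
        lat, incl = np.radians(lat_deg), np.radians(incl_deg)
        return np.cos(lat) / np.sqrt(np.sin(incl) ** 2 - np.sin(lat) ** 2)

    # GPS (i ≈ 55°) dwells far longer over Tierra del Fuego (~54.8°S) than
    # over Hawaii (~21.3°N) or Singapore (~1.3°N), consistent with the
    # "over four times as likely" figure in the abstract.
    print(dwell_density(54.8) / dwell_density(21.3))
    print(dwell_density(54.8) / dwell_density(1.3))
    ```

    The exact ratio depends on how wide a latitude band one integrates over; the qualitative conclusion does not.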

  10. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physical quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of /sup 168/Er data. 19 figures, 2 tables.

  11. On generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag M.

    2007-09-01

We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1−x)^α (1+x)^β (α, β > −1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.
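
    A minimal sketch of an averaged rule for the Legendre case (α = β = 0), assuming the Laurie-style construction in which the anti-Gauss rule comes from the Jacobi matrix with its last off-diagonal scaled by √2; this is an illustration of the idea, not the paper's algorithm:

    ```python
    import numpy as np

    def legendre_offdiag(n):
        # Off-diagonals of the n×n Jacobi matrix for the Legendre weight on
        # [-1, 1]: sqrt(b_k) with b_k = k² / (4k² − 1), k = 1..n−1.
        k = np.arange(1, n, dtype=float)
        return np.sqrt(k * k / (4.0 * k * k - 1.0))

    def rule_from_offdiag(offdiag, mu0=2.0):
        # Golub-Welsch: nodes are eigenvalues of the tridiagonal Jacobi
        # matrix; weights come from the first eigenvector components.
        J = np.diag(offdiag, 1) + np.diag(offdiag, -1)
        nodes, vecs = np.linalg.eigh(J)
        return nodes, mu0 * vecs[0, :] ** 2

    def averaged_gauss(n):
        # Averaged rule: mean of the n-point Gauss rule and the (n+1)-point
        # anti-Gauss rule (last off-diagonal multiplied by sqrt(2)).
        xg, wg = rule_from_offdiag(legendre_offdiag(n))
        off = legendre_offdiag(n + 1)
        off[-1] *= np.sqrt(2.0)
        xa, wa = rule_from_offdiag(off)
        return np.concatenate([xg, xa]), np.concatenate([wg, wa]) / 2.0

    x, w = averaged_gauss(5)
    print(w @ np.exp(x))  # ≈ e − 1/e ≈ 2.3504
    ```

    The averaged rule is exact for polynomials of degree up to 2n + 1, one degree order beyond the underlying n-point Gauss rule.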

  12. The German Skills Machine: Sustaining Comparative Advantage in a Global Economy. Policies and Institutions: Germany, Europe, and Transatlantic Relations, Volume 3.

    ERIC Educational Resources Information Center

    Culpepper, Pepper D., Ed.; Finegold, David, Ed.

    This book examines the effectiveness and distributive ramifications of the institutions of German skill provision as they functioned at home in the 1990s and as they served as a template for reform in other industrialized countries. The volume relies on multiple sources of data, including in-firm case studies, larger-scale surveys of companies,…

  13. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  14. Averaged Electroencephalic Audiometry in Infants

    ERIC Educational Resources Information Center

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  15. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  17. Averaging facial expression over time

    PubMed Central

    Haberman, Jason; Harp, Tom; Whitney, David

    2010-01-01

    The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064

  18. Average Cost of Common Schools.

    ERIC Educational Resources Information Center

    White, Fred; Tweeten, Luther

    The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve which indicates the minimum per student cost for educating various numbers of students and the application of the cost curves determining the optimum school district size. In a stratified sample, the school…

  19. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.

  20. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866
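
    The disk-averaging step itself can be sketched simply (an illustration only, not the authors' radiative transfer model): each resolved surface element's spectrum is weighted by its projected area, here taken as Lambert cos θ weighting for a fully illuminated disk:

    ```python
    import numpy as np

    base = np.array([1.0, 0.5, 0.2])   # toy spectrum, arbitrary units
    mu = np.linspace(0.05, 1.0, 20)    # cos(emission angle) per surface element
    spectra = np.outer(mu, base)       # toy limb-darkened intensities I ∝ μ

    # Projected-area-weighted average over the visible disk.
    disk_avg = (mu[:, None] * spectra).sum(axis=0) / mu.sum()
    print(disk_avg)
    ```

    With spatially uniform intensities the weighted average reduces to the common spectrum, which is a convenient sanity check on any disk-integration code.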

  1. Exact averaging of laminar dispersion

    NASA Astrophysics Data System (ADS)

    Ratnakar, Ram R.; Balakotaiah, Vemuri

    2011-02-01

    We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
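
    The spatial moments and skewness analyzed above can be computed directly from an axial concentration profile; the sketch below (my illustration, with an assumed Gaussian profile) shows the standard definitions:

    ```python
    import numpy as np

    # Sample axial concentration profile c(x) on a uniform grid.
    x = np.linspace(-10.0, 30.0, 4001)
    dx = x[1] - x[0]
    c = np.exp(-0.5 * ((x - 4.0) / 2.0) ** 2)   # toy Gaussian profile

    m0 = c.sum() * dx                            # zeroth moment (total solute)
    mean = (x * c).sum() * dx / m0               # centroid position
    var = ((x - mean) ** 2 * c).sum() * dx / m0  # variance (spread)
    skew = ((x - mean) ** 3 * c).sum() * dx / m0 / var ** 1.5  # skewness

    print(mean, var, skew)  # ≈ 4.0, 4.0, 0.0 for this symmetric profile
    ```

    A Gaussian profile has zero skewness; the point of the moment analysis above is precisely that the true laminar-dispersion profile retains nonzero skewness at all finite times.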

  2. Higher Education in a Global Society Achieving Diversity, Equity and Excellence (Advances in Education in Diverse Communities: Research Policy and Praxis, Volume 5)

    ERIC Educational Resources Information Center

    Elsevier, 2006

    2006-01-01

    The "problem of the 21st century" is rapidly expanding diversity alongside stubbornly persistent status and power inequities by race, ethnicity, gender, class, language, citizenship and region. Extensive technological, economic, political and social changes, along with immigration, combine to produce a global community of great diversity and…

  3. A Salzburg Global Seminar: "Optimizing Talent: Closing Education and Social Mobility Gaps Worldwide." Policy Notes. Volume 20, Number 3, Fall 2012

    ERIC Educational Resources Information Center

    Schwartz, Robert

    2012-01-01

This issue of ETS Policy Notes (Vol. 20, No. 3) provides highlights from the Salzburg Global Seminar in December 2011. The seminar focused on bettering the educational and life prospects of students up to age 18 worldwide. [This article was written with the assistance of Beth Brody.]

  5. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  6. OAST Space Theme Workshop. Volume 2: Theme summary. 5: Global service (no. 11). A. Statement. B. 26 April 1976 presentation. C. Summary

    NASA Technical Reports Server (NTRS)

    1976-01-01

The benefits to be obtained from cost-effective global observation of the earth, its environment, and its natural and man-made features are examined using typical spacecraft and missions which could enhance the benefits of space operations. The technology needs and areas of interest include: (1) a ten-fold increase in the dimensions of deployable and erectable structures to provide booms, antennas, and platforms for global sensor systems; (2) control and stabilization systems capable of pointing accuracies of 1 arc second or less to locate targets of interest and maintain platform or sensor orientation during operations; (3) a factor-of-five improvement in spacecraft power capacity to support payloads and supporting electronics; (4) auxiliary propulsion systems capable of 5 to 10 years of on-orbit operation; (5) multipurpose sensors; and (6) end-to-end data management and an information system configured to accept new components or concepts as they develop.

  7. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-15

The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  8. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
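
    The idea behind ensemble averaging in the frequency domain can be sketched in a few lines (a toy reconstruction, not the documented PDP-11 program): averaging periodograms over many repeated records suppresses the noise floor while the coherent tone remains:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs, n = 1024, 1024
    t = np.arange(n) / fs

    # 200 records of a 100 Hz tone buried in strong broadband noise.
    records = [np.sin(2 * np.pi * 100 * t) + rng.normal(0.0, 2.0, n)
               for _ in range(200)]

    # Ensemble-averaged power spectrum: noise variance averages down,
    # the tone's spectral peak does not.
    psd = np.mean([np.abs(np.fft.rfft(r)) ** 2 for r in records], axis=0)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    print(freqs[np.argmax(psd)])  # 100.0 — tone recovered from noisy records
    ```

    A single record's periodogram would show the same peak but with a much noisier floor; the averaging trades record count for variance.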

  9. NASA University Research Centers Technical Advances in Aeronautics, Space Sciences and Technology, Earth Systems Sciences, Global Hydrology, and Education. Volumes 2 and 3

    NASA Technical Reports Server (NTRS)

    Coleman, Tommy L. (Editor); White, Bettie (Editor); Goodman, Steven (Editor); Sakimoto, P. (Editor); Randolph, Lynwood (Editor); Rickman, Doug (Editor)

    1998-01-01

    This volume chronicles the proceedings of the 1998 NASA University Research Centers Technical Conference (URC-TC '98), held on February 22-25, 1998, in Huntsville, Alabama. The University Research Centers (URCS) are multidisciplinary research units established by NASA at 11 Historically Black Colleges or Universities (HBCU's) and 3 Other Minority Universities (OMU's) to conduct research work in areas of interest to NASA. The URC Technical Conferences bring together the faculty members and students from the URC's with representatives from other universities, NASA, and the aerospace industry to discuss recent advances in their fields.

  10. Average observational quantities in the timescape cosmology

    SciTech Connect

    Wiltshire, David L.

    2009-12-15

    We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D{sub V}; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the {lambda}CDM model are found to be consistent with the expectations of the timescape cosmology.

  11. Optimization of high average power FEL beam for EUV lithography

    NASA Astrophysics Data System (ADS)

    Endo, Akira

    2015-05-01

    Extreme ultraviolet lithography (EUVL) is entering the high volume manufacturing (HVM) stage, with a high average power (250 W) EUV source from laser-produced plasma at 13.5 nm. The semiconductor industry road map indicates scaling of the source technology to more than 1 kW average power by means of a high repetition rate FEL. This paper discusses the lowest-risk approach to constructing a prototype based on a superconducting linac and a normal conducting undulator, to demonstrate a high average power 13.5 nm FEL equipped with optimized optical components and solid state lasers, and to study FEL applications in EUV lithography.

  12. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High order horizontal diffusion of the form K Δ^(2m) is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high order diffusion in global finite difference models. The method expresses the high order diffusion equation as a sequence of equations involving Δ^2. The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.
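    The stability and scale-selectivity properties described in this abstract can be illustrated in one dimension. The sketch below is an assumption-laden periodic-grid analogue (solved directly in Fourier space), not the report's longitude-FFT/latitude-solver algorithm: an implicit fourth-order diffusion step damps the shortest scales strongly while leaving long waves nearly untouched, with no timestep restriction.

```python
import numpy as np

def implicit_hyperdiffusion_step(u, dx, K, dt):
    """One implicit step of du/dt = -K d4u/dx4 on a periodic grid.

    In Fourier space the implicit update is u_hat /= (1 + K*dt*k**4):
    unconditionally stable, and the damping grows like k**4, so the
    shortest resolved scales are removed most strongly (scale selectivity).
    """
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    u_hat = np.fft.fft(u)
    u_hat /= 1.0 + K * dt * k**4
    return np.real(np.fft.ifft(u_hat))

# demo: one long wave plus 2-grid-interval noise
n, dx = 128, 1.0
x = np.arange(n) * dx
u = np.sin(2.0 * np.pi * x / n) + 0.5 * (-1.0) ** np.arange(n)
u1 = implicit_hyperdiffusion_step(u, dx, K=1.0, dt=10.0)
```

After one step with a large timestep, the grid-scale noise is damped by roughly three orders of magnitude while the long wave loses almost no amplitude.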

  13. Technical report series on global modeling and data assimilation. Volume 6: A multiyear assimilation with the GEOS-1 system: Overview and results

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Schubert, Siegfried; Rood, Richard; Park, Chung-Kyu; Wu, Chung-Yu; Kondratyeva, Yelena; Molod, Andrea; Takacs, Lawrence; Seablom, Michael; Higgins, Wayne

    1995-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center has produced a multiyear global assimilated data set with version 1 of the Goddard Earth Observing System Data Assimilation System (GEOS-1 DAS). One of the main goals of this project, in addition to benchmarking the GEOS-1 system, was to produce a research quality data set suitable for the study of short-term climate variability. The output, which is global and gridded, includes all prognostic fields and a large number of diagnostic quantities such as precipitation, latent heating, and surface fluxes. Output is provided four times daily with selected quantities available eight times per day. Information about the observations input to the GEOS-1 DAS is provided in terms of maps of spatial coverage, bar graphs of data counts, and tables of all time periods with significant data gaps. The purpose of this document is to serve as a users' guide to NASA's first multiyear assimilated data set and to provide an early look at the quality of the output. Documentation is provided on all the data archives, including sample read programs and methods of data access. Extensive comparisons are made with the corresponding operational European Center for Medium-Range Weather Forecasts analyses, as well as various in situ and satellite observations. This document is also intended to alert users of the data about potential limitations of assimilated data, in general, and the GEOS-1 data, in particular. Results are presented for the period March 1985-February 1990.

  14. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specific harmonics that may be caused by faults such as gear eccentricity. Moreover, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
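    For context, the conventional TDA baseline that FTDA improves upon can be sketched in a few lines. This is a generic illustration of synchronous averaging over an integer-sample period, not the paper's FTDA algorithm (which avoids the integer-period restriction that causes PCE):

```python
import numpy as np

def time_domain_average(x, period):
    """Conventional time domain averaging (TDA): split the signal into
    consecutive segments of one (integer-sample) period and average them.
    Components synchronous with the period survive; noise and
    non-synchronous components are attenuated by ~1/sqrt(n_segments),
    which is the comb-filter behavior described in the abstract.
    """
    n_segments = len(x) // period
    segments = x[: n_segments * period].reshape(n_segments, period)
    return segments.mean(axis=0)

# demo: a periodic gear-mesh-like waveform buried in broadband noise
rng = np.random.default_rng(0)
period = 100
t = np.arange(period)
template = np.sin(2.0 * np.pi * 3 * t / period)      # 3rd harmonic of the shaft rate
signal = np.tile(template, 200) + rng.normal(0.0, 1.0, 200 * period)
avg = time_domain_average(signal, period)
```

Averaging 200 revolutions reduces the noise standard deviation by a factor of about 14, recovering the periodic template; FTDA additionally handles non-integer periods and selected harmonics.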

  15. The role of the harmonic vector average in motion integration

    PubMed Central

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716
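    The HVA rule is simple to state: invert each local velocity vector (v → v/|v|²), take the arithmetic mean of the inverses, and invert the mean back. A minimal sketch (after Johnston & Scarfe, 2013; the contour-normal sampling below is a hypothetical illustration, not the paper's stimulus set), showing that an unbiased sample of normal-component velocities recovers the true global velocity while the plain vector average underestimates its speed:

```python
import numpy as np

def harmonic_vector_average(vectors):
    """Harmonic vector average (HVA) of 2-D velocity vectors: invert each
    vector (v -> v/|v|^2), average the inverses, and invert back."""
    v = np.asarray(vectors, dtype=float)
    inv = v / np.sum(v**2, axis=1, keepdims=True)   # vector inversion
    m = inv.mean(axis=0)
    return m / np.sum(m**2)                          # invert the mean back

# demo: a contour translating with global velocity V = (2, 0).
# A local edge whose normal makes angle phi with V carries only the
# normal component (V . n) n, of speed |V|*cos(phi).
V = np.array([2.0, 0.0])
phis = np.linspace(-1.2, 1.2, 25)                    # symmetric (unbiased) sample
normals = np.stack([np.cos(phis), np.sin(phis)], axis=1)
local_vels = (normals @ V)[:, None] * normals
hva = harmonic_vector_average(local_vels)            # recovers (2, 0)
vector_avg = local_vels.mean(axis=0)                 # underestimates the speed
```

This reproduces the abstract's claim in miniature: the HVA returns the correct global velocity for the unbiased sample, whereas the vector average of cosine-attenuated normals is too slow.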

  16. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716

  17. The global frequency-wave number spectrum of oceanic variability estimated from TOPEX/POSEIDON altimetric measurements. Volume 100, No. C12; The Journal of Geophysical Research

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1995-01-01

    Two years of altimetric data from the TOPEX/POSEIDON spacecraft have been used to produce preliminary estimates of the space and time spectra of global variability for both sea surface height and slope. The results are expressed in terms of both degree variances from spherical harmonic expansions and in along-track wavenumbers. Simple analytic approximations both in terms of piece-wise power laws and Pade fractions are provided for comparison with independent measurements and for easy use of the results. A number of uses of such spectra exist, including the possibility of combining the altimetric data with other observations, predictions of spatial coherences, and the estimation of the accuracy of apparent secular trends in sea level.

  18. Advance of East Antarctic outlet glaciers during the Hypsithermal: Implications for the volume state of the Antarctic ice sheet under global warming

    SciTech Connect

    Domack, E.W.; Jull, A.J.T.; Nakao, Seizo

    1991-11-01

    The authors present the first circum-East Antarctic chronology for the Holocene, based on 17 radiocarbon dates generated by the accelerator method. Marine sediments from around East Antarctica contain a consistent, high-resolution record of terrigenous (ice-proximal) and biogenic (open-marine) sedimentation during Holocene time. This record demonstrates that biogenic sedimentation beneath the open-marine environment on the continental shelf has been restricted to approximately the past 4 ka, whereas a period of terrigenous sedimentation related to grounding-line advance of ice tongues and ice shelves took place between 7 and 4 ka. An earlier period of open-marine (biogenic) sedimentation following the late Pleistocene glacial maximum is recognized from the Prydz Bay (Ocean Drilling Program) record between 10.7 and 7.3 ka. Clearly, the response of outlet systems along the periphery of the East Antarctic ice sheet during the mid-Holocene was expansion. This may have been a direct consequence of climate warming during an Antarctic Hypsithermal. Temperature-accumulation relations for the Antarctic indicate that warming will cause a significant increase in accumulation rather than in ablation. Models that predict a positive mass balance (growth) of the Antarctic ice sheet under global warming are supported by the mid-Holocene data presented herein.

  19. Technical Report Series on Global Modeling and Data Assimilation. Volume 32; Estimates of AOD Trends (2002 - 2012) Over the World's Major Cities Based on the MERRA Aerosol Reanalysis

    NASA Technical Reports Server (NTRS)

    Provencal, Simon; Kishcha, Pavel; Elhacham, Emily; daSilva, Arlindo M.; Alpert, Pinhas; Suarez, Max J.

    2014-01-01

    NASA's Global Modeling and Assimilation Office has extended the Modern-Era Retrospective Analysis for Research and Application (MERRA) tool with five atmospheric aerosol species (sulfates, organic carbon, black carbon, mineral dust, and sea salt). This aerosol reanalysis is known as MERRAero. This study analyses a ten-year period (July 2002 - June 2012) of the MERRAero aerosol reanalysis applied to the study of aerosol optical depth (AOD) and its trends for the aforementioned aerosol species over the world's major cities (those with a population of over 2 million inhabitants). We found that the proportion of each aerosol species in total AOD exhibited a geographical dependence. Cities in industrialized regions (North America, Europe, central and eastern Asia) are characterized by a strong proportion of sulfate aerosols. Organic carbon aerosols are dominant over cities located in regions where biomass burning frequently occurs (South America and southern Africa). Mineral dust dominates other aerosol species in cities located in proximity to the major deserts (northern Africa and western Asia). Sea salt aerosols are prominent in coastal cities but are the dominant aerosol species in very few of them. AOD trends are declining over cities in North America, Europe, and Japan as a result of effective air quality regulation. By contrast, the economic boom in China and India has led to increasing AOD trends over most cities in these two highly populated countries. Increasing AOD trends over cities in the Middle East are caused by increasing desert dust.

  20. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission - Tropics B (PEM-Tropics B). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season, when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation, and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  1. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission-Tropics B (PEM-Tropics B). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season, when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation, and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  2. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarex, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  3. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  4. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  5. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Astrophysics Data System (ADS)

    Suarex, Max J.; Chou, Ming-Dah

    1994-11-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  6. Implementation of the NCAR Community Land Model (CLM) in the NASA/NCAR finite-volume Global Climate Model (fvGCM)

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon D.; Wang, Guiling; Chern, Jiundar; Bosilovich, Michael G.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2002-01-01

    In this study, the NCAR CLM version 2.0 land-surface model was integrated into the NASA/NCAR fvGCM. The CLM was developed collaboratively by an open interagency/university group of scientists and is based on well-proven physical parameterizations and numerical schemes that combine the best features of BATS, NCAR-LSM, and IAP94. The CLM design is a one-dimensional point model with a single vegetation layer, along with sub-grid scale tiles. The features of the CLM include 10 unevenly spaced soil layers with water, ice, and temperature states in each soil layer, and five snow layers, with water flow, refreezing, compaction, and aging allowed. In addition, the CLM utilizes two-stream canopy radiative transfer, the Bonan lake model, and topographically enhanced streamflow based on TOPMODEL. The DAO fvGCM uses a genuinely conservative Flux-Form Semi-Lagrangian transport algorithm along with terrain-following Lagrangian control-volume vertical coordinates. The physical parameterizations are based on the NCAR Community Atmosphere Model (CAM-2). For our purposes, the fvGCM was run at 2 deg x 2.5 deg horizontal resolution with 55 vertical levels. The 10-year climate from the fvGCM with CLM2 was intercompared with the climate from the fvGCM with LSM, ECMWF, and NCEP. We concluded that the incorporation of CLM2 did not significantly change the fvGCM climate from that with LSM. The most striking difference was a warm bias in the CLM2 surface skin temperature over desert regions. We determined that the warm bias can be partially attributed to the value of the drag coefficient for the soil under the canopy, which was too small, resulting in a decoupling between the ground surface and the canopy. We also discovered that the canopy interception was high compared to observations in the Amazon region. A number of experiments were then performed focused on implementing model improvements.
In order to correct the warm bias, the drag coefficient for the soil under the canopy was considered a function of LAI (Leaf

  7. NASA Global Hawk Overview

    NASA Technical Reports Server (NTRS)

    Naftel, Chris

    2014-01-01

    The NASA Global Hawk Project is supporting Earth Science research customers. These customers include US Government agencies, civilian organizations, and universities. The combination of the Global Hawk's range, endurance, altitude, payload power, payload volume, and payload weight capabilities separates the Global Hawk platform from all other platforms available to the science community. This presentation includes an overview of the concept of operations and an overview of the completed science campaigns. In addition, the future science plans using the NASA Global Hawk System will be presented.

  8. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta rays and the entry of delta rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
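    The LET-dependent quality factor underlying Q(sub ave)(LET) is commonly taken to be the piecewise Q(L) function of ICRP Publication 60; the abstract does not spell out its definition, so the sketch below assumes the standard ICRP 60 form. Q(sub ave)(LET) would then be a dose-weighted average of this function over the LET spectrum.

```python
import math

def quality_factor(let):
    """ICRP 60 quality factor Q(L) as a function of unrestricted LET
    in keV/um (assumed definition; see lead-in):
      Q = 1              for L < 10
      Q = 0.32*L - 2.2   for 10 <= L <= 100
      Q = 300/sqrt(L)    for L > 100
    """
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / math.sqrt(let)

# example: protons at low LET get Q ~ 1; heavy GCR ions at a few hundred
# keV/um get Q in the teens, which drives the averages quoted above.
q_low, q_high = quality_factor(5.0), quality_factor(400.0)
```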

  9. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  10. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    At present, exact averaging of the equations for flow and transport in random porous media has been achieved only for a limited class of special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general, exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. For the common case of a stochastically homogeneous conductivity field we present a new, exactly averaged, nonlocal basic equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), the exact averaged nonlocal equations with a unique kernel-tensor can be derived in the same way for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  11. Contribution of small glaciers to global sea level

    USGS Publications Warehouse

    Meier, M.F.

    1984-01-01

    Observed long-term changes in glacier volume and hydrometeorological mass balance models yield data on the transfer of water from glaciers, excluding those in Greenland and Antarctica, to the oceans. The average observed volume change for the period 1900 to 1961 is scaled to a global average by use of the seasonal amplitude of the mass balance. These data are used to calibrate the models to estimate the changing contribution of glaciers to sea level for the period 1884 to 1975. Although the error band is large, these glaciers appear to account for a third to half of the observed rise in sea level, approximately the fraction not explained by thermal expansion of the ocean.

  12. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  13. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  14. RHIC BPM system average orbit calculations

    SciTech Connect

    Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

    2009-05-04

    The RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging the positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns used for the average orbit calculation was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this provided an improvement in average orbit signal quality, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and its performance with beam.
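A minimal sketch of why the averaging window matters, assuming a revolution frequency near 78 kHz and a single dominant 10 Hz perturbation (both stand-ins, not the actual RHIC parameters or algorithm): averaging over an integer number of perturbation periods cancels the oscillation, while an arbitrary fixed window such as 10000 turns does not.

```python
import numpy as np

# Illustrative sketch (not the actual RHIC code): turn-by-turn position data
# consisting of a closed orbit plus a ~10 Hz perturbation, sampled at the
# revolution frequency. The numbers below are assumptions for illustration.
f_rev = 78e3            # revolution frequency, Hz (assumed here)
f_pert = 10.0           # dominant orbit perturbation frequency, Hz
closed_orbit = 1.25     # mm, the quantity we want to measure
amp = 0.5               # mm, perturbation amplitude

n_turns = 200_000
t = np.arange(n_turns) / f_rev
pos = closed_orbit + amp * np.sin(2 * np.pi * f_pert * t)

def average_orbit(x, n):
    """Average the first n turns of the position data."""
    return x[:n].mean()

turns_per_period = int(round(f_rev / f_pert))         # one 10 Hz period
est_10000 = average_orbit(pos, 10_000)                # fixed 10000-turn window
est_period = average_orbit(pos, turns_per_period)     # single-period window
est_many = average_orbit(pos, 20 * turns_per_period)  # many-period window

for label, est in [("10000 turns", est_10000),
                   ("1 period", est_period),
                   ("20 periods", est_many)]:
    print(f"{label:>12}: error = {abs(est - closed_orbit):.4f} mm")
```

The 10000-turn window spans a non-integer number of perturbation periods, so the oscillation does not cancel; the period-matched windows suppress it almost completely.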

  15. Spatial and frequency averaging techniques for a polarimetric scatterometer system

    SciTech Connect

    Monakov, A.A.; Stjernman, A.S.; Nystroem, A.K.; Vivekanandan, J.

    1994-01-01

    An accurate estimation of backscattering coefficients for various types of rough surfaces is the main theme of remote sensing. Radar scattering signals from distributed targets exhibit fading due to interference associated with coherent scattering from individual scatterers within the resolution volume. Uncertainty in radar measurements which arises as a result of fading is reduced by averaging independent samples. Independent samples are obtained by collecting the radar returns from nonoverlapping footprints (spatial averaging) and/or nonoverlapping frequencies (frequency agility techniques). An improved formulation of fading characteristics for the spatial averaging and frequency agility technique is derived by taking into account the rough surface scattering process. Kirchhoff's approximation is used to describe rough surface scattering. Expressions for fading decorrelation distance and decorrelation bandwidth are derived. Rough surface scattering measurements are performed between L and X bands. Measured frequency and spatial correlation coefficients show good agreement with theoretical results.

  16. Is Global Warming Accelerating?

    NASA Astrophysics Data System (ADS)

    Shukla, J.; Delsole, T. M.; Tippett, M. K.

    2009-12-01

    A global pattern that fluctuates naturally on decadal time scales is identified in climate simulations and observations. This newly discovered component, called the Global Multidecadal Oscillation (GMO), is related to the Atlantic Meridional Oscillation and shown to account for a substantial fraction of decadal fluctuations in the observed global average sea surface temperature. IPCC-class climate models generally underestimate the variance of the GMO, and hence underestimate the decadal fluctuations due to this component of natural variability. Decomposing observed sea surface temperature into a component due to anthropogenic and natural radiative forcing plus the GMO, reveals that most multidecadal fluctuations in the observed global average sea surface temperature can be accounted for by these two components alone. The fact that the GMO varies naturally on multidecadal time scales implies that it can be predicted with some skill on decadal time scales, which provides a scientific rationale for decadal predictions. Furthermore, the GMO is shown to account for about half of the warming in the last 25 years and hence a substantial fraction of the recent acceleration in the rate of increase in global average sea surface temperature. Nevertheless, in terms of the global average “well-observed” sea surface temperature, the GMO can account for only about 0.1° C in transient, decadal-scale fluctuations, not the century-long 1° C warming that has been observed during the twentieth century.

  17. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  18. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  19. Chesapeake Bay Hypoxic Volume Forecasts and Results

    USGS Publications Warehouse

    Evans, Mary Anne; Scavia, Donald

    2013-01-01

    Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  20. Chesapeake Bay hypoxic volume forecasts and results

    USGS Publications Warehouse

    Scavia, Donald; Evans, Mary Anne

    2013-01-01

    The 2013 Forecast - Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer’s hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  1. Averaging procedures for flow within vegetation canopies

    NASA Astrophysics Data System (ADS)

    Raupach, M. R.; Shaw, R. H.

    1982-01-01

    Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.
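The dispersive covariance can be made concrete with a small synthetic example (notation assumed here, not taken from the paper): under time averaging followed by horizontal averaging, the averaged product of two time-mean fields splits exactly into a product of horizontal means plus a covariance of their spatially varying parts.

```python
import numpy as np

# Sketch of the decomposition: overbar = time average, <.> = horizontal
# average, u'' = ubar(x) - <ubar> the dispersive component. Then
#   <ubar * wbar> = <ubar><wbar> + <u'' w''>   (exact identity).
rng = np.random.default_rng(0)
nx, nt = 64, 5000
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)

# Time-mean profiles that vary with x (e.g. wakes behind canopy elements),
# plus uncorrelated turbulent fluctuations in time.
u = 2.0 + np.sin(x)[:, None] + 0.3 * rng.standard_normal((nx, nt))
w = 0.5 + np.sin(x)[:, None] + 0.3 * rng.standard_normal((nx, nt))

ubar, wbar = u.mean(axis=1), w.mean(axis=1)   # time averages at each x
U, W = ubar.mean(), wbar.mean()               # horizontal means
udd, wdd = ubar - U, wbar - W                 # dispersive components

total = (ubar * wbar).mean()                  # <ubar wbar>
dispersive = (udd * wdd).mean()               # <u'' w''>, the dispersive flux

print(total, U * W + dispersive)              # the two sides agree
```

Because the time-mean profiles here are spatially correlated, the dispersive term is far from negligible; pure horizontal averaging at a single instant would never produce it.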

  2. A thermochemically derived global reaction mechanism for detonation application

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Yang, J.; Sun, M.

    2012-07-01

    A 4-species, 4-step global reaction mechanism for detonation calculations is derived from detailed chemistry through a thermochemical approach. The reaction species involved in the mechanism and their corresponding molecular weight and enthalpy data are derived from the real equilibrium properties. By substituting these global species into the results of constant-volume explosion calculations and examining the evolution of these global species under varied conditions, reaction paths and corresponding rates are summarized and formulated. The proposed mechanism is first validated against the original detailed chemistry through calculations of the CJ detonation wave, adiabatic constant-volume explosion, and the steady reaction structure behind a strong shock wave. Good agreement in both reaction scales and averaged thermodynamic properties has been achieved. Two sets of reaction rates based on different detailed chemistries are then examined and applied to numerical simulations of two-dimensional cellular detonations. Preliminary results and a brief comparison between the two mechanisms are presented. The proposed global mechanism is found to be economical in computation and competent in describing the overall characteristics of the detonation wave. Though only the stoichiometric acetylene-oxygen mixture is investigated in this study, the method used to derive such a global reaction mechanism possesses a certain generality for premixed reactions of most lean hydrocarbon mixtures.

  3. Projection-Based Volume Alignment

    PubMed Central

    Yu, Lingbo; Snapp, Robert R.; Ruiz, Teresa; Radermacher, Michael

    2013-01-01

    When heterogeneous samples of macromolecular assemblies are being examined by 3D electron microscopy (3DEM), often multiple reconstructions are obtained. For example, subtomograms of individual particles can be acquired from tomography, or volumes of multiple 2D classes can be obtained by random conical tilt reconstruction. Of these, similar volumes can be averaged to achieve higher resolution. Volume alignment is an essential step before 3D classification and averaging. Here we present a projection-based volume alignment (PBVA) algorithm. We select a set of projections to represent the reference volume and align them to a second volume. Projection alignment is achieved by maximizing the cross-correlation function with respect to rotation and translation parameters. If data are missing, the cross-correlation functions are normalized accordingly. Accurate alignments are obtained by averaging and quadratic interpolation of the cross-correlation maximum. Comparisons of the computation time between PBVA and traditional 3D cross-correlation methods demonstrate that PBVA outperforms the traditional methods. Performance tests were carried out with different signal-to-noise ratios using modeled noise and with different percentages of missing data using a cryo-EM dataset. All tests show that the algorithm is robust and highly accurate. PBVA was applied to align the reconstructions of a subcomplex of the NADH: ubiquinone oxidoreductase (Complex I) from the yeast Yarrowia lipolytica, followed by classification and averaging. PMID:23410725

  4. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  5. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low-uncertainty measurements on pure-element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging.
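A minimal sketch of the two averaging rules being contrasted, assuming the electron fraction of an element is its normalized electron contribution w·Z/A; the elemental backscatter coefficients below are illustrative placeholders, not measured values.

```python
# Two ways to average elemental backscatter coefficients for a compound.
# Z and A are standard atomic number / atomic weight values; the mass
# fractions and per-element eta values are hypothetical, for illustration.
elements = {          # element: (Z, A, mass fraction w, elemental eta)
    "Fe": (26, 55.845, 0.70, 0.28),
    "O":  (8, 15.999, 0.30, 0.10),
}

def mass_fraction_average(elems):
    """Traditional rule: weight each element's eta by its mass fraction."""
    return sum(w * eta for (_, _, w, eta) in elems.values())

def electron_fraction_average(elems):
    """Proposed rule: weight by each element's share of electrons, w*Z/A."""
    contrib = {k: w * Z / A for k, (Z, A, w, _) in elems.items()}
    total = sum(contrib.values())
    return sum(contrib[k] / total * eta
               for k, (_, _, _, eta) in elems.items())

print(f"mass-fraction average eta:     {mass_fraction_average(elements):.4f}")
print(f"electron-fraction average eta: {electron_fraction_average(elements):.4f}")
```

For this Fe-O example the two rules differ by a small but systematic amount, which is the kind of discrepancy the measurements above are sensitive to.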

  6. Neutron resonance averaging with filtered beams

    SciTech Connect

    Chrien, R.E.

    1985-01-01

    Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.

  7. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  8. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
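The step from a correlation function to the variance of a moving average can be sketched in discrete time (a generic illustration in our own notation, not the formulation of this report), using an AR(1) process whose autocovariance is known in closed form.

```python
import numpy as np

# For a stationary signal with autocovariance C(k), the variance of an
# n-sample moving average is
#   Var[mean(x[t..t+n-1])] = (1/n) * sum_{k=-(n-1)}^{n-1} (1 - |k|/n) C(k),
# the discrete analogue of integrating the correlation function over the
# averaging window. We check it against a simulated AR(1) process, for
# which C(k) = sigma2 * rho^|k|.
rng = np.random.default_rng(1)
rho, sigma2, n = 0.9, 1.0, 50

k = np.arange(-(n - 1), n)
var_analytic = (sigma2 / n) * np.sum((1 - np.abs(k) / n) * rho ** np.abs(k))

# Monte Carlo check: simulate the AR(1) process (started in its stationary
# distribution) and measure the variance of its n-point moving averages.
n_samples = 400_000
eps = rng.standard_normal(n_samples) * np.sqrt(sigma2 * (1 - rho**2))
x = np.empty(n_samples)
x[0] = rng.standard_normal() * np.sqrt(sigma2)
for t in range(1, n_samples):
    x[t] = rho * x[t - 1] + eps[t]

means = np.convolve(x, np.ones(n) / n, mode="valid")
print(var_analytic, means.var())   # the two estimates agree closely
```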

  9. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  10. Making the Grade? Globalisation and the Training Market in Australia. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    Hall, Richard; Buchanan, John; Bretherton, Tanya; van Barneveld, Kristin; Pickersgill, Richard

    This two-volume document reports on a study of globalization and Australia's training market. Volume 1 begins by examining debate on globalization and industry training in Australia. Discussed next is the study methodology, which involved field studies of the metals and engineering industry in South West Sydney and the Hunter and the information…

  11. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  12. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  13. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  14. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...

  15. A note on generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag

    2007-11-01

    We have recently proposed a very simple numerical method for constructing averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas for estimating the remainder term of a Gaussian rule.

  16. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  17. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  18. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  19. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  20. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
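The distinction between averaging ln T and averaging T itself can be illustrated with a toy Monte Carlo (a caricature that ignores interference, not the recurrence-relation analysis described above): by Jensen's inequality exp⟨ln T⟩ ≤ ⟨T⟩, and the arithmetic average is dominated by rare, nearly transparent stacks.

```python
import numpy as np

# Toy model: stack transmission as a product of independent random
# per-slab transmission factors. This ignores multiple-reflection
# interference entirely; it only illustrates why <T> and the "typical"
# transmission exp(<ln T>) differ for random stacks.
rng = np.random.default_rng(2)
n_slabs, n_stacks = 20, 100_000

# Per-slab transmission probabilities drawn uniformly from [0.5, 1.0].
t = rng.uniform(0.5, 1.0, size=(n_stacks, n_slabs))
T = t.prod(axis=1)                       # stack transmission per realization

avg_T = T.mean()                         # average of T itself
typ_T = np.exp(np.log(T).mean())         # exp of the averaged logarithm

print(f"<T>         = {avg_T:.3e}")
print(f"exp(<ln T>) = {typ_T:.3e}")
```

The averaged logarithm gives the typical (median-like) behavior, while the average of T itself is pulled up by rare high-transmission configurations; the gap widens as the stack grows.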

  1. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation–global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation–global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  2. Role of spatial averaging in multicellular gradient sensing.

    PubMed

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-01-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations. PMID:27203129

  3. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.

  4. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  5. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    ERIC Educational Resources Information Center

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  6. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
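The relationship can be checked numerically for one locus with two alleles under Hardy-Weinberg proportions (standard textbook notation assumed here, not the paper's generalized contrasts): computed directly from its definition, the average excess of an allele equals its average effect under random mating.

```python
# One locus, two alleles A1/A2 with frequencies p and q = 1 - p, and
# genotypic values a, d, -a for A1A1, A1A2, A2A2 (standard notation,
# assumed here; the paper develops a more general framework).
p, a, d = 0.3, 1.0, 0.4
q = 1.0 - p

# Genotype frequencies under Hardy-Weinberg and the population mean.
f11, f12, f22 = p * p, 2 * p * q, q * q
mean = f11 * a + f12 * d + f22 * (-a)

# Average excess of A1, by definition: mean genotypic value of
# A1-bearing gametes minus the population mean. Under random mating a
# fraction p of A1 gametes pairs with A1 and a fraction q with A2.
excess_A1 = (p * a + q * d) - mean

# Average effect of A1 via Fisher's alpha = a + d(q - p); the average
# effect of allele A1 is then alpha_1 = q * alpha.
alpha = a + d * (q - p)
effect_A1 = q * alpha

print(excess_A1, effect_A1)   # equal under random mating
```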

  7. Refined similarity hypothesis using three-dimensional local averages

    NASA Astrophysics Data System (ADS)

    Iyer, Kartik P.; Sreenivasan, Katepalli R.; Yeung, P. K.

    2015-12-01

    The refined similarity hypotheses of Kolmogorov, regarded as an important ingredient of intermittent turbulence, have been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number R_λ ≈ 650, on a periodic box of 4096^3 grid points to test the hypotheses using three-dimensional averages. In particular, we study the small-scale properties of the stochastic variable V = Δu(r)/(rε_r)^{1/3}, where Δu(r) is the longitudinal velocity increment and ε_r is the dissipation rate averaged over a three-dimensional volume of linear size r. We show that V is universal in the inertial subrange. In the dissipation range, the statistics of V are shown to depend solely on a local Reynolds number.
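The stochastic variable V can be illustrated with a one-dimensional surrogate (a sketch only; the study itself uses full three-dimensional volume averages of the true dissipation field, and the signal below is synthetic).

```python
import numpy as np

# 1D surrogate of V = du(r) / (r * eps_r)**(1/3): du(r) is the velocity
# increment over separation r, eps_r the dissipation averaged over a
# neighborhood of size r. A random-walk signal stands in for the velocity
# field and the squared gradient for the dissipation rate.
rng = np.random.default_rng(3)
n = 4096
u = np.cumsum(rng.standard_normal(n))      # rough surrogate velocity signal
u -= u.mean()

# Surrogate dissipation: squared gradient, positive by construction
# (a small floor avoids division by zero in the cube root).
eps = np.gradient(u) ** 2 + 1e-12

def refined_similarity_V(u, eps, r):
    """V = du(r) / (r * eps_r)^(1/3), with periodic wraparound."""
    du = np.roll(u, -r) - u                # increment over separation r
    kernel = np.ones(r) / r                # local average of eps, size r
    eps_r = np.convolve(eps, kernel, mode="same")
    return du / np.cbrt(r * eps_r)

V = refined_similarity_V(u, eps, r=64)
print(V.mean(), V.std())
```

In the actual analysis the statistics of V (its distribution across the inertial subrange) are the object of study; this sketch only shows how the variable is assembled from increments and locally averaged dissipation.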

  8. Refined similarity hypothesis using three-dimensional local averages.

    PubMed

    Iyer, Kartik P; Sreenivasan, Katepalli R; Yeung, P K

    2015-12-01

    The refined similarity hypotheses of Kolmogorov, regarded as an important ingredient of intermittent turbulence, have been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number R_λ ∼ 650, on a periodic box of 4096^3 grid points to test the hypotheses using three-dimensional averages. In particular, we study the small-scale properties of the stochastic variable V = Δu(r)/(rε_r)^{1/3}, where Δu(r) is the longitudinal velocity increment and ε_r is the dissipation rate averaged over a three-dimensional volume of linear size r. We show that V is universal in the inertial subrange. In the dissipation range, the statistics of V are shown to depend solely on a local Reynolds number. PMID:26764821

  9. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  10. Physics of the spatially averaged snowmelt process

    NASA Astrophysics Data System (ADS)

    Horne, Federico E.; Kavvas, M. Levent

    1997-04-01

    It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.

  11. Cosmic Inhomogeneities and Averaged Cosmological Dynamics

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-10-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.

  12. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  13. Average Shape of Transport-Limited Aggregates

    NASA Astrophysics Data System (ADS)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  14. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  15. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  16. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  17. 40 CFR 91.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...

  18. Orbit-averaged implicit particle codes

    NASA Astrophysics Data System (ADS)

    Cohen, B. I.; Freis, R. P.; Thomas, V.

    1982-03-01

The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time steps, require fewer particles, and, hence, reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
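The stability payoff of implicit differencing that the abstract describes can be illustrated with a toy model (not the paper's actual difference equations): forward versus backward Euler on a harmonic oscillator at a time step well beyond the explicit stability limit.

```python
import numpy as np

# Toy illustration of why implicit differencing permits large time steps:
# integrate x'' = -w^2 x with forward (explicit) and backward (implicit)
# Euler at w*dt = 3, far beyond the explicit scheme's stability limit.
w, dt, steps = 1.0, 3.0, 50

def explicit_euler():
    x, v = 1.0, 0.0
    for _ in range(steps):
        # Forward Euler: both updates use the old state.
        x, v = x + dt * v, v - dt * w**2 * x
    return x

def implicit_euler():
    x, v = 1.0, 0.0
    # Backward Euler: solve [[1, -dt], [dt*w^2, 1]] @ [x', v'] = [x, v].
    A = np.array([[1.0, -dt], [dt * w**2, 1.0]])
    for _ in range(steps):
        x, v = np.linalg.solve(A, np.array([x, v]))
    return x
```

The explicit amplitude diverges (its amplification factor has modulus sqrt(1 + (w dt)^2) > 1), while the implicit solution stays bounded; implicit particle codes exploit the same mechanism, at the cost of some numerical damping.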

  19. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  20. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  1. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  2. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  3. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  4. Total-pressure averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  5. Stochastic Averaging of Duhem Hysteretic Systems

    NASA Astrophysics Data System (ADS)

    YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.

    2002-06-01

The response of a Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified, and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Itô stochastic differential equation for the total energy is derived, and the Fokker-Planck-Kolmogorov equation associated with the averaged Itô equation is solved to yield the stationary probability density of total energy, from which the statistics of the system response can be evaluated. It is observed that the numerical results obtained using the stochastic averaging method are in good agreement with those from digital simulation.

  6. Geologic analysis of averaged magnetic satellite anomalies

    NASA Technical Reports Server (NTRS)

    Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.

    1985-01-01

    To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.
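The orbital profiles above were computed with Gauss-Legendre quadrature integration. A minimal sketch of that numerical building block using NumPy's `leggauss` (the actual spherical-earth anomaly integrands are well beyond this example):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Map the standard nodes from [-1, 1] onto [a, b].
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# The rule is exact for polynomials of degree <= 2n - 1 and converges
# rapidly for smooth kernels like those in crustal anomaly modelling.
val = gauss_legendre_integral(np.sin, 0.0, np.pi)   # exact value is 2
```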

  7. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  8. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  9. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  10. Heuristic approach to capillary pressures averaging

    SciTech Connect

    Coca, B.P.

    1980-10-01

Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems to be theoretically sound due to its expression based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.

  11. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
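Numerically, the instrument's 2048-point average curve over 100 cycles is an ensemble mean across cycles, which suppresses uncorrelated cycle-to-cycle noise by a factor of sqrt(100). A sketch with a synthetic waveform standing in for the engine signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles, n_points = 100, 2048

# Synthetic engine parameter: one underlying waveform per cycle plus
# random cycle-to-cycle variation (a stand-in for combustion variability).
theta = np.linspace(0.0, 2.0 * np.pi, n_points)
true_curve = np.sin(theta)
cycles = true_curve + 0.5 * rng.standard_normal((n_cycles, n_points))

# The instrument's output: the mean over 100 cycles at each of the
# 2048 discrete points of the curve.
average_curve = cycles.mean(axis=0)
```

The residual noise on `average_curve` is roughly one-tenth of the per-cycle noise, which is why the averaged curve is usable for quantities like mass-fraction burn rate.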

  12. High-Volume Hospitals with High-Volume and Low-Volume Surgeons: Is There a "Field Effect" for Pancreaticoduodenectomy?

    PubMed

    Wood, Thomas W; Ross, Sharona B; Bowman, Ty A; Smart, Amanda; Ryan, Carrie E; Sadowitz, Benjamin; Downs, Darrell; Rosemurgy, Alexander S

    2016-05-01

    Since the Leapfrog Group established hospital volume criteria for pancreaticoduodenectomy (PD), the importance of surgeon volume versus hospital volume in obtaining superior outcomes has been debated. This study was undertaken to determine whether low-volume surgeons attain the same outcomes after PD as high-volume surgeons at high-volume hospitals. PDs undertaken from 2010 to 2012 were obtained from the Florida Agency for Health Care Administration. High-volume hospitals were identified. Surgeon volumes within were determined; postoperative length of stay (LOS), in-hospital mortality, discharge status, and hospital charges were examined relative to surgeon volume. Six high-volume hospitals were identified. Each hospital had at least one surgeon undertaking ≥ 12 PDs per year and at least one surgeon undertaking < 12 PDs per year. Within these six hospitals, there were 10 "high-volume" surgeons undertaking 714 PDs over the three-year period (average of 24 PDs per surgeon per year), and 33 "low-volume" surgeons undertaking 225 PDs over the three-year period (average of two PDs per surgeon per year). For all surgeons, the frequency with which surgeons undertook PD did not predict LOS, in-hospital mortality, discharge status, or hospital charges. At the six high-volume hospitals examined from 2010 to 2012, low-volume surgeons undertaking PD did not have different patient outcomes from their high-volume counterparts with respect to patient LOS, in-hospital mortality, patient discharge status, or hospital charges. Although the discussion of volume for complex operations has shifted toward surgeon volume, hospital volume must remain part of the discussion as there seems to be a hospital "field effect." PMID:27215720

  13. Quantum volume

    NASA Astrophysics Data System (ADS)

    Ryabov, V. A.

    2015-08-01

Quantum systems in a mechanical embedding, the breathing mode of small particles, optomechanical systems, etc., are far from a complete list of examples in which the volume exhibits quantum behavior. Traditional treatments regard strain in small systems as the result of a collective movement of particles, rather than treating the volume as an independent dynamical variable. The aim of this work is to show that some problems here might be essentially simplified by introducing periodic boundary conditions. In this case, the volume is considered as an independent dynamical variable driven by the internal pressure. For this purpose, the concept of quantum volume based on Schrödinger's equation on the 𝕋³ manifold is proposed. It is used to explore several 1D model systems: an ensemble of free particles under external pressure, a quantum manometer, and a quantum breathing mode. In particular, the influence of the pressure of a free particle on a quantum oscillator is determined. It is also shown that the correction to the spectrum of the breathing mode due to internal degrees of freedom is determined by the off-diagonal matrix elements of the quantum stress. A new treatment not using the "force" theorem is proposed for the quantum stress tensor. In the general case of flexible quantum 3D dynamics, quantum deformations of different types might be introduced similarly to the monopole mode.

  14. Explicit cosmological coarse graining via spatial averaging

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations in the standard manner, for instance by computing the luminosity distance versus redshift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.
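The modified Friedmann equations referred to above are Buchert's averaged equations. As a reference sketch (for irrotational dust averaged over a comoving domain D, in the standard notation; quoted from the general literature rather than from this record):

```latex
% Buchert's averaged equations for irrotational dust in a domain D:
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
   = -4\pi G\,\langle \rho \rangle_{\mathcal D} + \mathcal Q_{\mathcal D},
\qquad
3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{2}
   = 8\pi G\,\langle \rho \rangle_{\mathcal D}
     - \tfrac{1}{2}\langle \mathcal R \rangle_{\mathcal D}
     - \tfrac{1}{2}\mathcal Q_{\mathcal D},
\qquad
\mathcal Q_{\mathcal D}
   = \tfrac{2}{3}\left(\langle \theta^{2} \rangle_{\mathcal D}
     - \langle \theta \rangle_{\mathcal D}^{2}\right)
     - 2\,\langle \sigma^{2} \rangle_{\mathcal D}.
```

The kinematical backreaction term Q_D vanishes for exact homogeneity, recovering the standard Friedmann equations; it is this term that can make the averaged scale factor deviate from FLRW behaviour.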

  15. Probabilistic climate change predictions applying Bayesian model averaging.

    PubMed

    Min, Seung-Ki; Simonis, Daniel; Hense, Andreas

    2007-08-15

This study explores the sensitivity of probabilistic predictions of the twenty-first century surface air temperature (SAT) changes to different multi-model averaging methods using available simulations from the Intergovernmental Panel on Climate Change fourth assessment report. A way of observationally constrained prediction is provided by training multi-model simulations for the second half of the twentieth century with respect to long-term components. The Bayesian model averaging (BMA) produces weighted probability density functions (PDFs) and we compare two methods of estimating weighting factors: Bayes factor and expectation-maximization algorithm. It is shown that Bayesian-weighted PDFs for the global mean SAT changes are characterized by multi-modal structures from the middle of the twenty-first century onward, which are not clearly seen in arithmetic ensemble mean (AEM). This occurs because BMA tends to select a few high-skilled models and down-weight the others. Additionally, Bayesian results exhibit larger means and broader PDFs in the global mean predictions than the unweighted AEM. Multi-modality is more pronounced in the continental analysis using 30-year mean (2070-2099) SATs while there is only a little effect of Bayesian weighting on the 5-95% range. These results indicate that this approach to observationally constrained probabilistic predictions can be highly sensitive to the method of training, particularly for the latter half of the twenty-first century, and that a more comprehensive approach combining different regions and/or variables is required. PMID:17569647
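The mechanics behind the multi-modal BMA PDFs can be sketched with a minimal mixture example. The model means, spreads, and the likelihood-based weighting below are all hypothetical stand-ins for the paper's actual Bayes-factor/EM machinery:

```python
import numpy as np

# Minimal Bayesian-model-averaging sketch: each model gives a Gaussian
# predictive PDF; weights come from each model's likelihood of one
# training-period observation, normalized to sum to one.
def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

model_means = np.array([2.1, 3.0, 4.2])    # hypothetical SAT-change predictions (K)
model_sigmas = np.array([0.4, 0.5, 0.6])
observation = 2.5                          # hypothetical training-period value

likelihoods = gaussian_pdf(observation, model_means, model_sigmas)
weights = likelihoods / likelihoods.sum()

x = np.linspace(0.0, 6.0, 601)
bma_pdf = sum(w * gaussian_pdf(x, m, s)
              for w, m, s in zip(weights, model_means, model_sigmas))
```

Because the weighting strongly down-weights models far from the observation, the mixture can concentrate on a few members and develop multiple modes, exactly the behaviour the abstract contrasts with the arithmetic ensemble mean.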

  16. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
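The core quantity here is a trailing 11-year moving average of an annual index, correlated against a second series. A sketch with synthetic data (the relationship is built in by construction, purely to illustrate the computation):

```python
import numpy as np

def trailing_mean(series, window):
    """Trailing moving average: mean of each run of `window` consecutive values."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

rng = np.random.default_rng(1)
years = np.arange(1929, 2010)
economic_misery = 8.0 + 3.0 * rng.standard_normal(years.size)  # synthetic index

# Synthetic 'literary misery': a noisy copy of the previous decade's
# average, i.e. the kind of relationship the paper reports.
decade_avg = trailing_mean(economic_misery, 11)
literary_misery = decade_avg + 0.5 * rng.standard_normal(decade_avg.size)

r = np.corrcoef(decade_avg, literary_misery)[0, 1]
```

In the paper the analogous step is repeated for windows of different lengths, with the goodness of fit peaking at 11 years.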

  17. High Average Power Yb:YAG Laser

    SciTech Connect

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  18. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  19. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  20. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
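The general idea of a cross-over entry combined with a dynamic trailing threshold can be sketched as follows. This is an illustrative long-only rule under assumed details (20-day window, stop equal to the running maximum of the moving average since entry), not the authors' exact specification:

```python
import numpy as np

def crossover_with_trailing_stop(prices, window=20):
    """Long-only sketch: enter when price crosses above its moving average,
    exit when price falls below a trailing stop (the running maximum of the
    moving average since entry)."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    p = prices[window - 1:]
    position = np.zeros(p.size, dtype=int)
    in_market, stop = False, -np.inf
    for t in range(1, p.size):
        if not in_market and p[t] > ma[t] and p[t - 1] <= ma[t - 1]:
            in_market, stop = True, ma[t]      # enter on an upward cross
        elif in_market:
            stop = max(stop, ma[t])            # trailing threshold only rises
            if p[t] < stop:
                in_market = False              # exit when the stop is breached
        position[t] = int(in_market)
    return position

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(500)))
position = crossover_with_trailing_stop(prices)
```

Because the threshold never falls once in the market, the rule exits earlier in drawdowns than the plain cross-over strategy, which is the mechanism behind the smaller maximum drawdown the abstract reports.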

  1. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  2. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  3. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  4. Mean Element Propagations Using Numerical Averaging

    NASA Technical Reports Server (NTRS)

    Ely, Todd A.

    2009-01-01

    The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two-body variational equations of motion. The averaging process eliminates short period terms, leaving only secular and long period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrates the resulting EOMs. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.

  5. Cross-correlations between volume change and price change.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M; Stanley, H Eugene

    2009-12-29

    In finance, one usually deals not with prices but with growth rates R, defined as the difference in logarithm between two consecutive prices. Here we consider not the trading volume, but rather the volume growth rate R̃, the difference in logarithm between two consecutive values of trading volume. To this end, we use several methods to analyze the properties of volume changes |R̃| and their relationship to price changes |R|. We analyze 14,981 daily recordings of the Standard and Poor's (S&P) 500 Index over the 59-year period 1950-2009, and find power-law cross-correlations between |R̃| and |R| by using detrended cross-correlation analysis (DCCA). We introduce a joint stochastic process that models these cross-correlations. Motivated by the relationship between |R̃| and |R|, we estimate the tail exponent α of the probability density function P(|R|) ~ |R|^(−1−α) for both the S&P 500 Index as well as the collection of 1819 constituents of the New York Stock Exchange Composite Index on 17 July 2009. As a new method to estimate α, we calculate the time intervals τ_q between events where R > q. We demonstrate that ⟨τ_q⟩, the average of τ_q, obeys ⟨τ_q⟩ ~ q^α. We find α ≈ 3. Furthermore, by aggregating all τ_q values of 28 global financial indices, we also observe an approximate inverse cubic law. PMID:20018772

  6. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  7. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  8. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  9. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers" (Alice in Wonderland, L. Carroll)

  10. Averaging models for linear piezostructural systems

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.

    2009-03-01

    In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.
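    The central claim, that a fast-switched system tracks its duty-cycle-averaged counterpart, can be illustrated with a generic switched 2x2 linear system (a sketch, not the paper's piezoelectric model; the matrices, rates, and the forward-Euler integrator are all invented for illustration):

```python
import numpy as np

# A generic fast-switched 2x2 linear system: dx/dt = A1 x while the switch
# is closed and dx/dt = A2 x while open. With a PWM period far shorter than
# the system time scale, the trajectory should track the duty-cycle-averaged
# system dx/dt = (d*A1 + (1 - d)*A2) x.
A1 = np.array([[0.0, 1.0], [-4.0, -0.2]])
A2 = np.array([[0.0, 1.0], [-1.0, -0.2]])
d = 0.3          # duty cycle of the switching signal
T_sw = 1e-3      # switching period, much faster than the oscillation
dt = 1e-5        # forward-Euler step (a sketch-level integrator)
steps = int(2.0 / dt)

A_av = d * A1 + (1 - d) * A2
x_sw = np.array([1.0, 0.0])
x_av = np.array([1.0, 0.0])

for k in range(steps):
    A = A1 if (k * dt) % T_sw < d * T_sw else A2
    x_sw = x_sw + dt * (A @ x_sw)
    x_av = x_av + dt * (A_av @ x_av)

err = np.linalg.norm(x_sw - x_av)
print(f"switched state : {x_sw}")
print(f"averaged state : {x_av}")
print(f"difference norm: {err:.4f}")
```

    Averaging theory predicts the two trajectories stay within O(T_sw) of each other, so the final difference is small; shrinking T_sw further shrinks it, while slowing the switching toward the oscillation time scale breaks the approximation, consistent with the "effective range" of switching frequencies reported above.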

  11. A Measure of the Average Intercorrelation

    ERIC Educational Resources Information Center

    Meyer, Edward P.

    1975-01-01

    Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)

  12. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  13. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  14. Reformulation of Ensemble Averages via Coordinate Mapping.

    PubMed

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263

  15. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  16. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  17. Average configuration of the induced venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  18. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  19. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  20. Global Education.

    ERIC Educational Resources Information Center

    Longstreet, Wilma S., Ed.

    1988-01-01

    This issue contains an introduction ("The Promise and Perplexity of Globalism," by W. Longstreet) and seven articles dedicated to exploring the meaning of global education for today's schools. "Global Education: An Overview" (J. Becker) develops possible definitions, identifies objectives and skills, and addresses questions and issues in this…

  1. Global Science.

    ERIC Educational Resources Information Center

    Brophy, Michael

    1991-01-01

    Approaches taken by a school science department to implement a global science curriculum using a range of available resources are outlined. Problems with current curriculum approaches, alternatives to an ethnocentric curriculum, advantages of global science, and possible strategies for implementing a global science policy are discussed. (27…

  2. Global Education.

    ERIC Educational Resources Information Center

    Berkley, June, Ed.

    1982-01-01

    The articles in this collection deal with various methods of global education--education to prepare students to function as understanding and informed citizens of the world. Topics discussed in the 26 articles include: (1) the necessity of global education; (2) global education in the elementary school language arts curriculum; (3) science fiction…

  3. Global HRD.

    ERIC Educational Resources Information Center

    1997

    This document contains four papers from a symposium on global human resource development (HRD). "Globalization of Human Resource Management (HRM) in Government: A Cross-Cultural Perspective" (Pan Suk Kim) relates HRM to national cultures and addresses its specific functional aspects with a unique dimension in a global organization. "An…

  4. Orbit Averaging in Perturbed Planetary Rings

    NASA Astrophysics Data System (ADS)

    Stewart, Glen R.

    2015-11-01

    The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.

  5. Multigrid solution for the compressible Euler equations by an implicit characteristic-flux-averaging

    NASA Astrophysics Data System (ADS)

    Kanarachos, A.; Vournas, I.

    A formulation of an implicit characteristic-flux-averaging method for the compressible Euler equations combined with the multigrid method is presented. The method is based on a correction scheme and an implicit Godunov-type finite volume scheme, and is applied to two-dimensional cases. Its principal feature is an averaging procedure based on the eigenvalue analysis of the Euler equations, by means of which the fluxes are evaluated at the finite volume faces. The performance of the method is demonstrated for different flow problems around RAE-2922 and NACA-0012 airfoils and for an internal flow over a circular arc.

  6. Lidar uncertainty and beam averaging correction

    NASA Astrophysics Data System (ADS)

    Giyanani, A.; Bierbooms, W.; van Bussel, G.

    2015-05-01

    Remote sensing of atmospheric variables with lidar is a relatively new technology for wind resource assessment in wind energy. A review of the draft version of an international guideline (CD IEC 61400-12-1 Ed.2) used for wind energy purposes is performed, and some extra atmospheric variables are taken into account for proper representation of the site. A measurement campaign with two Leosphere vertically scanning WindCube lidars and met-mast measurements is used for comparison of the uncertainty in wind speed measurements using the CD IEC 61400-12-1 Ed.2. The comparison revealed higher but realistic uncertainties. A simple model for lidar beam averaging correction is demonstrated for understanding deviations in the measurements. It can be further applied to beam averaging uncertainty calculations in flat and complex terrain.

  7. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.

  8. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S. )

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  9. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
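    The model behind this abstract can be sketched numerically. The sketch below assumes the leading car is unconstrained and that each car's steady-state speed is the minimum of the preferred speeds of all cars ahead of it, including its own (a plausible reading of the abstract, not the author's exact formulation); the uniform speed distribution is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# n_cars cars on a single-lane road with no overtaking; car 0 leads.
# Each car has its own preferred maximum speed (drawn uniformly here, an
# arbitrary modelling choice). In steady state a car moves at the minimum
# of the preferred speeds of all cars ahead of it, including its own,
# i.e. the running minimum along the queue.
n_cars = 100_000
v_max = rng.uniform(0.0, 1.0, size=n_cars)
v_actual = np.minimum.accumulate(v_max)

print(f"mean preferred speed: {v_max.mean():.3f}")
print(f"mean actual speed   : {v_actual.mean():.4f}")
```

    For uniformly distributed preferred speeds, the expected running minimum over k cars is 1/(k+1), so the queue-averaged speed behaves like ln(n)/n and sits far below the mean preferred speed of 1/2, the kind of non-trivial mean the abstract alludes to.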

  10. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under an average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.

  11. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  12. Representation of average drop sizes in sprays

    NASA Astrophysics Data System (ADS)

    Dodge, Lee G.

    1987-06-01

    Procedures are presented for processing drop-size measurements to obtain average drop sizes that represent overall spray characteristics. These procedures are not currently in general use, but they would represent an improvement over current practice. Clear distinctions are made between processing data for spatial- and temporal-type measurements. The conversion between spatial and temporal measurements is discussed. The application of these procedures is demonstrated by processing measurements of the same spray by two different types of instruments.

  13. Spatiotemporal averaging of perceived brightness along an apparent motion trajectory.

    PubMed

    Nagai, Takehiro; Beer, R Dirk; Krizay, Erin A; Macleod, Donald I A

    2011-01-01

    Objects are critical functional units for many aspects of visual perception and recognition. Many psychophysical experiments support the concept of an "object file" consisting of characteristics attributed to a single object on the basis of successive views of it, but there has been little evidence that object identity influences apparent brightness and color. In this study, we investigated whether the perceptual identification of successive flashed stimuli as views of a single moving object could affect brightness perception. Our target stimulus was composed of eight wedge-shaped sectors. The sectors were presented successively at different inter-flash intervals along an annular trajectory. At inter-flash intervals of around 100 ms, the impression was of a single moving object undergoing long-range apparent motion. By modulating the luminance between successive views, we measured the perception of luminance modulation along the trajectory of this long-range apparent motion. At the inter-flash intervals where the motion perception was strongest, the luminance difference was perceptually underestimated, and forced-choice luminance discrimination thresholds were elevated. Moreover, under such conditions, it became difficult for the observer to correctly associate or "bind" spatial positions and wedge luminances. These results indicate that the different luminances of wedges that were perceived as a single object were averaged along its apparent motion trajectory. The large spatial step size of our stimulus makes it unlikely that the results could be explained by averaging in a low-level mechanism that has a compact spatiotemporal receptive field (such as V1 and V2 neurons); higher level global motion or object mechanisms must be invoked to account for the averaging effect. The luminance averaging and the ambiguity of position-luminance "binding" suggest that the visual system may evade some of the costs of rapidly computing apparent brightness by adopting the

  14. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
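    A toy version of the per-cycle measurement can be sketched as follows. The snippet generates two noise-free synthetic 10 kHz heterodyne signals with a known phase offset, extracts one fractional-cycle phase estimate per heterodyne cycle from interpolated zero crossings, and averages over all cycles; every parameter and the zero-crossing detector itself are invented for illustration, and the real instrument's signal processing is of course more involved.

```python
import numpy as np

f_het = 10_000.0           # heterodyne frequency, Hz (illustrative)
f_samp = 10_000_000.0      # sample rate, Hz (illustrative)
true_phase = 0.3           # radians; the quantity the phasemeter recovers
t = np.arange(int(f_samp * 0.01)) / f_samp   # 10 ms of data, ~100 cycles

ref = np.sin(2 * np.pi * f_het * t)                # reference heterodyne signal
unk = np.sin(2 * np.pi * f_het * t + true_phase)   # "unknown" signal

def rising_crossings(x):
    """Linearly interpolated sample indices of rising zero crossings."""
    s = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))
    return s + (-x[s] / (x[s + 1] - x[s]))

# One fractional-cycle phase estimate per heterodyne cycle...
c_ref = rising_crossings(ref)
c_unk = rising_crossings(unk)
n = min(c_ref.size, c_unk.size)
per_cycle = 2 * np.pi * f_het * (c_ref[:n] - c_unk[:n]) / f_samp

# ...then an average over all cycles for improved resolution.
phase_avg = per_cycle.mean()
print(f"averaged phase estimate: {phase_avg:.6f} rad (true value {true_phase})")
```

    Because one estimate is produced on every heterodyne cycle, averaging over the 10 ms record already combines about a hundred measurements, mirroring the high measurement rate described above.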

  15. Viewpoint: observations on scaled average bioequivalence.

    PubMed

    Patterson, Scott D; Jones, Byron

    2012-01-01

    The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. PMID:22162308

  16. STREMR: Numerical model for depth-averaged incompressible flow

    NASA Astrophysics Data System (ADS)

    Roberts, Bernard

    1993-09-01

    The STREMR computer code is a two-dimensional model for depth-averaged incompressible flow. It accommodates irregular boundaries and nonuniform bathymetry, and it includes empirical corrections for turbulence and secondary flow. Although STREMR uses a rigid-lid surface approximation, the resulting pressure is equivalent to the displacement of a free surface. Thus, the code can be used to model free-surface flow wherever the local Froude number is 0.5 or less. STREMR uses a finite-volume scheme to discretize and solve the governing equations for primary flow, secondary flow, and turbulence energy and dissipation rate. The turbulence equations are taken from the standard k-Epsilon turbulence model, and the equation for secondary flow is developed herein. Appendices to this report summarize the principal equations, as well as the procedures used for their discrete solution.

  17. Improving Reading Abilities of Average and Below Average Readers through Peer Tutoring.

    ERIC Educational Resources Information Center

    Galezio, Marne; And Others

    A program was designed to improve the progress of average and below average readers in a first-grade, a second-grade, and a sixth-grade classroom in a multicultural, multi-social economic district located in a three-county area northwest of Chicago, Illinois. Classroom teachers noted that students were having difficulty making adequate progress in…

  18. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    ERIC Educational Resources Information Center

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  19. REVISITING THE SOLAR TACHOCLINE: AVERAGE PROPERTIES AND TEMPORAL VARIATIONS

    SciTech Connect

    Antia, H. M.; Basu, Sarbani E-mail: sarbani.basu@yale.edu

    2011-07-10

    The tachocline is believed to be the region where the solar dynamo operates. With over a solar cycle's worth of data available from the Michelson Doppler Imager and Global Oscillation Network Group instruments, we are in a position to investigate not merely the average structure of the solar tachocline, but also its time variations. We determine the properties of the tachocline as a function of time by fitting a two-dimensional model that takes latitudinal variations of the tachocline properties into account. We confirm that if we consider the central position of the tachocline, it is prolate. Our results show that the tachocline is thicker at latitudes higher than the equator, making the overall shape of the tachocline more complex. Of the tachocline properties examined, the transition of the rotation rate across the tachocline, and to some extent the position of the tachocline, show some temporal variations.

  20. A Green's function quantum average atom model

    DOE PAGESBeta

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
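    The numerical payoff of the broadening can be sketched with a single Lorentzian level (a generic model pole, not the actual average-atom code; the pole position, half-widths, and grids are invented for illustration): the integrated weight of a resonance is unchanged when it is broadened by a numerically convenient half-width, so a much coarser energy grid suffices.

```python
import numpy as np

# One pole of a model Green's function g(E) = 1/(E - E0 + i*eta); its
# spectral density is a Lorentzian of half-width eta whose integrated
# weight is (nearly) independent of eta. A sharp level needs a fine grid
# to resolve; a broadened one does not.
E0 = 0.5   # resonance position (illustrative)

def integrated_weight(eta, n_points):
    E = np.linspace(-100.0, 100.0, n_points)
    g = 1.0 / (E - E0 + 1j * eta)        # Green's function with one pole
    dos = -g.imag / np.pi                # Lorentzian density of states
    return dos.sum() * (E[1] - E[0])     # simple uniform-grid quadrature

sharp = integrated_weight(eta=0.01, n_points=1_000_000)  # needs a fine grid
broad = integrated_weight(eta=1.0, n_points=10_000)      # coarse grid is fine
print(f"sharp resonance weight : {sharp:.4f}")
print(f"broadened level weight : {broad:.4f}")
```

    Both integrals recover (approximately) unit weight, while the broadened version uses a hundred times fewer grid points, which is the kind of numerical convenience the reformulation exploits.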

  1. Average shape of fluctuations for subdiffusive walks

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Acedo, L.

    2004-03-01

    We study the average shape of fluctuations for subdiffusive processes, i.e., processes with uncorrelated increments but where the waiting time distribution has a broad power-law tail. This shape is obtained analytically by means of a fractional diffusion approach. We find that, in contrast with processes where the waiting time between increments has finite variance, the fluctuation shape is no longer a semicircle: it tends to adopt a tablelike form as the subdiffusive character of the process increases. The theoretical predictions are compared with numerical simulation results.

  2. The averaging method in applied problems

    NASA Astrophysics Data System (ADS)

    Grebenikov, E. A.

    1986-04-01

    The book presents the body of methods for investigating complicated nonlinear oscillating systems that is known in the literature as the "averaging method". The author describes the constructive part of this method, that is, concrete forms and corresponding algorithms, using mathematical models that are sufficiently general yet built on concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. It is intended for specialists in applied mathematics and mechanics.

  3. Auto-exploratory average reward reinforcement learning

    SciTech Connect

    Ok, DoKyeong; Tadepalli, P.

    1996-12-31

    We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
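
    H-learning itself learns a model online while exploring; as a hedged illustration of the average-reward ("gain") criterion it optimizes, the sketch below instead runs relative value iteration, a classical planning counterpart, on a hypothetical two-state MDP. Everything here (states, rewards, damping factor) is illustrative and not from the paper.

```python
# Average-reward planning on a toy 2-state MDP via relative value iteration
# with an aperiodicity damping step. This is NOT the H-learning algorithm
# (which learns the model online); it only illustrates the average-reward
# criterion that H-learning optimizes. The MDP itself is hypothetical.

# transitions[s][a] = (reward, next_state); deterministic for simplicity
transitions = {
    0: {"advance": (0.0, 1), "stay": (0.5, 0)},
    1: {"advance": (2.0, 0), "stay": (0.0, 1)},
}

def bellman(h, s):
    return max(r + h[s2] for r, s2 in transitions[s].values())

h = {0: 0.0, 1: 0.0}
ref, tau = 0, 0.5                     # reference state and damping factor
for _ in range(200):
    th = {s: bellman(h, s) for s in h}
    h = {s: (1 - tau) * h[s] + tau * (th[s] - th[ref]) for s in h}

gain = bellman(h, ref) - h[ref]       # long-run average reward of greedy policy
print(gain)                            # alternating the two states yields (0 + 2)/2
```

The greedy policy cycles between the states, so the gain converges to 1.0, beating the "stay" policy's average reward of 0.5.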

  4. Using four-phase Eulerian volume averaging approach to model macrosegregation and shrinkage cavity

    NASA Astrophysics Data System (ADS)

    Wu, M.; Kharicha, A.; Ludwig, A.

    2015-06-01

    This work extends a previous 3-phase mixed columnar-equiaxed solidification model to treat the formation of shrinkage cavity by including an additional phase. In the previous model, mixed columnar and equiaxed solidification with consideration of multiphase transport phenomena (mass, momentum, species and enthalpy) was proposed to calculate the as-cast structure, including the columnar-to-equiaxed transition (CET) and the formation of macrosegregation. In order to incorporate the formation of shrinkage cavity, an additional phase, i.e. a gas phase or a covering liquid slag phase, must be considered in addition to the previously introduced 3 phases (parent melt, solidifying columnar dendrite trunks and equiaxed grains). No mass or species transfer between the new phase and the other 3 phases is necessary, but the treatment of the momentum and energy exchanges between them is crucially important for the formation of the free surface and shrinkage cavity, which in turn influences the flow field and the formation of segregation. A steel ingot is calculated as a preliminary test of the functionality of the model.

  5. Taylor-Aris Dispersion: An Explicit Example for Understanding Multiscale Analysis via Volume Averaging

    ERIC Educational Resources Information Center

    Wood, Brian D.

    2009-01-01

    Although the multiscale structure of many important processes in engineering is becoming more widely acknowledged, making this connection in the classroom is a difficult task. This is due in part because the concept of multiscale structure itself is challenging and it requires the students to develop new conceptual pictures of physical systems,…

  6. ANALYSIS OF MACRODISPERSION THROUGH VOLUME-AVERAGING: MOMENT EQUATIONS. (R825689C037)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  7. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    SciTech Connect

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced through the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
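
    The 8 × 8 mode matrix can be pictured as a lookup table from (Machine Mode, Beam Mode) to an allowable average-power limit, with out-of-matrix combinations inhibiting the beam. A sketch with entirely hypothetical table entries (the real limits live in the gate arrays):

```python
# Sketch of the MPS permission matrix: 8 Machine Modes (beam path) x
# 8 Beam Modes (average power limit). Entries are allowable power limits
# in watts; None marks an unsafe combination for which beam is inhibited.
# All specific values here are illustrative, not Jefferson Lab's actual table.

BEAM_MODE_LIMITS_W = [0, 1, 10, 100, 1e3, 1e4, 1e5, 2e6]  # "no beam" .. 2 MW

# allowed_max_beam_mode[machine_mode] = highest beam-mode index permitted
# on that electron path (hypothetical assignment)
allowed_max_beam_mode = [0, 1, 2, 3, 4, 5, 6, 7]

def power_limit(machine_mode, beam_mode):
    """Return the enforced average-power limit, or None to inhibit beam."""
    if not (0 <= machine_mode < 8 and 0 <= beam_mode < 8):
        return None                       # outside the matrix: unsafe
    if beam_mode > allowed_max_beam_mode[machine_mode]:
        return None                       # combination not in the safe matrix
    return BEAM_MODE_LIMITS_W[beam_mode]

print(power_limit(7, 7))  # full-power path: 2 MW limit
print(power_limit(2, 5))  # low-power path asked for too much power: None
```

In hardware the same decision is a combinational lookup, so an unsafe request inhibits the drive laser within one clock cycle.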

  8. Vector light shift averaging in paraffin-coated alkali vapor cells.

    PubMed

    Zhivun, Elena; Wickenbrock, Arne; Sudyka, Julia; Patton, Brian; Pustelny, Szymon; Budker, Dmitry

    2016-07-11

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin-coherence time in paraffin-coated cells leads to spatial averaging of the vector light shift over the entire cell volume. This renders the averaged vector light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. Importantly, the demonstrated averaging mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times. PMID:27410814

  9. Atmospheric carbon dioxide and the global carbon cycle

    SciTech Connect

    Trabalka, J R

    1985-12-01

    This state-of-the-art volume presents discussions on the global cycle of carbon, the dynamic balance among global atmospheric CO2 sources and sinks. Separate abstracts have been prepared for the individual papers. (ACR)

  10. Statistical properties of the gyro-averaged standard map

    NASA Astrophysics Data System (ADS)

    da Fonseca, Julio D.; Sokolov, Igor M.; Del-Castillo-Negrete, Diego; Caldas, Ibere L.

    2015-11-01

    A statistical study of the gyro-averaged standard map (GSM) is presented. The GSM is an area preserving map model proposed as a simplified description of finite Larmor radius (FLR) effects on E×B chaotic transport in magnetized plasmas with zonal flows perturbed by drift waves. The GSM's effective perturbation parameter, gamma, is proportional to the zero-order Bessel function of the particle's Larmor radius. In the limit of zero Larmor radius, the GSM reduces to the standard, Chirikov-Taylor map. We consider plasmas in thermal equilibrium and assume a Larmor radius probability density function (pdf) resulting from a Maxwell-Boltzmann distribution. Since the particles have in general different Larmor radii, each orbit is computed using a different perturbation parameter, gamma. We present analytical and numerical computations of the pdf of gamma for a Maxwellian distribution. We also compute the pdf of global chaos, which gives the probability that a particle with a given Larmor radius exhibits global chaos, i.e. the probability that Kolmogorov-Arnold-Moser (KAM) transport barriers do not exist.
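
    The statistics of the effective perturbation parameter can be estimated by direct sampling: draw Larmor radii from the thermal distribution and push them through gamma = K·J0(rho). The sketch below assumes a 2D Rayleigh speed distribution and illustrative values of K and the thermal radius; J0 is evaluated from its integral representation to stay dependency-free.

```python
# Monte Carlo sketch of the effective-perturbation statistics of the
# gyro-averaged standard map: gamma = K * J0(rho), with the Larmor radius
# rho Rayleigh-distributed (2D thermal speeds of a Maxwell-Boltzmann
# plasma). K and the thermal radius are assumed, not taken from the paper.
import math
import random

def j0(x, n=200):
    """Zeroth-order Bessel function via the integral representation
    J0(x) = (1/pi) * int_0^pi cos(x sin t) dt (midpoint rule)."""
    dt = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * dt)) for k in range(n)) * dt / math.pi

random.seed(0)
K, rho_th = 3.0, 1.0          # map amplitude and thermal Larmor radius (assumed)
# Rayleigh(sigma) == Weibull(scale=sigma*sqrt(2), shape=2)
samples = [K * j0(random.weibullvariate(math.sqrt(2) * rho_th, 2))
           for _ in range(5000)]

mean_gamma = sum(samples) / len(samples)
frac_reduced = sum(abs(g) < K for g in samples) / len(samples)
print(mean_gamma, frac_reduced)   # FLR averaging pulls |gamma| below K
```

Because |J0(rho)| < 1 for any nonzero Larmor radius, essentially every sampled orbit sees a weaker effective perturbation than the zero-FLR Chirikov-Taylor value K.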

  11. MHD stability of torsatrons using the average method

    SciTech Connect

    Holmes, J.A.; Carreras, B.A.; Charlton, L.A.; Garcia, L.; Hender, T.C.; Hicks, H.R.; Lynch, V.E.

    1985-01-01

    The stability of torsatrons is studied using the average method, or stellarator expansion. Attention is focused upon the Advanced Toroidal Fusion Device (ATF), an l = 2, 12 field period, moderate aspect ratio configuration which, through a combination of shear and toroidally induced magnetic well, is stable to ideal modes. Using the vertical field (VF) coil system of ATF it is possible to enhance this stability by shaping the plasma to control the rotational transform. The VF coils are also useful tools for exploring the stability boundaries of ATF. By shifting the plasma inward along the major radius, the magnetic well can be removed, leading to three types of long wavelength instabilities: (1) A free boundary "edge mode" occurs when the rotational transform at the plasma edge is just less than unity. This mode is stabilized by the placement of a conducting wall at 1.5 times the plasma radius. (2) A free boundary global kink mode is observed at high β. When either β is lowered or a conducting wall is placed at the plasma boundary, the global mode is suppressed, and (3) an interchange mode is observed instead. For this interchange mode, calculations of the second, third, etc., most unstable modes are used to understand the nature of the degeneracy breaking induced by toroidal effects. Thus, the ATF configuration is well chosen for the study of torsatron stability limits.

  12. Direct Volume Rendering of Curvilinear Volumes

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Wilhelms, J.; Challinger, J.; Alper, N.; Ramamoorthy, S.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Direct volume rendering can visualize sampled 3D scalar data as a continuous medium, or extract features. However, it is generally slow. Furthermore, most algorithms for direct volume rendering have assumed rectilinear gridded data. This paper discusses methods for using direct volume rendering when the original volume is curvilinear, i.e. is divided into six-sided cells which are not necessarily equilateral hexahedra. One approach is to ray-cast such volumes directly. An alternative approach is to interpolate the sample volumes to a rectilinear grid, and use this regular volume for rendering. Advantages and disadvantages of the two approaches in terms of speed and image quality are explored.

  13. Average Gait Differential Image Based Human Recognition

    PubMed Central

    Chen, Jinyan; Liu, Jiansheng

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
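
    The AGDI construction, accumulating absolute silhouette differences between adjacent frames and averaging, can be sketched in a few lines; tiny synthetic 1×4 "silhouettes" stand in for real gait frames.

```python
# Minimal sketch of the average gait differential image (AGDI): accumulate
# absolute differences between adjacent binary silhouette frames and average.
# Frames here are tiny synthetic 1x4 "silhouettes" purely for illustration.

def agdi(frames):
    """Average of |frame[t+1] - frame[t]| over a gait sequence.
    frames: list of 2D lists (binary silhouettes of equal size)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * cols for _ in range(rows)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(rows):
            for j in range(cols):
                acc[i][j] += abs(cur[i][j] - prev[i][j])
    n = len(frames) - 1
    return [[v / n for v in row] for row in acc]

# A blob moving right one pixel per frame: motion concentrates in the middle.
frames = [[[1, 0, 0, 0]], [[0, 1, 0, 0]], [[0, 0, 1, 0]], [[0, 0, 0, 1]]]
g = agdi(frames)
print(g)   # pixels that change often get large AGDI values
```

Pixels the silhouette crosses repeatedly accumulate large values (the kinetic information), while pixels that never change stay at zero, which is how the feature image separates moving limbs from the static torso.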

  14. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171

  15. Average power laser experiment (APLE) design

    NASA Astrophysics Data System (ADS)

    Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.

    1992-07-01

    We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.

  16. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  17. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with twofold less effectiveness. Removing the next 10% could double constellation sizes. 5 refs., 7 figs.

  18. Average prime-pair counting formula

    NASA Astrophysics Data System (ADS)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
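
    The conjectured main term is easy to check numerically for twin primes (r = 1): sieve, count pairs, and compare against 2·C_2·li_2(x), where C_2 ≈ 0.6601618 is the twin prime constant. A hedged sketch, with li_2 evaluated by a simple midpoint rule:

```python
# Numerical check of the Hardy-Littlewood prediction pi_2(x) ~ 2*C_2*li_2(x)
# for twin primes (r = 1). C_2 is the twin prime constant ~0.6601618;
# li_2(x) = int_2^x dt / (ln t)^2, evaluated here by a midpoint rule.
import math

def prime_sieve(n):
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, n + 1, p):
                is_p[m] = False
    return is_p

x = 100_000
is_p = prime_sieve(x + 2)
pi2 = sum(1 for p in range(2, x + 1) if is_p[p] and is_p[p + 2])

C2 = 0.6601618158
steps = 200_000
dt = (x - 2) / steps
li2 = sum(dt / math.log(2 + (k + 0.5) * dt) ** 2 for k in range(steps))
prediction = 2 * C2 * li2

print(pi2, round(prediction))   # the two counts agree to within a few percent
```

The remainders ω_{2r}(x) discussed in the abstract are exactly the (small) gap between these two numbers.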

  19. The balanced survivor average causal effect.

    PubMed

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
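
    The core of the proposed estimator, comparing mean longitudinal outcomes between equal fractions of the longest-surviving patients in each arm, can be sketched as follows. The data below are made up, and the paper's bias corrections and bootstrap inference are omitted.

```python
# Hedged sketch of the balanced-SACE estimator's core idea: compare mean
# outcomes between equal fractions of the longest survivors of each arm,
# matched by survival rank rather than by survival time. Toy data only.

def balanced_sace(survival_t, outcome_t, survival_c, outcome_c, frac=0.5):
    """Mean outcome difference (treatment - control) among the top `frac`
    longest survivors of each arm."""
    def top_mean(surv, outc, frac):
        order = sorted(range(len(surv)), key=lambda i: surv[i], reverse=True)
        k = max(1, int(round(frac * len(surv))))
        return sum(outc[i] for i in order[:k]) / k
    return top_mean(survival_t, outcome_t, frac) - top_mean(survival_c, outcome_c, frac)

# toy data: survival times (months) and a longitudinal quality-of-life score
surv_t, out_t = [24, 18, 30, 6, 12], [70, 65, 80, 40, 50]
surv_c, out_c = [20, 10, 28, 8, 14], [60, 45, 75, 42, 55]
effect = balanced_sace(surv_t, out_t, surv_c, out_c, frac=0.4)
print(effect)
```

Because the comparison is rank-matched within each arm, no monotonicity assumption about treatment effects on survival is invoked, which is the point of the estimand.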

  20. Averaged implicit hydrodynamic model of semiflexible filaments.

    PubMed

    Chandran, Preethi L; Mofrad, Mohammad R K

    2010-03-01

    We introduce a method to incorporate hydrodynamic interaction in a model of semiflexible filament dynamics. Hydrodynamic screening and other hydrodynamic interaction effects lead to nonuniform drag along even a rigid filament, and cause bending fluctuations in semiflexible filaments, in addition to the nonuniform Brownian forces. We develop our hydrodynamics model from a string-of-beads idealization of filaments, and capture hydrodynamic interaction by Stokes superposition of the solvent flow around beads. However, instead of the commonly used first-order Stokes superposition, we do an equivalent of infinite-order superposition by solving for the true relative velocity or hydrodynamic velocity of the beads implicitly. We also avoid the computational cost of the string-of-beads idealization by assuming a single normal, parallel and angular hydrodynamic velocity over sections of beads, excluding the beads at the filament ends. We do not include the end beads in the averaging and solve for them separately instead, in order to better resolve the drag profiles along the filament, since a large part of the hydrodynamic drag is typically concentrated at the filament ends. The averaged implicit hydrodynamics method can be easily incorporated into a string-of-rods idealization of semiflexible filaments that was developed earlier by the authors. The earlier model was used to solve the Brownian dynamics of semiflexible filaments, but without hydrodynamic interactions incorporated. We validate our current model at each stage of development, and reproduce experimental observations on the mean-squared displacement of fluctuating actin filaments. We also show how hydrodynamic interaction confines a fluctuating actin filament between two stationary lateral filaments. Finally, preliminary examinations suggest that a large part of the observed velocity in the interior segments of a fluctuating filament can be attributed to induced solvent flow or hydrodynamic screening. PMID:20365783

  1. The entropy in finite N-unit nonextensive systems: The normal average and q-average

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hideo

    2010-09-01

    We discuss the Tsallis entropy in finite N-unit nonextensive systems by using the multivariate q-Gaussian probability distribution functions (PDFs) derived by the maximum entropy methods with the normal average and the q-average (q: the entropic index). The Tsallis entropy obtained by the q-average has an exponential N dependence: S_q(N)/N ≃ e^{(1-q)N S_1(1)} for large N (≫ 1/(1-q) > 0). In contrast, the Tsallis entropy obtained by the normal average is given by S_q(N)/N ≃ 1/[(q-1)N] for large N (≫ 1/(q-1) > 0). The N dependences of the Tsallis entropy obtained by the q- and normal averages are generally quite different, although both results are in fairly good agreement for |q-1| ≪ 1.0. The validity of the factorization approximation (FA) to PDFs, which has been commonly adopted in the literature, has been examined. We have calculated correlations defined by C_m = ⟨(δx_i δx_j)^m⟩ − ⟨(δx_i)^m⟩⟨(δx_j)^m⟩ for i ≠ j, where δx_i = x_i − ⟨x_i⟩ and the bracket ⟨·⟩ stands for the normal and q-averages. The first-order correlation (m = 1) expresses the intrinsic correlation, and higher-order correlations with m ≥ 2 include nonextensivity-induced correlation, whose physical origin is elucidated in the superstatistics.
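
    The normal-average scaling quoted in the abstract can be checked numerically for a factorized N-unit distribution, for which the sum of P^q over joint states equals a^N with a = Σ p^q for one unit; then S_q(N) = (1 − a^N)/(q − 1) saturates and S_q(N)/N ≃ 1/[(q−1)N]. The two-state unit distribution below is assumed purely for illustration.

```python
# Numerical illustration of the normal-average scaling: for a factorized
# N-unit distribution the Tsallis entropy S_q = (1 - sum_i p_i^q)/(q - 1)
# saturates at 1/(q-1), so S_q(N)/N ~ 1/((q-1)*N) for large N.
# The two-state unit distribution is an assumption for illustration only.

q = 1.5
p_unit = [0.3, 0.7]
a = sum(p ** q for p in p_unit)       # sum of p^q for one unit (< 1 here)

def tsallis_per_unit(N):
    # product distribution: the sum of P^q over joint states equals a**N
    return (1.0 - a ** N) / (q - 1.0) / N

for N in (1, 10, 100, 1000):
    print(N, tsallis_per_unit(N), 1.0 / ((q - 1.0) * N))
```

The two printed columns converge as N grows, reproducing the 1/[(q−1)N] behaviour; the contrasting exponential growth under the q-average cannot be reached by this factorized construction, which is the point of the FA discussion above.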

  2. Global Academe: Engaging Intellectual Discourse

    ERIC Educational Resources Information Center

    Nagy-Zekmi, Silvia, Ed.; Hollis, Karyn, Ed.

    2012-01-01

    The representation of the economic, political, cultural and, more importantly, global interrelations between agents involved in the process of intellectual activity is at the core of the inquiry in this volume that scrutinizes a distinct transformation occurring in the modalities of intellectual production also detectable in the changing role of…

  3. Using tomography of GPS TEC to routinely determine ionospheric average electron density profiles

    NASA Astrophysics Data System (ADS)

    Yizengaw, E.; Moldwin, M. B.; Dyson, P. L.; Essex, E. A.

    2007-03-01

    This paper introduces a technique that calculates average electron density (Ne) profiles over a wide geographic area of coverage, using tomographic ionospheric Ne profiles. These Ne profiles, which can provide information of the Ne distribution up to global positioning system (GPS) orbiting altitude (with the coordination of space-based GPS tomographic profiles), can be incorporated into the next generation of the international reference ionosphere (IRI) model. An additional advantage of tomography is that it enables accurate modeling of the topside ionosphere. By applying the tomographic reconstruction approach to ground-based GPS slant total electron content (STEC), we calculate 3-h average Ne profiles over a wide region. Since it uses real measurement data, tomographic average Ne profiles describe the ionosphere during quiet and disturbed periods. The computed average Ne profiles are compared with IRI model profiles and average Ne profiles obtained from ground-based ionosondes.

  4. On Infinite-Volume Mixing

    NASA Astrophysics Data System (ADS)

    Lenci, Marco

    2010-09-01

    In the context of the long-standing issue of mixing in infinite ergodic theory, we introduce the idea of mixing for observables possessing an infinite-volume average. The idea is borrowed from statistical mechanics and appears to be relevant, at least for extended systems with a direct physical interpretation. We discuss the pros and cons of a few mathematical definitions that can be devised, testing them on a prototypical class of infinite measure-preserving dynamical systems, namely, the random walks.

  5. Commonwealth Degrees from Class to Equivalence: Changing to Grade Point Averages in the Caribbean

    ERIC Educational Resources Information Center

    Bastick, Tony

    2004-01-01

    British Commonwealth universities inherited the class system for classifying degrees. However, increasing global marketization has brought with it increasing demands for student exchanges, particularly with universities in North America. Hence, Commonwealth universities are considering adopting grade point averages (GPAs) for degree classification…

  6. A Systematic Literature Review of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; van der Maas, Han L. J.

    2010-01-01

    On the basis of several reviews of the literature, Lynn [Lynn, R., (2006). Race differences in intelligence: An evolutionary analysis. Augusta, GA: Washington Summit Publishers.] and Lynn and Vanhanen [Lynn, R., & Vanhanen, T., (2006). IQ and global inequality. Augusta, GA: Washington Summit Publishers.] concluded that the average IQ of the Black…

  7. Optimizing Average Precision Using Weakly Supervised Data.

    PubMed

    Behl, Aseem; Mohapatra, Pritish; Jawahar, C V; Kumar, M Pawan

    2015-12-01

    Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (ap). Yet it is common practice to employ the support vector machine (svm) classifier, which optimizes a surrogate 0-1 loss. The popularity of svm can be attributed to its empirical performance. Specifically, in fully supervised settings, svm tends to provide similar accuracy to ap-svm, which directly optimizes an ap-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent ap-svm that minimizes a carefully designed upper bound on the ap-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection. PMID:26539857
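
    For reference, the ap measure the paper optimizes is computed from a ranked list as the mean of precision-at-k taken at the positions of the relevant items:

```python
# Average precision (ap) of a ranked retrieval list: the mean of
# precision-at-k evaluated at each relevant (positive) item's rank.
def average_precision(labels):
    """labels: relevance (1/0) of results in ranked order."""
    hits, precisions = 0, []
    for k, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

ap = average_precision([1, 0, 1, 0, 0, 1])   # (1/1 + 2/3 + 3/6) / 3
print(ap)
```

Because ap depends on the entire ordering rather than on per-sample decisions, it is non-decomposable, which is what makes optimizing it directly (as ap-svm does) harder than minimizing a surrogate 0-1 loss.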

  8. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.

  9. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
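
    The elemental-ratio calculation of ZC described above is commonly written, for a formula C_c H_h N_n O_o S_s with net charge Z, as ZC = (Z − h + 3n + 2o + 2s)/c; this particular form is stated here as an assumption rather than quoted from the paper. Alanine, whose carbons average to oxidation state 0, serves as a check:

```python
# Average oxidation state of carbon (ZC) from a chemical formula
# C_c H_h N_n O_o S_s with net charge Z, using the elemental-ratio form
#     ZC = (Z - h + 3n + 2o + 2s) / c
# (assumed here; H contributes -1 and N/O/S their usual formal values).

def zc(c, h, n, o, s, charge=0):
    """Average oxidation state of carbon for the given elemental counts."""
    return (charge - h + 3 * n + 2 * o + 2 * s) / c

print(zc(c=3, h=7, n=1, o=2, s=0))   # alanine C3H7NO2: 0.0
print(zc(c=2, h=6, n=0, o=0, s=0))   # ethane C2H6: -3.0
```

For a whole protein the counts c, h, n, o, s are simply summed over its amino acid composition, so ZC is cheap to compute genome-wide, which is what makes the comparative analyses in the abstract possible.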

  10. Global Composite

    Atmospheric Science Data Center

    2013-04-19

    article title: MISR Global Images See the Light of Day ... than its nadir counterpart due to enhanced reflection of light by atmospheric particulates. MISR data are processed at the ...

  11. Landslide volumes and landslide mobilization rates in Umbria, central Italy

    NASA Astrophysics Data System (ADS)

    Guzzetti, Fausto; Ardizzone, Francesca; Cardinali, Mauro; Rossi, Mauro; Valigi, Daniela

    2009-03-01

    A catalogue of 677 landslides of the slide type was selected from a global database of geometrical measurements of individual landslides, including landslide area (AL) and volume (VL). The measurements were used to establish an empirical relationship to link AL (in m²) to VL (in m³). The relationship takes the form of a power law with a scaling exponent α = 1.450, covers eight orders of magnitude of AL and twelve orders of magnitude of VL, and is in general agreement with existing relationships published in the literature. The reduced scatter of the empirical data around the dependency line, and the fact that the considered landslides occurred in multiple physiographic and climatic environments and were caused by different triggers, indicate that the relationship between VL and AL is largely independent of the physiographical setting. The new relationship was used to determine the volume of individual landslides of the slide type in the Collazzone area, central Italy, a 78.9 km² area for which a multi-temporal landslide inventory covering the 69-year period from 1937 to 2005 is available. In the observation period, the total volume of landslide material was VLT = 4.78 × 10⁷ m³, corresponding to an average rate of landslide mobilization φL = 8.8 mm yr⁻¹. Exploiting the temporal information in the landslide inventory, the volume of material produced during different periods by new and reactivated landslides was singled out. The wet period from 1937 to 1941 was recognized as an episode of accelerated landslide production. During this 5-year period, approximately 45% of the total landslide material inventoried in the Collazzone area was produced, corresponding to an average rate of landslide mobilization φL = 54 mm yr⁻¹, six times higher than the long-term rate. The volume of landslide material in an event or period was used as a proxy for the magnitude of the event or period, defined as the logarithm (base 10) of the total landslide volume produced
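
    The mobilization-rate figures quoted above follow directly from the inventory numbers. The power-law intercept is not stated in the abstract, so only the rate arithmetic, which needs no intercept, is reproduced here:

```python
# Reproducing the abstract's mobilization-rate arithmetic:
# rate = total landslide volume / study area / observation period.
total_volume_m3 = 4.78e7     # landslide material, 1937-2005
area_m2 = 78.9e6             # Collazzone study area (78.9 km^2)
years = 69

rate_mm_per_yr = total_volume_m3 / area_m2 / years * 1000.0
print(round(rate_mm_per_yr, 1))   # 8.8 mm/yr, as quoted

# The 1937-1941 wet period: ~45% of the total volume in 5 years
wet_rate = 0.45 * total_volume_m3 / area_m2 / 5 * 1000.0
print(round(wet_rate, 1))         # ~54-55 mm/yr, matching the quoted value
```

The six-fold contrast between the two rates is what identifies 1937-1941 as the episode of accelerated landslide production.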

  12. Glomerular Aging and Focal Global Glomerulosclerosis: A Podometric Perspective.

    PubMed

    Hodgin, Jeffrey B; Bitzer, Markus; Wickman, Larysa; Afshinnia, Farsad; Wang, Su Q; O'Connor, Christopher; Yang, Yan; Meadowbrooke, Chrysta; Chowdhury, Mahboob; Kikuchi, Masao; Wiggins, Jocelyn E; Wiggins, Roger C

    2015-12-01

    Kidney aging is associated with an increasing proportion of globally scarred glomeruli, decreasing renal function, and exponentially increasing ESRD prevalence. In model systems, podocyte depletion causes glomerulosclerosis, suggesting that age-associated glomerulosclerosis could be caused by a similar mechanism. We measured podocyte number, size, density, and glomerular volume in 89 normal kidney samples from living and deceased kidney donors and normal poles of nephrectomies. Podocyte nuclear density decreased with age due to a combination of decreased podocyte number per glomerulus and increased glomerular volume. Compensatory podocyte cell hypertrophy prevented a change in the proportion of tuft volume occupied by podocytes. Young kidneys had high podocyte reserve (podocyte density >300 per 10⁶ µm³), but by 70-80 years of age, average podocyte nuclear density decreased to <100 per 10⁶ µm³, with corresponding podocyte hypertrophy. In older age, the podocyte detachment rate (urine podocin mRNA-to-creatinine ratio) was higher than at younger ages, and podocytes were stressed (increased urine podocin-to-nephrin mRNA ratio). Moreover, in older kidneys, proteinaceous material accumulated in the Bowman space of glomeruli with low podocyte density. In a subset of these glomeruli, mass podocyte detachment events occurred in association with podocytes becoming binucleate (mitotic podocyte catastrophe) and subsequent wrinkling of glomerular capillaries, tuft collapse, and periglomerular fibrosis. In kidneys of young patients with underlying glomerular diseases, similar pathologic events were identified in association with focal global glomerulosclerosis. Podocyte density reduction with age may therefore directly lead to focal global glomerulosclerosis, and all progressive glomerular diseases can be considered superimposed accelerators of this underlying process. PMID:26038526

  13. Distribution of mesozooplankton biomass in the global ocean

    NASA Astrophysics Data System (ADS)

    Moriarty, R.; O'Brien, T. D.

    2013-02-01

    Mesozooplankton are cosmopolitan within the sunlit layers of the global ocean. They are important in the pelagic food web, having a significant feedback to primary production through their consumption of phytoplankton and microzooplankton. In many regions of the global ocean, they are also the primary contributors to vertical particle flux. Through both roles they affect the biogeochemical cycling of carbon and other nutrients in the oceans. Little, however, is known about their global distribution and biomass. While global maps of mesozooplankton biomass do exist in the literature, they are usually hand-drawn maps for which the original data are not readily available. The dataset presented in this synthesis has been in development since the late 1990s, is an integral part of the Coastal and Oceanic Plankton Ecology, Production, and Observation Database (COPEPOD), and is now also part of a wider community effort to provide a global picture of carbon biomass data for key plankton functional types, in particular to support the development of marine ecosystem models. A total of 153 163 mesozooplankton biomass values were collected from a variety of sources. Of those, 2% were originally recorded as dry mass, 26% as wet mass, 5% as settled volume, and 68% as displacement volume. Using a variety of non-linear biomass conversions from the literature, the data have been converted from their original units to carbon biomass. Depth-integrated values were then used to calculate an estimate of global mesozooplankton biomass. Global epipelagic mesozooplankton biomass, to a depth of 200 m, had a mean of 5.9 μg C L⁻¹, a median of 2.7 μg C L⁻¹, and a standard deviation of 10.6 μg C L⁻¹. The global annual average estimate of mesozooplankton biomass in the top 200 m, based on the median value, was 0.19 Pg C. Biomass was highest in the Northern Hemisphere, and there were slight decreases from polar oceans (40-90°) to more temperate
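    The 0.19 Pg C figure can be checked from the median concentration, assuming a global ocean surface area of about 3.6 × 10¹⁴ m² (a standard value, not stated in the abstract):

```python
# Rough check of the 0.19 Pg C global estimate from the median biomass.
# The ocean surface area is an assumed standard value, not from the abstract.
ocean_area_m2 = 3.6e14
depth_m = 200.0
median_biomass_ugC_per_L = 2.7

litres = ocean_area_m2 * depth_m * 1000.0           # 1 m^3 = 1000 L
total_gC = litres * median_biomass_ugC_per_L * 1e-6  # ug -> g
total_PgC = total_gC / 1e15                          # 1 Pg = 1e15 g
```

    This gives about 0.19 Pg C, matching the abstract's estimate.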

  14. Microwave emission spectrum of the moon: mean global heat flow and average depth of the regolith.

    PubMed

    Keihm, S J; Langseth, M G

    1975-01-10

    Earth-based observations of the lunar microwave brightness temperature spectrum at wavelengths between 5 and 500 centimeters, when reexamined in the light of physical property data derived from the Apollo program, tentatively support the high heat flows measured in situ and indicate that a regolith thickness between 10 and 30 meters may characterize a large portion of the lunar near side. PMID:17844211

  15. The global atmospheric response to low-frequency tropical forcing: Zonally averaged basic states

    NASA Technical Reports Server (NTRS)

    Li, Long; Nathan, Terrence R.

    1994-01-01

    The extratropical response to localized, low-frequency tropical forcing is examined using a linearized, non-divergent barotropic model on a sphere. Zonal-mean basic states characterized by solid-body rotation or critical latitudes are considered. An analytical analysis based on WKB and ray tracing methods shows that, in contrast to stationary Rossby waves, westward-moving, low-frequency Rossby waves can propagate through the tropical easterlies into the extratropics. It is shown analytically that the difference between the stationary and low-frequency ray paths is proportional to the forcing frequency and inversely proportional to the zonal wavenumber cubed. An expression for the disturbance amplitude is derived which shows that the ability of the forced waves to maintain their strength well into middle latitudes depends on their meridional wave scale and northward group velocity, both of which are functions of the slowly varying background flow. A local energetics analysis shows that the combination of energy dispersion from the forcing region and energy extraction from the equatorward flank of the midlatitude jet produces disturbances that have the greatest impact on the extratropical circulation. Under the assumption that the forcing amplitude is independent of frequency, this impact is largest when the tropical forcing period is in the range 10-20 days.

  16. Microwave emission spectrum of the moon - Mean global heat flow and average depth of the regolith

    NASA Technical Reports Server (NTRS)

    Keihm, S. J.; Langseth, M. G.

    1975-01-01

    Earth-based observations of the lunar microwave brightness temperature spectrum at wavelengths between 5 and 500 centimeters, when reexamined in the light of physical property data derived from the Apollo program, tentatively support the high heat flows measured in situ and indicate that a regolith thickness between 10 and 30 meters may characterize a large portion of the lunar near side.

  17. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
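    A bias-aware fit of the kind surveyed above can be sketched by fitting averaged magnitude data to the Rician mean (an MN-type estimator). All signal levels, b-values, and the known-sigma/known-S0 simplifications below are assumptions for illustration, with scipy's `rice` distribution standing in for the closed-form PDF:

```python
import numpy as np
from scipy.stats import rice
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma = 10.0      # noise standard deviation, assumed known
S0 = 50.0         # true signal at b = 0, assumed known for simplicity
D_true = 1.0e-3   # diffusion coefficient, mm^2/s
bvals = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])  # s/mm^2

# magnitude data |signal + complex Gaussian noise|, averaged over N acquisitions
N = 1000
true = S0 * np.exp(-bvals * D_true)
noise = rng.normal(0, sigma, (N, bvals.size)) + 1j * rng.normal(0, sigma, (N, bvals.size))
meas = np.abs(true + noise).mean(axis=0)

# least-squares fit of the measured signal to the Rician mean, which
# accounts for the noise-floor bias a naive exponential fit would ignore
def sse(D):
    nu = S0 * np.exp(-bvals * D)
    model = rice.mean(nu / sigma, scale=sigma)
    return float(np.sum((meas - model) ** 2))

D_hat = minimize_scalar(sse, bounds=(1e-5, 5e-3), method="bounded").x
```

    Fitting to the median (MD) or the maximum-probability point (MP) of the PDF follows the same pattern, with `rice.mean` swapped for `rice.median` or the mode.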

  18. Social Class and Education: Global Perspectives

    ERIC Educational Resources Information Center

    Weis, Lois, Ed.; Dolby, Nadine, Ed.

    2012-01-01

    "Social Class and Education: Global Perspectives" is the first empirically grounded volume to explore the intersections of class, social structure, opportunity, and education on a truly global scale. Fifteen essays from contributors representing the US, Europe, China, Latin America and other regions offer an unparalleled examination of how social…

  19. Consuming Globalization, Local Identities, and Common Experiences

    ERIC Educational Resources Information Center

    Filax, Gloria

    2004-01-01

    In articulating global and local forms of sexuality and their impact on how LGBT issues in education are conceptualised, the author explores three timely texts: (1) Dennis Altman's "Global Sex" (2000); (2) Vanessa Baird's "The No-Nonsense Guide to Sexual Diversity" (2001); and (3) an edited volume by Evelyn Blackwood and Saskia…

  20. Ensemble Averaged Conservation Equations for Multiphase, Multi-component, and Multi-material Flows

    SciTech Connect

    Ray A. Berry

    2003-08-01

    Many important “fluid” flows involve a combination of two or more materials having different properties. The multiple phases or components often exhibit relative motion among the phases or material classes. The microscopic motions of the individual constituents are complex and the solution to the micro-level evolutionary equations is difficult. Characteristic of such flows of multi-component materials is an uncertainty in the exact locations of the particular constituents at any particular time. For most practical purposes, it is not possible to exactly predict or measure the evolution of the details of such systems, nor is it even necessary or desirable. Instead, we are usually interested in more gross features of the motion, or the “average” behavior of the system. Here we present descriptive equations that will predict the evolution of this averaged behavior. Due to the complexities of interfaces and resultant discontinuities in fluid properties, as well as from physical scaling issues, it is essential to work with averaged quantities and parameters. We begin by tightening up, or more rigorously defining, our concept of an average. There are several types of averaging. The published literature predominantly contains two types of averaging: volume averaging [Whitaker 1999, Dobran 1991] and time averaging [Ishii 1975]. Occasionally combinations of the two are used. However, we utilize a more general approach by adopting what is known as ensemble averaging.
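    The ensemble-averaging idea can be made concrete with a toy two-phase field: averaging the phase-indicator function over many realizations yields the ensemble-averaged phase fraction at every point, with no volume or time window involved. A minimal sketch, assuming an invented random-interface ensemble (not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)

# ensemble of 1-D two-phase realizations: chi = 1 where phase A is present
n_realizations, n_cells = 5000, 100
x = np.linspace(0, 1, n_cells)

# in each realization, phase A occupies [0, L) with a random interface L ~ U(0.4, 0.6)
L = rng.uniform(0.4, 0.6, size=(n_realizations, 1))
chi = (x < L).astype(float)

# ensemble-averaged phase fraction alpha(x) = <chi(x)>: sharp interfaces in each
# realization become a smooth field after averaging
alpha = chi.mean(axis=0)
```

    Far from the interface region alpha is exactly 0 or 1; inside it, alpha varies smoothly, which is precisely the kind of averaged field the evolution equations are written for.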

  1. The average ionospheric electrodynamics for the different substorm phases

    SciTech Connect

    Kamide, Y.; Sun, W.; Akasofu, S.I.

    1996-01-01

    The average patterns of the electrostatic potential, current vectors, and Joule heating in the polar ionosphere, as well as the associated field-aligned currents, are determined for a quiet time, the growth phase, the expansion phase, the peak epoch, and the recovery phase of substorms. For this purpose, the Kamide-Richmond-Matsushita magnetogram-inversion algorithm is applied to a data set (for March 17, 18, and 19, 1978) from the six meridian magnetometer chains (the total number of magnetometer stations being 71) which were operated during the period of the International Magnetospheric Study (IMS). This is the first attempt at obtaining, on the basis of individual substorms, the average pattern of substorm quantities in the polar ionosphere for the different epochs. The main results are as follows: (1) The substorm-time current patterns over the entire polar region consist of two components. The first one is related to the two-cell convection pattern, and the second one is the westward electrojet in the dark sector which is related to the wedge current. (2) Time variations of the two components for the four substorm epochs are shown to be considerably different. (3) The dependence of these differences on the ionospheric electric field and the conductivities (Hall and Pedersen) is identified. (4) It is shown that the large-scale two-cell pattern in the electric potential is dominant during the growth phase of substorms. (5) The expansion phase is characterized by the appearance of a strong westward electrojet, which is added to the two-cell pattern. (6) The large-scale potential pattern becomes complicated during the recovery phase of substorms, but the two-cell pattern appears to be relatively dominant again during their late recovery as the wedge current subsides. These and many other earlier results are consistent with the present ones, which are more quantitatively and comprehensively demonstrated in this global study. 39 refs., 9 figs., 1 tab.

  2. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  3. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  4. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  5. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  6. Potential of high-average-power solid state lasers

    SciTech Connect

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  7. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    NASA Astrophysics Data System (ADS)

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-01

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
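    The protocol's dose estimate is, in outline, a product of the entrance skin exposure with the DgN conversion factor and the three modifying factors named above. The sketch below shows that multiplicative structure only; every numerical value in it is invented rather than taken from the study:

```python
# AGD as a product of entrance skin exposure and conversion/modifying factors.
# Structure follows the abstract; all numbers below are invented placeholders.
ese_R = 0.80       # entrance skin exposure (roentgen), hypothetical
DgN = 0.17         # dose conversion factor (rad/R), depends on HVL,
                   # compressed thickness, and fibroglandular content
phantom_f = 1.12   # homogeneous-phantom factor (abstract: up to ~12% effect)
volume_f = 1.03    # compressed-breast lateral-dimension factor (~3%)
anatomy_f = 1.01   # anatomical-feature factor (~1%)

agd_rad = ese_R * DgN * phantom_f * volume_f * anatomy_f
```

    The three modifying factors bracket the 12%, 3% and 1% sensitivities the study reports.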

  8. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    SciTech Connect

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-05

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.

  9. Decomposing global crop yield variability

    NASA Astrophysics Data System (ADS)

    Ben-Ari, Tamara; Makowski, David

    2014-11-01

    Recent food crises have highlighted the need to better understand the between-year variability of agricultural production. Although increasing future production seems necessary, the globalization of commodity markets suggests that the food system would also benefit from enhanced supply stability through a reduction in the year-to-year variability. Here, we develop an analytical expression decomposing global crop yield interannual variability into three informative components that quantify how evenly croplands are distributed in the world, the proportion of cultivated areas allocated to regions of above- or below-average variability, and the covariation between yields in distinct world regions. This decomposition is used to identify drivers of interannual yield variations for four major crops (i.e., maize, rice, soybean and wheat) over the period 1961-2012. We show that maize production is fairly spread out but marked by one prominent region with high levels of crop yield interannual variability (which encompasses the North American corn belt in the USA and Canada). In contrast, global rice yields have a small variability because, although spatially concentrated, much of the production is located in regions of below-average variability (i.e., South, Eastern and South Eastern Asia). Because of these contrasted land use allocations, an even cultivated land distribution across regions would reduce global maize yield variance, but increase the variance of global rice yield. Intermediate results are obtained for soybean and wheat, for which croplands are mainly located in regions with close-to-average variability. At the scale of large world regions, we find that covariances of regional yields have a negligible contribution to global yield variance. The proposed decomposition could be applied at any spatial and time scales, including the yearly time step. By addressing global crop production stability (or lack thereof) our results contribute to the understanding of a key
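    A decomposition of this kind rests on the exact identity Var(Σᵢ wᵢ yᵢ) = Σᵢⱼ wᵢ wⱼ Cov(yᵢ, yⱼ), where wᵢ are cropland area shares and yᵢ regional yields; a sketch with invented regional data verifies the identity numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# yearly yields (t/ha) for 3 hypothetical regions over 52 years; one region
# (the first) is deliberately given above-average variability
yields = rng.normal([3.0, 4.5, 2.5], [0.40, 0.15, 0.25], size=(52, 3))
w = np.array([0.5, 0.3, 0.2])   # cropland area shares, summing to 1

# global yield is the area-weighted sum of regional yields
global_yield = yields @ w

# decomposition: Var(sum_i w_i y_i) = sum_ij w_i w_j Cov(y_i, y_j)
cov = np.cov(yields, rowvar=False)
var_from_decomp = w @ cov @ w
```

    The diagonal terms of the quadratic form carry the regional variances (weighted by the squared area shares) and the off-diagonal terms the between-region covariances, which is exactly the split the paper analyzes.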

  10. Local origin of global contact numbers in frictional ellipsoid packings.

    PubMed

    Schaller, Fabian M; Neudecker, Max; Saadatfar, Mohammad; Delaney, Gary W; Schröder-Turk, Gerd E; Schröter, Matthias

    2015-04-17

    In particulate soft matter systems, the average number of contacts Z of a particle is an important predictor of the mechanical properties of the system. Using x-ray tomography, we analyze packings of frictional, oblate ellipsoids of various aspect ratios α, prepared at different global volume fractions ϕg. We find that Z is a monotonically increasing function of ϕg for all α. We demonstrate that this functional dependence can be explained by a local analysis where each particle is described by its local volume fraction ϕl computed from a Voronoi tessellation. Z can be expressed as an integral over all values of ϕl: Z(ϕg,α,X)=∫Zl(ϕl,α,X)P(ϕl|ϕg)dϕl. The local contact number function Zl(ϕl,α,X) describes the relevant physics in terms of locally defined variables only, including possible higher order terms X. The conditional probability P(ϕl|ϕg) to find a specific value of ϕl given a global packing fraction ϕg is found to be independent of α and X. Our results demonstrate that for frictional particles a local approach is not only a theoretical requirement but also feasible. PMID:25933340

  11. Local Origin of Global Contact Numbers in Frictional Ellipsoid Packings

    NASA Astrophysics Data System (ADS)

    Schaller, Fabian M.; Neudecker, Max; Saadatfar, Mohammad; Delaney, Gary W.; Schröder-Turk, Gerd E.; Schröter, Matthias

    2015-04-01

    In particulate soft matter systems, the average number of contacts Z of a particle is an important predictor of the mechanical properties of the system. Using x-ray tomography, we analyze packings of frictional, oblate ellipsoids of various aspect ratios α, prepared at different global volume fractions ϕg. We find that Z is a monotonically increasing function of ϕg for all α. We demonstrate that this functional dependence can be explained by a local analysis where each particle is described by its local volume fraction ϕl computed from a Voronoi tessellation. Z can be expressed as an integral over all values of ϕl: Z(ϕg,α,X)=∫Zl(ϕl,α,X)P(ϕl|ϕg)dϕl. The local contact number function Zl(ϕl,α,X) describes the relevant physics in terms of locally defined variables only, including possible higher order terms X. The conditional probability P(ϕl|ϕg) to find a specific value of ϕl given a global packing fraction ϕg is found to be independent of α and X. Our results demonstrate that for frictional particles a local approach is not only a theoretical requirement but also feasible.
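    Given a local contact law Zl(ϕl) and the conditional distribution P(ϕl|ϕg), the global contact number follows by averaging Zl over that distribution, which is what the integral in the abstract states. Both ingredients below are invented stand-ins (the paper extracts them from tomography data):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical local contact-number law Z_l(phi_l), for illustration only
def Z_local(phi_l):
    return 4.0 + 12.0 * (phi_l - 0.55)

# stand-in for P(phi_l | phi_g): local volume fractions scattered around
# the global packing fraction
phi_g = 0.62
phi_l = rng.normal(phi_g, 0.03, size=100_000)

# Monte Carlo evaluation of Z(phi_g) = integral of Z_l(phi_l) P(phi_l|phi_g) d phi_l
Z_global = Z_local(phi_l).mean()
```

    For this linear law the integral equals Z_local evaluated at the mean of P(ϕl|ϕg), i.e. 4 + 12 × 0.07 = 4.84; a nonlinear Zl would make the width of P matter as well.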

  12. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
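    The APL computation itself is routine once a network is specified: run a breadth-first search from every node and average the distances over ordered pairs. A sketch on a 10-node cycle graph (a stand-in; the paper's dual dendrimers would simply supply a different adjacency list):

```python
from collections import deque

# average path length (APL) via BFS from every node, illustrated on a
# 10-node cycle; any other graph just needs a different adjacency list
n = 10
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def bfs_dists(src):
    """Shortest-path distances from src to all reachable nodes."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

total = sum(d for s in adj for d in bfs_dists(s).values())
apl = total / (n * (n - 1))   # average over ordered node pairs
```

    For the 10-node cycle this gives 25/9 ≈ 2.78; the paper derives how the analogous quantity grows (logarithmically) with the size of the dual dendrimer.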

  13. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  14. Cross-correlations between volume change and price change

    PubMed Central

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Stanley, H. Eugene

    2009-01-01

    In finance, one usually deals not with prices but with growth rates R, defined as the difference in logarithm between two consecutive prices. Here we consider not the trading volume, but rather the volume growth rate R̃, the difference in logarithm between two consecutive values of trading volume. To this end, we use several methods to analyze the properties of volume changes |R̃|, and their relationship to price changes |R|. We analyze 14,981 daily recordings of the Standard and Poor's (S & P) 500 Index over the 59-year period 1950–2009, and find power-law cross-correlations between |R| and |R̃| by using detrended cross-correlation analysis (DCCA). We introduce a joint stochastic process that models these cross-correlations. Motivated by the relationship between |R| and |R̃|, we estimate the tail exponent α̃ of the probability density function P(|R̃|) ∼ |R̃|^(−1−α̃) for both the S & P 500 Index as well as the collection of 1819 constituents of the New York Stock Exchange Composite Index on 17 July 2009. As a new method to estimate α̃, we calculate the time intervals τq between events where R̃ > q. We demonstrate that τ̃q, the average of τq, obeys τ̃q ∼ q^α̃. We find α̃ ≈ 3. Furthermore, by aggregating all τq values of 28 global financial indices, we also observe an approximate inverse cubic law. PMID:20018772
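    The return-interval method for the tail exponent is easy to reproduce on synthetic data: for an i.i.d. series with P(X > q) = q^(−α), the mean interval between exceedances of q is q^α, so the slope of log τ̃q versus log q recovers α. A sketch with a Pareto surrogate at α = 3, not the actual index data:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic magnitudes with tail P(X > q) = q^-3 (classic Pareto, x_m = 1)
x = rng.pareto(3.0, 1_000_000) + 1.0

# mean return interval between exceedances of each threshold q
qs = np.array([2.0, 4.0, 8.0])
taus = [np.diff(np.flatnonzero(x > q)).mean() for q in qs]

# slope of log(tau_q) versus log(q) estimates the tail exponent alpha
alpha_hat = np.polyfit(np.log(qs), np.log(taus), 1)[0]
```

    The fitted slope comes out close to 3, consistent with the inverse cubic law the paper reports; on real, correlated data the intervals cluster, but the scaling of their mean survives.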

  15. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
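    Fitting the Gaussian plume is the core of the conventional method: the lateral profile C(y) = C0 exp(−y²u/(4·Dy·x)) is matched to a measured cross-section by least squares. A noise-free sketch with invented flow parameters (not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic lateral concentration profile at one cross-section; the flow
# speed, distance, and diffusivity are assumed values for illustration
u, x = 0.10, 2.0                  # mean velocity (m/s), distance downstream (m)
Dy_true = 1.5e-4                  # lateral diffusivity (m^2/s)
y = np.linspace(-0.2, 0.2, 41)    # lateral coordinate (m)
sigma2 = 2.0 * Dy_true * x / u    # plume variance: sigma^2 = 2 D_y x / u
c = np.exp(-y**2 / (2.0 * sigma2))

# least-squares fit of the Gaussian plume C(y) = C0 exp(-y^2 u / (4 D_y x))
def plume(y, c0, Dy):
    return c0 * np.exp(-y**2 * u / (4.0 * Dy * x))

(c0_hat, Dy_hat), _ = curve_fit(plume, y, c, p0=[1.0, 1e-4])
```

    Repeating this fit at several cross-sections, as in the paper, exposes the downstream growth of the apparent diffusivity that motivates the global least-squares estimate.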

  16. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
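    The EM iteration for Gaussian BMA alternates responsibilities (E-step) with weight and variance updates (M-step). A self-contained sketch on synthetic forecasts; the mixture form follows the Gaussian BMA formulation cited above, but every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic training data: observations track model 1 on ~70% of days
T = 5000
f = np.stack([rng.normal(20, 5, T), rng.normal(20, 5, T)])  # 2 model forecasts
pick = rng.random(T) < 0.7
y = np.where(pick, f[0], f[1]) + rng.normal(0, 1.0, T)      # obs noise std = 1

# EM for the BMA weights w_k and a common predictive variance s2,
# for the mixture p(y) = sum_k w_k N(y; f_k, s2)
w, s2 = np.array([0.5, 0.5]), 4.0
for _ in range(200):
    dens = w[:, None] * np.exp(-(y - f) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    z = dens / dens.sum(axis=0)        # E-step: responsibilities
    w = z.mean(axis=1)                 # M-step: weights
    s2 = (z * (y - f) ** 2).sum() / T  # M-step: predictive variance
```

    On this well-specified example EM recovers weights near (0.7, 0.3) and a variance near 1; the paper's point is that an MCMC sampler such as DREAM additionally characterizes the uncertainty in these estimates.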

  17. Global militarization

    SciTech Connect

    Wallensteen, P.; Galtung, J.; Portales, C.

    1985-01-01

    This book contains 10 chapters. Some of the titles are: Military Formations and Social Formations: A Structural Analysis; Global Conflict Formations: Present Developments and Future Directions; War and the Power of Warmakers in Western Europe and Elsewhere, 1600-1980; and The Urban Type of Society and International War.

  18. Global Warming?

    ERIC Educational Resources Information Center

    Eichman, Julia Christensen; Brown, Jeff A.

    1994-01-01

    Presents information and data on an experiment designed to test whether different atmosphere compositions are affected by light and temperature during both cooling and heating. Although flawed, the experiment should help students appreciate the difficulties that researchers face when trying to find evidence of global warming. (PR)

  19. Global Education.

    ERIC Educational Resources Information Center

    McCoubrey, Sharon

    1994-01-01

    This theme issue focuses on topics related to global issues. (1) "Recycling for Art Projects" (Wendy Stephenson) gives an argument for recycling in the art classroom; (2) "Winds of Change: Tradition and Innovation in Circumpolar Art" (Bill Zuk and Robert Dalton) includes profiles of Alaskan Yupik artist, Larry Beck, who creates art from recycled…

  20. Campus Global.

    ERIC Educational Resources Information Center

    Sort, Josep

    2003-01-01

    Describes the development of the Campus Global portal at a public university in Spain. The project aimed to change the ways in which the university community worked, taught, and learned. Examines how the project was carried out, the transformations it instigated inside the organization, the improvements it has brought about, and the current state…

  1. Distribution of mesozooplankton biomass in the global ocean

    NASA Astrophysics Data System (ADS)

    Moriarty, R.; O'Brien, T. D.

    2012-09-01

    Mesozooplankton are cosmopolitan within the sunlit layers of the global ocean. They are important in the classical food web, having a significant feedback to primary production through their consumption of phytoplankton and microzooplankton. They are also the primary contributor to vertical particle flux in the oceans. Through both they affect the biogeochemical cycling of carbon and other nutrients in the oceans. Little, however, is known about their global distribution and biomass. While global maps of mesozooplankton biomass do exist in the literature they are usually in the form of hand-drawn maps and the original data associated with these maps are not readily available. The dataset presented in this synthesis has been in development since the late 1990s, is an integral part of the Coastal & Oceanic Plankton Ecology, Production, & Observation Database (COPEPOD), and is now also part of a wider community effort to provide a global picture of carbon biomass data for key plankton functional types, in particular to support the development of marine ecosystem models. A total of 153 163 biomass values were collected, from a variety of sources, for mesozooplankton. Of those, 2% were originally recorded as dry mass, 26% as wet mass, 5% as settled volume, and 68% as displacement volume. Using a variety of non-linear biomass conversions from the literature, the data have been converted from their original units to carbon biomass. Depth-integrated values were then used to calculate mesozooplankton global biomass. Global mesozooplankton biomass, to a depth of 200 m, had a mean of 5.9 μg C l-1, median of 2.7 μg C l-1 and a standard deviation of 10.6 μg C l-1. The global annual average estimate of mesozooplankton, based on the median value, was 0.19 Pg C. Biomass was highest in the Northern Hemisphere, but the general trend shows a slight decrease from polar oceans to temperate regions with values increasing again in the tropics.

  2. Quantitative Assessment of Global and Regional Air Trappings Using Non-Rigid Registration and Regional Specific Volume Change of Inspiratory/Expiratory CT Scans: Studies on Healthy Volunteers and Asthmatics

    PubMed Central

    Lee, Eunsol; Lee, Hyun Joo; Chae, Eun Jin; Lee, Sang Min; Oh, Sang Young; Kim, Namkug

    2015-01-01

    Objective The purpose of this study was to compare air trapping in healthy volunteers with asthmatics using pulmonary function test and quantitative data, such as specific volume change from paired inspiratory CT and registered expiratory CT. Materials and Methods Sixteen healthy volunteers and 9 asthmatics underwent paired inspiratory/expiratory CT. ΔSV, which represents the ratio of air fraction released after exhalation, was measured with paired inspiratory and anatomically registered expiratory CT scans. Air trapping indexes, ΔSV0.4 and ΔSV0.5, were defined as volume fraction of lung below 0.4 and 0.5 ΔSV, respectively. To assess the gravity effect of air-trapping, ΔSV values of anterior and posterior lung at three different levels were measured and ΔSV ratio of anterior lung to posterior lung was calculated. Color-coded ΔSV map of the whole lung was generated and visually assessed. Mean ΔSV, ΔSV0.4, and ΔSV0.5 were compared between healthy volunteers and asthmatics. In asthmatics, correlations between air trapping indexes and clinical parameters were assessed. Results Mean ΔSV, ΔSV0.4, and ΔSV0.5 in asthmatics were significantly higher than those in healthy volunteer group (all p < 0.05). ΔSV values in posterior lung in asthmatics were significantly higher than those in healthy volunteer group (p = 0.049). In asthmatics, air trapping indexes, such as ΔSV0.5 and ΔSV0.4, showed strong negative correlation with FEF25-75, FEV1, and FEV1/FVC. ΔSV map of asthmatics showed abnormal geographic pattern in 5 patients (55.6%) and disappearance of anterior-posterior gradient in 3 patients (33.3%). Conclusion Quantitative assessment of ΔSV (the ratio of air fraction released after exhalation) shows the difference in extent of air trapping between healthy volunteers and asthmatics. PMID:25995694
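
    The air-trapping indexes defined above reduce to a threshold count over the registered ΔSV map; a toy sketch with illustrative voxel values (the function name and data are assumptions for demonstration):

```python
def air_trapping_index(dsv_values, cutoff):
    """Fraction of lung voxels whose specific-volume change (dSV)
    falls below `cutoff` -- e.g. cutoff=0.4 gives the dSV0.4 index."""
    below = sum(1 for v in dsv_values if v < cutoff)
    return below / len(dsv_values)

# Illustrative voxel-wise dSV values from a registered insp/exp pair
voxels = [0.1, 0.3, 0.35, 0.45, 0.5, 0.6, 0.7, 0.8]
print(air_trapping_index(voxels, 0.4))  # 0.375
print(air_trapping_index(voxels, 0.5))  # 0.5
```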

  3. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  4. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  5. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  6. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  7. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  8. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  9. Averaged equations for an isothermal, developing flow of a fluid-solid mixture

    SciTech Connect

    Rajagopal, K.R.; Johnson, G.; Massoudi, M.

    1996-03-01

    A mathematical description of a flowing fluid with entrained particulate solids is presented within the context of Mixture Theory. The mixture is considered to consist of a linearly viscous fluid and a granular solid. The balance of mass and balance of linear momentum equations for each component are averaged over the cross section of the flow to obtain ordinary differential equations describing developing flow between parallel plates. The resulting coupled equations describe the variation of the average velocities and volume fraction in the direction of flow, and represent a simplified approximate set of equations which are easier to use in engineering applications.

  10. Vector light shift averaging in paraffin-coated alkali vapor cells

    NASA Astrophysics Data System (ADS)

    Zhivun, Elena; Wickenbrock, Arne; Sudyka, Julia; Patton, Brian; Pustelny, Szymon; Budker, Dmitry

    2016-05-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  11. Technical Report Series on Global Modeling and Data Assimilation. Volume 40; Soil Moisture Active Passive (SMAP) Project Assessment Report for the Beta-Release L4_SM Data Product

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Liu, Qing; Colliander, Andreas; Conaty, Austin; Jackson, Thomas; Kimball, John

    2015-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public beta release scheduled for 30 October 2015. The primary objective of the beta release is to allow users to familiarize themselves with the data product before the validated product becomes available. The beta release also allows users to conduct their own assessment of the data and to provide feedback to the L4_SM science data product team. The assessment of the L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to upscaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 1 validation and supports the beta release of the data. 
The validation against

  12. Arithmetic averaging: A versatile technique for smoothing and trend removal

    SciTech Connect

    Clark, E.L.

    1993-12-31

    Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
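
    The three types of averages discussed can be sketched in a few lines; the moving-average example illustrates the trend-removal effect, using synthetic data chosen so the oscillation period matches the window width:

```python
import math

def simple_average(x):
    """One number summarizing the whole record."""
    return sum(x) / len(x)

def cumulative_average(x):
    """Running mean of all samples seen so far."""
    out, s = [], 0.0
    for i, v in enumerate(x, 1):
        s += v
        out.append(s / i)
    return out

def moving_average(x, m):
    """Boxcar of width m: a low-pass filter that nulls any component
    whose period divides the window length."""
    return [sum(x[i:i + m]) / m for i in range(len(x) - m + 1)]

# Linear trend plus a period-4 oscillation: a width-4 moving average
# removes the oscillation and keeps the (smoothed) trend.
x = [i + math.sin(math.pi * i / 2) for i in range(8)]
ma = moving_average(x, 4)
print(ma)  # ~[1.5, 2.5, 3.5, 4.5, 5.5]: the oscillation is gone
```

    Conversely, subtracting the moving average from the original series removes the trend and leaves the oscillation, which is the other use the paper describes.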

  13. A Global Climate Model for Instruction.

    ERIC Educational Resources Information Center

    Burt, James E.

    This paper describes a simple global climate model useful in a freshman or sophomore level course in climatology. There are three parts to the paper. The first part describes the model, which is a global model of surface air temperature averaged over latitude and longitude. Samples of the types of calculations performed in the model are provided.…

  14. Technical Report Series on Global Modeling and Data Assimilation. Volume 42; Soil Moisture Active Passive (SMAP) Project Calibration and Validation for the L4_C Beta-Release Data Product

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2015-01-01

    During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.

  15. Seismic damage before eruptions as a tool to map pre-eruptive mechanics: worldwide average patterns

    NASA Astrophysics Data System (ADS)

    Schmid, A.; Grasso, J. R.

    2010-12-01

    than the 0.80 value recovered for earthquakes. For VEI>4 eruptions, the 1.42 p-value of foreshock sequences is larger than the 0.97 value recovered for earthquakes; iii) The departure from the background rate appears to be earlier for the largest VEI eruptions. Those differences relative to earthquake foreshocks are the signature of magma forcing. They are related either to changes in forcing rates, or to specific medium properties (temperature, presence of fluids) around the volcanoes. Second, we applied the same analysis at a smaller space scale using the VT seismicity prior to 13 eruptions on a single volcano, Piton de la Fournaise, La Réunion Island. We analysed stacked time series of seismicity prior to 3 classes of eruption volumes (less than 15×10^6 m^3, more than 15×10^6 m^3, and all eruptions). We find that the p-value of average foreshocks to eruptions also increases with the eruption volume, similarly to the pre-eruptive patterns of worldwide eruptions. This result suggests some volume predictability of eruptions and offers new perspectives for volcanic hazard assessment. We will discuss the physical processes that possibly drive such patterns.

  16. Normal age-related brain morphometric changes: nonuniformity across cortical thickness, surface area and gray matter volume?

    PubMed

    Lemaitre, Herve; Goldman, Aaron L; Sambataro, Fabio; Verchinski, Beth A; Meyer-Lindenberg, Andreas; Weinberger, Daniel R; Mattay, Venkata S

    2012-03-01

    Normal aging is accompanied by global as well as regional structural changes. While these age-related changes in gray matter volume have been extensively studied, less has been done using newer morphological indexes, such as cortical thickness and surface area. To this end, we analyzed structural images of 216 healthy volunteers, ranging from 18 to 87 years of age, using a surface-based automated parcellation approach. Linear regressions of age revealed a concomitant global age-related reduction in cortical thickness, surface area and volume. Cortical thickness and volume collectively confirmed the vulnerability of the prefrontal cortex, whereas in other cortical regions, such as in the parietal cortex, thickness was the only measure sensitive to the pronounced age-related atrophy. No cortical regions showed more surface area reduction than the global average. The distinction between these morphological measures may provide valuable information to dissect age-related structural changes of the brain, with each of these indexes probably reflecting specific histological changes occurring during aging. PMID:20739099

  17. Global Mental Health: An Introduction.

    PubMed

    Verdeli, Helen

    2016-08-01

    In this introductory paper to the Global Mental Health volume, the inception and development of the field over the last 15 years are reviewed, placing an emphasis on a series of pivotal turning points. A critical delivery strategy, task-shifting, is briefly described, as well as the fundamental principles of Interpersonal Psychotherapy (IPT), an evidence-based psychotherapy being adapted and delivered in low-resource settings. Nine case studies by the trainees, supervisors, or local providers from India, the United States, Haiti, Israel, Colombia, and Kenya, presented in this volume, illustrate the prevention and treatment processes or in-depth assessment of "psychological distress" as locally defined and expressed. PMID:27532521

  18. Panwapa: Global Kids, Global Connections

    ERIC Educational Resources Information Center

    Berson, Ilene R.; Berson, Michael J.

    2009-01-01

    Panwapa, created by the Sesame Street Workshop of PBS, is an example of an initiative on the Internet designed to enhance students' learning by exposing them to global communities. Panwapa means "Here on Earth" in Tshiluba, a Bantu language spoken in the Democratic Republic of Congo. At the Panwapa website, www.panwapa.org, children aged four to…

  19. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
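
    The sensitivity to wind-speed averaging follows from the convexity of common transfer-velocity parameterizations: averaging the winds before applying a nonlinear k(u) is not the same as averaging the fluxes. A small synthetic demonstration (the quadratic k = 0.31·u² form is one common Wanninkhof-style choice, and the wind statistics are invented, not the paper's model output):

```python
import random

def transfer_velocity(u10):
    # Quadratic wind-speed dependence of the gas transfer velocity,
    # k = 0.31 * u10**2 (Schmidt-number factor omitted for simplicity)
    return 0.31 * u10 ** 2

random.seed(42)
# Synthetic "20-minute instantaneous" winds over a month (m/s)
winds = [max(0.0, random.gauss(7.0, 3.0)) for _ in range(1000)]

flux_from_instantaneous = sum(transfer_velocity(u) for u in winds) / len(winds)
flux_from_mean_wind = transfer_velocity(sum(winds) / len(winds))

# k(u) is convex, so by Jensen's inequality averaging the winds first
# underestimates the mean flux whenever the wind varies.
print(flux_from_instantaneous / flux_from_mean_wind)  # ratio > 1
```

    The size of the underestimate grows with wind variability, which is consistent with the paper's finding that the largest discrepancies occur in cells with strongly fluctuating winds.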

  20. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGESBeta

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  1. Global carbon budget 2013

    NASA Astrophysics Data System (ADS)

    Le Quéré, C.; Peters, G. P.; Andres, R. J.; Andrew, R. M.; Boden, T.; Ciais, P.; Friedlingstein, P.; Houghton, R. A.; Marland, G.; Moriarty, R.; Sitch, S.; Tans, P.; Arneth, A.; Arvanitis, A.; Bakker, D. C. E.; Bopp, L.; Canadell, J. G.; Chini, L. P.; Doney, S. C.; Harper, A.; Harris, I.; House, J. I.; Jain, A. K.; Jones, S. D.; Kato, E.; Keeling, R. F.; Klein Goldewijk, K.; Körtzinger, A.; Koven, C.; Lefèvre, N.; Omar, A.; Ono, T.; Park, G.-H.; Pfeil, B.; Poulter, B.; Raupach, M. R.; Regnier, P.; Rödenbeck, C.; Saito, S.; Schwinger, J.; Segschneider, J.; Stocker, B. D.; Tilbrook, B.; van Heuven, S.; Viovy, N.; Wanninkhof, R.; Wiltshire, A.; Zaehle, S.; Yue, C.

    2013-11-01

    0.5 GtC yr-1, 2.2% above 2011, reflecting a continued trend in these emissions; GATM was 5.2 ± 0.2 GtC yr-1, SOCEAN was 2.9 ± 0.5 GtC yr-1, and assuming an ELUC of 0.9 ± 0.5 GtC yr-1 (based on the 2001-2010 average), SLAND was 2.5 ± 0.9 GtC yr-1. GATM was high in 2012 compared to the 2003-2012 average, almost entirely reflecting the high EFF. The global atmospheric CO2 concentration reached 392.52 ± 0.10 ppm on average over 2012. We estimate that EFF will increase by 2.1% (1.1-3.1%) to 9.9 ± 0.5 GtC in 2013, 61% above emissions in 1990, based on projections of World Gross Domestic Product and recent changes in the carbon intensity of the economy. With this projection, cumulative emissions of CO2 will reach about 550 ± 60 GtC for 1870-2013, 70% from EFF (390 ± 20 GtC) and 30% from ELUC (160 ± 55 GtC). This paper is intended to provide a baseline to keep track of annual carbon budgets in the future. All data presented here can be downloaded from the Carbon Dioxide Information Analysis Center (10.3334/CDIAC/GCP_2013_v1.1).
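
    The quoted terms close the standard global carbon budget identity EFF + ELUC = GATM + SOCEAN + SLAND, with the land sink diagnosed as the residual. A quick arithmetic check (EFF = 9.7 GtC/yr is an inference, since the abstract's opening is truncated; it is consistent with the projected 2.1% rise to 9.9 GtC in 2013):

```python
def residual_land_sink(eff, eluc, gatm, socean):
    # Global carbon budget closure: EFF + ELUC = GATM + SOCEAN + SLAND,
    # so the land sink is the residual of the other four terms.
    return eff + eluc - gatm - socean

# 2012 values quoted in the abstract (GtC/yr); EFF = 9.7 inferred as above
sland = residual_land_sink(9.7, 0.9, 5.2, 2.9)
print(round(sland, 1))  # 2.5, matching the SLAND quoted in the abstract
```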

  2. A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex.

    PubMed

    Van Essen, David C

    2005-11-15

    This report describes a new electronic atlas of human cerebral cortex that provides a substrate for a wide variety of brain-mapping analyses. The Population-Average, Landmark- and Surface-based (PALS) atlas approach involves surface-based and volume-based representations of cortical shape, each available as population averages and as individual subject data. The specific PALS-B12 atlas introduced here is derived from structural MRI volumes of 12 normal young adults. Accurate cortical surface reconstructions were generated for each hemisphere, and the surfaces were inflated, flattened, and mapped to standard spherical configurations using SureFit and Caret software. A target atlas sphere was generated by averaging selected landmark contours from each of the 24 contributing hemispheres. Each individual hemisphere was deformed to this target using landmark-constrained surface registration. The utility of the resultant PALS-B12 atlas was demonstrated using a variety of analyses. (i) Probabilistic maps of sulcal identity were generated using both surface-based registration (SBR) and conventional volume-based registration (VBR). The SBR approach achieved markedly better consistency of sulcal alignment than did VBR. (ii) A method is introduced for 'multi-fiducial mapping' of volume-averaged group data (e.g., fMRI data, probabilistic architectonic maps) onto each individual hemisphere in the atlas, followed by spatial averaging across the individual maps. This yielded a population-average surface representation that circumvents the biases inherent in choosing any single hemisphere as a target. (iii) Surface-based and volume-based morphometry applied to maps of sulcal depth and sulcal identity demonstrated prominent left-right asymmetries in and near the superior temporal sulcus and Sylvian fissure. Moreover, shape variability in the temporal lobe is significantly greater in the left than the right hemisphere. The PALS-B12 atlas has been registered to other surface

  3. Going Global

    ERIC Educational Resources Information Center

    Boulard, Garry

    2010-01-01

    In a move to increase its out-of-state and international student enrollment, officials at the University of Iowa are stepping up their global recruitment efforts--even in the face of criticism that the school may be losing sight of its mission. The goal is to increase enrollment across the board, with both in-state as well as out-of-state and…

  4. Scaling of average weighted shortest path and average receiving time on weighted expanded Koch networks

    NASA Astrophysics Data System (ADS)

    Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng

    2014-04-01

    Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step each node of the current generation produces m r-polygons that include the node and whose weighted edges are scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping problem on these networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
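
    For any small weighted graph, the AWSP can be computed directly by averaging Dijkstra distances over ordered node pairs, which is the quantity the paper derives closed forms for. A generic sketch on a toy triangle (not a Koch network; graph and names are illustrative):

```python
import heapq

def dijkstra(adj, src):
    """Single-source weighted shortest-path distances."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def average_weighted_shortest_path(adj):
    """AWSP: mean of d(s, t) over all ordered pairs s != t."""
    nodes = list(adj)
    n = len(nodes)
    total = sum(d for s in nodes
                for t, d in dijkstra(adj, s).items() if t != s)
    return total / (n * (n - 1))

# Toy weighted triangle: a-b = 1, b-c = 0.5, a-c = 2; the shortest
# a-c route detours through b (length 1.5), so AWSP = 6/6 = 1.0.
adj = {
    "a": [("b", 1.0), ("c", 2.0)],
    "b": [("a", 1.0), ("c", 0.5)],
    "c": [("a", 2.0), ("b", 0.5)],
}
print(average_weighted_shortest_path(adj))  # 1.0
```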

  5. Global Arrays

    SciTech Connect

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  6. Global Arrays

    Energy Science and Technology Software Center (ESTSC)

    2006-02-23

    The Global Arrays (GA) toolkit provides an efficient and portable “shared-memory” programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of the high performance computers and acknowledges that access to a remote portion of the shared data is slower than to the local portion. The locality information for the shared data is available, and a direct access to the local portions of shared data is provided. Global Arrays have been designed to complement rather than substitute for the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries. Global Arrays are compatible with the Message Passing Interface (MPI).

  7. Genetic and environmental contributions to the relationships between brain structure and average lifetime cigarette use.

    PubMed

    Prom-Wormley, Elizabeth; Maes, Hermine H M; Schmitt, J Eric; Panizzon, Matthew S; Xian, Hong; Eyler, Lisa T; Franz, Carol E; Lyons, Michael J; Tsuang, Ming T; Dale, Anders M; Fennema-Notestine, Christine; Kremen, William S; Neale, Michael C

    2015-03-01

    Chronic cigarette use has been consistently associated with differences in the neuroanatomy of smokers relative to nonsmokers in case-control studies. However, the etiology underlying the relationships between brain structure and cigarette use is unclear. A community-based sample of male twin pairs ages 51-59 (110 monozygotic pairs, 92 dizygotic pairs) was used to determine the extent to which there are common genetic and environmental influences between brain structure and average lifetime cigarette use. Brain structure was measured by high-resolution structural magnetic resonance imaging, from which subcortical volume and cortical volume, thickness and surface area were derived. Bivariate genetic models were fitted between these measures and average lifetime cigarette use measured as cigarette pack-years. Widespread, negative phenotypic correlations were detected between cigarette pack-years and several cortical as well as subcortical structures. Shared genetic and unique environmental factors contributed to the phenotypic correlations shared between cigarette pack-years and subcortical volume as well as cortical volume and surface area. Brain structures involved in many of the correlations were previously reported to play a role in specific aspects of networks of smoking-related behaviors. These results provide evidence for conducting future research on the etiology of smoking-related behaviors using measures of brain morphology. PMID:25690561

  8. Tortuosity and the Averaging of Microvelocity Fields in Poroelasticity.

    PubMed

    Souzanchi, M F; Cardoso, L; Cowin, S C

    2013-03-01

    The relationship between the macro- and microvelocity fields in a poroelastic representative volume element (RVE) has not been fully investigated. This relationship is considered to be a function of the tortuosity: a quantitative measure of the effect of the deviation of the pore fluid streamlines from straight (not tortuous) paths in fluid-saturated porous media. There are different expressions for tortuosity based on the deviation from straight pores, harmonic wave excitation, or a kinetic energy loss analysis. The objective of the work presented is to determine the best expression for the tortuosity of a multiply interconnected open pore architecture in anisotropic porous media. The procedures for averaging the pore microvelocity over the RVE of poroelastic media by Coussy and by Biot were reviewed as part of this study, and the significant connection between these two procedures was established. Success was achieved in identifying the Coussy kinetic energy loss in the pore fluid approach as the most attractive expression for the tortuosity of porous media based on pore fluid viscosity, porosity, and the pore architecture. The fabric tensor, a 3D measure of the architecture of the pore structure, was introduced in the expression of the tortuosity tensor for anisotropic porous media. Practical considerations for the measurement of the key parameters in the models of Coussy and Biot are discussed. In this study, we used cancellous bone as an example of interconnected pores and as a motivator, but the results achieved are much more general and have a far broader application than just to cancellous bone. PMID:24891725

  9. An investigation of trends in precipitation volume for the last three decades in different regions of Fars province, Iran

    NASA Astrophysics Data System (ADS)

    Ahani, Hossein; Kherad, Mehrzad; Kousari, Mohammad Reza; Rezaeian-Zadeh, Mehdi; Karampour, Mohammad Amin; Ejraee, Faezeh; Kamali, Saeedeh

    2012-08-01

    Under conditions of climate change such as global warming, monitoring and detecting trends in precipitation volume is essential and useful for the agricultural sector. Given that little research had addressed precipitation volume, this study aimed to determine monthly and annual trends in precipitation volume in different regions of Fars province over the last three decades (a 33-year period, 1978-2010). Fars province is located in the arid and semi-arid regions of Iran, and it plays an important role in agricultural production. The inverse distance weighting interpolation method was used to provide precipitation data for all regions. To analyze the trends in precipitation volume, the Mann-Kendall test, Sen's slope estimator, and a 10-year moving average low-pass filter (within time series) were used. Negative trends were identified by both the Sen's slope estimator and the Mann-Kendall test; however, all trends were insignificant at the surveyed confidence level (95%). Application of the 10-year moving average low-pass filter revealed a considerable decreasing trend after about 1994. Since one of the most important restrictions on agricultural development in Fars province is the lack of sufficient water resources, any trend toward less precipitation imposes considerable pressure and stress on these valuable resources and, subsequently, on agricultural production.
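    The trend machinery named in this abstract can be sketched in a few lines. This is a simplified version (no tie correction in the variance term, and the function name is illustrative, not from the paper):

    ```python
    import itertools
    import math
    import statistics

    def mann_kendall_sens(series):
        """Mann-Kendall trend statistic S, its normal-approximation z-score
        (continuity-corrected, no tie correction), and Sen's slope for an
        evenly spaced (e.g. annual) series."""
        n = len(series)
        pairs = list(itertools.combinations(range(n), 2))
        # S: number of increasing pairs minus number of decreasing pairs
        s = sum((series[j] > series[i]) - (series[j] < series[i]) for i, j in pairs)
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - math.copysign(1, s)) / math.sqrt(var_s) if s != 0 else 0.0
        # Sen's slope: median of all pairwise slopes
        slope = statistics.median((series[j] - series[i]) / (j - i) for i, j in pairs)
        return s, z, slope
    ```

    At the 95% confidence level used in the study, a trend is significant when |z| exceeds 1.96; a 10-year moving average would additionally be applied to the series before visual inspection.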

  10. Topics in Culture Learning, Volume 5.

    ERIC Educational Resources Information Center

    Brislin, Richard W., Ed.; Hamnett, Michael P., Ed.

    The first section of this volume includes articles on cross-cultural teaching: "Mau Piailug's Navigation of Hokule'a from Hawaii to Tahiti," by David Lewis; "The New World Order and the Globalization of Social Science: Some Implications for Teaching Cross-Culturally," by Amarjit Singh; "Ponape: Cross-Cultural Contact, Formal Schooling, and Foreign…

  11. Environment Abstracts Annual 1988. Volume 18.

    ERIC Educational Resources Information Center

    Yuster, Leigh C., Ed.; And Others

    This publication is a compilation of environmental information and resources for the year 1988. The first section details the coverage and use of this volume. Section 2 contains a review of events in 1988; a chronology of events; a status report produced for Congress; three articles on environmental issues including global change, pesticides, and…

  12. Estimating Volume of Martian Valleys Using Axelsson Algorithm

    NASA Astrophysics Data System (ADS)

    Jung, J. H.; Kim, C. J.; Heo, J.; Luo, W.

    2012-03-01

    A progressive TIN densification algorithm is adapted to estimate the volume of Martian valley networks (VN) based on MOLA point data. This method can be used to estimate the global water inventory associated with the VN.

  13. Global sea level rise

    SciTech Connect

    Douglas, B.C.

    1991-04-15

    Published values for the long-term, global mean sea level rise determined from tide gauge records exhibit considerable scatter, from about 1 mm to 3 mm/yr. This disparity is not attributable to instrument error; long-term trends computed at adjacent sites often agree to within a few tenths of a millimeter per year. Instead, the differing estimates of global sea level rise appear to be in large part due to authors' using data from gauges located at convergent tectonic plate boundaries, where changes of land elevation give fictitious sea level trends. In addition, virtually all gauges undergo subsidence or uplift due to postglacial rebound (PGR) from the last deglaciation at a rate comparable to or greater than the secular rise of sea level. Modeling PGR by the ICE-3G model of Tushingham and Peltier (1991) and avoiding tide gauge records in areas of converging tectonic plates produces a highly consistent set of long sea level records. The value for mean sea level rise obtained from a global set of 21 such stations in nine oceanic regions with an average record length of 76 years during the period 1880-1980 is 1.8 ± 0.1 mm/yr. This result provides confidence that carefully selected long tide gauge records measure the same underlying trend of sea level and that many old tide gauge records are of very high quality.
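    The per-gauge rate underlying such comparisons is an ordinary least-squares fit of annual-mean sea level against time. A minimal sketch (the function and variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def sea_level_trend(years, msl_mm):
        """Least-squares linear trend (mm/yr) of annual-mean sea level,
        with the standard error of the slope."""
        years = np.asarray(years, dtype=float)
        msl = np.asarray(msl_mm, dtype=float)
        slope, intercept = np.polyfit(years, msl, 1)
        resid = msl - (slope * years + intercept)
        n = len(years)
        # Standard error of the slope from the residual variance
        se = np.sqrt(resid @ resid / (n - 2) / np.sum((years - years.mean()) ** 2))
        return slope, se
    ```

    Records of the ~76-year lengths used in the study keep the standard error of the slope down to a few tenths of a mm/yr, which is why adjacent long records agree so closely.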

  14. Global protected area impacts

    PubMed Central

    Joppa, Lucas N.; Pfaff, Alexander

    2011-01-01

    Protected areas (PAs) dominate conservation efforts. They will probably play a role in future climate policies too, as global payments may reward local reductions of loss of natural land cover. We estimate the impact of PAs on natural land cover within each of 147 countries by comparing outcomes inside PAs with outcomes outside. We use ‘matching’ (or ‘apples to apples’) for land characteristics to control for the fact that PAs very often are non-randomly distributed across their national landscapes. Protection tends towards land that, if unprotected, is less likely than average to be cleared. For 75 per cent of countries, we find protection does reduce conversion of natural land cover. However, for approximately 80 per cent of countries, our global results also confirm (following smaller-scale studies) that controlling for land characteristics reduces estimated impact by half or more. This shows the importance of controlling for at least a few key land characteristics. Further, we show that impacts vary considerably within a country (i.e. across a landscape): protection achieves less on lands far from roads, far from cities and on steeper slopes. Thus, while planners are, of course, constrained by other conservation priorities and costs, they could target higher impacts to earn more global payments for reduced deforestation. PMID:21084351

  15. Cost averaging techniques for robust control of flexible structural systems

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  16. Transfer factor, lung volumes, resistance and ventilation distribution in healthy adults.

    PubMed

    Verbanck, Sylvia; Van Muylem, Alain; Schuermans, Daniel; Bautmans, Ivan; Thompson, Bruce; Vincken, Walter

    2016-01-01

    Monitoring of chronic lung disease requires reference values of lung function indices, including putative markers of small airway function, spanning a wide age range. We measured spirometry, transfer factor of the lung for carbon monoxide (TLCO), static lung volume, resistance and ventilation distribution in a healthy population, studying at least 20 subjects per sex and per decade between the ages of 20 and 80 years. With respect to the Global Lung Function Initiative reference data, our subjects had average z-scores for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC of -0.12, 0.04 and -0.32, respectively. Reference equations were obtained which could account for a potential dependence of index variability on age and height. This was done for (but not limited to) indices that are pertinent to asthma and chronic obstructive pulmonary disease studies: forced expired volume in 6 s, forced expiratory flow, TLCO, specific airway conductance, residual volume (RV)/total lung capacity (TLC), and ventilation heterogeneity in acinar and conductive lung zones. Deterioration in acinar ventilation heterogeneity and lung clearance index with age were more marked beyond 60 years, and conductive ventilation heterogeneity showed the greatest increase in variability with age. The most clinically relevant deviation from published reference values concerned RV/TLC values, which were considerably smaller than American Thoracic Society/European Respiratory Society-endorsed reference values. PMID:26585426

  17. Average areal water equivalent of snow in a mountain basin using microwave and visible satellite data

    NASA Technical Reports Server (NTRS)

    Rango, Albert; Van Katwijk, Victor F.; Martinec, Jaroslav; Chang, Alfred T. C.; Foster, James L.

    1989-01-01

    Satellite microwave data were used to evaluate the average areal water equivalent of snow cover in the mountainous Rio Grande basin of Colorado. Areal water equivalent data for the basin were obtained from contoured values of point measurements and from zonal water volume values generated by a snowmelt runoff model. Comparison of these snow water equivalent values shows the model values to consistently exceed the contoured values, probably because of the narrow elevation range in the lower part of the basin where the point measurements are concentrated. A significant relationship between the difference in microwave brightness temperatures at two different wavelengths and a basin-wide average snow water equivalent value is obtained. The average water equivalent of the snow cover in the basin was derived from differences of the microwave brightness temperatures.

  18. Average areal water equivalent of snow in a mountain basin using microwave and visible satellite data

    NASA Technical Reports Server (NTRS)

    Rango, A.; Martinec, J.; Chang, A. T. C.; Foster, J. L.; Vankatwijk, V.

    1988-01-01

    Satellite microwave data were used to evaluate the average areal water equivalent of snow cover in the mountainous Rio Grande basin of Colorado. Areal water equivalent data for the basin were obtained from contoured values of point measurements and from zonal water volume values generated by a snowmelt runoff model. Comparison of these snow water equivalent values shows the model values to consistently exceed the contoured values, probably because of the narrow elevation range in the lower part of the basin where the point measurements are concentrated. A significant relationship between the difference in microwave brightness temperatures at two different wavelengths and a basin-wide average snow water equivalent value is obtained. The average water equivalent of the snow cover in the basin was derived from differences of the microwave brightness temperatures.

  19. 10 CFR 63.332 - Representative volume.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accessible environment; (2) Its position and dimensions in the aquifer are determined using average... dimensions of the representative volume. The DOE must propose its chosen method, and any underlying assumptions, to NRC for approval. (1) DOE may calculate the dimensions as a well-capture zone. If DOE...

  20. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  1. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  2. 76 FR 57081 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...

  3. 78 FR 16711 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...

  4. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an inmate in a Community Corrections...

  5. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Radial average terrain elevation is calculated as the average of the elevation along a straight line path... radial path extends over foreign territory or water, such portion must not be included in the computation of average elevation unless the radial path again passes over United States land between 16 and...

  6. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  7. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  8. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  9. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  10. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  11. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  12. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  13. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  14. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  15. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  16. 27 CFR 19.37 - Average effective tax rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate... effective tax rate. (a) The proprietor may establish an average effective tax rate for any eligible... recompute the average effective tax rate so as to include only the immediately preceding 6-month period....

  17. Multivariate volume rendering

    SciTech Connect

    Crawfis, R.A.

    1996-03-01

    This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state of the art in volume rendering to include nonhomogeneous volume representations, that is, volume rendering of materials with very fine detail (e.g., translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.

  18. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
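    The split into local products and a consensus-averaged global step can be sketched numerically. In this toy version (the node count, ring graph, mixing weights, and all variable names are illustrative assumptions, not the paper's GPM), each node holds a local covariance R_i, computes R_i v locally, and average consensus recovers the network-wide product before the power-method normalization:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 8, 4                     # sensing nodes, signal dimension

    # Each node i holds a local sample covariance R_i; the target is the
    # top eigenvalue of the network-wide average of the R_i.
    R_local = []
    for _ in range(N):
        X = rng.normal(size=(200, d)) @ np.diag([3.0, 1.0, 0.5, 0.2])
        R_local.append(X.T @ X / 200)
    R_global = sum(R_local) / N

    # Doubly stochastic mixing matrix for a ring graph: repeated
    # multiplication by W drives every row of a stacked matrix to the
    # network average (this is the average-consensus step).
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

    v, lam = np.ones(d) / np.sqrt(d), 0.0
    for _ in range(60):                               # power iterations
        Y = np.stack([R_i @ v for R_i in R_local])    # local products R_i v
        for _ in range(200):                          # consensus rounds
            Y = W @ Y
        w = Y[0]                   # every node now holds ~ (1/N) sum_i R_i v
        lam = np.linalg.norm(w)    # eigenvalue estimate at convergence
        v = w / lam                # normalized eigenvector estimate
    ```

    At convergence, `lam` approximates the largest eigenvalue of the averaged covariance without any node ever assembling the full matrix; the consensus rounds are what replace a fusion center.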

  19. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming. PMID:17080865
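    The remote-first lookup described in the abstract (local memory, then peer nodes, then disk) can be sketched as a toy model. The class and tier names here are invented for illustration; the real system operates on volume bricks/pages over Gigabit Ethernet:

    ```python
    class RoamingCache:
        """Toy hierarchical page cache: local memory first, then peers'
        memories, and only on a network-wide miss the (slow) disk tier."""

        def __init__(self, node_id, peers, disk):
            self.node_id = node_id
            self.local = {}      # page_id -> data resident on this node
            self.peers = peers   # other nodes' RoamingCache instances
            self.disk = disk     # page_id -> data, slowest tier

        def fetch(self, page_id):
            if page_id in self.local:            # fastest path
                return self.local[page_id], "local"
            for peer in self.peers:              # remote memory before disk
                if page_id in peer.local:
                    data = peer.local[page_id]
                    self.local[page_id] = data   # keep a local copy
                    return data, "peer"
            data = self.disk[page_id]            # miss everywhere: go to disk
            self.local[page_id] = data
            return data, "disk"
    ```

    The reported 4x speedup comes precisely from the "peer" branch being taken instead of the "disk" branch whenever another node already holds the page.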

  20. Disc Volume Reduction with Percutaneous Nucleoplasty in an Animal Model

    PubMed Central

    Kasch, Richard; Mensel, Birger; Schmidt, Florian; Ruetten, Sebastian; Barz, Thomas; Froehlich, Susanne; Seipel, Rebecca; Merk, Harry R.; Kayser, Ralph

    2012-01-01

    Study Design We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). Purpose To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. Methods We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. Results The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). Conclusions Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs. PMID:23209677

  1. Size and average density spectra of macromolecules obtained from hydrodynamic data

    NASA Astrophysics Data System (ADS)

    Pavlov, G. M.

    2007-02-01

    It is proposed to normalize the Mark-Kuhn-Houwink-Sakurada type of equation relating the hydrodynamic characteristics, such as intrinsic viscosity, velocity sedimentation coefficient and translational diffusion coefficient of linear macromolecules to their molecular masses for the values of linear density ML and the statistical segment length A. When the set of data covering virtually all known experimental information is normalized for ML, it is presented as a size spectrum of linear polymer molecules. Further normalization for the A value reduces all data to two regions: namely the region exhibiting volume interactions and that showing hydrodynamic draining. For chains without intrachain excluded volume effects these results may be reproduced using the Yamakawa-Fujii theory of wormlike cylinders. Data analyzed here cover a range of contour lengths of linear chains varying by three orders of magnitude, with the range of statistical segment lengths varying approximately 500 times. The plot of the dependence of [η]M on M represents the spectrum of average specific volumes occupied by linear and branched macromolecules. Dendrimers and globular proteins for which the volume occupied by the molecule in solution is directly proportional to M have the lowest specific volume. The homologous series of macromolecules in these plots are arranged following their fractal dimensionality.

  2. Size and average density spectra of macromolecules obtained from hydrodynamic data.

    PubMed

    Pavlov, G M

    2007-02-01

    It is proposed to normalize the Mark-Kuhn-Houwink-Sakurada type of equation relating the hydrodynamic characteristics, such as intrinsic viscosity, velocity sedimentation coefficient and translational diffusion coefficient of linear macromolecules to their molecular masses for the values of linear density M(L) and the statistical segment length A. When the set of data covering virtually all known experimental information is normalized for M(L), it is presented as a size spectrum of linear polymer molecules. Further normalization for the A value reduces all data to two regions: namely the region exhibiting volume interactions and that showing hydrodynamic draining. For chains without intrachain excluded volume effects these results may be reproduced using the Yamakawa-Fujii theory of wormlike cylinders. Data analyzed here cover a range of contour lengths of linear chains varying by three orders of magnitude, with the range of statistical segment lengths varying approximately 500 times. The plot of the dependence of [η]M on M represents the spectrum of average specific volumes occupied by linear and branched macromolecules. Dendrimers and globular proteins for which the volume occupied by the molecule in solution is directly proportional to M have the lowest specific volume. The homologous series of macromolecules in these plots are arranged following their fractal dimensionality. PMID:17377754

  3. Global Hail Model

    NASA Astrophysics Data System (ADS)

    Werner, A.; Sanderson, M.; Hand, W.; Blyth, A.; Groenemeijer, P.; Kunz, M.; Puskeiler, M.; Saville, G.; Michel, G.

    2012-04-01

    Hail risk models are rare for the insurance industry. This is opposed to the fact that average annual hail losses can be large and hail dominates losses for many motor portfolios worldwide. Insufficient observational data, high spatio-temporal variability and data inhomogenity have hindered creation of credible models so far. In January 2012, a selected group of hail experts met at Willis in London in order to discuss ways to model hail risk at various scales. Discussions aimed at improving our understanding of hail occurrence and severity, and covered recent progress in the understanding of microphysical processes and climatological behaviour and hail vulnerability. The final outcome of the meeting was the formation of a global hail risk model initiative and the launch of a realistic global hail model in order to assess hail loss occurrence and severities for the globe. The following projects will be tackled: Microphysics of Hail and hail severity measures: Understand the physical drivers of hail and hailstone size development in different regions on the globe. Proposed factors include updraft and supercooled liquid water content in the troposphere. What are the thresholds drivers of hail formation around the globe? Hail Climatology: Consider ways to build a realistic global climatological set of hail events based on physical parameters including spatial variations in total availability of moisture, aerosols, among others, and using neural networks. Vulnerability, Exposure, and financial model: Use historical losses and event footprints available in the insurance market to approximate fragility distributions and damage potential for various hail sizes for property, motor, and agricultural business. Propagate uncertainty distributions and consider effects of policy conditions along with aggregating and disaggregating exposure and losses. 
This presentation provides an overview of ideas and tasks that lead towards a comprehensive global understanding of hail risk for

  4. Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis

    2004-09-01

    We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.

  5. Global Precipitation Measurement (GPM) Validation Network

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew; Moris, K. Robert

    2010-01-01

    The method averages the minimum TRMM Precipitation Radar (PR) and Ground Radar (GR) sample volumes needed to match up spatially and temporally coincident PR and GR data types. PR and GR averages are calculated at the geometric intersection of the PR rays with the individual GR sweeps. Along-ray PR data are averaged only in the vertical; GR data are averaged only in the horizontal. Differences between PR and GR reflectivity are small high in the atmosphere, with relatively larger differences lower down. Version 6 TRMM PR underestimates rainfall in the case of convective rain in the lower part of the atmosphere by 30 to 40 percent.

  6. Global teaching of global seismology

    NASA Astrophysics Data System (ADS)

    Stein, S.; Wysession, M.

    2005-12-01

    Our recent textbook, Introduction to Seismology, Earthquakes, & Earth Structure (Blackwell, 2003) is used in many countries. Part of the reason for this may be our deliberate attempt to write the book for an international audience. This effort appears in several ways. We stress seismology's long tradition of global data interchange. Our brief discussions of the science's history illustrate the contributions of scientists around the world. Perhaps most importantly, our discussions of earthquakes, tectonics, and seismic hazards take a global view. Many examples are from North America, whereas others are from other areas. Our view is that non-North American students should be exposed to North American examples that are type examples, and that North American students should be similarly exposed to examples elsewhere. For example, we illustrate how the Euler vector geometry changes a plate boundary from spreading, to strike-slip, to convergence using both the Pacific-North America boundary from the Gulf of California to Alaska and the Eurasia-Africa boundary from the Azores to the Mediterranean. We illustrate diffuse plate boundary zones using western North America, the Andes, the Himalayas, the Mediterranean, and the East Africa Rift. The subduction zone discussions examine Japan, Tonga, and Chile. We discuss significant earthquakes both in the U.S. and elsewhere, and explore hazard mitigation issues in different contexts. Both comments from foreign colleagues and our experience lecturing overseas indicate that this approach works well. Beyond the specifics of our text, we believe that such a global approach is facilitated by the international traditions of the earth sciences and the world youth culture that gives students worldwide common culture. For example, a video of the scene in New Madrid, Missouri that arose from a nonsensical earthquake prediction in 1990 elicits similar responses from American and European students.

  7. Consistency of the current global ocean observing systems from an Argo perspective

    NASA Astrophysics Data System (ADS)

    von Schuckmann, K.; Sallée, J.-B.; Chambers, D.; Le Traon, P.-Y.; Cabanes, C.; Gaillard, F.; Speich, S.; Hamon, M.

    2014-06-01

    Variations in the world's ocean heat storage and its associated volume changes are a key factor to gauge global warming and to assess the earth's energy and sea level budget. Estimating global ocean heat content (GOHC) and global steric sea level (GSSL) with temperature/salinity data from the Argo network reveals a positive change of 0.5 ± 0.1 W m-2 (applied to the surface area of the ocean) and 0.5 ± 0.1 mm year-1 during the years 2005 to 2012, averaged between 60° S and 60° N and the 10-1500 m depth layer. In this study, we present an intercomparison of three global ocean observing systems: the Argo network, satellite gravimetry from GRACE and satellite altimetry. Their consistency is investigated from an Argo perspective at global and regional scales during the period 2005-2010. Although we can close the recent global ocean sea level budget within uncertainties, sampling inconsistencies need to be corrected for an accurate global budget due to systematic biases in GOHC and GSSL in the Tropical Ocean. Our findings show that the area around the Tropical Asian Archipelago (TAA) is important to closing the global sea level budget on interannual to decadal timescales, pointing out that the steric estimate from Argo is biased low, as the current mapping methods are insufficient to recover the steric signal in the TAA region. Both the large regional variability and the uncertainties in the current observing system prevent us from extracting indirect information regarding deep-ocean changes. This emphasizes the importance of continuing sustained effort in measuring the deep ocean from ship platforms and by beginning a much needed automated deep-Argo network.

  8. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
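As a minimal sketch of the area- and mass-averaging techniques named in the abstract (work averaging is omitted here), the weighted averages can be written as follows. The rake data arrays and their values are assumptions for illustration, not the facility's measurements:

```python
import numpy as np

def area_average(q, area):
    """Area-weighted average: each probe weighted by the area it represents."""
    return np.sum(q * area) / np.sum(area)

def mass_average(q, rho, u, area):
    """Mass-weighted average: each probe weighted by its local mass flux rho*u*A."""
    mdot = rho * u * area
    return np.sum(q * mdot) / np.sum(mdot)

# Hypothetical rake measurements of total pressure at four probes
p0   = np.array([201e3, 203e3, 205e3, 204e3])     # Pa
area = np.array([1.0, 1.2, 1.2, 1.0])             # relative sample areas
rho  = np.array([1.9, 2.0, 2.0, 1.9])             # kg/m^3
u    = np.array([150.0, 180.0, 185.0, 160.0])     # m/s

p0_area = area_average(p0, area)
p0_mass = mass_average(p0, rho, u, area)
```

Mass averaging weights high-momentum regions of the flow more heavily, which is why the two results can differ even on the same data.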

  9. Global Geomorphology

    NASA Technical Reports Server (NTRS)

    Douglas, I.

    1985-01-01

    Any global view of landforms must include an evaluation of the link between plate tectonics and geomorphology. To explain the broad features of the continents and ocean floors, a basic distinction between the tectogene and cratogene part of the Earth's surface must be made. The tectogene areas are those that are dominated by crustal movements, earthquakes and volcanicity at the present time and are essentially those of the great mountain belts and mid ocean ridges. Cratogene areas comprise the plate interiors, especially the old lands of Gondwanaland and Laurasia. Fundamental as this division between plate margin areas and plate interiors is, it cannot be said to be a simple case of a distinction between tectonically active and stable areas. Indeed, in terms of megageomorphology, former plate margins and tectonic activity up to 600 million years ago have to be considered.

  10. Global gamesmanship.

    PubMed

    MacMillan, Ian C; van Putten, Alexander B; McGrath, Rita Gunther

    2003-05-01

    Competition among multinationals these days is likely to be a three-dimensional game of global chess: The moves an organization makes in one market are designed to achieve goals in another in ways that aren't immediately apparent to its rivals. The authors--all management professors-call this approach "competing under strategic interdependence," or CSI. And where this interdependence exists, the complexity of the situation can quickly overwhelm ordinary analysis. Indeed, most business strategists are terrible at anticipating the consequences of interdependent choices, and they're even worse at using interdependency to their advantage. In this article, the authors offer a process for mapping the competitive landscape and anticipating how your company's moves in one market can influence its competitive interactions in others. They outline the six types of CSI campaigns--onslaughts, contests, guerrilla campaigns, feints, gambits, and harvesting--available to any multiproduct or multimarket corporation that wants to compete skillfully. They cite real-world examples such as the U.S. pricing battle Philip Morris waged with R.J. Reynolds--not to gain market share in the domestic cigarette market but to divert R.J. Reynolds's resources and attention from the opportunities Philip Morris was pursuing in Eastern Europe. And, using data they collected from their studies of consumer-products companies Procter & Gamble and Unilever, the authors describe how to create CSI tables and bubble charts that present a graphical look at the competitive landscape and that may uncover previously hidden opportunities. The CSI mapping process isn't just for global corporations, the authors explain. Smaller organizations that compete with a portfolio of products in just one national or regional market may find it just as useful for planning their next business moves. PMID:12747163

  11. Global warming

    NASA Astrophysics Data System (ADS)

    Houghton, John

    2005-06-01

    'Global warming' is a phrase that refers to the effect on the climate of human activities, in particular the burning of fossil fuels (coal, oil and gas) and large-scale deforestation, which cause emissions to the atmosphere of large amounts of 'greenhouse gases', of which the most important is carbon dioxide. Such gases absorb infrared radiation emitted by the Earth's surface and act as blankets over the surface keeping it warmer than it would otherwise be. Associated with this warming are changes of climate. The basic science of the 'greenhouse effect' that leads to the warming is well understood. More detailed understanding relies on numerical models of the climate that integrate the basic dynamical and physical equations describing the complete climate system. Many of the likely characteristics of the resulting changes in climate (such as more frequent heat waves, increases in rainfall, increase in frequency and intensity of many extreme climate events) can be identified. Substantial uncertainties remain in knowledge of some of the feedbacks within the climate system (that affect the overall magnitude of change) and in much of the detail of likely regional change. Because of its negative impacts on human communities (including for instance substantial sea-level rise) and on ecosystems, global warming is the most important environmental problem the world faces. Adaptation to the inevitable impacts and mitigation to reduce their magnitude are both necessary. International action is being taken by the world's scientific and political communities. Because of the need for urgent action, the greatest challenge is to move rapidly to much increased energy efficiency and to non-fossil-fuel energy sources.

  12. Revision of the Branch Technical Position on Concentration Averaging and Encapsulation - 12510

    SciTech Connect

    Heath, Maurice; Kennedy, James E.; Ridge, Christianne; Lowman, Donald; Cochran, John

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) regulation governing low-level waste (LLW) disposal, 'Licensing Requirements for Land Disposal of Radioactive Waste', 10 CFR Part 61, establishes a waste classification system based on the concentration of specific radionuclides contained in the waste. The regulation also states, at 10 CFR 61.55(a)(8), that 'the concentration of a radionuclide (in waste) may be averaged over the volume of the waste, or weight of the waste if the units are expressed as nanocuries per gram'. The NRC's Branch Technical Position on Concentration Averaging and Encapsulation (BTP) provides guidance on averaging radionuclide concentrations in waste under 10 CFR 61.55(a)(8) when classifying waste for disposal. In 2007, the NRC staff proposed to revise the BTP, an NRC guidance document for averaging and classifying wastes under 10 CFR 61. The BTP is used by nuclear power plant (NPP) licensees and sealed source users, among others. In addition, three of the four U.S. LLW disposal facility operators are required to honor the BTP as a licensing condition. In 2010, the Commission directed the staff to develop guidance regarding large-scale blending of similar homogeneous waste types, as described in SECY-10-0043, as part of its revision of the BTP. The Commission is improving the regulatory approach used in the BTP by moving towards a more risk-informed and performance-based approach, which is more consistent with the agency's regulatory policies. 
Among the improvements to the Branch Technical Position on Concentration Averaging and Encapsulation

  13. Rapid growth in agricultural trade: effects on global area efficiency and the role of management

    NASA Astrophysics Data System (ADS)

    Kastner, Thomas; Erb, Karl-Heinz; Haberl, Helmut

    2014-03-01

    Cropland is crucial for supplying humans with biomass products, above all, food. Globalization has led to soaring volumes of international trade, resulting in strongly increasing distances between the locations where land use takes place and where the products are consumed. Based on a dataset that allows tracing the flows of almost 450 crop and livestock products and consistently allocating them to cropland areas in over 200 nations, we analyze this rapidly growing spatial disconnect between production and consumption for the period from 1986 to 2009. At the global level, land for export production grew rapidly (by about 100 Mha), while land supplying crops for direct domestic use remained virtually unchanged. We show that international trade on average flows from high-yield to low-yield regions: compared to a hypothetical no-trade counterfactual that assumes equal consumption and yield levels, trade lowered global cropland demand by almost 90 Mha in 2008 (3-year mean). An analysis using yield gap data (which quantify the distance of prevailing yields to those attainable through the best currently available production techniques) revealed that differences in land management and in natural endowments contribute almost equally to the yield differences between exporting and importing nations. A comparison of the effect of yield differences between exporting and importing regions with the potential of closing yield gaps suggests that increasing yields holds greater potentials for reducing future cropland demand than increasing and adjusting trade volumes based on differences in current land productivity.
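The no-trade counterfactual described above can be made concrete: the land spared by a trade flow is the land the importer would have needed at its own yield, minus the land the exporter actually used. The sketch below is illustrative only; all numbers are assumptions, not data from the study:

```python
def land_saved(volume_t, yield_exporter, yield_importer):
    """Cropland spared (ha) when `volume_t` tonnes are grown by the exporter
    instead of the importer, given yields in tonnes per hectare."""
    return volume_t / yield_importer - volume_t / yield_exporter

# Illustrative flow: 1000 t grown at 8 t/ha by the exporter instead of
# 4 t/ha by the importer spares 250 - 125 = 125 ha of cropland.
saved = land_saved(1000.0, 8.0, 4.0)
```

Summing this quantity over all bilateral flows gives a global figure comparable in spirit to the ~90 Mha reported in the abstract.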

  14. Global environmental change

    SciTech Connect

    Corell, R.W.; Anderson, P.A.

    1990-01-01

    Fifty years ago the buzz words in science were "atomic energy," and the general mood of the public, in those more naive days, was that the earth is so large that it could take any kind of human abuse. The advance of science and technology since then has proved that this is not the case. It is now common sense, even to the layperson, that the earth's environment is delicate and needs careful protection if future generations are to enjoy it. The buzz words now are "global change." This book is the outcome of the Workshop on the Science of Global Environmental Change sponsored by the North Atlantic Treaty Organization (NATO) and is one of NATO's Advanced Science Institute Series books. It is essentially a collection of the lectures given in the workshop. The workshop was apparently not intended for in-depth scientific discussions but to review the overall current research situation and to identify future research needs. Accordingly, the papers collected in this volume are basically of this nature.

  15. Australopithecine endocast (Taung specimen, 1924): a new volume determination.

    PubMed

    Holloway, R L

    1970-05-22

    A redetermination of endocranial volume of the original 1924 Taung australopithecine described by Dart indicates a volume of 405 cubic centimeters, rather than the 525 cubic centimeters published earlier. The adult volume is estimated to have been 440 cubic centimeters. This value, plus other redeterminations of australopithecine endocasts, lowers the average to 442 cubic centimeters and increases the likelihood of statistically significant differences from both robust australopithecines and the Olduvai Gorge hominid No. 7. PMID:5441027

  16. Deep water temperature, carbonate ion, and ice volume changes across the Eocene-Oligocene climate transition

    NASA Astrophysics Data System (ADS)

    Pusz, A. E.; Thunell, R. C.; Miller, K. G.

    2011-06-01

    Paired benthic foraminiferal stable isotope and Mg/Ca data are used to estimate bottom water temperature (BWT) and ice volume changes associated with the Eocene-Oligocene Transition (EOT), the largest global climate event of the past 50 Myr. We utilized ODP Sites 1090 and 1265 in the South Atlantic to assess seawater δ18O (δw), Antarctic ice volume, and sea level changes across the EOT (˜33.8-33.54 Ma). We also use benthic δ13C data to reconstruct the sources of the deep water masses in this region during the EOT. Our data, together with previously published records, indicate that a pulse of Northern Component Water influenced the South Atlantic immediately prior to and following the EOT. Benthic δ18O records show a 0.5‰ increase at ˜33.8 Ma (EOT-1) that represents a ˜2°C cooling and a small (˜10 m) eustatic fall that is followed by a 1.0‰ increase associated with Oi-1. The expected cooling of deep waters at Oi-1 (˜33.54 Ma) is not apparent in our Mg/Ca records. We suggest the cooling is masked by coeval changes in the carbonate saturation state (Δ[CO32-]) which affect the Mg/Ca data. To account for this, the BWT, ice volume, and δw estimates are corrected for a change in the Δ[CO32-] of deep waters on the basis of recently published work. Corrected BWT at Sites 1090 and 1265 show a ˜1.5°C cooling coincident with Oi-1 and an average δw increase of ˜0.75‰. The increase in ice volume during Oi-1 resulted in a ˜70 m drop in global sea level and the development of an Antarctic ice sheet that was near modern size or slightly larger.

  17. Averaging of viral envelope glycoprotein spikes from electron cryotomography reconstructions using Jsubtomo.

    PubMed

    Huiskonen, Juha T; Parsy, Marie-Laure; Li, Sai; Bitto, David; Renner, Max; Bowden, Thomas A

    2014-01-01

    Enveloped viruses utilize membrane glycoproteins on their surface to mediate entry into host cells. Three-dimensional structural analysis of these glycoprotein 'spikes' is often technically challenging but important for understanding viral pathogenesis and in drug design. Here, a protocol is presented for viral spike structure determination through computational averaging of electron cryo-tomography data. Electron cryo-tomography is a technique in electron microscopy used to derive three-dimensional tomographic volume reconstructions, or tomograms, of pleomorphic biological specimens such as membrane viruses in a near-native, frozen-hydrated state. These tomograms reveal structures of interest in three dimensions, albeit at low resolution. Computational averaging of sub-volumes, or sub-tomograms, is necessary to obtain higher resolution detail of repeating structural motifs, such as viral glycoprotein spikes. A detailed computational approach for aligning and averaging sub-tomograms using the Jsubtomo software package is outlined. This approach enables visualization of the structure of viral glycoprotein spikes to a resolution in the range of 20-40 Å and study of higher order spike-to-spike interactions on the virion membrane. Typical results are presented for Bunyamwera virus, an enveloped virus from the family Bunyaviridae. This family is a structurally diverse group of pathogens posing a threat to human and animal health. PMID:25350719

  18. Global persistence in directed percolation

    NASA Astrophysics Data System (ADS)

    Oerding, K.; van Wijland, F.

    1998-08-01

    We consider a directed percolation process at its critical point. The probability that the deviation of the global order parameter with respect to its average has not changed its sign between 0 and t decays with t as a power law. In space dimensions d ≥ 4 the global persistence exponent θ that characterizes this decay takes its mean-field value, while for d < 4 its value is increased to first order in ε = 4 − d. Combining a method developed by Majumdar and Sire with renormalization group techniques we compute the correction to θ to first order in ε. The global persistence exponent is found to be a new and independent exponent. Finally we compare our results with existing simulations.

  19. Average g-Factors of Anisotropic Polycrystalline Samples

    SciTech Connect

    Fishman, Randy Scott; Miller, Joel S.

    2010-01-01

    Due to the lack of suitable single crystals, the average g-factor of anisotropic polycrystalline samples is commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
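The inequality stated above has a simple origin: the Curie constant scales with g², so a powder average yields a root-mean-square g, while the saturation magnetization scales linearly with g, yielding an arithmetic mean, and the quadratic mean is never smaller than the arithmetic mean. A minimal sketch with illustrative (assumed) principal g-values:

```python
import numpy as np

def g_curie(g_principal):
    """Powder-average g inferred from the Curie constant: RMS of principal values."""
    g = np.asarray(g_principal, dtype=float)
    return np.sqrt(np.mean(g**2))

def g_saturation(g_principal):
    """Powder-average g inferred from saturation magnetization: arithmetic mean."""
    return np.mean(np.asarray(g_principal, dtype=float))

aniso = [4.4, 1.3, 1.3]   # strongly anisotropic, values chosen for illustration
iso   = [2.0, 2.0, 2.0]   # isotropic case: the two averages coincide
```

For the anisotropic set the Curie-derived average exceeds the saturation-derived one; for the isotropic set they agree, matching the equality condition in the abstract.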

  20. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    Scanning projection systems play a leading part in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.

  1. Thermodynamic properties of average-atom interatomic potentials for alloys

    NASA Astrophysics Data System (ADS)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method (EAM) interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential, which has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine whether it can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average-atom and true random alloys are found in the zero-temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.
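Thermodynamic integration, as used in the abstract, obtains a free-energy difference as ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ, where λ switches the system from a reference to the target. The toy sketch below replaces the sampled ensemble averages with an assumed analytic integrand purely to show the quadrature step:

```python
import numpy as np

# Coupling parameter switching reference (lambda=0) to target (lambda=1)
lambdas = np.linspace(0.0, 1.0, 11)

# Stand-in for ensemble averages <dU/dlambda> at each lambda; in practice
# each value comes from an MD/MC run, here an assumed linear form 2*lambda.
dU_dlam = 2.0 * lambdas

# Trapezoid-rule quadrature of the integrand over lambda in [0, 1]
delta_F = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lambdas))
```

For this linear toy integrand the trapezoid rule is exact, giving ΔF = 1.0; with real sampled averages one would also propagate the statistical error of each ⟨∂U/∂λ⟩ estimate.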

  2. Global trends

    NASA Technical Reports Server (NTRS)

    Megie, G.; Chanin, M.-L.; Ehhalt, D.; Fraser, P.; Frederick, J. F.; Gille, J. C.; Mccormick, M. P.; Schoebert, M.; Bishop, L.; Bojkov, R. D.

    1990-01-01

    Measuring trends in ozone, and most other geophysical variables, requires that a small systematic change with time be determined from signals that have large periodic and aperiodic variations. Their time scales range from the day-to-day changes due to atmospheric motions through seasonal and annual variations to 11 year cycles resulting from changes in the sun's UV output. Because the magnitude of all of these variations is not well known and is highly variable, it is necessary to measure over more than one period of the variations to remove their effects. This means a record extending over at least two 11 year sunspot cycles. Thus, the first requirement is for a long term data record. The second related requirement is that the record be consistent. A third requirement is for reasonable global sampling, to ensure that the effects are representative of the entire Earth. The various observational methods relevant to trend detection are reviewed to characterize their quality and time and space coverage. Available data are then examined for long term trends or recent changes in ozone total content and vertical distribution, as well as related parameters such as stratospheric temperature, source gases and aerosols.

  3. Global Warming And Meltwater

    NASA Astrophysics Data System (ADS)

    Bratu, S.

    2012-04-01

    In order to find new approaches and new ideas for my students to appreciate the importance of science in their daily life, I proposed a theme for them to debate. They had to search for global warming information and illustrations in the media, and discuss the articles they found in the classroom. This task inspired them to search for new information about this important and timely theme in science. I informed my students that all the best information about global warming and meltwater they found would be used in a poster that would help us to update the knowledge base of the Physics laboratory. I guided them to choose the most eloquent images and most significant information. Searching and working to create this poster, the students came to better appreciate the importance of science in their daily life and to critically evaluate scientific information transmitted via the media. In the poster we created, one can find images, photos, diagrams, and some interesting information: Global warming refers to the rising average temperature of the Earth's atmosphere and oceans and its projected evolution. In the last 100 years, the Earth's average surface temperature increased by about 0.8 °C, with about two thirds of the increase occurring over just the last three decades. Warming of the climate system is unequivocal, and scientists are more than 90% certain that most of it is caused by increasing concentrations of greenhouse gases produced by human activities such as deforestation and the burning of fossil fuels. Projections indicate that during the 21st century the global surface temperature is likely to rise a further 1.1 to 2.9 °C for the lowest emissions scenario and 2.4 to 6.4 °C for the highest. An increase in global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, and potentially result in expansion of subtropical deserts. Warming is expected to be strongest in the Arctic and would be associated with continuing decrease of

  4. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    SciTech Connect

    Kimes, D.S.; Kerber, A.G.; Sellers, P.J. )

    1993-06-01

    The problems in moving from a radiance measurement made for a particular sun-target-sensor geometry to an accurate estimate of the hemispherical reflectance are considerable. A knowledge-based system called VEG was used in this study to infer hemispherical reflectance. Given directional reflectance(s) and the sun angle, VEG selects the most suitable inference technique(s) and estimates the surface hemispherical reflectance with an estimate of the error. Ideally, VEG is applied to homogeneous vegetation. However, what is typically done in GCM (global circulation model) models and related studies is to obtain an average hemispherical reflectance on a square grid cell on the order of 200 km x 200 km. All available directional data for a given cell are averaged (for each view direction), and then a particular technique for inferring hemispherical reflectance is applied to this averaged data. Any given grid cell can contain several surface types that directionally scatter radiation very differently. When averaging over a set of view angles, the resulting mean values may be atypical of the actual surface types that occur on the ground, and the resulting inferred hemispherical reflectance can be in error. These errors were explored by creating a simulated scene and applying VEG to estimate the area-averaged hemispherical reflectance using various sampling procedures. The reduction in the hemispherical reflectance errors provided by using VEG ranged from a factor of 2-4, depending on conditions. This improvement represents a shift from the calculation of a hemispherical reflectance product of relative value (errors of 20% or more), to a product that could be used quantitatively in global modeling applications, where the requirement is for errors to be limited to around 5-10 %.

  5. Measurement of lacunar bone strains and crack formation during tensile loading by digital volume correlation of second harmonic generation images.

    PubMed

    Wentzell, Scott; Nesbitt, Robert Sterling; Macione, James; Kotha, Shiva

    2016-07-01

    The maintenance of healthy bone tissue depends upon the ability of osteocytes to respond to mechanical cues on the cellular level. The combination of digital volume correlation and second harmonic generation microscopy offers the opportunity to investigate the mechanical microenvironment of intact bone on the scale of individual osteocytes. Adult human femurs were imaged under tensile loads of 5 and 15MPa and volumes of approximately 492×429×31μm(3) were analyzed, along with an image of a bone microcrack under the same loading conditions. Principal strains were significantly higher in three-dimensional digital volume correlation when compared to two-dimensional digital image correlation. The average maximum principal strain magnitude was 5.06-fold greater than the applied global strain, with peak strains of up to 23.14-fold over global strains measured at the borders of osteocyte lacunae. Finally, a microcrack that initiated at an osteocyte lacuna had its greatest tensile strain magnitudes at the crack expansion front in the direction of a second lacuna, but strain at the crack border was reduced to background strain magnitudes upon breaching the second lacuna. This served to demonstrate the role of lacunae in initiating, mediating and terminating microcrack growth. PMID:26807766

  6. Phase averaging of image ensembles by using cepstral gradients

    SciTech Connect

    Swan, H.W.

    1983-11-01

    The direct Fourier phase averaging of an ensemble of randomly blurred images has long been thought to be too difficult a problem to undertake realistically owing to the necessity of proper phase unwrapping. It is shown that it is nevertheless possible to average the Fourier phase information in an image ensemble without calculating phases by using the technique of cepstral gradients.
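The cepstral-gradient method itself is not reproduced here. As a simpler stand-in that illustrates the same underlying difficulty, the sketch below averages phases by summing unit phasors (a circular mean), which likewise avoids explicit phase unwrapping; this is explicitly not the paper's technique:

```python
import numpy as np

def circular_mean(phases):
    """Average angles by summing unit phasors; immune to 2*pi wrap-around."""
    return np.angle(np.mean(np.exp(1j * np.asarray(phases))))

# Naive arithmetic averaging fails near the +/-pi branch cut:
wrapped = [np.pi - 0.1, -np.pi + 0.1]  # both angles lie close to pi
naive = np.mean(wrapped)               # averages to 0.0, the wrong answer
circ = circular_mean(wrapped)          # magnitude ~pi, the right answer
```

The naive mean lands on the opposite side of the unit circle because the raw phase values straddle the branch cut, which is exactly the unwrapping problem the cepstral approach sidesteps.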

  7. 78 FR 49770 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...

  8. 20 CFR 404.220 - Average-monthly-wage method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 404.220 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  9. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
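    The distinction the article draws can be made concrete: the average rate of change is the total change divided by the elapsed time, whereas the straight-line OLS slope weights all occasions and need not agree with it when change is nonlinear. A small sketch with hypothetical longitudinal data:

```python
import numpy as np

# Hypothetical longitudinal data: five equally spaced occasions with
# nonlinear change over time.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 2.0, 3.0, 10.0])

# Average rate of change: total change divided by elapsed time.
aroc = (y[-1] - y[0]) / (t[-1] - t[0])   # 2.5 per unit time

# OLS straight-line slope fitted to the same occasions.
slope = np.polyfit(t, y, 1)[0]           # 2.2 here -- not the same quantity
```

With linear change the two coincide; with curvilinear change, as here, interpreting the fitted slope as "the" average rate of change is exactly the misunderstanding the article addresses.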

  10. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
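    The note's central device can be sketched directly: an intercept-only regression recovers each type of average, depending on how the response is transformed or weighted. A minimal illustration with numpy and hypothetical data (the note's own examples are not reproduced):

```python
import numpy as np

# Hypothetical positive observations and weights.
y = np.array([2.0, 4.0, 8.0])
w = np.array([1.0, 1.0, 2.0])
X = np.ones_like(y)[:, None]  # intercept-only design matrix

# Arithmetic mean: OLS of y on a constant.
arith = np.linalg.lstsq(X, y, rcond=None)[0][0]

# Weighted mean: WLS of y on a constant (scale rows by sqrt of the weights).
sw = np.sqrt(w)
weighted = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0][0]

# Geometric mean: exponentiate the OLS fit of log(y) on a constant.
geom = np.exp(np.linalg.lstsq(X, np.log(y), rcond=None)[0][0])

# Harmonic mean: reciprocal of the OLS fit of 1/y on a constant.
harm = 1.0 / np.linalg.lstsq(X, 1.0 / y, rcond=None)[0][0]
```

Because the intercept of an OLS fit on a constant is the sample mean of the (possibly transformed) response, each average falls out of the same regression framework.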

  11. Using Multiple Representations To Improve Conceptions of Average Speed.

    ERIC Educational Resources Information Center

    Reed, Stephen K.; Jazo, Linda

    2002-01-01

    Discusses improving mathematical reasoning through the design of computer microworlds and evaluates a computer-based learning environment that uses multiple representations to improve undergraduate students' conception of average speed. Describes improvement of students' estimates of average speed by using visual feedback from a simulation.…
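    The misconception such microworlds target is easy to state numerically: over two legs of equal distance, average speed is total distance over total time (the harmonic mean of the leg speeds), not the arithmetic mean. A worked example with hypothetical numbers:

```python
# Two legs of equal distance driven at different speeds (hypothetical values).
distance = 60.0          # km per leg
v1, v2 = 30.0, 60.0      # km/h

total_distance = 2 * distance
total_time = distance / v1 + distance / v2     # 2 h + 1 h = 3 h
avg_speed = total_distance / total_time        # 120 km / 3 h = 40 km/h

naive_avg = (v1 + v2) / 2                      # 45 km/h -- the common misconception
```

The 40 km/h versus 45 km/h gap is the kind of discrepancy the simulation feedback described in the abstract is meant to make visible to students.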

  12. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...

  13. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...

  14. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...

  15. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...

  16. Average refractive powers of an alexandrite laser rod

    NASA Astrophysics Data System (ADS)

    Driedger, K. P.; Krause, W.; Weber, H.

    1986-04-01

    The average refractive powers (average inverse focal lengths) of the thermal lens produced by an alexandrite laser rod optically pumped at repetition rates between 0.4 and 10 Hz and with electrical flashlamp input pulse energies up to 500 J have been measured. The measuring setup is described and the measurement results are discussed.

  17. Hadley circulations for zonally averaged heating centered off the equator

    NASA Technical Reports Server (NTRS)

    Lindzen, Richard S.; Hou, Arthur Y.

    1988-01-01

    Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.

  18. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...

  19. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...

  20. Gyrokinetic simulations of electrostatic microinstabilities with bounce-averaged kinetic electrons for shaped tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Qi, Lei; Kwon, Jaemin; Hahm, T. S.; Jo, Gahyung

    2016-06-01

    Nonlinear bounce-averaged kinetic theory [B. H. Fong and T. S. Hahm, Phys. Plasmas 6, 188 (1999)] is used for magnetically trapped electron dynamics for the purpose of achieving efficient gyrokinetic simulations of Trapped Electron Mode (TEM) and Ion Temperature Gradient mode with trapped electrons (ITG-TEM) in shaped tokamak plasmas. The bounce-averaged kinetic equations are explicitly extended to shaped plasma equilibria from the previous ones for concentric circular plasmas, and implemented in a global nonlinear gyrokinetic code, Gyro-Kinetic Plasma Simulation Program (gKPSP) [J. M. Kwon et al., Nucl. Fusion 52, 013004 (2012)]. Verification of gKPSP with the bounce-averaged kinetic trapped electrons in shaped plasmas is successfully carried out for linear properties of the ITG-TEM mode and Rosenbluth-Hinton residual zonal flow [M. N. Rosenbluth and F. L. Hinton, Phys. Rev. Lett. 80, 724 (1998)]. Physics responsible for stabilizing effects of elongation on both ITG mode and TEM is identified using global gKPSP simulations. These can be understood in terms of magnetic flux expansion, leading to the effective temperature gradient R/L_T(1 − E′) [P. Angelino et al., Phys. Rev. Lett. 102, 195002 (2009)], and poloidal wavelength contraction at the low field side, resulting in the effective poloidal wave number k_θρ_i/κ.