Science.gov

Sample records for global volume averaged

  1. A volume averaged global model for inductively coupled HBr/Ar plasma discharge

    NASA Astrophysics Data System (ADS)

    Chung, Sang-Young; Kwon, Deuk-Chul; Choi, Heechol; Song, Mi-Young

    2015-09-01

    A global model for an inductively coupled HBr/Ar plasma was developed. The model was based on a self-consistent global model previously developed by Kwon et al., and a set of chemical reactions in the HBr/Ar plasma was compiled by surveying theoretical, experimental, and evaluative studies. In this model, vibrational excitations of diatomic molecules and electronic excitations of the hydrogen atom were taken into account. Neutralization by collisions between positive and negative ions was treated with Hakman's approximate formula, obtained by fitting theoretical results. For some reactions not available in the literature, the reaction parameters of Cl2 and HCl were adopted for Br2 and HBr, respectively. For validation, calculation results from this model were compared with experimental results from the literature over a range of plasma discharge parameters, showing overall good agreement.
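
    The abstract does not give the model equations; as a rough illustration of what a volume-averaged (global) model involves, the sketch below solves a steady-state particle balance for the electron temperature and a power balance for the electron density in a simple electropositive discharge. All rate constants, geometry factors, and the choice of ion are illustrative placeholders, not the authors' HBr/Ar chemistry.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of a steady-state volume-averaged (global) model for a simple
# electropositive discharge: the particle balance fixes Te, the power
# balance then fixes n_e.  All constants are illustrative placeholders.

e = 1.602e-19
M_i = 80 * 1.661e-27            # ion mass (Br+ used only as an example), kg

def k_iz(Te):                   # hypothetical Arrhenius ionization rate, m^3/s
    return 5e-14 * np.exp(-12.0 / Te)

def u_B(Te):                    # Bohm speed, m/s
    return np.sqrt(e * Te / M_i)

R, L = 0.15, 0.10               # chamber radius and height, m
V = np.pi * R**2 * L
A_eff = 0.3 * 2 * np.pi * R * (R + L)   # crude effective loss area (h-factors lumped)
n_g = 3e19                      # neutral gas density, m^-3
P_abs = 500.0                   # absorbed power, W

# Particle balance: ionization in the volume = ion loss to the walls
Te = brentq(lambda T: k_iz(T) * n_g * V - u_B(T) * A_eff, 0.5, 10.0)

# Power balance: absorbed power = energy carried out per electron-ion pair lost
E_T = (15.0 + 7.0 * Te) * e     # collisional + kinetic losses, J (placeholder)
n_e = P_abs / (u_B(Te) * A_eff * E_T)

print(f"Te ~ {Te:.2f} eV, n_e ~ {n_e:.2e} m^-3")
```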

  2. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
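
    For reference, the hydrostatic and geostrophic relations used to construct the heights and winds are the standard ones, written here in pressure coordinates (textbook material rather than anything specific to this report; the notation is mine):

```latex
% Hydrostatic balance in pressure coordinates (Phi = gZ is the geopotential)
\frac{\partial \Phi}{\partial p} = -\frac{RT}{p}, \qquad \Phi = gZ
% Geostrophic wind from the geopotential height field Z(x, y, p)
u_g = -\frac{g}{f}\frac{\partial Z}{\partial y}, \qquad
v_g = \frac{g}{f}\frac{\partial Z}{\partial x}, \qquad f = 2\Omega\sin\varphi
```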

  3. Global Average Brightness Temperature for April 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  4. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
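
    The averaging described (all Januaries together, all week-1 images together, skipping cells without valid data) is a standard climatology computation; a minimal sketch with hypothetical array names follows. The original programs distributed with the USGS data set are not reproduced here.

```python
import numpy as np

# weekly_sst: array of shape (n_weeks, ny, nx) covering Oct 1981 - Dec 1990,
# with np.nan marking grid cells that lack valid data.  Hypothetical input.
def weekly_climatology(weekly_sst, week_of_year):
    """Average all images that share the same week of the year (1..52).

    week_of_year: 1-D array of length n_weeks giving each image's week index.
    Returns an array of shape (52, ny, nx); averaging across years suppresses
    interannual variability and fills many cells that any single year misses.
    """
    out = np.full((52,) + weekly_sst.shape[1:], np.nan)
    for w in range(1, 53):
        sel = weekly_sst[week_of_year == w]
        if sel.size:
            out[w - 1] = np.nanmean(sel, axis=0)   # ignore missing cells
    return out
```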

  5. Global Average Brightness Temperature for April 2003

    NASA Image and Video Library

    2003-06-02

    This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. http://photojournal.jpl.nasa.gov/catalog/PIA00427

  6. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.

  7. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    NASA Astrophysics Data System (ADS)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux-averaged concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
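
    For one-dimensional steady flow, the transformation between the volume-averaged (resident) concentration and the flux-averaged concentration discussed here is commonly written as follows (a standard relation; v is the pore-water velocity, D the dispersion coefficient, and the notation is mine):

```latex
c_f(z,t) \;=\; c_r(z,t) \;-\; \frac{D}{v}\,\frac{\partial c_r(z,t)}{\partial z},
\qquad
\frac{\partial c}{\partial t} \;=\; D\,\frac{\partial^2 c}{\partial z^2} \;-\; v\,\frac{\partial c}{\partial z}
\quad \text{for } c = c_r \text{ or } c_f .
```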

  8. Average exit time for volume-preserving maps.

    PubMed

    Meiss, J. D.

    1997-03-01

    For a volume-preserving map, we show that the exit time averaged over the entry set of a region is given by the ratio of the measure of the accessible subset of the region to that of the entry set. This result is primarily of interest to show two things: First, it gives a simple bound on the algebraic decay exponent of the survival probability. Second, it gives a tool for computing the measure of the accessible set. We use this to compute the measure of the bounded orbits for the Henon quadratic map. (c) 1997 American Institute of Physics.
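
    In symbols, the stated result reads as follows, where R is the region of interest, E its entry set, A a subset of R accessible from outside, and mu the invariant (volume) measure preserved by the map:

```latex
\left\langle t_{\mathrm{exit}} \right\rangle_{E} \;=\; \frac{\mu(A)}{\mu(E)} .
```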

  9. Particle filtration: An analysis using the method of volume averaging

    SciTech Connect

    Quintard, M.; Whitaker, S.

    1994-12-01

    The process of filtration of non-charged, submicron particles is analyzed using the method of volume averaging. The particle continuity equation is represented in terms of the first correction to the Smoluchowski equation that takes into account particle inertia effects for small Stokes numbers. This leads to a cellular efficiency that exhibits a minimum as a function of particle size, which allows us to identify the most penetrating particle size. Comparison of the theory with results from Brownian dynamics indicates that the first correction to the Smoluchowski equation gives reasonable results in terms of both the cellular efficiency and the most penetrating particle size. However, the results for larger particles clearly indicate the need to extend the Smoluchowski equation to include higher order corrections. Comparison of the theory with laboratory experiments, in the absence of adjustable parameters, provides interesting agreement for particle diameters that are equal to or less than the diameter of the most penetrating particle.

  10. Global robust image rotation from combined weighted averaging

    NASA Astrophysics Data System (ADS)

    Reich, Martin; Yang, Michael Ying; Heipke, Christian

    2017-05-01

    In this paper we present a novel rotation averaging scheme as part of our global image orientation model. This model is based on homologous points in overlapping images and is robust against outliers. It is applicable to various kinds of image data and provides accurate initializations for a subsequent bundle adjustment. The computation of global rotations is a combined optimization scheme: First, rotations are estimated in a convex relaxed semidefinite program. Rotations are required to be in the convex hull of the rotation group SO(3), which in most cases leads to correct rotations. Second, the estimation is improved in an iterative least squares optimization in the Lie algebra of SO(3). In order to deal with outliers in the relative rotations, we developed a sequential graph optimization algorithm that is able to detect and eliminate incorrect rotations. From the beginning, we propagate covariance information which allows for a weighting in the least squares estimation. We evaluate our approach using both synthetic and real image datasets. Compared to recent state-of-the-art rotation averaging and global image orientation algorithms, our proposed scheme reaches a high degree of robustness and accuracy. Moreover, it is also applicable to large Internet datasets, which shows its efficiency.

  11. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from relative rotations between image pairs in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Relative rotations are often not free from outliers, so we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using the prior information we derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results is required apart from the lossless projection back onto the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations even if very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
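
    As an illustration of iterative averaging in the Lie algebra of SO(3), the sketch below computes an unweighted intrinsic mean of several rotation samples by repeatedly averaging their logarithms about the current estimate and mapping back with the exponential map. It is only a toy analogue of the paper's weighted multi-image adjustment, written with scipy's Rotation class.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def so3_mean(rotations, iters=20, tol=1e-10):
    """Intrinsic mean of a list of scipy Rotation objects.

    Each step maps the samples into the tangent space (Lie algebra so(3))
    at the current estimate, averages the resulting rotation vectors, and
    maps the mean back onto the manifold with the exponential map.
    """
    mean = rotations[0]
    for _ in range(iters):
        # log-map of each sample relative to the current mean
        tangent = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
        delta = tangent.mean(axis=0)
        mean = mean * R.from_rotvec(delta)         # exp-map back to SO(3)
        if np.linalg.norm(delta) < tol:
            break
    return mean

# Toy usage: noisy samples scattered around a reference rotation.
rng = np.random.default_rng(0)
ref = R.from_rotvec([0.3, -0.2, 0.5])
samples = [ref * R.from_rotvec(0.05 * rng.standard_normal(3)) for _ in range(50)]
print(so3_mean(samples).as_rotvec())               # close to [0.3, -0.2, 0.5]
```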

  12. Volume averaging for determining the effective dispersion tensor: Closure using periodic unit cells and comparison with ensemble averaging

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.; Cherblanc, Fabien; Quintard, Michel; Whitaker, Stephen

    2003-08-01

    In this work, we use the method of volume averaging to determine the effective dispersion tensor for a heterogeneous porous medium; closure for the averaged equation is obtained by solution of a concentration deviation equation over a periodic unit cell. Our purpose is to show how the method of volume averaging with closure can be reconciled with the results obtained by other upscaling methods under particular conditions. Although this reconciliation is something that is generally believed to be true, there has been very little research that explores this issue explicitly. We show that under certain limiting (but mild) assumptions, the closure problem provides a Fourier series solution for the effective dispersion tensor. When second-order spatial stationarity is imposed on the velocity field, the method yields a simple Fourier series that converges to an integral form in the limit as the period of the unit cell approaches infinity. This limiting result is identical to the quasi-Fickian forms that have been developed previously via ensemble averaging by [1993] and recently by [2000] except in the definition of the averaging operation. As a second objective we have conducted a numerical study to evaluate the influence of the size of the averaging volume on the effective dispersion tensor and its volume averaged statistics. This second objective is complementary in many ways to recent research reported by [1999] (via ensemble averaging) and by [1999] (via volume averaging) on the block-averaged effective dispersion tensor. The variability of the effective dispersion tensor from realization to realization is assessed by computing the volume-averaged effective dispersion tensor for an ensemble of finite fields with the same (ensemble) statistics. Ensembles were generated using three different sizes of unit cells. All three unit cell sizes yield similar results for the value of the mean effective dispersion tensor. However, the coefficient of variation depends strongly

  13. Global average net radiation sensitivity to cloud amount variations

    SciTech Connect

    Karner, O.

    1993-12-01

    Time series analysis performed using an autoregressive model is carried out to study monthly oscillations in the earth radiation budget (ERB) at the top of the atmosphere (TOA) and cloud amount estimates on a global basis. Two independent cloud amount datasets, produced elsewhere by different authors, and the ERB record based on the Nimbus-7 wide field-of-view 8-year (1978-86) observations are used. Autoregressive models are used to eliminate the effects of the Earth's orbit eccentricity on the radiation budget and cloud amount series. Nonzero cross correlation between the residual series provides a way of estimating the contribution of the cloudiness variations to the variance in the net radiation. As a result, a new parameter to estimate the net radiation sensitivity at the TOA to changes in cloud amount is introduced. This parameter has a more general character than other estimates because it contains time-lag terms of different length responsible for different cloud-radiation feedback mechanisms in the earth climate system. Time lags of 0, 1, 12, and 13 months are involved. Inclusion of the zero-lag term only shows that the albedo effect of clouds dominates, as is known from other research. Inclusion of all four terms leads to an average quasi-annual insensitivity. Approximately 96% of the ERB variance at the TOA can be explained by the eccentricity factor and 1% by cloudiness variations, provided that the data used are without error. Although the latter assumption is not fully correct, the results presented allow one to estimate the contribution of current cloudiness changes to the net radiation variability. Two independent cloud amount datasets have very similar temporal variability and also approximately equal impact on the net radiation at the TOA.

  14. Long term average rates of large-volume explosive volcanism are not average

    NASA Astrophysics Data System (ADS)

    Connor, C.; Kiyosugi, K.

    2011-12-01

    How good are our estimates of long term recurrence rates of large magnitude explosive volcanic eruptions? To investigate this question, we created a data set of all known explosive eruptions in Japan since 1.8 Ma and VEI magnitude 4 or greater. This data set contains 696 explosive eruptions. We use this data set to consider the change in apparent recurrence rate of large volume explosive eruptions through time. Assuming there has been little change in recurrence rate of volcanism since 2.25 Ma, apparent changes are due to erosion of explosive eruption deposits and a lower rate of identification of older deposits preserved in the geologic record. Surprisingly, one half of the eruptions in the data set occurred within the last 65 ka. 77% of the total eruptions occurred since 200 ka; the oldest eruption in the database is 2.25 Ma. Overall, there is a roughly exponential decrease in the numbers of eruptions of a given magnitude identified in the geological record as a function of time. This result clearly indicates that even large magnitude eruptions are significantly under-reported. In addition, percentages of explosive eruptions in the entire data set by eruption magnitude are: VEI 4 (40%), VEI 5 (42%), VEI 6 (13%) and VEI 7 (5%). Because it is reasonable to assume that smaller eruptions occur much more frequently, fewer VEI 4 eruptions than VEI 5 eruptions indicates that small eruptions are missing in this data set. We quantify these variations by plotting survivor functions, noting that there is little change in apparent rate of activity (or the preservation potential of deposits) with geographic and tectonic setting in Japan. These data indicate that eruption probabilities based on long term recurrence rate may underestimate rates of activity. This result also indicates there is considerable uncertainty about the future recurrence rate of large magnitude eruptions, as our best estimates of frequency are based on an unrealistically short record.

  15. The global-average production rate of Be-10

    NASA Astrophysics Data System (ADS)

    Monaghan, M. C.; Krishnaswami, S.; Turekian, K. K.

    1986-01-01

    Precipitation collected in continuously open containers for about a year at seven sites around the United States was analyzed for Be-10, Sr-90, Pb-210, and U-238. Based on these data and long-term precipitation, Sr-90 and Pb-210 delivery patterns, the stratospheric, tropospheric and recycled Be-10 components in the collections were estimated and the global Be-10 production rate was assessed. Single-station production rate estimates range from 0.52 x 10^6 to 2.64 x 10^6 atoms/sq cm per year. The mean value is 1.21 x 10^6 atoms/sq cm per year with a standard error of 0.26 x 10^6 atoms/sq cm per year.

  16. Compensating for volume and vector averaging biases in lidar wind speed measurements

    NASA Astrophysics Data System (ADS)

    Clive, Peter J. M.

    2008-10-01

    A number of vector and volume averaging considerations arise in relation to remote sensing, and in particular, Lidar. 1) Remote sensing devices obtain vector averages. These values are often compared to the scalar averages associated with cup anemometry. The magnitude of a vector average is less than or equal to the scalar average obtained over the same period. The use of Lidars in wind power applications has entailed the estimation of scalar averages by vector averages and vice versa. The relationship between the two kinds of average must therefore be understood. It is found that the ratio of the averages depends upon wind direction variability according to a Bessel function of the standard deviation of the wind direction during the averaging interval. 2) The finite probe length of remote sensing devices also incurs a volume averaging bias when wind shear is non-linear. The sensitivity of the devices to signals from a range of heights produces volume averages which will be representative of wind speeds at heights within that range. One can distinguish between the effective or apparent height the measured wind speeds represent as a result of volume averaging bias, and the configuration height at which the device has been set to measure wind speeds. If the wind shear is described by a logarithmic wind profile the apparent height is found to depend mainly on simple geometrical arguments concerning configuration height and probe length and is largely independent of the degree of wind shear. 3) The restriction of the locus of points at which radial velocity measurements are made to the circumference of a horizontally oriented disc at a particular height is seen to introduce ambiguity into results when dealing with wind vector fields which are not irrotational.
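
    The scalar-versus-vector averaging point can be illustrated numerically: for samples of constant speed whose direction wanders with standard deviation sigma, the magnitude of the vector mean falls below the scalar mean, and for a normally distributed direction the expected ratio is exp(-sigma^2/2) (the Bessel-function form quoted above corresponds to a different assumed direction distribution and is not reproduced here). A small sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# 10-minute record of 1 Hz wind samples: constant speed, wandering direction.
speed = 8.0                                  # m/s
sigma = np.deg2rad(15.0)                     # direction standard deviation
theta = rng.normal(0.0, sigma, size=600)     # direction samples (rad)

u, v = speed * np.cos(theta), speed * np.sin(theta)

scalar_mean = np.hypot(u, v).mean()              # what a cup anemometer reports
vector_mean = np.hypot(u.mean(), v.mean())       # what a vector (lidar-style) average gives

print(f"scalar mean : {scalar_mean:.3f} m/s")
print(f"vector mean : {vector_mean:.3f} m/s")
print(f"ratio       : {vector_mean / scalar_mean:.4f}")
print(f"exp(-sigma^2/2) for Gaussian direction spread: {np.exp(-sigma**2 / 2):.4f}")
```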

  17. Compensation of vector and volume averaging bias in lidar wind speed measurements

    NASA Astrophysics Data System (ADS)

    Clive, P. J. M.

    2008-05-01

    A number of vector and volume averaging considerations arise in relation to remote sensing, and in particular, Lidar. 1) Remote sensing devices obtain vector averages. These values are often compared to the scalar averages associated with cup anemometry. The magnitude of a vector average is less than or equal to the scalar average obtained over the same period. The use of Lidars in wind power applications has entailed the estimation of scalar averages by vector averages and vice versa. The relationship between the two kinds of average must therefore be understood. It is found that the ratio of the averages depends upon wind direction variability according to a Bessel function of the standard deviation of the wind direction during the averaging interval. 2) The finite probe length of remote sensing devices also incurs a volume averaging bias when wind shear is non-linear. The sensitivity of the devices to signals from a range of heights produces volume averages which will be representative of wind speeds at heights within that range. One can distinguish between the effective or apparent height the measured wind speeds represent as a result of volume averaging bias, and the configuration height at which the device has been set to measure wind speeds. If the wind shear is described by a logarithmic wind profile the apparent height is found to depend mainly on simple geometrical arguments concerning configuration height and probe length and is largely independent of the degree of wind shear. 3) The restriction of the locus of points at which radial velocity measurements are made to the circumference of a horizontally oriented disc at a particular height is seen to introduce ambiguity into results when dealing with wind vector fields which are not irrotational.

  18. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    PubMed

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the volume of water ingested (100 ml) by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a moderate, statistically significant direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  19. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    SciTech Connect

    Soltanian-Zadeh, H. (Henry Ford Hospital, Detroit, MI); Windham, J. P.; Yagle, A. E.

    1993-08-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
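
    A minimal numerical sketch of the idea: treat each tissue/feature as a signature vector across the MRI channels, remove the interfering signatures by orthogonal projection (built here with Gram-Schmidt via QR, which is what makes the transformation linear), and apply the resulting weight vector to every voxel. The signature values and channel count below are hypothetical, not taken from the paper.

```python
import numpy as np

# Columns: hypothetical signatures of two interfering features across 3 MRI channels.
B = np.array([[1.0, 0.2],
              [0.8, 1.0],
              [0.3, 0.9]])            # interfering signatures (channels x features)
d = np.array([0.2, 0.4, 1.0])          # desired feature signature

# Gram-Schmidt (via QR) gives an orthonormal basis of the interference subspace.
Q, _ = np.linalg.qr(B)

# Linear transformation: component of d orthogonal to all interfering signatures.
w = d - Q @ (Q.T @ d)
w /= w @ d                             # normalize so the desired feature maps to 1

# Applying w to a voxel's channel values suppresses the interfering features
# (their signatures map to ~0) while scaling with the desired feature's
# partial-volume fraction -- a toy version of the segmentation/PV-correction idea.
voxel = 0.6 * d + 0.3 * B[:, 0] + 0.1 * B[:, 1]   # mixed voxel, 60% desired feature
print(f"estimated desired-feature fraction: {voxel @ w:.3f}")   # ~0.6
```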

  20. Improved volume-averaged model for steady and pulsed-power electronegative discharges

    SciTech Connect

    Kim, Sungjin; Lieberman, M. A.; Lichtenberg, A. J.; Gudmundsson, J. T.

    2006-11-15

    An improved volume-averaged global model is developed for a cylindrical (radius R, length L) electronegative (EN) plasma that is applicable over a wide range of electron densities, electronegativities, and pressures. It is applied to steady and pulsed-power oxygen discharges. The model incorporates effective volume and surface loss factors for positive ions, negative ions, and electrons, combining three electronegative discharge regimes: a two-region regime with a parabolic EN core surrounded by an electropositive edge, a one-region parabolic EN plasma, and a one-region flat-topped EN plasma, spanning the plasma parameters and gas pressures of interest for low pressure processing (below a few hundred millitorr). Pressure-dependent effective volume and surface loss factors are also used for the neutral species. A set of reaction rate coefficients, updated from previous model calculations, is developed for oxygen for the species O2, O2(1Δg), O, O2+, O+, and O-, based on the latest published cross-section sets and measurements. The model solutions yield all of the quantities above together with such important processing quantities as the neutral/ion flux ratio γ_O/γ_i, with the discharge aspect ratio 2R/L and the pulsed-power period and duty ratio (pulse on-time/pulse period) as parameters. The steady discharge results are compared to an experiment, giving good agreement. For steady discharges, increasing 2R/L from 1 to 6 leads to a factor of 0.45 reduction in γ_O/γ_i. For pulsed discharges with a fixed duty ratio, γ_O/γ_i is found to have a minimum with respect to pulse period. A 25% duty ratio pulse reduces γ_O/γ_i by a factor of 0.75 compared to the steady-state case.

  1. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    NASA Astrophysics Data System (ADS)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that a lidar does not sample at a single, distinct point but along its entire beam length. Especially in regions with large velocity gradients, such as the rotor wake, this can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
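
    A sketch of the kind of sampling such an algorithm mimics: the lidar estimate is a range-weighted average of the line-of-sight velocity along the beam, here using a Lorentzian weight that is a common approximation for continuous-wave lidars. The focus distance, probe-length parameter, and wake-like velocity profile below are made-up examples, not the paper's setup.

```python
import numpy as np

def cw_lidar_sample(u_los, r, focus, zr):
    """Volume-averaged line-of-sight velocity for a CW lidar.

    u_los : callable giving line-of-sight velocity at range r (m/s)
    r     : ranges along the beam (m)
    focus : focus distance (m)
    zr    : probe-length parameter (m); a Lorentzian range weight is used here.
    """
    w = (zr / np.pi) / ((r - focus) ** 2 + zr ** 2)
    w /= np.trapz(w, r)                      # normalize the weighting function
    return np.trapz(w * u_los(r), r)

# Made-up wake-like velocity deficit along the beam.
u_free = 10.0
u_los = lambda r: u_free - 4.0 * np.exp(-((r - 200.0) / 30.0) ** 2)

r = np.linspace(0.0, 500.0, 2001)
point_value = u_los(np.array([200.0]))[0]
lidar_value = cw_lidar_sample(u_los, r, focus=200.0, zr=25.0)
print(f"point velocity at focus : {point_value:.2f} m/s")
print(f"volume-averaged estimate: {lidar_value:.2f} m/s")   # smeared by the probe volume
```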

  2. Effects of an absorbing boundary on the average volume visited by N spherical Brownian particles

    NASA Astrophysics Data System (ADS)

    Larralde, Hernan; M. Berezhkovskii, Alexander; Weiss, George H.

    2003-12-01

    The number of distinct sites visited by a lattice random walk and its continuum analog, the volume swept out by a diffusing spherical particle, are used to model different phenomena in physics, chemistry, and biology. Therefore the problem of finding statistical properties of these random variables is of importance. There have been several studies of the more general problem of the volume of the region explored by N random walks or Brownian particles in an unbounded space. Here we study the effects of a planar absorbing boundary on the average of this volume. The boundary breaks the translational invariance of the space and introduces an additional spatial parameter, the initial distance of the Brownian particles from the surface. We derive expressions for the average volume visited in three dimensions and the average span in one dimension as functions of time for given values of N and the initial distance to the absorbing boundary. The results can be transformed to those for N lattice random walks by appropriately choosing the radius and diffusion constant of the spheres.

  3. The Incremental Validity of Average State Self-Reports Over Global Self-Reports of Personality.

    PubMed

    Finnigan, Katherine M; Vazire, Simine

    2017-03-09

    Personality traits are most often assessed using global self-reports of one's general patterns of thoughts, feelings, and behavior. However, recent theories have challenged the idea that global self-reports are the best way to assess traits. Whole Trait Theory postulates that repeated measures of a person's self-reported personality states (i.e., the average of many state self-reports) can be an alternative and potentially superior way of measuring a person's trait level (Fleeson & Jayawickreme, 2015). Our goal is to examine the validity of average state self-reports of personality for measuring between-person differences in what people are typically like. In order to validate average states as a measure of personality, we examine whether they are incrementally valid in predicting informant reports above and beyond global self-reports. In 2 samples, we find that average state self-reports tend to correlate with informant reports, although this relationship is weaker than the relationship between global self-reports and informant reports. Further, using structural equation modeling, we find that average state self-reports do not significantly predict informant reports independently of global self-reports. Our results suggest that average state self-reports may not contain information about between-person differences in personality traits beyond what is captured by global self-reports, and that average state self-reports may contain more self-bias than is commonly believed. We discuss the implications of these findings for research on daily manifestations of personality and the accuracy of self-reports.

  4. Estimation of the global average temperature with optimally weighted point gauges

    NASA Technical Reports Server (NTRS)

    Hardin, James W.; Upson, Robert B.

    1993-01-01

    This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
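
    The optimal-weighting problem is a constrained least-squares (BLUE-type) problem: minimize the expected squared error of the weighted gauge average against the true global mean, subject to the weights summing to one. The sketch below solves the generic Lagrange-multiplier system with a hypothetical covariance model; it is not the spectral formalism of North et al., and all names and numbers are mine.

```python
import numpy as np

def optimal_gauge_weights(C, c):
    """Minimum-MSE weights for estimating a global mean from N point gauges.

    C : (N, N) covariance matrix between gauge temperatures
    c : (N,)   covariance of each gauge with the true global-average temperature
    Minimizes E[(sum_i w_i T_i - T_global)^2] subject to sum_i w_i = 1,
    via the usual Lagrange-multiplier linear system.
    """
    n = len(c)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(c, 1.0)
    return np.linalg.solve(A, b)[:n]

# Toy example: 5 gauges with exponentially decaying spatial correlation.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 5))              # hypothetical gauge "positions"
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)
c = np.full(5, 0.4)                             # equal correlation with the global mean
print(optimal_gauge_weights(C, c))              # clustered gauges tend to share weight
```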

  5. Human-experienced temperature changes exceed global average climate changes for all income groups

    NASA Astrophysics Data System (ADS)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
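
    The core calculation is a population-weighted (rather than area-weighted) average of a gridded temperature-change field, plus population quantiles of that field. A minimal sketch with hypothetical arrays and function names:

```python
import numpy as np

def experienced_change(delta_T, population, quantiles=(0.5, 0.75, 0.9)):
    """Population-weighted statistics of a gridded temperature-change field.

    delta_T    : 2-D array of projected local changes (degC per grid cell)
    population : 2-D array of people per grid cell (same shape)
    Returns the population-weighted mean change and the changes exceeded by
    the corresponding population fractions (e.g. 0.75 -> value exceeded by 25%).
    """
    dT = delta_T.ravel()
    pop = population.ravel().astype(float)
    mean = np.average(dT, weights=pop)

    order = np.argsort(dT)
    cum = np.cumsum(pop[order]) / pop.sum()     # cumulative population share
    qs = [dT[order][np.searchsorted(cum, q)] for q in quantiles]
    return mean, dict(zip(quantiles, qs))

# Hypothetical 3x3 field: warming is larger where fewer people live.
dT = np.array([[3.5, 2.8, 2.2], [2.4, 2.1, 1.9], [2.0, 1.8, 1.6]])
pop = np.array([[1, 2, 30], [5, 40, 60], [10, 80, 100]])
print(experienced_change(dT, pop))
```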

  6. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    SciTech Connect

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  7. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  8. Predicting Lake Depths from Topography to Map Global Lake Volume

    NASA Astrophysics Data System (ADS)

    Holtzman, N.; Pavelsky, T.

    2016-12-01

    The depth of a lake affects its role in climate and biogeochemical cycling. There is a lack of lake depth data due to the difficulty of measuring bathymetry, which impedes the accurate inclusion of lakes in climate models and the assessment of global water resources and carbon storage. However, lake depths can be estimated from land topography, for which remotely-sensed DEM data is available. We develop a simple statistical model to predict lake depth from two explanatory variables: the mean relief above the lake surface of a buffer surrounding the lake, and whether the lake's location was glaciated in the last ice age. The model is based on 328 lakes with known depths, located on all continents but Antarctica, and has an r2 of 0.57. We then apply this model to a set of over 200,000 lakes from the Global Lakes and Wetlands Database to produce global gridded maps of predicted total lake volume and average depth. The realistic depth estimates provided by our model may improve the accuracy of future studies of climate and water resources.
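
    The statistical model described (depth predicted from the mean relief of a buffer around the lake plus a glaciation indicator, r2 about 0.57) is a simple two-predictor regression. Below is a hedged sketch of that structure with entirely made-up data; the log-transform and coefficient values are my assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

# Hypothetical training data: mean buffer relief (m), glaciated flag, lake depth (m).
rng = np.random.default_rng(0)
relief = rng.uniform(2, 400, 328)
glaciated = rng.integers(0, 2, 328)
depth = np.exp(0.9 * np.log(relief) - 0.4 * glaciated + rng.normal(0, 0.5, 328))

# Two-predictor linear model on log(depth), fitted by least squares.
X = np.column_stack([np.ones_like(relief), np.log(relief), glaciated])
y = np.log(depth)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"coefficients: {beta}, r^2 = {r2:.2f}")

# Apply to a new lake with a DEM-derived buffer relief of 55 m, unglaciated:
print(f"predicted depth ~ {np.exp(beta @ [1.0, np.log(55.0), 0.0]):.1f} m")
```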

  9. Volume averaging: Local and nonlocal closures using a Green’s function approach

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

    Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and posed, whenever possible, in terms of measurable properties of (1) the

  10. A global average model of atmospheric aerosols for radiative transfer calculations

    NASA Technical Reports Server (NTRS)

    Toon, O. B.; Pollack, J. B.

    1976-01-01

    A global average model is proposed for the size distribution, chemical composition, and optical thickness of stratospheric and tropospheric aerosols. This aerosol model is designed to specify the input parameters to global average radiative transfer calculations which assume the atmosphere is horizontally homogeneous. The model subdivides the atmosphere at multiples of 3 km, where the surface layer extends from the ground to 3 km, the upper troposphere from 3 to 12 km, and the stratosphere from 12 to 45 km. A list of assumptions made in construction of the model is presented and discussed along with major model uncertainties. The stratospheric aerosol is modeled as a liquid mixture of 75% H2SO4 and 25% H2O, while the tropospheric aerosol consists of 60% sulfate and 40% soil particles above 3 km and of 50% sulfate, 35% soil particles, and 15% sea salt below 3 km. Implications and consistency of the model are discussed.

  11. Global Average Upper Ocean Temperature Response To Changing Solar Irradiance: Exciting The Internal Decadal Mode

    NASA Astrophysics Data System (ADS)

    White, W. B.; Dettinger, M. D.; Cayan, D. R.; White, Warren B.; Dettinger, Michael D.; Cayan, Daniel R.

    Global average upper ocean temperature anomalies of +/-0.05°K fluctuate in fixed phase with decadal signals in the Sun's irradiance of +/-0.5 Watts m-2 over the past 100 years (White et al., 1997), but their amplitude is 2 to 3 times that expected from the transient Stefan-Boltzmann radiation balance (White et al., 1988). Examining global patterns of upper ocean temperature and lower troposphere winds, we find the internal interannual mode of variability in Earth's ocean-atmosphere-terrestrial system with global-average upper ocean temperature anomalies of +/-0.05°K occurring naturally, independent of changing solar irradiance (White et al., 2000). Yet coherence and phase statistics indicate that the observed internal decadal mode in Earth's ocean-atmosphere-terrestrial system is excited by the decadal signal in the Sun's irradiance. To understand the thermodynamics of this association we conduct a global-average upper ocean heat budget utilizing upper ocean temperatures from the SIO reanalysis and air-sea heat and momentum fluxes from the COADS reanalysis, finding the source of decadal global warming to be the reduction in trade wind intensity across the tropics, decreasing global average latent heat flux out of the ocean. We demonstrate that this reduction in trade wind intensity in the Pacific Ocean is governed by a delayed action oscillator mechanism in the ocean-atmosphere system differing little from that used to explain the El Niño-Southern Oscillation (Graham and White, 1988). We operate an intermediate coupled model of this delayed action oscillator, normally driven by white noise, by superimposing the Stefan-Boltzmann upper ocean temperature response to decadal changes in the Sun's irradiance. We find the latter, with weak amplitude of +/-0.02°K and non-random phase, is able to excite a decadal signal in this delayed action oscillator, yielding a damped resonance response of +/-0.1°K in the equatorial Pacific Ocean, with dissipation provided by

  12. Measurement of average density and relative volumes in a dispersed two-phase fluid

    SciTech Connect

    Sreepada, S.R.; Rippel, R.R.

    1990-12-19

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  13. Measurement of average density and relative volumes in a dispersed two-phase fluid

    DOEpatents

    Sreepada, Sastry R.; Rippel, Robert R.

    1992-01-01

    An apparatus and a method are disclosed for measuring the average density and relative volumes in an essentially transparent, dispersed two-phase fluid. A laser beam with a diameter no greater than 1% of the diameter of the bubbles, droplets, or particles of the dispersed phase is directed onto a diffraction grating. A single-order component of the diffracted beam is directed through the two-phase fluid and its refraction is measured. Preferably, the refracted beam exiting the fluid is incident upon an optical filter with linearly varying optical density and the intensity of the filtered beam is measured. The invention can be combined with other laser-based measurement systems, e.g., laser Doppler anemometry.

  14. The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin

    2016-06-01

    On the basis of the average volume of the Barkhausen jump (AVBJ) v̄ generated by irreversible displacement of magnetic domain walls under the effect of the excitation magnetic field on ferromagnetic materials, the functional relationship between saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating the AVBJ v̄, stress σ, excitation magnetic field H and temperature T. The dependence of the AVBJ v̄ on temperature T is then examined using this expression. Moreover, tensile and compressive stress experiments are carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature compensation problem of the Barkhausen testing method.

  15. Average volume of the domain visited by randomly injected spherical Brownian particles in d dimensions

    NASA Astrophysics Data System (ADS)

    Berezhkovskii, Alexander M.; Weiss, George H.

    1996-07-01

    In order to extend the greatly simplified Smoluchowski model for chemical reaction rates it is necessary to incorporate many-body effects. A generalization with this feature is the so-called trapping model, in which random walkers move among a uniformly distributed set of traps. The solution of this model requires consideration of the number of distinct sites visited by a single n-step random walk. A recent analysis [H. Larralde et al., Phys. Rev. A 45, 1728 (1992)] has considered a generalized version of this problem by calculating the average number of distinct sites visited by N n-step random walks. A related continuum analysis is given in [A. M. Berezhkovskii, J. Stat. Phys. 76, 1089 (1994)]. We consider a slightly different version of the general problem by calculating the average volume of the Wiener sausage generated by Brownian particles injected randomly in time. The analysis shows that two types of behavior are possible: one in which there is strong overlap between the Wiener sausages of the particles, and a second in which the particles are mainly independent of one another. Either one or both of these regimes occur, depending on the dimension.

  16. Subnational distribution of average farm size and smallholder contributions to global food production

    NASA Astrophysics Data System (ADS)

    Samberg, Leah H.; Gerber, James S.; Ramankutty, Navin; Herrero, Mario; West, Paul C.

    2016-12-01

    Smallholder farming is the most prevalent form of agriculture in the world, supports many of the planet’s most vulnerable populations, and coexists with some of its most diverse and threatened landscapes. However, there is little information about the location of small farms, making it difficult both to estimate their numbers and to implement effective agricultural, development, and land use policies. Here, we present a map of mean agricultural area, classified by the amount of land per farming household, at subnational resolutions across three key global regions using a novel integration of household microdata and agricultural landscape data. This approach provides a subnational estimate of the number, average size, and contribution of farms across much of the developing world. By our estimates, 918 subnational units in 83 countries in Latin America, sub-Saharan Africa, and South and East Asia average less than five hectares of agricultural land per farming household. These smallholder-dominated systems are home to more than 380 million farming households, make up roughly 30% of the agricultural land and produce more than 70% of the food calories produced in these regions, and are responsible for more than half of the food calories produced globally, as well as more than half of global production of several major food crops. Smallholder systems in these three regions direct a greater percentage of calories produced toward direct human consumption, with 70% of calories produced in these units consumed as food, compared to 55% globally. Our approach provides the ability to disaggregate farming populations from non-farming populations, providing a more accurate picture of farming households on the landscape than has previously been available. These data meet a critical need, as improved understanding of the prevalence and distribution of smallholder farming is essential for effective policy development for food security, poverty reduction, and conservation agendas.

  17. The Global 2000 Report to the President. Volume Three. Documentation on the Government's Global Sectoral Models: The Government's "Global Model."

    ERIC Educational Resources Information Center

    Barney, Gerald O., Ed.

    The third volume of the Global 2000 study presents basic information ("documentation") on the long-term sectoral models used by the U.S. government to project global trends in population, resources, and the environment. Its threefold purposes are: (1) to present all this basic information in a single volume, (2) to provide an…

  18. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
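
    As a concrete illustration of moment-balancing calibration weights, the sketch below computes exponential-tilting weights for the control group so that its weighted covariate means match a target (e.g. the combined-sample means), one of the special cases mentioned above. It is a toy version solved through the convex dual, not the authors' estimator, and the function and variable names are mine.

```python
import numpy as np
from scipy.optimize import minimize

def exponential_tilting_weights(X_control, target_means):
    """Calibration weights w_i proportional to exp(lambda' x_i) for the control
    units, chosen so the weighted covariate means equal `target_means`.
    Solved by minimizing the convex dual log-mean-exp objective."""
    Xc = X_control - target_means          # center covariates at the target
    def dual(lam):
        return np.log(np.mean(np.exp(Xc @ lam)))
    res = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()

# Toy data: control covariates drawn from a shifted distribution.
rng = np.random.default_rng(0)
X_control = rng.normal(loc=[0.3, -0.2], scale=1.0, size=(500, 2))
target = np.array([0.0, 0.0])              # pretend combined-sample means
w = exponential_tilting_weights(X_control, target)
print("weighted control means  :", w @ X_control)       # ~ target
print("unweighted control means:", X_control.mean(axis=0))
```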

  19. Average synaptic activity and neural networks topology: a global inverse problem

    PubMed Central

    Burioni, Raffaella; Casartelli, Mario; di Volo, Matteo; Livi, Roberto; Vezzani, Alessandro

    2014-01-01

    The dynamics of neural networks is often characterized by collective behavior and quasi-synchronous events, where a large fraction of neurons fire in short time intervals, separated by uncorrelated firing activity. These global temporal signals are crucial for brain functioning. They strongly depend on the topology of the network and on the fluctuations of the connectivity. We propose a heterogeneous mean-field approach to neural dynamics on random networks that explicitly preserves the disorder in the topology at growing network sizes and leads to a set of self-consistent equations. Within this approach, we provide an effective description of microscopic and large scale temporal signals in a leaky integrate-and-fire model with short term plasticity, where quasi-synchronous events arise. Our equations provide a clear analytical picture of the dynamics, evidencing the contributions of both periodic (locked) and aperiodic (unlocked) neurons to the measurable average signal. In particular, we formulate and solve a global inverse problem of reconstructing the in-degree distribution from the knowledge of the average activity field. Our method is very general and applies to a large class of dynamical models on dense random networks. PMID:24613973

  20. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.

  1. Average volume-assured pressure support in obesity hypoventilation: A randomized crossover trial.

    PubMed

    Storre, Jan Hendrik; Seuthe, Benjamin; Fiechter, René; Milioglou, Stavroula; Dreher, Michael; Sorichter, Stephan; Windisch, Wolfram

    2006-09-01

    Average volume-assured pressure support (AVAPS) has been introduced as a new additional mode for a bilevel pressure ventilation (BPV) device (BiPAP; Respironics; Murrysville, PA), but studies on the physiologic and clinical effects have not yet been performed. There is a particular need to better define the most efficient ventilatory treatment modality for patients with obesity hypoventilation syndrome (OHS). In OHS patients who did not respond to therapy with continuous positive airway pressure, the effects of BPV with the spontaneous/timed (S/T) ventilation mode with and without AVAPS over 6 weeks on ventilation pattern, gas exchange, sleep quality, and health-related quality of life (HRQL) assessed by the severe respiratory insufficiency questionnaire (SRI) were prospectively investigated in a randomized crossover trial. Ten patients (mean [+/- SD] age, 53.5 +/- 11.7 years; mean body mass index, 41.6 +/- 12.1 kg/m2; mean FEV1/FVC ratio, 79.4 +/- 6.5%; mean transcutaneous P(CO2) [PtcCO2], 58 +/- 12 mm Hg) were studied. PtcCO2 nonsignificantly decreased during nocturnal BPV-S/T by -5.6 +/- 11.8 mm Hg (95% confidence interval [CI], -14.7 to 3.4 mm Hg; p = 0.188), but significantly decreased during BPV-S/T-AVAPS by -12.6 +/- 12.2 mm Hg (95% CI, -22.0 to -3.2 mm Hg; p = 0.015). Pneumotachographic measurements revealed a higher individual variance of peak inspiratory pressure (p < 0.001) and a trend for lower leak volumes but also for higher tidal volumes during BPV-S/T-AVAPS. The SRI summary scale score improved from 63 +/- 15 to 78 +/- 14 during BPV-S/T (p = 0.004) and to 76 +/- 16 during BPV-S/T-AVAPS (p = 0.014). Sleep quality and oxygen saturation also comparably improved following BPV-S/T and BPV-S/T-AVAPS. BPV-S/T substantially improved oxygenation, sleep quality, and HRQL in patients with OHS. AVAPS provided additional benefits on ventilation quality, thus resulting in a more efficient decrease of PtcCO2. However, this did not provide further clinical benefits

  2. Introduction to "Global Tsunami Science: Past and Future, Volume II"

    NASA Astrophysics Data System (ADS)

    Rabinovich, Alexander B.; Fritz, Hermann M.; Tanioka, Yuichiro; Geist, Eric L.

    2017-08-01

    Twenty-two papers on the study of tsunamis are included in Volume II of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Volume I of this topical issue was published as PAGEOPH, vol. 173, No. 12, 2016 (Eds., E. L. Geist, H. M. Fritz, A. B. Rabinovich, and Y. Tanioka). Three papers in Volume II focus on details of the 2011 and 2016 tsunami-generating earthquakes offshore of Tohoku, Japan. The next six papers describe important case studies and observations of recent and historical events. Four papers related to tsunami hazard assessment are followed by three papers on tsunami hydrodynamics and numerical modelling. Three papers discuss problems of tsunami warning and real-time forecasting. The final set of three papers importantly investigates tsunamis generated by non-seismic sources: volcanic explosions, landslides, and meteorological disturbances. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.

  3. A temperature-based model for estimating monthly average daily global solar radiation in China.

    PubMed

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on the Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated against measured data at 65 meteorological stations in China and compared with the HS model and its two modifications (the Samani model and the Chen model). Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas of China where only air temperature data are available.
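
    For orientation, the minimal sketch below implements the classic Hargreaves-Samani form on which the proposed model builds; the coefficient k_rs and the extraterrestrial radiation input are assumptions, and the paper's new formulation is not reproduced here.

```python
import numpy as np

def hargreaves_samani_radiation(t_max, t_min, ra, k_rs=0.17):
    """Monthly average daily global radiation from the air temperature range.

    Classic Hargreaves-Samani form: Rs = k_rs * sqrt(Tmax - Tmin) * Ra,
    with Rs and Ra (extraterrestrial radiation) in the same units
    (e.g. MJ m-2 day-1). k_rs ~ 0.16-0.19 is an empirical coefficient;
    the paper's modified formulation is not reproduced here.
    """
    t_max, t_min, ra = map(np.asarray, (t_max, t_min, ra))
    return k_rs * np.sqrt(np.clip(t_max - t_min, 0.0, None)) * ra

# Illustrative monthly values for a hypothetical station.
print(hargreaves_samani_radiation(t_max=28.0, t_min=16.0, ra=38.0))
```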

  4. Predicting Climate Change Using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Ragone, Francesco; Lunkeit, Frank

    2017-02-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source general circulation model of the atmosphere featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Response theory allows one to practically compute the time-dependent measure supported on the pullback attractor of the climate system, whose dynamics is non-autonomous as a result of time-dependent forcings. We propose a simple yet efficient method for predicting—at any lead time and in an ensemble sense—the change in climate properties resulting from increase in the concentration of CO_2 using test perturbation model runs. We assess strengths and limitations of the response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as in their spatial patterns. The quality of the predictions obtained for the surface temperature fields is rather good, while in the case of precipitation a good skill is observed only for the global average. We also show how it is possible to define accurately concepts like the inertia of the climate system or to predict when climate change is detectable given a scenario of forcing. Our analysis can be extended for dealing with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of the subsequent reports of the Intergovernmental Panel for Climate Change.
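
    The core prediction step described above can be illustrated schematically: estimate a Green's function from test perturbation runs and convolve it with a forcing scenario. In the sketch below the exponential Green's function and all numbers are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

# Linear response: dT(t) = integral_0^t G(t - s) f(s) ds.
# In the paper G is estimated from ensemble test-perturbation runs; here we
# simply assume an illustrative exponential Green's function.
dt = 1.0                                    # years
t = np.arange(0, 200, dt)
G = (0.8 / 8.0) * np.exp(-t / 8.0)          # K per (W m-2 yr), assumed shape

def predict_response(forcing, green, dt):
    """Convolve a forcing time series with the Green's function."""
    return np.convolve(forcing, green)[: len(forcing)] * dt

# Scenario: forcing ramps linearly for 70 years, then stays constant.
forcing = np.minimum(t / 70.0, 1.0) * 3.7   # W m-2, illustrative CO2-like ramp
delta_T = predict_response(forcing, G, dt)
print(round(delta_T[-1], 2), "K at year", int(t[-1]))
```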

  5. The dependence on solar elevation of the correlation between monthly average hourly diffuse and global radiation

    SciTech Connect

    Soler, A.

    1988-01-01

    In the present work, the dependence on γ of the correlation between Kd = Id/Io and Kt = I/Io is studied, where I, Id, and Io are respectively the monthly average hourly values of the global, diffuse, and extraterrestrial radiation, all of them on a horizontal surface, and γ is the solar elevation at midhour. The dependence is studied for Uccle for the following sky conditions. Condition A: clear skies (fraction of possible sunshine = 1) and the maximum values of direct radiation measured during the period considered (each of the hours before or after solar noon for which radiation is received); Condition B: corresponding to all the values of radiation measured when the sunshine fraction is 1 during the period considered; Condition C: corresponding to all the data collected, independently of the state of the sky; Condition D: corresponding to overcast skies (I = Id). From the available values of I and Id (monthly average hourly direct radiation on a horizontal surface), values of Kd and Kt for 5° ≤ γ ≤ 45° in steps of Δγ = 5° are calculated using Newton's divided difference interpolation formula.

  6. Global Symmetries, Volume Independence, and Continuity in Quantum Field Theories.

    PubMed

    Sulejmanpasic, Tin

    2017-01-06

    We discuss quantum field theories with global SU(N) and O(N) symmetries for which the temporal direction is compactified on a circle of size L with periodicity of fields up to a global symmetry transformation, i.e., twisted boundary conditions. Such boundary conditions correspond to an insertion of the global symmetry operator in the partition function. We argue in general, and prove in particular for the CP(N-1) and O(N) nonlinear sigma models, that large-N volume independence holds. Further, we show that the CP(N-1) theory is free from the Affleck phase transition, confirming the Ünsal-Dunne continuity conjecture.

  7. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    ERIC Educational Resources Information Center

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average, and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…

  8. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    The use of individual Global Navigation Satellite Systems (GPS, GLONASS, Galileo, and Beidou/COMPASS) for position, navigation, and timing in the Space Service Volume at altitudes of 300 km, 3,000 km, 8,000 km, 15,000 km, 25,000 km, 36,500 km and 70,000 km is examined, and the percent availability of at least one and at least four satellites is presented.

  9. Biotechnology in a global economy. Volume 2. Part 1

    SciTech Connect

    Not Available

    1991-06-01

    Volume 2, Part 1 of Biotechnology in a Global Economy is comprised of various papers relating to the biotechnology industry and its level of development in various countries. Major topics discussed include current status of the industry in these countries, financing sources, future strategies, special projects being pursued, and technology transfer.

  10. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    NASA Astrophysics Data System (ADS)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

    In this paper, a new car-following model is proposed by considering the global average optimal velocity difference effect on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by making use of linear stability analysis. It indicates that the stable region will be enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, are derived from nonlinear analysis. Furthermore, numerical simulations confirm that the effect of the global average optimal velocity difference can efficiently improve the stability of traffic flow, which shows that our new consideration should be taken into account to suppress traffic congestion in car-following theory.

  11. Individual Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) to terrestrial users, GPS is currently used to provide for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Beidou, and Galileo), it will be possible to provide these services by using other GNSS constellations. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume and analyzed the performance of GPS out to 70,000 km. This paper will report a similar analysis of the performance of each of the additional GNSS and compare them with GPS alone. The Space Service Volume is defined as the volume between 3,000 km altitude and geosynchronous altitude, as compared with the Terrestrial Service Volume between the surface and 3,000 km. In the Terrestrial Service Volume, GNSS performance will be similar to performance on the Earth's surface. The GPS system has established signal requirements for the Space Service Volume. A separate paper presented at the conference covers the use of multiple GNSS in the Space Service Volume.

  12. The intrinsic dependence structure of peak, volume, duration, and average intensity of hyetographs and hydrographs.

    PubMed

    Serinaldi, Francesco; Kilsby, Chris G

    2013-06-01

    The information contained in hyetographs and hydrographs is often synthesized by using key properties such as the peak or maximum value Xp, volume V, duration D, and average intensity I. These variables play a fundamental role in hydrologic engineering as they are used, for instance, to define design hyetographs and hydrographs as well as to model and simulate the rainfall and streamflow processes. Given their inherent variability and the empirical evidence of the presence of a significant degree of association, such quantities have been studied as correlated random variables suitable to be modeled by multivariate joint distribution functions. The advent of copulas in geosciences simplified the inference procedures allowing for splitting the analysis of the marginal distributions and the study of the so-called dependence structure or copula. However, the attention paid to the modeling task has overlooked a more thorough study of the true nature and origin of the relationships that link Xp, V, D, and I. In this study, we apply a set of ad hoc bootstrap algorithms to investigate these aspects by analyzing the hyetographs and hydrographs extracted from 282 daily rainfall series from central eastern Europe, three 5 min rainfall series from central Italy, 80 daily streamflow series from the continental United States, and two sets of 200 simulated universal multifractal time series. Our results show that all the pairwise dependence structures between Xp, V, D, and I exhibit some key properties that can be reproduced by simple bootstrap algorithms that rely on a standard univariate resampling without resort to multivariate techniques. Therefore, the strong similarities between the observed dependence structures and the agreement between the observed and bootstrap samples suggest the existence of a numerical generating mechanism based on the superposition of the effects of sampling data at finite time steps and the process of summing realizations
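
    The following sketch illustrates, on synthetic events, the kind of univariate within-event resampling the authors describe: the summary statistics Xp, V, D and I are recomputed after resampling ordinates independently within each event, and the resulting pairwise dependence is compared with the observed one. Event generation and sample sizes are purely illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def event_stats(event):
    """Peak, volume, duration and average intensity of one hyetograph."""
    xp, v, d = event.max(), event.sum(), len(event)
    return xp, v, d, v / d

# Synthetic events: random duration, exponentially distributed ordinates.
events = [rng.exponential(5.0, size=rng.integers(2, 48)) for _ in range(300)]
obs = np.array([event_stats(e) for e in events])

# Univariate bootstrap: resample ordinates within each event independently
# (no multivariate machinery), then recompute the same summary statistics.
boot = np.array([event_stats(rng.choice(e, size=len(e), replace=True))
                 for e in events])

# Compare the peak-volume dependence in observed vs bootstrap samples.
print("observed  Xp-V Spearman:", round(spearmanr(obs[:, 0], obs[:, 1])[0], 2))
print("bootstrap Xp-V Spearman:", round(spearmanr(boot[:, 0], boot[:, 1])[0], 2))
```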

  13. Average Spatial Distribution of Cosmic Rays behind the Interplanetary Shock—Global Muon Detector Network Observations

    NASA Astrophysics Data System (ADS)

    Kozai, M.; Munakata, K.; Kato, C.; Kuwabara, T.; Rockenbach, M.; Dal Lago, A.; Schuch, N. J.; Braga, C. R.; Mendonça, R. R. S.; Jassar, H. K. Al; Sharma, M. M.; Duldig, M. L.; Humble, J. E.; Evenson, P.; Sabbah, I.; Tokumaru, M.

    2016-07-01

    We analyze the galactic cosmic ray (GCR) density and its spatial gradient in Forbush Decreases (FDs) observed with the Global Muon Detector Network (GMDN) and neutron monitors (NMs). By superposing the GCR density and density gradient observed in FDs following 45 interplanetary shocks (IP-shocks), each associated with an identified eruption on the Sun, we infer the average spatial distribution of GCRs behind IP-shocks. We find two distinct modulations of GCR density in FDs, one in the magnetic sheath and the other in the coronal mass ejection (CME) behind the sheath. The density modulation in the sheath is dominant in the western flank of the shock, while the modulation in the CME ejecta stands out in the eastern flank. This east-west asymmetry is more prominent in GMDN data responding to ˜60 GV GCRs than in NM data responding to ˜10 GV GCRs, because of the softer rigidity spectrum of the modulation in the CME ejecta than in the sheath. The geocentric solar ecliptic-y component of the density gradient, G y , shows a negative (positive) enhancement in FDs caused by the eastern (western) eruptions, while G z shows a negative (positive) enhancement in FDs caused by the northern (southern) eruptions. This implies that the GCR density minimum is located behind the central flank of IP-shocks and propagating radially outward from the location of the solar eruption. We also confirmed that the average G z changes its sign above and below the heliospheric current sheet, in accord with the prediction of the drift model for the large-scale GCR transport in the heliosphere.

  14. Local and Global Illumination in the Volume Rendering Integral

    SciTech Connect

    Max, N; Chen, M

    2005-10-21

    This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1], and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is treated as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of the spectral power distribution. This survey can provide a basis for, and encourage, new efforts for developing and using complex illumination models to achieve better realism and perception through optical correctness.
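
    For reference, the sketch below evaluates the emission-absorption volume rendering integral by front-to-back compositing along a single ray, the basic operation discussed in the survey; the sample arrays and step size are assumed inputs.

```python
import numpy as np

def composite_ray(colors, densities, step):
    """Front-to-back compositing of the emission-absorption volume
    rendering integral along one ray.

    colors    : (n, 3) emitted colour at each sample
    densities : (n,) extinction coefficient at each sample
    step      : sample spacing along the ray
    """
    accumulated = np.zeros(3)
    transmittance = 1.0
    for c, sigma in zip(colors, densities):
        alpha = 1.0 - np.exp(-sigma * step)       # opacity of this segment
        accumulated += transmittance * alpha * c  # attenuated emission
        transmittance *= 1.0 - alpha              # light surviving so far
        if transmittance < 1e-4:                  # early ray termination
            break
    return accumulated

# One ray through a small homogeneous red volume.
n = 64
print(composite_ray(np.tile([1.0, 0.0, 0.0], (n, 1)), np.full(n, 0.05), 1.0))
```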

  15. A model ensemble for explaining the seasonal cycle of globally averaged atmospheric carbon dioxide concentration

    NASA Astrophysics Data System (ADS)

    Alexandrov, Georgii; Eliseev, Alexey

    2015-04-01

    The seasonal cycle of the globally averaged atmospheric carbon dioxide concentrations results from the seasonal changes in the gas exchange between the atmosphere and other carbon pools. Terrestrial pools are the most important. Boreal and temperate ecosystems provide a sink for carbon dioxide only during the warm period of the year, and, therefore, the summertime reduction in the atmospheric carbon dioxide concentration is usually explained by the seasonal changes in the magnitude of terrestrial carbon sink. Although this explanation seems almost obvious, it is surprisingly difficult to support it by calculations of the seasonal changes in the strength of the sink provided by boreal and temperate ecosystems. The traditional conceptual framework for modelling net ecosystem exchange (NEE) leads to the estimates of the NEE seasonal cycle amplitude which are too low for explaining the amplitude of the seasonal cycle of the atmospheric carbon dioxide concentration. To propose a more suitable conceptual framework we develop a model ensemble that consists of nine structurally different models and covers various approaches to modelling gross primary production and heterotrophic respiration, including the effects of light saturation, limited light use efficiency, limited water use efficiency, substrate limitation and microbiological priming. The use of model ensembles is a well recognized methodology for evaluating structural uncertainty of model-based predictions. In this study we use this methodology for exploratory modelling analysis - that is, to identify the mechanisms that cause the observed amplitude of the seasonal cycle of the atmospheric carbon dioxide concentration and its slow but steady growth.

  16. Exploring Granger causality between global average observed time series of carbon dioxide and temperature

    SciTech Connect

    Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R

    2010-01-01

    Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
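
    As a minimal illustration of the mechanics (not the paper's reverse cumulative variant), the sketch below runs a standard bivariate Granger test with statsmodels on differenced toy series; the simulated series, lag choice and column names are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Toy annual series standing in for globally averaged temperature (gt) and a
# radiative-forcing proxy derived from CO2 (rc); real data would be loaded
# from observational records.
rng = np.random.default_rng(0)
n = 120
rc = np.cumsum(rng.normal(0.02, 0.05, n))          # slowly rising forcing proxy
gt = 0.5 * np.roll(rc, 2) + rng.normal(0, 0.1, n)  # lagged dependence + noise
df = pd.DataFrame({"gt": gt, "rc": rc}).diff().dropna()  # difference to reduce trends

# Tests whether the second column (rc) Granger-causes the first (gt).
results = grangercausalitytests(df[["gt", "rc"]], maxlag=3)
```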

  17. Predicting Climate Change using Response Theory: Global Averages and Spatial Patterns

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Lunkeit, Frank; Ragone, Francesco

    2016-04-01

    The provision of accurate methods for predicting the climate response to anthropogenic and natural forcings is a key contemporary scientific challenge. Using a simplified and efficient open-source climate model featuring O(10^5) degrees of freedom, we show how it is possible to approach such a problem using nonequilibrium statistical mechanics. Using the theoretical framework of the pullback attractor and the tools of response theory we propose a simple yet efficient method for predicting - at any lead time and in an ensemble sense - the change in climate properties resulting from increase in the concentration of CO2 using test perturbation model runs. We assess strengths and limitations of the response theory in predicting the changes in the globally averaged values of surface temperature and of the yearly total precipitation, as well as their spatial patterns. We also show how it is possible to define accurately concepts like the inertia of the climate system or to predict when climate change is detectable given a scenario of forcing. Our analysis can be extended for dealing with more complex portfolios of forcings and can be adapted to treat, in principle, any climate observable. Our conclusion is that climate change is indeed a problem that can be effectively seen through a statistical mechanical lens, and that there is great potential for optimizing the current coordinated modelling exercises run for the preparation of the subsequent reports of the Intergovernmental Panel for Climate Change.

  18. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
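
    A minimal sketch of this kind of analysis is shown below: an ARIMA model with exogenous meteorological predictors is fit to a training window and evaluated on a hold-out period with mean absolute percentage error. The simulated data, column names and model order are assumptions, not the study's fitted models.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly frame with ED visit counts and candidate predictors;
# in practice this would come from hospital and weather records.
rng = np.random.default_rng(0)
idx = pd.date_range("2005-01-01", periods=57, freq="MS")
df = pd.DataFrame({
    "visits": 3000 + 50 * np.arange(57) + rng.normal(0, 100, 57),
    "max_temp": 25 + 8 * np.sin(np.arange(57) * 2 * np.pi / 12),
    "humidity": 70 + rng.normal(0, 5, 57),
}, index=idx)

train, test = df.iloc[:48], df.iloc[48:]
model = ARIMA(train["visits"], exog=train[["max_temp", "humidity"]],
              order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test), exog=test[["max_temp", "humidity"]])
mape = np.mean(np.abs((test["visits"].values - forecast.values)
                      / test["visits"].values)) * 100
print(f"MAPE: {mape:.1f}%")
```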

  19. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) services to traditional terrestrial and airborne users, GPS is also being increasingly used as a tool to enable precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite System (GNSS) constellations being replenished and coming into service (GLONASS, Beidou, and Galileo), it will become possible to benefit from greater signal availability and robustness by using evolving multi-constellation receivers. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to seventy thousand kilometers. This paper will report a similar analysis of the signal coverage of GPS in the space domain; however, the analyses will also consider signal coverage from each of the additional GNSS constellations noted earlier to specifically demonstrate the expected benefits to be derived from using GPS in conjunction with other foreign systems. The Space Service Volume is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude circa 36,000 km, as compared with the Terrestrial Service Volume between 3,000 km and the surface of the Earth. In the Terrestrial Service Volume, GNSS performance is the same as on or near the Earth's surface due to satellite vehicle availability and geometry similarities. The core GPS system has thereby established signal requirements for the Space Service Volume as part of technical Capability Development Documentation (CDD) that specifies system performance. Besides the technical discussion, we also present diplomatic efforts to extend the GPS Space Service Volume concept to other PNT service providers in an effort to assure that all space users will benefit from the enhanced

  20. Introduction to "Global Tsunami Science: Past and Future, Volume I"

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Fritz, Hermann M.; Rabinovich, Alexander B.; Tanioka, Yuichiro

    2016-12-01

    Twenty-five papers on the study of tsunamis are included in Volume I of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Six papers examine various aspects of tsunami probability and uncertainty analysis related to hazard assessment. Three papers relate to deterministic hazard and risk assessment. Five more papers present new methods for tsunami warning and detection. Six papers describe new methods for modeling tsunami hydrodynamics. Two papers investigate tsunamis generated by non-seismic sources: landslides and meteorological disturbances. The final three papers describe important case studies of recent and historical events. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.

  1. Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) Cost-Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2008-01-01

    Lightning Launch Commit Criteria (LLCC) are designed to prevent space launch vehicles from flight through environments conducive to natural or triggered lightning and are used for all U.S. government and commercial launches at government and civilian ranges. They are maintained by a committee known as the NASA/USAF Lightning Advisory Panel (LAP). The previous LLCC for anvil cloud, meant to avoid triggered lightning, have been shown to be overly restrictive. Some of these rules have had such high safety margins that they prohibited flight under conditions that are now thought to be safe 90% of the time, leading to costly launch delays and scrubs. The LLCC for anvil clouds was upgraded in the summer of 2005 to incorporate results from the Airborne Field Mill (ABFM) experiment at the Eastern Range (ER). Numerous combinations of parameters were considered to develop the best correlation of operational weather observations to in-cloud electric fields capable of rocket triggered lightning in anvil clouds. The Volume Averaged Height Integrated Radar Reflectivity (VAHIRR) was the best metric found. Dr. Harry Koons of Aerospace Corporation conducted a risk analysis of the VAHIRR product. The results indicated that the LLCC based on the VAHIRR product would pose a negligible risk of flying through hazardous electric fields. Based on these findings, the Kennedy Space Center Weather Office is considering seeking funding for development of an automated VAHIRR algorithm for the new ER 45th Weather Squadron (45 WS) RadTec 431250 weather radar and Weather Surveillance Radar-1988 Doppler (WSR-88D) radars. Before developing an automated algorithm, the Applied Meteorology Unit (AMU) was tasked to determine the frequency with which VAHIRR would have allowed a launch to safely proceed during weather conditions otherwise deemed "red" by the Launch Weather Officer. To do this, the AMU manually calculated VAHIRR values based on candidate cases from past launches with known anvil cloud

  2. Grade Point Average and Student Outcomes. Data Notes. Volume 5, Number 1, January/February 2010

    ERIC Educational Resources Information Center

    Clery, Sue; Topper, Amy

    2010-01-01

    Using data from Achieving the Dream: Community College Count, this issue of Data Notes investigates the academic achievement patterns of students attending Achieving the Dream colleges. The data show that 21 percent of students at Achieving the Dream colleges had grade point averages (GPAs) of 3.50 or higher at the end of their first year. At…

  3. Quantitative analysis of molecular surfaces: areas, volumes, electrostatic potentials and average local ionization energies.

    PubMed

    Bulat, Felipe A; Toro-Labbé, Alejandro; Brinck, Tore; Murray, Jane S; Politzer, Peter

    2010-11-01

    We describe a procedure for performing quantitative analyses of fields f(r) on molecular surfaces, including computing statistical quantities and locating and evaluating their local extrema. Our approach avoids the need for an explicit mathematical representation of the surface and can be implemented easily in existing graphical software, as it is based on the very popular representation of a surface as a collection of polygons. We discuss applications involving the volumes, surface areas, molecular surface electrostatic potentials, and local ionization energies of a group of 11 molecules.
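
    The sketch below shows the kind of polygon-based computation described: area-weighted statistics and extrema of a field sampled on a triangulated surface. The mesh and field values are toy inputs, and the authors' actual software and surface definition are not reproduced.

```python
import numpy as np

def surface_statistics(vertices, triangles, field):
    """Area, area-weighted mean/variance and extrema of a field f(r)
    sampled at the vertices of a triangulated molecular surface."""
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    tri_area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri_field = field[triangles].mean(axis=1)    # face value = vertex average
    area = tri_area.sum()
    mean = np.sum(tri_area * tri_field) / area
    var = np.sum(tri_area * (tri_field - mean) ** 2) / area
    return {"area": area, "mean": mean, "variance": var,
            "min": field.min(), "max": field.max()}

# Tiny illustrative mesh: one square split into two triangles.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
esp = np.array([-0.02, 0.01, 0.03, -0.01])       # e.g. electrostatic potential
print(surface_statistics(verts, tris, esp))
```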

  4. The global volume and distribution of modern groundwater

    NASA Astrophysics Data System (ADS)

    Gleeson, Tom; Befus, Kevin; Jasechko, Scott; Luijendijk, Elco; Cardenas, Bayani

    2017-04-01

    Groundwater is important for energy and food security, human health and ecosystems. The time since groundwater was recharged - or groundwater age - can be important for diverse geologic processes such as chemical weathering, ocean eutrophication and climate change. However, measured groundwater ages range from months to millions of years. The global volume and distribution of groundwater less than 50 years old - modern groundwater that is the most recently recharged and also the most vulnerable to global change - are unknown. Here we combine geochemical, geological, hydrologic and geospatial datasets with numerical simulations of groundwater flow and analyze tritium ages to show that less than 6% of the groundwater in the uppermost portion of Earth's landmass is modern. We find that the total groundwater volume in the upper 2 km of continental crust is approximately 22.6 million km3, of which 0.1 to 5.0 million km3 is less than 50 years old. Although modern groundwater represents a small percentage of the total groundwater on Earth, the volume of modern groundwater is equivalent to a body of water with a depth of about 3 m spread over the continents. This water resource dwarfs all other components of the active hydrologic cycle.

  5. The global volume and distribution of modern groundwater

    NASA Astrophysics Data System (ADS)

    Gleeson, Tom; Befus, Kevin M.; Jasechko, Scott; Luijendijk, Elco; Cardenas, M. Bayani

    2016-02-01

    Groundwater is important for energy and food security, human health and ecosystems. The time since groundwater was recharged--or groundwater age--can be important for diverse geologic processes, such as chemical weathering, ocean eutrophication and climate change. However, measured groundwater ages range from months to millions of years. The global volume and distribution of groundwater less than 50 years old--modern groundwater that is the most recently recharged and also the most vulnerable to global change--are unknown. Here we combine geochemical, geologic, hydrologic and geospatial data sets with numerical simulations of groundwater and analyse tritium ages to show that less than 6% of the groundwater in the uppermost portion of Earth’s landmass is modern. We find that the total groundwater volume in the upper 2 km of continental crust is approximately 22.6 million km3, of which 0.1-5.0 million km3 is less than 50 years old. Although modern groundwater represents a small percentage of the total groundwater on Earth, the volume of modern groundwater is equivalent to a body of water with a depth of about 3 m spread over the continents. This water resource dwarfs all other components of the active hydrologic cycle.

  6. The Spitzer Local Volume Legacy (LVL) global optical photometry

    NASA Astrophysics Data System (ADS)

    Cook, David O.; Dale, Daniel A.; Johnson, Benjamin D.; Van Zee, Liese; Lee, Janice C.; Kennicutt, Robert C.; Calzetti, Daniela; Staudaher, Shawn M.; Engelbracht, Charles W.

    2014-11-01

    We present the global optical photometry of 246 galaxies in the Local Volume Legacy (LVL) survey. The full volume-limited sample consists of 258 nearby (D < 11 Mpc) galaxies whose absolute B-band magnitude span a range of -9.6 < MB < -20.7 mag. A composite optical (UBVR) data set is constructed from observed UBVR and Sloan Digital Sky Survey ugriz imaging, where the ugriz magnitudes are transformed into UBVR. We present photometry within three galaxy apertures defined at UV, optical, and IR wavelengths. Flux comparisons between these apertures reveal that the traditional optical R25 galaxy apertures do not fully encompass extended sources. Using the larger IR apertures, we find colour-colour relationships where later type spiral and irregular galaxies tend to be bluer than earlier type galaxies. These data provide the missing optical emission from which future LVL studies can construct the full panchromatic (UV-optical-IR) spectral energy distributions.

  7. Volume-Averaged Model of Inductively-Driven Multicusp Ion Source

    NASA Astrophysics Data System (ADS)

    Patel, Kedar K.; Lieberman, M. A.; Graf, M. A.

    1998-10-01

    A self-consistent spatially averaged model of high-density oxygen and boron trifluoride discharges has been developed for a 13.56 MHz, inductively coupled multicusp ion source. We determine positive ion, negative ion, and electron densities, the ground state and metastable densities, and the electron temperature as functions of the control parameters: gas pressure, gas flow rate, input power and reactor geometry. Neutralization and fragmentation into atomic species are assumed for all ions hitting the wall. For neutrals, a wall recombination coefficient for oxygen atoms and a wall sticking coefficient for boron trifluoride (BF3) and its dissociation products are the only adjustable parameters used to model the surface chemistry. For the aluminum walls of the ion source used in the Eaton ULE2 ion implanter, complete wall recombination of O atoms is found to give the best match to the experimental data for oxygen, whereas a sticking coefficient of 0.62 for all neutral species in a BF3 discharge was found to best match the experimental data.
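
    For context, a generic volume-averaged (global) model skeleton for a single-ion electropositive discharge is sketched below: a particle balance fixes the electron temperature and a power balance fixes the electron density. The rate-coefficient form, energy-loss term and geometry factors are illustrative placeholders, not the oxygen/BF3 chemistry of the paper.

```python
import numpy as np
from scipy.constants import elementary_charge as e, Boltzmann as kB
from scipy.optimize import brentq

# Illustrative inputs (placeholders, not the Eaton source geometry/chemistry).
R, L = 0.10, 0.15                 # chamber radius and length (m)
p_mTorr, Tg = 10.0, 300.0         # pressure (mTorr) and gas temperature (K)
P_abs = 500.0                     # absorbed power (W)
M_ion = 32 * 1.67e-27             # ion mass (kg), e.g. O2+
E_loss = 100.0                    # energy cost per electron-ion pair (eV), assumed

ng = (p_mTorr * 0.133) / (kB * Tg)            # neutral gas density (m^-3)
V = np.pi * R**2 * L                          # plasma volume
A_eff = 2 * np.pi * R * (R + L) * 0.4         # effective loss area; edge-to-center
                                              # density ratios lumped into 0.4 (assumed)

def k_iz(Te):
    """Placeholder Arrhenius ionization rate coefficient (m^3/s)."""
    return 1e-14 * np.sqrt(Te) * np.exp(-15.0 / Te)

def u_B(Te):
    """Bohm velocity (m/s) for electron temperature Te in volts."""
    return np.sqrt(e * Te / M_ion)

# Particle balance: ionization in the volume = ion flux to the walls.
Te = brentq(lambda T: k_iz(T) * ng * V - u_B(T) * A_eff, 0.5, 20.0)

# Power balance: absorbed power = particle loss rate * energy per pair lost.
ne = P_abs / (e * u_B(Te) * A_eff * (E_loss + 5.0 * Te))   # 5*Te ~ kinetic losses, assumed
print(f"Te ~ {Te:.2f} eV, ne ~ {ne:.2e} m^-3")
```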

  8. The determination of the global average OH concentration using a deuteroethane tracer

    NASA Technical Reports Server (NTRS)

    Stevens, Charles M.; Cicerone, Ralph

    1986-01-01

    It is proposed to measure the decreasing global concentration of an OH-reactive isotopic tracer, C2D6, after its introduction into the troposphere in a manner to facilitate uniform global mixing. Analyses at the level of a 2 x 10^-19 fraction, corresponding to one kg uniformly distributed globally, should be possible by a combination of cryogenic absorption techniques to separate ethane from air and high-sensitivity isotopic analysis of ethane by mass spectrometry. Aliquots of C2D6 totaling one kg would be introduced to numerous southern and northern latitudes over a 10 day period in order to achieve a uniform global concentration within 3 to 6 months by the normal atmospheric circulation. Then samples of air of 1000 l (STP) would be collected periodically at a tropical and a temperate zone location in each hemisphere and spiked with a known amount of another isotopic species of ethane, 13C2H6, at the level of a 10^-11 mole fraction. After separation of the ethanes from air, the absolute concentration of C2D6 would be analyzed using the Argonne 100-inch radius mass spectrometer.

  9. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
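
    The central step, convolving the TPS-calculated profile with the detector response function before comparing it to the chamber measurement, can be sketched as below; the Gaussian kernel width is an assumed stand-in for the CC13 response function, and the reoptimization loop itself is only indicated in a comment.

```python
import numpy as np

def convolve_with_detector(profile, x, fwhm_mm):
    """Convolve a calculated beam profile with a 1-D detector response
    function (here an assumed Gaussian of given FWHM, standing in for
    the ionization chamber response)."""
    dx = x[1] - x[0]
    sigma = fwhm_mm / 2.355
    k = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

# Idealized 100 mm field with sharp penumbra, sampled every 0.5 mm.
x = np.arange(-80, 80.5, 0.5)
ideal = 1.0 / (1.0 + np.exp((np.abs(x) - 50.0) / 1.5))   # sigmoid field edges
blurred = convolve_with_detector(ideal, x, fwhm_mm=6.0)  # "measured-like" profile
# In the reoptimization loop, TPS penumbra parameters would be adjusted until
# the convolved calculation matches the chamber-measured profile.
print(round(blurred[np.argmin(np.abs(x))], 3))
```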

  10. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems.

    PubMed

    Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-21

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to

  11. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  12. Effects of volume averaging on the line spectra of vertical velocity from multiple-Doppler radar observations

    NASA Technical Reports Server (NTRS)

    Gal-Chen, T.; Wyngaard, J. C.

    1982-01-01

    Calculations of the ratio between the true one-dimensional spectrum of vertical velocity and that measured with multiple-Doppler radar beams are presented. It was assumed that the effect of pulse volume averaging and objective analysis routines is the replacement of a point measurement with a volume integral. An estimate of u and v was assumed to be feasible when orthogonal radars are not available. Also, the target fluid was configured as having an infinite vertical dimension, zero vertical velocity at the top and bottom, and homogeneous and isotropic turbulence with a Kolmogorov energy spectrum. The ratio obtained indicated that equal resolutions among radars yield a monotonically decreasing, wavenumber-dependent response function. A gain of 0.95 was demonstrated in an experimental situation with 40 levels. Possible errors introduced when using unequal-resolution radars were discussed. Finally, it was found that, for some flows, the extent of attenuation depends on the number of vertical levels resolvable by the radars.
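
    A simplified version of this kind of response calculation is sketched below: the power response of a 1-D box (pulse-volume) average, sinc-squared in wavenumber, applied to a Kolmogorov spectrum. The actual radar pulse-volume geometry and objective analysis are more involved than this 1-D stand-in.

```python
import numpy as np

def box_average_response(k, delta):
    """Power response of a 1-D moving (box) average of width delta:
    |sin(k*delta/2) / (k*delta/2)|**2."""
    arg = k * delta / 2.0
    return np.sinc(arg / np.pi) ** 2        # np.sinc(x) = sin(pi x)/(pi x)

k = np.logspace(-3, 0, 200)                 # wavenumber (rad/m)
true_spectrum = k ** (-5.0 / 3.0)           # Kolmogorov inertial-range form
measured = true_spectrum * box_average_response(k, delta=1000.0)  # ~1 km averaging
ratio = measured / true_spectrum
print(ratio[0], ratio[-1])                  # near 1 at large scales, small at small scales
```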

  13. Average volume-assured pressure support in a 16-year-old girl with congenital central hypoventilation syndrome.

    PubMed

    Vagiakis, Emmanouil; Koutsourelakis, Ioannis; Perraki, Eleni; Roussos, Charis; Mastora, Zafeiria; Zakynthinos, Spyros; Kotanidou, Anastasia

    2010-12-15

    Congenital central hypoventilation syndrome (CCHS) is an uncommon disorder characterized by the absence of adequate autonomic control of respiration, which results in alveolar hypoventilation and decreased sensitivity to hypercarbia and hypoxemia, especially during sleep. Patients with CCHS need lifelong ventilatory support. The treatment options for CCHS include intermittent positive pressure ventilation administered via tracheostomy, noninvasive positive pressure ventilation, negative-pressure ventilation by body chamber or cuirass, and phrenic nerve pacing. However, it may be necessary to alter the mode of ventilation according to age, psychosocial reasons, complications of therapy, and emergence of new modes of ventilation. We present a case of a 16-year-old girl with CCHS who was mechanically ventilated via tracheostomy for 16 years and was successfully transitioned to a new modality of noninvasive ventilation (average volume-assured pressure support [AVAPS]) that automatically adjusts the pressure support level in order to provide a consistent tidal volume.

  14. Effects of volume averaging on the line spectra of vertical velocity from multiple-Doppler radar observations

    NASA Technical Reports Server (NTRS)

    Gal-Chen, T.; Wyngaard, J. C.

    1982-01-01

    Calculations of the ratio between the true one-dimensional spectrum of vertical velocity and that measured with multiple-Doppler radar beams are presented. It was assumed that the effect of pulse volume averaging and objective analysis routines is the replacement of a point measurement with a volume integral. An estimate of u and v was assumed to be feasible when orthogonal radars are not available. Also, the target fluid was configured as having an infinite vertical dimension, zero vertical velocity at the top and bottom, and homogeneous and isotropic turbulence with a Kolmogorov energy spectrum. The ratio obtained indicated that equal resolutions among radars yield a monotonically decreasing, wavenumber-dependent response function. A gain of 0.95 was demonstrated in an experimental situation with 40 levels. Possible errors introduced when using unequal-resolution radars were discussed. Finally, it was found that, for some flows, the extent of attenuation depends on the number of vertical levels resolvable by the radars.

  15. Combined Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.; Miller, James J.

    2013-01-01

    Besides providing position, velocity, and timing (PVT) for terrestrial users, the Global Positioning System (GPS) is also being used to provide PVT information for Earth orbiting satellites. In 2006, F. H. Bauer, et al., defined the Space Service Volume in the paper "GPS in the Space Service Volume," presented at ION's 19th International Technical Meeting of the Satellite Division, and looked at GPS coverage for orbiting satellites. With GLONASS already operational, and the first satellites of the Galileo and Beidou/COMPASS constellations already in orbit, it is time to look at the use of the new Global Navigation Satellite Systems (GNSS) coming into service to provide PVT information for Earth orbiting satellites. This presentation extends "GPS in the Space Service Volume" by examining the coverage capability of combinations of the new constellations with GPS. GPS was first explored as a system for refining the position, velocity, and timing of other spacecraft equipped with GPS receivers in the early eighties. Because of this, a new GPS utility developed beyond the original purpose of providing position, velocity, and timing services for land, maritime, and aerial applications. GPS signals are now received and processed by spacecraft both above and below the GPS constellation, including signals that spill over the limb of the Earth. Support of GPS space applications is now part of the system plan for GPS, and support of the Space Service Volume by other GNSS providers has been proposed to the UN International Committee on GNSS (ICG). GPS has been demonstrated to provide decimeter-level position accuracy in real time for satellites in low Earth orbit (centimeter level in non-real-time applications). GPS has been proven useful for satellites in geosynchronous orbit, and also for satellites in highly elliptical orbits. Depending on how many satellites are in view, one can keep time locked to the GNSS standard, and through that to Universal Time as long as at least one

  16. Low Goiter Rate Associated with Small Average Thyroid Volume in Schoolchildren after the Elimination of Iodine Deficiency Disorders.

    PubMed

    Wang, Peihua; Sun, Hong; Shang, Li; Zhang, Qinglan; He, Yingxia; Chen, Zhigao; Zhou, Yonglin; Zhang, Jingjing; Wang, Qingqing; Zhao, Jinkou; Shen, Hongbing

    2015-01-01

    After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys have been conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years old in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, along with the UIC increasing from 83 μg/L in 1995 to 407 μg/L in 1999, then decreasing to 243 μg/L in 2005, and then increasing to 345 μg/L in 2011. In the low goiter population (TGR < 3.9%), TGR was positively associated with average Tvol (r = 0.99); UIC showed a non-linear association with average Tvol, and UIC > 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting the fluctuation of UIC. A UIC of 300 μg/L may be defined as a critical value for population level iodine status monitoring.

  17. A global approach for image orientation using Lie algebraic rotation averaging and convex L∞ minimisation

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2014-08-01

    In this paper we present a new global image orientation approach for a set of multiple overlapping images with given homologous point tuples which is based on a two-step procedure. The approach is independent of initial values, robust with respect to outliers and yields the global minimum solution under relatively mild constraints. The first step of the approach consists of the estimation of global rotation parameters by averaging relative rotation estimates for image pairs (these are determined from the homologous points via the essential matrix in a pre-processing step). For the averaging we make use of algebraic group theory, in which rotations, as part of the special orthogonal group SO(3), form a Lie group with a Riemannian manifold structure. This allows for a mapping to the local Euclidean tangent space of SO(3), the Lie algebra. In this space the redundancy of relative orientations is used to compute an average of the absolute rotation for each image and furthermore to detect and eliminate outliers. In the second step, translation parameters and the object coordinates of the homologous points are estimated within a convex L∞ optimisation, in which the rotation parameters are kept fixed. As an optional third step, the results can be used as initial values for a final bundle adjustment that does not suffer from bad initialisation and quickly converges to a globally optimal solution. We investigate our approach for global image orientation based on synthetic data. The results are compared to a robust least squares bundle adjustment. In this way we show that our approach is independent of initial values and more robust against outliers than a conventional bundle adjustment.
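
    The first step can be illustrated with a geodesic (Karcher) mean on SO(3), iterating between the Lie algebra (log map to rotation vectors) and the group (exp map), as sketched below with scipy; the paper averages relative rotations over the whole image graph and rejects outliers, which this toy example omits.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def geodesic_mean(rotations, iters=20, tol=1e-9):
    """Karcher mean on SO(3): map residuals to the Lie algebra (log map),
    average there, map back with the exp map, and iterate."""
    mean = rotations[0]
    for _ in range(iters):
        residuals = np.stack([(mean.inv() * r).as_rotvec() for r in rotations])
        delta = residuals.mean(axis=0)
        mean = mean * R.from_rotvec(delta)
        if np.linalg.norm(delta) < tol:
            break
    return mean

# Noisy estimates of one absolute image rotation (synthetic example).
rng = np.random.default_rng(0)
truth = R.from_euler("zyx", [30, -10, 5], degrees=True)
noisy = [truth * R.from_rotvec(rng.normal(0, 0.02, 3)) for _ in range(25)]
avg = geodesic_mean(noisy)
print(np.degrees((truth.inv() * avg).magnitude()))   # residual angle in degrees
```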

  18. Attributing Rise in Global Average Temperature to Emissions Traceable to Major Industrial Carbon Producer

    NASA Astrophysics Data System (ADS)

    Mera, R. J.; Allen, M. R.; Dalton, M.; Ekwurzel, B.; Frumhoff, P. C.; Heede, R.

    2013-12-01

    The role of human activity in global climate change has been explored in attribution studies based on the total amount of greenhouse gases in the atmosphere. Until now, however, a direct link between observed warming and emissions traced to the major carbon producers has not been addressed. The carbon majors dataset developed by Heede (in review) accounts for more than 60 percent of the cumulative worldwide emissions of industrial carbon dioxide and methane through 2010. We use a conventional energy balance model coupled to a diffusive ocean, based on Allen et al. (2009), to evaluate the global temperature response to forcing from cumulative emissions traced to these producers. The base case for comparison is the Representative Concentration Pathway 4.5 [RCP4.5 (Moss et al. 2010)] simulation. Sensitivity tests varying climate sensitivity, ocean thermal diffusivity, ocean/atmosphere carbon uptake diffusivity, deep ocean carbon advection, and the temperature-dependent carbon cycle feedback are used to assess whether the fractional attribution for these sources surpasses the uncertainty limits calculated from these parameters. The results suggest this dataset can be utilized for an expanded field of climate change impacts. Allen, M. R., D. J. Frame, C. Huntingford, C. D. Jones, J. A. Lowe, M. Meinshausen and N. Meinshausen (2009), Warming caused by cumulative carbon emissions towards the trillionth tonne, Nature, 458, 1163-1166, doi:10.1038/nature08019. Heede, R. (2013), Tracing anthropogenic carbon dioxide and methane emissions to fossil fuel and cement producers, 1854-2010, in review. Moss, R. H., et al. (2010), The next generation of scenarios for climate change research and assessment, Nature, 463, 747-756.
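
    As a rough illustration of the class of reduced-complexity model described above, the sketch below integrates a two-box (surface plus deep-ocean) energy balance model. It is a stand-in for, not a reproduction of, the energy-balance/diffusive-ocean model of Allen et al. (2009); all parameter values are illustrative assumptions.

```python
import numpy as np

def two_box_ebm(forcing, dt=1.0, lam=1.2, gamma=0.7, c_s=8.0, c_d=100.0):
    """Integrate surface (t_s) and deep-ocean (t_d) temperature anomalies.

    forcing : radiative forcing per year [W m-2]
    lam     : climate feedback parameter [W m-2 K-1] (assumed)
    gamma   : surface/deep-ocean heat exchange coefficient [W m-2 K-1] (assumed)
    c_s,c_d : layer heat capacities [W yr m-2 K-1] (assumed)
    """
    t_s = t_d = 0.0
    out = np.empty(len(forcing))
    for i, f in enumerate(forcing):
        dts = (f - lam * t_s - gamma * (t_s - t_d)) / c_s
        dtd = gamma * (t_s - t_d) / c_d
        t_s += dt * dts
        t_d += dt * dtd
        out[i] = t_s
    return out

# Example: surface warming after a forcing ramp reaching ~2 W m-2 over 150 years.
forcing = np.linspace(0.0, 2.0, 150)
print(two_box_ebm(forcing)[-1])
```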

  19. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  20. Estimation of the diffuse radiation fraction for hourly, daily and monthly-average global radiation

    NASA Astrophysics Data System (ADS)

    Erbs, D. G.; Klein, S. A.; Duffie, J. A.

    1982-01-01

    Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index. This relationship is compared to the relationship established by Orgill and Hollands (1977) and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.
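
    For reference, hourly diffuse-fraction correlations of the kind developed in this work are usually written as a piecewise polynomial in the clearness index. The sketch below uses the coefficients most commonly quoted for the Erbs correlation in the secondary literature; they should be checked against the original paper before use.

```python
def erbs_diffuse_fraction(kt):
    """Hourly diffuse fraction as a function of the clearness index kt (widely cited form)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

# Example: overcast, intermediate and clear hours.
for kt in (0.1, 0.5, 0.85):
    print(kt, round(erbs_diffuse_fraction(kt), 3))
```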

  1. Challenges of Measuring Cosmic Dawn with the 21-cm Sky-Averaged, Global Signal

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Harker, G.; Mirocha, J.; Datta, A.

    2014-01-01

    The sky-averaged 21-cm signal is perhaps the most promising near-term probe of the “Cosmic Dawn”, when the first stars and galaxies began to heat and ionize the Universe. Measurements are still challenging, however, because of the intense foregrounds at the relevant low radio frequencies, the exquisite instrumental calibration this necessitates, human-generated radio frequency interference (RFI), and the Earth’s ionosphere. The latter three problems can be mitigated by studying the Cosmic Dawn from the farside of the Moon. The proposed Dark Ages Radio Explorer (DARE) would do so by carrying a dipole antenna into a low lunar orbit. We outline this mission, show the constraints it can put on the physics of the Cosmic Dawn, and demonstrate how the ionosphere puts a fundamental limit on the sensitivity of similar, ground-based experiments.

  2. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes

    PubMed Central

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D.; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H.; Fonov, Vladimir S.; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatially unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI scanner using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation between GM volume and BW explained about 15% of the variance in GM, and a positive correlation between WM volume and age was found. Absolute tissue volume differences between ewes and rams were not detected; however, ewes showed significantly more GM per unit body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species. PMID:26089780

  3. A stereotaxic, population-averaged T1w ovine brain atlas including cerebral morphology and tissue volumes.

    PubMed

    Nitzsche, Björn; Frey, Stephen; Collins, Louis D; Seeger, Johannes; Lobsien, Donald; Dreyer, Antje; Kirsten, Holger; Stoffel, Michael H; Fonov, Vladimir S; Boltze, Johannes

    2015-01-01

    Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species relevant to experimental neuroscience, including sheep. Here, we present a spatially unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI scanner using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray (GM) and white (WM) matter as well as cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation between GM volume and BW explained about 15% of the variance in GM, and a positive correlation between WM volume and age was found. Absolute tissue volume differences between ewes and rams were not detected; however, ewes showed significantly more GM per unit body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. Therefore, the reported results may serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.

  4. Time for the Global Rollout of Endoscopic Lung Volume Reduction.

    PubMed

    Koegelenberg, Coenraad F N; Slebos, Dirk-Jan; Shah, Pallav L; Theron, Johan; Dheda, Keertan; Allwood, Brian W; Herth, Felix J F

    2015-01-01

    Chronic obstructive pulmonary disease remains one of the most common causes of morbidity and mortality globally. The disease is generally managed with pharmacotherapy, as well as guidance about smoking cessation and pulmonary rehabilitation. Endoscopic lung volume reduction (ELVR) has been proposed for the treatment of advanced emphysema, with the aim of obtaining the same clinical and functional advantages of surgical lung volume reduction whilst potentially reducing risks and costs. There is a growing body of evidence that certain well-defined sub-groups of patients with advanced emphysema may benefit from ELVR, provided the selection criteria are met and a systematic approach is followed. ELVR devices, particularly unidirectional valves and coils, are currently being rolled out to many countries outside of the U.S.A. and Europe, although very few centres currently have the capacity to correctly evaluate and provide ELVR to prospective candidates. The high cost of these interventions underpins the need for careful patient selection to best identify those who may or may not benefit from ELVR-related procedures. The aim of this review is to provide the practicing pulmonologist with an overview of the practical aspects and current evidence for the use of the various techniques available, and to suggest an evidence-based approach for the appropriate use of these devices, particularly in emerging markets, where there should be a drive to develop and equip key specialised ELVR units.

  5. Fast global interactive volume segmentation with regional supervoxel descriptors

    NASA Astrophysics Data System (ADS)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact on different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and to the slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier to refine more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are finally refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the ability of our method to fully extract different organelles from the cell volume with another few seconds of user interaction.
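
    A toy version of the supervoxel-plus-classifier idea (not the paper's contextual forest/MRF pipeline) can be sketched in a few lines: compute SLIC regions on a 2-D slice, build simple regional intensity descriptors, and train a random forest from a handful of "scribbled" regions. It assumes a recent scikit-image (for the start_label/channel_axis arguments) and scikit-learn; the scribbles are simulated by labelling the darkest and brightest regions.

```python
import numpy as np
from skimage.data import coins
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

image = coins().astype(float)                       # 2-D grayscale test image
regions = slic(image, n_segments=400, start_label=0, channel_axis=None)

def regional_descriptors(img, regions):
    """Very simple per-region descriptors: intensity statistics."""
    feats = []
    for r in np.unique(regions):
        pix = img[regions == r]
        feats.append([pix.mean(), pix.std(), pix.min(), pix.max()])
    return np.array(feats)

feats = regional_descriptors(image, regions)
labels = np.full(len(feats), -1)                    # -1 = unlabelled region
order = np.argsort(feats[:, 0])
labels[order[:20]] = 0                              # simulated background scribbles
labels[order[-20:]] = 1                             # simulated foreground scribbles

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feats[labels >= 0], labels[labels >= 0])
region_pred = clf.predict(feats)                    # one class per region
segmentation = region_pred[np.searchsorted(np.unique(regions), regions)]
print(segmentation.shape, segmentation.mean())      # pixel-wise label map
```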

  6. Monthly Averages of Aerosol Properties: A Global Comparison Among Models, Satellite Data, and AERONET Ground Data

    SciTech Connect

    Kinne, S.; Lohmann, U; Feichter, J; Schulz, M.; Timmreck, C.; Ghan, Steven J.; Easter, Richard C.; Chin, M; Ginoux, P.; Takemura, T.; Tegen, I.; Koch, D; Herzog, M.; Penner, J.; Pitari, G.; Holben, B. N.; Eck, T.; Smirnov, A.; Dubovik, O.; Slutsker, I.; Tanre, D.; Torres, O.; Mishchenko, M.; Geogdzhayev, I.; Chu, D. A.; Kaufman, Yoram J.

    2003-10-21

    Aerosol introduces the largest uncertainties into model-based estimates of the effects of anthropogenic sources on the Earth's climate. A better representation of aerosol in climate models can be expected from processing each aerosol type individually, and new aerosol modules have been developed that distinguish among at least five aerosol types: sulfate, organic carbon, black carbon, sea salt and dust. In this study, intermediate results for aerosol mass and aerosol optical depth from the new aerosol modules of seven global models are evaluated. Among models, differences in predicted mass fields are expected, owing to differences in initialization and processing. Nonetheless, unusual discrepancies in source strength and in removal rates for particular aerosol types were identified. With simultaneous data for mass and optical depth, type conversion factors were compared. Differences among the tested models span a factor of 2 for each aerosol type, even hydrophobic ones. This is alarming and suggests either that efforts toward good mass simulations could be wasted or that conversions are misused to compensate for poor mass simulations. An individual assessment, however, is difficult, as only some of the factors determining the conversion (size assumption, permitted humidification and prescribed ambient relative humidity) were revealed. These differences need to be understood and minimized if conclusions on aerosol processing in models are to be drawn from comparisons with aerosol optical depth measurements.
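
    The "type conversion factor" idea mentioned above amounts to multiplying each aerosol type's column mass by a mass extinction efficiency to obtain its optical depth. The following sketch is only illustrative; the efficiencies are assumed round numbers, not values from any of the models compared in the study.

```python
# Assumed (illustrative) dry mass extinction efficiencies at 550 nm [m^2 g-1].
MEE_M2_PER_G = {
    "sulfate": 8.0,
    "organic_carbon": 5.0,
    "black_carbon": 9.0,
    "sea_salt": 1.5,
    "dust": 0.6,
}

def aerosol_optical_depth(column_mass_g_m2):
    """Total AOD from a dict of per-type column masses [g m-2]."""
    return sum(MEE_M2_PER_G[k] * m for k, m in column_mass_g_m2.items())

print(aerosol_optical_depth({"sulfate": 0.005, "dust": 0.05, "sea_salt": 0.02}))
```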

  7. Elevation of monthly-average global radiation data from a horizontal to a tilted surface

    SciTech Connect

    Lunde, P.J.

    1980-01-01

    J.K. Page's method for elevating horizontally measured monthly global radiation data to a tilted surface was examined and several improvements were made. The most significant involved recalculation of the direct beam elevation factor considering the varying atmospheric path of the sun during the day in winter. Values based on extraterrestrial radiation were found to be much too high. Page's direct/diffuse relationship was then compensated methodically so that the convenient extraterrestrial elevation factors could continue to be used. The final elevation procedure was validated against monthly summaries of hourly records of tilted data synthesized from cloud cover at Hartford, Connecticut, monthly measured records on a 38° surface from Highett, Australia, and monthly records of tilted data generated at 30°, 60°, and 90° elevations at Kapuskasing, Ontario, by a new hourly elevation method developed for the Canadian Atmospheric Environment Service. Monthly correspondence was within 3% for any winter month and nearly that good in summer.
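
    The baseline calculation that methods like Page's refine is the isotropic-sky transposition of monthly-average radiation to a tilted plane, in which beam, sky-diffuse and ground-reflected contributions are scaled separately. The sketch below is that textbook form under an assumed ground albedo, with the beam geometry factor r_b supplied by the caller; it does not include the improved winter-path beam factor developed in the study.

```python
import math

def tilted_radiation(h_global, h_diffuse, r_b, tilt_deg, albedo=0.2):
    """Monthly-average irradiation on a tilted plane (same units as the inputs)."""
    beta = math.radians(tilt_deg)
    h_beam = h_global - h_diffuse                       # beam = global - diffuse
    return (h_beam * r_b                                # beam, scaled by geometry factor
            + h_diffuse * (1.0 + math.cos(beta)) / 2.0  # isotropic sky diffuse
            + h_global * albedo * (1.0 - math.cos(beta)) / 2.0)  # ground-reflected

# Example with assumed monthly values for a 38-degree tilt.
print(tilted_radiation(h_global=12.0, h_diffuse=5.0, r_b=1.6, tilt_deg=38.0))
```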

  8. Beyond the average: Detecting global singular nodes from local features in complex networks

    NASA Astrophysics Data System (ADS)

    Costa, L. da F.; Rodrigues, F. A.; Hilgetag, C. C.; Kaiser, M.

    2009-07-01

    Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interactions of the yeast Saccharomyces cerevisiae and the Roget thesaurus networks. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabási-Albert model resulting in singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, the number of different types of singular node motifs as well as the number of their instances were found to be considerably higher in the real-world networks than in any of the benchmark networks.
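
    In the same spirit as the methodology above, the sketch below computes a few local connectivity measurements per node and flags multivariate outliers by Mahalanobis distance. It uses three stand-in measurements (degree, clustering coefficient, mean neighbour degree) rather than the six measurements of the paper, and an arbitrary distance threshold.

```python
import numpy as np
import networkx as nx

def singular_nodes(g, threshold=3.0):
    """Flag nodes whose local-feature vector is a multivariate outlier."""
    nodes = list(g)
    clust = nx.clustering(g)
    avg_nbr_deg = nx.average_neighbor_degree(g)
    feats = np.column_stack([
        [g.degree(n) for n in nodes],
        [clust[n] for n in nodes],
        [avg_nbr_deg[n] for n in nodes],
    ])
    diff = feats - feats.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(feats, rowvar=False))
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    return [n for n, dist in zip(nodes, d) if dist > threshold]

g = nx.barabasi_albert_graph(500, 2, seed=1)
print(singular_nodes(g))   # hubs tend to be flagged, consistent with the BA result above
```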

  9. A multi-moment finite volume method for incompressible Navier-Stokes equations on unstructured grids: Volume-average/point-value formulation

    NASA Astrophysics Data System (ADS)

    Xie, Bin; Ii, Satoshi; Ikebata, Akio; Xiao, Feng

    2014-11-01

    A robust and accurate finite volume method (FVM) is proposed for incompressible viscous fluid dynamics on triangular and tetrahedral unstructured grids. Unlike conventional FVM, where the volume-integrated average (VIA) is the only computational variable, the present formulation treats both the VIA and the point value (PV) as computational variables, which are updated separately at each time step. The VIA is computed from a finite volume scheme of flux form, and is thus numerically conservative. The PV is updated from the differential form of the governing equation, which does not have to be conservative but can be solved in a very efficient way. Including the PV as an additional variable enables us to make higher-order reconstructions over a compact mesh stencil to improve the accuracy, and moreover, the resulting numerical model is more robust for unstructured grids. We present the numerical formulations in both two and three dimensions on triangular and tetrahedral mesh elements. Numerical results of several benchmark tests are also presented to verify the proposed numerical method as an accurate and robust solver for incompressible flows on unstructured grids.

  10. The equivalence between volume averaging and method of planes definitions of the pressure tensor at a plane

    NASA Astrophysics Data System (ADS)

    Heyes, D. M.; Smith, E. R.; Dini, D.; Zaki, T. A.

    2011-07-01

    It is shown analytically that the method of planes (MOP) [Todd, Evans, and Daivis, Phys. Rev. E 52, 1627 (1995)] and volume averaging (VA) [Cormier, Rickman, and Delph, J. Appl. Phys. 89, 99 (2001), 10.1063/1.1328406] formulas for the local pressure tensor, Pαy(y), where α ≡ x, y, or z, are mathematically identical. In the case of VA, the sampling volume is taken to be an infinitely thin parallelepiped, with an infinite lateral extent. This limit is shown to yield the MOP expression. The treatment is extended to include the condition of mechanical equilibrium resulting from an imposed force field. This analytical development is followed by numerical simulations. The equivalence of these two methods is demonstrated in the context of non-equilibrium molecular dynamics (NEMD) simulations of boundary-driven shear flow. A wall of tethered atoms is constrained to impose a normal load and a velocity profile on the entrained central layer. The VA formula can be used to compute all components of Pαβ(y), which offers an advantage in calculating, for example, Pxx(y) for nano-scale pressure-driven flows in the x-direction, where deviations from the classical Poiseuille flow solution can occur.

  11. Fatigue strength of Al7075 notched plates based on the local SED averaged over a control volume

    NASA Astrophysics Data System (ADS)

    Berto, Filippo; Lazzarin, Paolo

    2014-01-01

    When pointed V-notches weaken structural components, local stresses are singular and their intensities are expressed in terms of the notch stress intensity factors (NSIFs). These parameters have been widely used for fatigue assessments of welded structures under high cycle fatigue and of sharp notches in plates made of brittle materials subjected to static loading. Fine meshes are required to capture the asymptotic stress distributions ahead of the notch tip and evaluate the relevant NSIFs. On the other hand, when the aim is to determine the local Strain Energy Density (SED) averaged in a control volume embracing the point of stress singularity, refined meshes are not at all necessary. The SED can be evaluated from nodal displacements, and regular coarse meshes provide accurate values for the averaged local SED. In the present contribution, the link between the SED and the NSIFs is discussed by considering some typical welded joints and sharp V-notches. The SED-based procedure has also been proven useful for determining theoretical stress concentration factors of blunt notches and holes. In the second part of this work an application of the strain energy density to the fatigue assessment of Al7075 notched plates is presented. The experimental data are taken from the recent literature and refer to notched specimens subjected to different shot peening treatments aimed at increasing the notch fatigue strength with respect to the parent material.

  12. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    NASA Astrophysics Data System (ADS)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell
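
    The macroscopic two-region behaviour that the upscaling above aims to recover can be illustrated with the classical mobile-immobile model: one-dimensional advection and dispersion in the mobile zone plus first-order mass exchange with an immobile zone. The explicit finite-difference sketch below uses arbitrary parameter values and is not the authors' volume-averaged closure.

```python
import numpy as np

def mobile_immobile(nx=200, nt=4000, dx=0.01, dt=0.001, v=0.5, d=1e-3, alpha=0.05):
    """1-D mobile-immobile transport with first-order exchange (explicit scheme)."""
    cm = np.zeros(nx)           # mobile-zone concentration
    cim = np.zeros(nx)          # immobile-zone concentration
    cm[0] = 1.0                 # constant-concentration inlet
    for _ in range(nt):
        adv = -v * (cm[1:-1] - cm[:-2]) / dx                  # upwind advection
        dif = d * (cm[2:] - 2 * cm[1:-1] + cm[:-2]) / dx**2   # dispersion
        exch = alpha * (cm[1:-1] - cim[1:-1])                 # first-order exchange
        cm[1:-1] += dt * (adv + dif - exch)
        cim[1:-1] += dt * alpha * (cm[1:-1] - cim[1:-1])
        cm[0], cm[-1] = 1.0, cm[-2]                           # boundary conditions
    return cm, cim

cm, cim = mobile_immobile()
print(cm[::40].round(3), cim[::40].round(3))   # tailing relative to Fickian transport
```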

  13. Gray matter volume correlates of global positive alcohol expectancy in non-dependent adult drinkers

    PubMed Central

    Ide, Jaime S.; Zhang, Sheng; Hu, Sien; Matuskey, David; Bednarski, Sarah R.; Erdman, Emily; Farr, Olivia M.; Li, Chiang-shan R.

    2013-01-01

    Alcohol use and misuse are known to involve structural brain changes. Numerous imaging studies have examined changes in gray matter (GM) volumes in dependent drinkers, but there is little information on whether non-dependent drinking is associated with structural changes and whether these changes are related to psychological factors – such as alcohol expectancy – that influence drinking behavior. We used voxel based morphometry (VBM) to examine whether the global positive scale of alcohol expectancy, as measured by the Alcohol Expectancy Questionnaire AEQ-3, is associated with specific structural markers and whether such markers are associated with drinking behavior in 113 adult non-dependent drinkers (66 women). Alcohol expectancy is positively correlated with GM volume of the left precentral gyrus (PCG) in men and women combined and of the bilateral superior frontal gyri (SFG) in women, and negatively correlated with GM volume of the right ventral putamen in men. Furthermore, mediation analyses showed that the GM volume of the PCG mediates the correlation of alcohol expectancy with the average number of drinks consumed per occasion and the monthly total number of drinks in the past year. When recent drinking was directly accounted for in multiple regressions, GM volume of the bilateral dorsolateral prefrontal cortices (DLPFC) correlated positively with alcohol expectancy in the combined sample. To our knowledge, these results are the first to identify the structural brain correlates of alcohol expectancy and its mediation of drinking behaviors. These findings suggest that more studies are needed to investigate increased GM volume in the frontal cortices as a neural correlate of alcohol expectancy. PMID:23461484

  14. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. Current PSV models Model G and TK03 do not fit the observed PSV or TAF latitudinal behavior in PSV10, or in subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 showed slight statistical improvements, but still exceeded acceptable errors. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10, compared to the reverse polarity data. Two-thirds of the data in PSV10 are of normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.

  15. Investigation of the Solidification Behavior of NH4Cl Aqueous Solution Based on a Volume-Averaged Method

    NASA Astrophysics Data System (ADS)

    Li, Ri; Zhou, Liming; Wang, Jian; Li, Yan

    2017-02-01

    Based on solidification theory and a volume-averaged multiphase solidification model, the solidification process of NH4Cl-70 pct H2O was numerically simulated and experimentally verified. Although researchers have investigated the solidification process of NH4Cl-70 pct H2O, most existing studies have been focused on analysis of a single phenomenon, such as the formation of channel segregation, convection types, and the formation of grains. Based on prior studies, by combining numerical simulation and experimental investigation, all phenomena of the entire computational domain of the solidification process of an NH4Cl aqueous solution were comprehensively investigated for the first time in this study. In particular, the sedimentation of equiaxed grains in the ingot and the induced convection were reproduced. In addition, the formation mechanism of segregation was studied in depth. The calculation demonstrated that the equiaxed grains settled from the wall of the mold and gradually aggregated at the bottom of the mold; when the volume fraction reached a critical value, the columnar grains stopped growing, thus completing the columnar-to-equiaxed transition (CET). Because of solute partitioning, negative segregation occurred at the bottom region of the ingot concentrated with grains, whereas a wide range of positive segregation occurred in the unsolidified, upper part of the ingot. Experimental investigation indicated that the predicted results of the sedimentation of the equiaxed grains in the ingot and the convection types agreed well with the experimental results, thus revealing that the sedimentation of solid phase and convection in the solidification process are the key factors responsible for macrosegregation.

  16. Noninvasive mechanical ventilation with average volume assured pressure support (AVAPS) in patients with chronic obstructive pulmonary disease and hypercapnic encephalopathy

    PubMed Central

    2013-01-01

    Background Non-invasive mechanical ventilation (NIV) in patients with acute respiratory failure has been traditionally determined based on clinical assessment and changes in blood gases, with NIV support pressures manually adjusted by an operator. Bilevel positive airway pressure-spontaneous/timed (BiPAP S/T) with average volume-assured pressure support (AVAPS) targets a fixed tidal volume, with pressure support automatically adjusted to a patient’s needs. Our study assessed the use of BiPAP S/T with AVAPS in patients with chronic obstructive pulmonary disease (COPD) and hypercapnic encephalopathy as compared to BiPAP S/T alone, upon immediate arrival in the Emergency-ICU. Methods We carried out a prospective interventional match-controlled study in Guayaquil, Ecuador. A total of 22 patients were analyzed. Eleven patients with COPD exacerbations and hypercapnic encephalopathy with a Glasgow Coma Scale (GCS) <10 and a pH of 7.25-7.35 were assigned to receive NIV via BiPAP S/T with AVAPS. Eleven patients were selected as paired controls for the initial group by physicians who were unfamiliar with our study, and these patients were administered BiPAP S/T. Arterial blood gases, GCS, vital signs, and ventilatory parameters were then measured and compared between the two groups. Results We observed statistically significant differences in favor of the BiPAP S/T + AVAPS group in GCS (P = .00001), pCO2 (P = .03) and maximum inspiratory positive airway pressure (IPAP) (P = .005), among others. However, no significant differences in terms of length of stay or days on NIV were observed. Conclusions BiPAP S/T with AVAPS facilitates rapid recovery of consciousness when compared to traditional BiPAP S/T in patients with chronic obstructive pulmonary disease and hypercapnic encephalopathy. Trial registration Current Controlled Trials application ref is ISRCTN05135218 PMID:23497021

  17. Investigation of the Solidification Behavior of NH4Cl Aqueous Solution Based on a Volume-Averaged Method

    NASA Astrophysics Data System (ADS)

    Li, Ri; Zhou, Liming; Wang, Jian; Li, Yan

    2017-06-01

    Based on solidification theory and a volume-averaged multiphase solidification model, the solidification process of NH4Cl-70 pct H2O was numerically simulated and experimentally verified. Although researchers have investigated the solidification process of NH4Cl-70 pct H2O, most existing studies have been focused on analysis of a single phenomenon, such as the formation of channel segregation, convection types, and the formation of grains. Based on prior studies, by combining numerical simulation and experimental investigation, all phenomena of the entire computational domain of the solidification process of an NH4Cl aqueous solution were comprehensively investigated for the first time in this study. In particular, the sedimentation of equiaxed grains in the ingot and the induced convection were reproduced. In addition, the formation mechanism of segregation was studied in depth. The calculation demonstrated that the equiaxed grains settled from the wall of the mold and gradually aggregated at the bottom of the mold; when the volume fraction reached a critical value, the columnar grains stopped growing, thus completing the columnar-to-equiaxed transition (CET). Because of solute partitioning, negative segregation occurred at the bottom region of the ingot concentrated with grains, whereas a wide range of positive segregation occurred in the unsolidified, upper part of the ingot. Experimental investigation indicated that the predicted results of the sedimentation of the equiaxed grains in the ingot and the convection types agreed well with the experimental results, thus revealing that the sedimentation of solid phase and convection in the solidification process are the key factors responsible for macrosegregation.

  18. The effect of stress and incentive magnetic field on the average volume of magnetic Barkhausen jump in iron

    NASA Astrophysics Data System (ADS)

    Shu, Di; Guo, Lei; Yin, Liang; Chen, Zhaoyang; Chen, Juan; Qi, Xin

    2015-11-01

    The average volume of the magnetic Barkhausen jump (AVMBJ) v̄ generated by irreversible magnetic domain wall displacement under an incentive magnetic field H in ferromagnetic materials, together with the relationship between the irreversible magnetic susceptibility χirr and the stress σ, is adopted in this paper to study the theoretical relationship between the AVMBJ v̄ (magneto-elasticity noise) and the incentive magnetic field H. The numerical relationship among the AVMBJ v̄, the stress σ and the incentive magnetic field H is then deduced. Utilizing this numerical relationship, the displacement process of the magnetic domain wall in a single crystal is analyzed, and the effect of the incentive magnetic field H and the stress σ on the AVMBJ v̄ (magneto-elasticity noise) is explained from experimental and theoretical perspectives. The saturation velocity of the Barkhausen jump characteristic value curve differs when tensile or compressive stress is applied to ferromagnetic materials, because the resistance to magnetic domain wall displacement differs. The idea of a critical magnetic field in the process of magnetic domain wall displacement is introduced in this paper, which solves the supersaturated calibration problem of the AVMBJ–σ calibration curve.

  19. Dynamic volume-averaged model of heat and mass transport within a compost biofilter: I. Model development.

    PubMed

    Mysliwiec, M J; VanderGheynst, J S; Rashid, M M; Schroeder, E D

    2001-05-20

    Successful, long-term operation of a biofilter system depends on maintaining a suitable biofilm environment within a porous medium reactor. In this article a mathematical study was conducted of the spatial and temporal changes of biofilter performance due to interphase heat and mass transport. The method of volume averaging was used to spatially smooth the three-phase (solid, liquid, and gas) conservation equations over the biofilter domain. The packing medium was assumed to be inert, removing the solid phase mass continuity equation from the system. The finite element method was used to integrate the resulting nonlinear-coupled partial differential equations, tracking eight state variables: temperature, water vapor, dry air, liquid water, biofilm, gas and liquid phase organic pollutant, and nutrient densities, through time and space. A multiphase, gas and liquid flow model was adapted to the biofilter model from previous studies of unsaturated groundwater flow. Newton's method accelerated by an LU direct solver was used to iterate the model for solutions. Effects of packing media on performance were investigated to illustrate the utility of the model. The moisture dynamics and nutrient cycling are presented in Part II of this article. Copyright 2001 John Wiley & Sons, Inc.

  20. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    SciTech Connect

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types.

  1. Evolution of tropical circulation anomalies associated with 30-60 day oscillation of globally averaged angular momentum during northern summer

    NASA Technical Reports Server (NTRS)

    Kang, In-Sik; Lau, K.-M.

    1990-01-01

    Lag correlation statistics was used to study intraseasonal variations of upper and lower-level zonal winds, outgoing longwave radiation, and globally averaged angular momentum (GAM) for northern summers of 1977-1984. The temporal and spatial distribution of surface wind stress in the tropics and its relationship with zonal wind anomalies were studied to assess the impact of surface frictional drag on the atmospheric angular momentum. The 30-60 day GAM fluctuation is shown to be accompanied by zonal propagation of convection and 850 mb zonal wind anomalies in the tropical belt. The climatological zonal wind in the tropics affects the magnitude of wind stress anomalies. It is suggested that momentum exchange between the lower and upper troposphere may occur in regions of active convection via vertical momentum transport. The tropical central Pacific is considered to play a key role in linking the atmosphere and the earth through angular momentum exchange on intraseasonal time scales.

  3. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene.

    PubMed

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-10-28

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet's dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to -134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5-14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100-150 y ago, with no evidence of oscillations exceeding ∼15-20 cm in time intervals ≥200 y from 6 to 0.15 ka BP.

  4. Sea level and global ice volumes from the Last Glacial Maximum to the Holocene

    PubMed Central

    Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm

    2014-01-01

    The major cause of sea-level change during ice ages is the exchange of water between ice and ocean and the planet’s dynamic response to the changing surface load. Inversion of ∼1,000 observations for the past 35,000 y from localities far from former ice margins has provided new constraints on the fluctuation of ice volume in this interval. Key results are: (i) a rapid final fall in global sea level of ∼40 m in <2,000 y at the onset of the glacial maximum ∼30,000 y before present (30 ka BP); (ii) a slow fall to −134 m from 29 to 21 ka BP with a maximum grounded ice volume of ∼52 × 10⁶ km³ greater than today; (iii) after an initial short duration rapid rise and a short interval of near-constant sea level, the main phase of deglaciation occurred from ∼16.5 ka BP to ∼8.2 ka BP at an average rate of rise of 12 m⋅ka⁻¹ punctuated by periods of greater, particularly at 14.5–14.0 ka BP at ≥40 mm⋅y⁻¹ (MWP-1A), and lesser, from 12.5 to 11.5 ka BP (Younger Dryas), rates; (iv) no evidence for a global MWP-1B event at ∼11.3 ka BP; and (v) a progressive decrease in the rate of rise from 8.2 ka to ∼2.5 ka BP, after which ocean volumes remained nearly constant until the renewed sea-level rise at 100–150 y ago, with no evidence of oscillations exceeding ∼15–20 cm in time intervals ≥200 y from 6 to 0.15 ka BP. PMID:25313072

  5. Global Education: What the Research Shows. Information Capsule. Volume 0604

    ERIC Educational Resources Information Center

    Blazer, Christie

    2006-01-01

    Teaching from a global perspective is important because the lives of people around the world are increasingly interconnected through politics, economics, technology, and the environment. Global education teaches students to understand and appreciate people from different cultural backgrounds; view events from a variety of perspectives; recognize…

  6. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosol only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
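
    The central scaling here is simple: surface PM2.5 is estimated by multiplying the satellite column AOD by a model-derived ratio of surface concentration to column optical depth for the same location and time. The sketch below shows only that arithmetic; the ratio is an assumed placeholder, not output from the aerosol simulation used in the study.

```python
def surface_pm25(aod, eta_ug_m3_per_aod):
    """Estimate near-surface PM2.5 [ug/m^3] from a column AOD and a model ratio eta."""
    return eta_ug_m3_per_aod * aod

# Example: a satellite AOD of 0.35 and an assumed model ratio of 70 ug/m^3 per unit AOD.
print(surface_pm25(0.35, 70.0))   # -> 24.5 ug/m^3
```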

  7. Stone Attenuation Values Measured by Average Hounsfield Units and Stone Volume as Predictors of Total Laser Energy Required During Ureteroscopic Lithotripsy Using Holmium:Yttrium-Aluminum-Garnet Lasers.

    PubMed

    Ofude, Mitsuo; Shima, Takashi; Yotsuyanagi, Satoshi; Ikeda, Daisuke

    2017-04-01

    To evaluate the predictors of the total laser energy (TLE) required during ureteroscopic lithotripsy (URS) using the holmium:yttrium-aluminum-garnet (Ho:YAG) laser for a single ureteral stone. We retrospectively analyzed the data of 93 URS procedures performed for a single ureteral stone in our institution from November 2011 to September 2015. We evaluated the association between TLE and preoperative clinical data, such as age, sex, body mass index, and noncontrast computed tomographic findings, including stone laterality, location, maximum diameter, volume, stone attenuation values measured using average Hounsfield units (HUs), and presence of secondary signs (severe hydronephrosis, tissue rim sign, and perinephric stranding). The mean maximum stone diameter, volume, and average HUs were 9.2 ± 3.8 mm, 283.2 ± 341.4 mm(3), and 863 ± 297, respectively. The mean TLE and operative time were 2.93 ± 3.27 kJ and 59.1 ± 28.1 minutes, respectively. Maximum stone diameter, volume, average HUs, severe hydronephrosis, and tissue rim sign were significantly correlated with TLE (Spearman's rho analysis). Stepwise multiple linear regression analysis defining stone volume, average HUs, severe hydronephrosis, and tissue rim sign as explanatory variables showed that stone volume and average HUs were significant predictors of TLE (standardized coefficients of 0.565 and 0.320, respectively; adjusted R(2) = 0.55, F = 54.7, P <.001). Stone attenuation values measured by average HUs and stone volume were strong predictors of TLE during URS using Ho:YAG laser procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
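
    A regression of the reported form (total laser energy on stone volume and mean attenuation) can be sketched as follows. The data here are synthetic and the fitted coefficients are not the study's; the snippet only illustrates fitting the two-predictor linear model and predicting TLE for a stone of roughly the mean size reported above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic cohort of 93 stones (volumes in mm^3, mean attenuation in HU).
rng = np.random.default_rng(0)
volume_mm3 = rng.uniform(50, 1200, 93)
mean_hu = rng.uniform(300, 1500, 93)
tle_kj = 0.004 * volume_mm3 + 0.0015 * mean_hu + rng.normal(0, 0.5, 93)  # made-up relation

X = np.column_stack([volume_mm3, mean_hu])
model = LinearRegression().fit(X, tle_kj)
print(model.coef_, model.score(X, tle_kj))     # per-predictor slopes and R^2
print(model.predict([[283.2, 863.0]]))         # predicted TLE for the mean stone above
```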

  8. Can Granger causality delineate natural versus anthropogenic drivers of climate change from global-average multivariate time series?

    NASA Astrophysics Data System (ADS)

    Kodra, E. A.; Chatterjee, S.; Ganguly, A. R.

    2009-12-01

    The Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) notes with a high degree of certainty that global warming can be attributed to anthropogenic emissions. Detection and attribution studies, which attempt to delineate human influences on regional- and decadal-scale climate change or its impacts, use a variety of techniques, including Granger causality. Recently, Granger causality was used as a tool for detection and attribution in climate based on a spatio-temporal data mining approach. However, the degree to which Granger causality may be able to delineate natural versus anthropogenic drivers of change in these situations needs to be thoroughly investigated. As a first step, we use multivariate global-average time series of observations to test the performance of Granger causality. We apply the popular Granger F-tests to Radiative Forcing (RF), which is a transformation of carbon dioxide (CO2), and Global land surface Temperature anomalies (GT). Our preliminary results with observations appear to suggest that RF Granger-causes GT, which seems to become more apparent with more data. However, carefully designed simulations indicate that these results are not reliable and may, in fact, be misleading. On the other hand, the same observation- and simulation-driven methodologies, when applied to the El Niño Southern Oscillation (ENSO) index, clearly show reliable Granger-causality from ENSO to GT. We develop and test several hypotheses to explain why the Granger causality tests between RF and GT are not reliable. We conclude that the form of Granger causality used in this study, and in past studies reported in the literature, is sensitive to data availability, random variability, and especially whether the variables arise from a deterministic or stochastic process. Simulations indicate that Granger causality in this form performs poorly, even in simple linear effect cases, when applied to one deterministic and one stochastic time series.
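
    The kind of pairwise Granger F-test discussed above can be run in a few lines with statsmodels. In the sketch below both series are synthetic stand-ins for RF and GT; note the statsmodels convention that the test asks whether the second column Granger-causes the first.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 150
forcing = np.cumsum(rng.normal(0.02, 0.05, n))             # stand-in for RF (trending)
temp = 0.5 * np.roll(forcing, 2) + rng.normal(0, 0.1, n)   # lagged response plus noise
temp[:2] = 0.0                                             # discard the wrapped values

data = np.column_stack([temp, forcing])    # test: does forcing Granger-cause temp?
results = grangercausalitytests(data, maxlag=3, verbose=False)
print({lag: round(res[0]["ssr_ftest"][1], 4) for lag, res in results.items()})  # p-values
```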

  9. Computation and use of volume-weighted-average concentrations to determine long-term variations of selected water-quality constituents in lakes and reservoirs

    USGS Publications Warehouse

    Wells, Frank C.; Schertz, Terry L.

    1984-01-01

    A computer program using the Statistical Analysis System has been developed to perform the arithmetic calculations and regression analyses to determine volume-weighted-average concentrations of selected water-quality constituents in lakes and reservoirs. The program has been used in Texas to show decreasing trends in dissolved-solids and total-phosphorus concentrations in Lake Arlington after the discharge of sewage effluent into the reservoir was stopped. The program also was used to show that the August 1978 and October 1981 floods on the Brazos River greatly decreased the volume-weighted-average concentrations of selected constituents in Hubbard Creek Reservoir and Possum Kingdom Lake.
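
    The underlying arithmetic of a volume-weighted-average concentration is a weighted mean in which each sampled layer's concentration is weighted by the volume of reservoir water it represents. The sketch below is generic (the original program used SAS); the layer volumes and concentrations are hypothetical.

```python
def volume_weighted_average(concentrations_mg_l, layer_volumes_acre_ft):
    """Volume-weighted mean concentration for one sampling date."""
    total_volume = sum(layer_volumes_acre_ft)
    weighted_sum = sum(c * v for c, v in zip(concentrations_mg_l, layer_volumes_acre_ft))
    return weighted_sum / total_volume

# Example: dissolved solids in three depth layers of a stratified reservoir.
print(volume_weighted_average([410.0, 455.0, 520.0], [12000.0, 8000.0, 3000.0]))
```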

  10. Potential Impact of Dietary Choices on Phosphorus Recycling and Global Phosphorus Footprints: The Case of the Average Australian City

    PubMed Central

    Metson, Geneviève S.; Cordell, Dana; Ridoutt, Brad

    2016-01-01

    Changes in human diets, population increases, farming practices, and globalized food chains have led to dramatic increases in the demand for phosphorus fertilizers. Long-term food security and water quality are, however, threatened by such increased phosphorus consumption, because the world’s main source, phosphate rock, is an increasingly scarce resource. At the same time, losses of phosphorus from farms and cities have caused widespread water pollution. As one of the major factors contributing to increased phosphorus demand, dietary choices can play a key role in changing our resource consumption pathway. Importantly, the effects of dietary choices on phosphorus management are twofold: First, dietary choices affect a person or region’s “phosphorus footprint” – the magnitude of mined phosphate required to meet food demand. Second, dietary choices affect the magnitude of phosphorus content in human excreta and hence the recycling- and pollution-potential of phosphorus in sanitation systems. When considering options and impacts of interventions at the city scale (e.g., potential for recycling), dietary changes may be undervalued as a solution toward phosphorus sustainability. For example, in an average Australian city, a vegetable-based diet could marginally increase phosphorus in human excreta (an 8% increase). However, such a shift could simultaneously dramatically decrease the mined phosphate required to meet the city resident’s annual food demand by 72%. Taking a multi-scalar perspective is therefore key to fully exploring dietary choices as one of the tools for sustainable phosphorus management. PMID:27617261

  11. Potential Impact of Dietary Choices on Phosphorus Recycling and Global Phosphorus Footprints: The Case of the Average Australian City.

    PubMed

    Metson, Geneviève S; Cordell, Dana; Ridoutt, Brad

    2016-01-01

    Changes in human diets, population increases, farming practices, and globalized food chains have led to dramatic increases in the demand for phosphorus fertilizers. Long-term food security and water quality are, however, threatened by such increased phosphorus consumption, because the world's main source, phosphate rock, is an increasingly scarce resource. At the same time, losses of phosphorus from farms and cities have caused widespread water pollution. As one of the major factors contributing to increased phosphorus demand, dietary choices can play a key role in changing our resource consumption pathway. Importantly, the effects of dietary choices on phosphorus management are twofold: First, dietary choices affect a person or region's "phosphorus footprint" - the magnitude of mined phosphate required to meet food demand. Second, dietary choices affect the magnitude of phosphorus content in human excreta and hence the recycling- and pollution-potential of phosphorus in sanitation systems. When considering options and impacts of interventions at the city scale (e.g., potential for recycling), dietary changes may be undervalued as a solution toward phosphorus sustainability. For example, in an average Australian city, a vegetable-based diet could marginally increase phosphorus in human excreta (an 8% increase). However, such a shift could simultaneously dramatically decrease the mined phosphate required to meet the city resident's annual food demand by 72%. Taking a multi-scalar perspective is therefore key to fully exploring dietary choices as one of the tools for sustainable phosphorus management.

  12. An Implicit 2-D Depth-Averaged Finite-Volume Model of Flow and Sediment Transport in Coastal Waters

    DTIC Science & Technology

    2010-01-01

    Two-dimensional depth-averaged circulation model CMS-M2D: Version 3.0, Report 2: Sediment transport and morphology change, Technical Report ERDC/CHL TR... dimensional depth-averaged circulation model M2D: Version 2.0, Report 1, Technical documentation and user’s guide. ERDC/CHL TR-04-2, Coastal and Hydraulics

  13. An investigation into the sensitivity of the atmospheric chlorine and bromine loading using a globally averaged mass balance model

    NASA Astrophysics Data System (ADS)

    Dowdell, David C.; Matthews, G. Peter; Wells, Ian

    Two globally averaged mass balance models have been developed to investigate the sensitivity and future level of atmospheric chlorine and bromine as a result of the emission of 14 chloro- and 3 bromo-carbons. The models use production, growth, lifetime and concentration data for each of the halocarbons and divide the production into one of eight uses, these being aerosol propellants, cleaning agents, blowing agents in open and closed cell foams, non-hermetic and hermetic refrigeration, fire retardants and a residual "other" category. Each use category has an associated emission profile which is built into the models to take into account the proportion of halocarbon retained in equipment for a characteristic period of time before its release. Under the Montreal Protocol 3 requirements, a peak chlorine loading of 3.8 ppb is attained in 1994, which does not reduce to 2.0 ppb (the approximate level of atmospheric chlorine when the ozone hole formed) until 2053. The peak bromine loading is 22 ppt, also in 1994, which decays to 12 ppt by the end of next century. The models have been used to (i) compare the effectiveness of Montreal Protocols 1, 2 and 3 in removing chlorine from the atmosphere, (ii) assess the influence of the delayed emission assumptions used in these models compared to immediate emission assumptions used in previous models, (iii) assess the relative effect on the chlorine loading of a tightening of the Montreal Protocol 3 restrictions, and (iv) calculate the influence of chlorine and bromine chemistry as well as the faster phase out of man-made methyl bromide on the bromine loading.
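
    As a rough illustration of the globally averaged mass-balance approach described above (a sketch, not the authors' model; the lifetime, bank delay, and production figures are illustrative placeholders), a one-box budget with delayed release from an equipment "bank" can be integrated as follows:

```python
# Minimal sketch of a globally averaged one-box mass balance for a single
# halocarbon with a simple first-order delayed-release "bank". All numbers
# are illustrative placeholders, not values from the paper.
import numpy as np

ATM_MOLES = 1.77e20        # approximate moles of air in the atmosphere
LIFETIME_YR = 50.0         # assumed atmospheric lifetime of the halocarbon (years)
BANK_DELAY_YR = 10.0       # assumed mean residence time in equipment before release

def integrate_loading(production_mol_per_yr, years, dt=0.1):
    """Euler integration of the bank and the atmospheric burden (both in moles)."""
    bank, burden = 0.0, 0.0
    t = np.arange(0.0, years, dt)
    mixing_ratio = np.empty_like(t)
    for i, _ in enumerate(t):
        emission = bank / BANK_DELAY_YR              # first-order release from the bank
        bank += (production_mol_per_yr - emission) * dt
        burden += (emission - burden / LIFETIME_YR) * dt
        mixing_ratio[i] = burden / ATM_MOLES * 1e12  # parts per trillion
    return t, mixing_ratio

t, ppt = integrate_loading(production_mol_per_yr=5e9, years=100.0)
print(f"Loading after 100 yr: {ppt[-1]:.1f} ppt")
```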

  14. FDTD based SAR analysis in human head using irregular volume averaging techniques of different resolutions at GSM 900 band

    NASA Astrophysics Data System (ADS)

    Ali, Md Faruk; Ray, Sudhabindu

    2014-06-01

    Specific absorption rate (SAR) induced inside a human head in the near field of a mobile phone antenna has been investigated for three different SAR resolutions using the Finite-Difference Time-Domain (FDTD) method at the GSM 900 band. A voxel-based anthropomorphic human head model, consisting of different anatomical tissues, is used to calculate the peak SAR values averaged over 10-g, 1-g and 0.1-g mass. It is observed that the maximum local SAR increases significantly for smaller mass averages.
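
    A minimal sketch of mass-averaged SAR of the kind reported above (not the paper's FDTD code; the toy arrays, uniform voxel mass, and cube-growing rule are simplifying assumptions): grow a cube around the hottest voxel until it contains the target tissue mass, then take the mass-weighted mean SAR over that cube.

```python
# Sketch of mass-averaged SAR over a cube grown to a target tissue mass.
import numpy as np

def mass_averaged_sar(sar, mass, center, target_mass_g):
    """sar [W/kg] and mass [g] are 3-D voxel arrays; center is an (i, j, k) index."""
    i, j, k = center
    for half in range(1, min(sar.shape) // 2):
        sl = tuple(slice(max(c - half, 0), c + half + 1) for c in (i, j, k))
        m = mass[sl].sum()
        if m >= target_mass_g:
            # mass-weighted mean SAR over the averaging volume
            return (sar[sl] * mass[sl]).sum() / m
    raise ValueError("averaging cube exceeded the array without reaching target mass")

rng = np.random.default_rng(0)
sar = rng.random((40, 40, 40))          # toy SAR distribution
mass = np.full((40, 40, 40), 0.01)      # 0.01 g per voxel (illustrative)
hot = np.unravel_index(np.argmax(sar), sar.shape)
print("1-g averaged SAR:", mass_averaged_sar(sar, mass, hot, 1.0))
```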

  15. Standard error of estimated average timber volume per acre under point sampling when trees are measured for volume on a subsample of all points.

    Treesearch

    Floyd A. Johnson

    1961-01-01

    This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...

  16. Association between global brain volume and the rate of cognitive change in elderly humans without dementia.

    PubMed

    Hensel, A; Wolf, H; Busse, A; Arendt, T; Gertz, H J

    2005-01-01

    Patients with mild cognitive deficits experience different types of evolution. They are at increased risk of developing dementia, but they also have a chance of remaining cognitively stable or of improving. We investigated whether global brain volume, callosal size and hippocampal size are associated with the rate of cognitive change in elderly people without dementia. Volumetric MR images were recorded from 39 controls and 35 patients with questionable dementia who were followed up longitudinally for a mean of 2.3 years. The outcome measure was the annual change in the test score in the Structured Interview for the Diagnosis of Alzheimer's Dementia and Multi-Infarct Dementia, which includes all items of the Mini-Mental State Examination. Global brain volume, grey matter volume and white matter volume were the only significant independent predictors of the rate of cognitive change.

  17. A planet under siege: Are we changing earth's climate? Global Systems Science, Volume 1

    SciTech Connect

    Sneider, C.; Golden, R.

    1992-12-31

    Global Systems Science is an interdisciplinary course for high school students. It is divided into five volumes. Each volume contains laboratory experiments; home investigations; descriptions of recent scientific work; historical background; and consideration of the political, economic, and ethical issues associated with each problem area. Collectively, these volumes constitute a unique combination of studies in the natural and social sciences from which high school students may view the global environmental problems that they will confront within their lifetimes. The five volumes are: A Planet Under Siege: Are We Changing Earth's Climate; A History of Fire and Ice: The Earth's Climate System; Energy Paths: Use and Conservation of Energy; Ecological Systems: Evolution and Interdependence of Life; and The Case of the Missing Ozone: Chemistry of the Earth's Atmosphere.

  18. Attitude towards technology, social media usage and grade-point average as predictors of global citizenship identification in Filipino University Students.

    PubMed

    Lee, Romeo B; Baring, Rito; Maria, Madelene Sta; Reysen, Stephen

    2015-08-04

    We examine the influence of a positive attitude towards technology, number of social media network memberships and grade-point average (GPA) on global citizenship identification antecedents and outcomes. Students (N = 3628) at a university in the Philippines completed a survey assessing the above constructs. The results showed that attitude towards technology, number of social network site memberships and GPA predicted global citizenship identification, and subsequent prosocial outcomes (e.g. intergroup helping, responsibility to act for the betterment of the world), through the perception that valued others prescribe a global citizen identity (normative environment) and perceived knowledge of the world and felt interconnectedness with others (global awareness). The results highlight the associations between technology and academic performance with a global identity and associated values.

  19. The effect of reducing spatial resolution by in-plane partial volume averaging on peak velocity measurements in phase contrast magnetic resonance angiography.

    PubMed

    Rodrigues, Jonathan; Minhas, Kishore; Pieles, Guido; McAlindon, Elisa; Occleshaw, Christopher; Manghat, Nathan; Hamilton, Mark

    2016-10-01

    The aim of this study was to quantify the degree of the effect of in-plane partial volume averaging on recorded peak velocity in phase contrast magnetic resonance angiography (PCMRA). Using cardiac optimized 1.5 Tesla MRI scanners (Siemens Symphony and Avanto), 145 flow measurements (14 anatomical locations: ventricular outlets, aortic valve (AorV), aorta (5 sites), pulmonary arteries (3 sites), pulmonary veins, superior and inferior vena cava) in 37 subjects (consisting of healthy volunteers, congenital and acquired heart disease patients) were analyzed by the Siemens Argus default voxel averaging technique (where peak velocity = mean of highest velocity voxel and four neighbouring voxels) and by the single voxel technique (1.3×1.3×5 or 1.7×1.7×5.5 mm³) (where peak velocity = highest velocity voxel only). The effects of scan protocol (breath hold versus free breathing) and scanner type (Siemens Symphony versus Siemens Avanto) were also assessed. Statistical significance was defined as P<0.05. There was a significant mean increase in peak velocity of 7.1% when the single voxel technique was used compared to voxel averaging (P<0.0001). Significant increases in peak velocity were observed with the single voxel technique compared to voxel averaging regardless of subject type, anatomical flow location, scanner type and breathing command. Disabling voxel averaging did not affect the volume of flow recorded. Reducing spatial resolution by the use of voxel averaging produces a significant underestimation of peak velocity. While this is of itself not surprising, this is the first report to quantify the size of the effect. When PCMRA is used to assess peak velocity, pixel averaging should be disabled.
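
    A minimal sketch (not the vendor's Argus implementation) contrasting the two peak-velocity definitions described above on a toy 2-D velocity map; the numbers are arbitrary:

```python
# Sketch of single-voxel vs voxel-averaged peak velocity on a toy velocity map.
import numpy as np

def peak_velocity(vel, average_neighbours=True):
    """Return peak velocity from a 2-D phase-contrast velocity map [cm/s]."""
    i, j = np.unravel_index(np.argmax(np.abs(vel)), vel.shape)
    if not average_neighbours:
        return vel[i, j]                       # single-voxel technique
    # voxel-averaging technique: hottest voxel plus its 4 in-plane neighbours
    neighbours = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    vals = [vel[a, b] for a, b in neighbours
            if 0 <= a < vel.shape[0] and 0 <= b < vel.shape[1]]
    return float(np.mean(vals))

vel = np.array([[100., 140., 110.],
                [150., 180., 130.],
                [120., 145., 115.]])
print(peak_velocity(vel, True), peak_velocity(vel, False))   # 149.0 vs 180.0
```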

  20. The effect of reducing spatial resolution by in-plane partial volume averaging on peak velocity measurements in phase contrast magnetic resonance angiography

    PubMed Central

    Rodrigues, Jonathan; Minhas, Kishore; Pieles, Guido; McAlindon, Elisa; Occleshaw, Christopher; Manghat, Nathan

    2016-01-01

    Background The aim of this study was to quantify the degree of the effect of in-plane partial volume averaging on recorded peak velocity in phase contrast magnetic resonance angiography (PCMRA). Methods Using cardiac optimized 1.5 Tesla MRI scanners (Siemens Symphony and Avanto), 145 flow measurements (14 anatomical locations: ventricular outlets, aortic valve (AorV), aorta (5 sites), pulmonary arteries (3 sites), pulmonary veins, superior and inferior vena cava) in 37 subjects (consisting of healthy volunteers, congenital and acquired heart disease patients) were analyzed by the Siemens Argus default voxel averaging technique (where peak velocity = mean of highest velocity voxel and four neighbouring voxels) and by the single voxel technique (1.3×1.3×5 or 1.7×1.7×5.5 mm³) (where peak velocity = highest velocity voxel only). The effects of scan protocol (breath hold versus free breathing) and scanner type (Siemens Symphony versus Siemens Avanto) were also assessed. Statistical significance was defined as P<0.05. Results There was a significant mean increase in peak velocity of 7.1% when the single voxel technique was used compared to voxel averaging (P<0.0001). Significant increases in peak velocity were observed with the single voxel technique compared to voxel averaging regardless of subject type, anatomical flow location, scanner type and breathing command. Disabling voxel averaging did not affect the volume of flow recorded. Conclusions Reducing spatial resolution by the use of voxel averaging produces a significant underestimation of peak velocity. While this is of itself not surprising, this is the first report to quantify the size of the effect. When PCMRA is used to assess peak velocity, pixel averaging should be disabled. PMID:27942477

  1. Insolation data manual: long-term monthly averages of solar radiation, temperature, degree-days and global K̄T for 248 national weather service stations

    SciTech Connect

    Knapp, C L; Stoffel, T L; Whitaker, S D

    1980-10-01

    Monthly averaged data is presented which describes the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 °C (65 °F). For each station, the global K̄T (cloudiness index) was calculated on a monthly and annual basis. (MHR)
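
    A minimal sketch of the two derived quantities mentioned above, degree-days against an 18.3 °C (65 °F) base and a monthly clearness index K̄T = H/H0 (measured over extraterrestrial irradiation); the input values are illustrative, not taken from the manual:

```python
# Sketch of heating/cooling degree-days and a monthly clearness index.
def degree_days(daily_mean_temps_c, base_c=18.3):
    hdd = sum(max(base_c - t, 0.0) for t in daily_mean_temps_c)
    cdd = sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)
    return hdd, cdd

def clearness_index(monthly_global_irradiation, monthly_extraterrestrial_irradiation):
    # both in the same units, e.g. kWh/m^2 per month
    return monthly_global_irradiation / monthly_extraterrestrial_irradiation

print(degree_days([5.0, 12.0, 20.0, 25.0]))      # ~ (19.6, 8.4)
print(clearness_index(140.0, 260.0))             # ~ 0.54
```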

  2. The impact of hospital market structure on patient volume, average length of stay, and the cost of care.

    PubMed

    Robinson, J C; Luft, H S

    1985-12-01

    A variety of recent proposals rely heavily on market forces as a means of controlling hospital cost inflation. Sceptics argue, however, that increased competition might lead to cost-increasing acquisitions of specialized clinical services and other forms of non-price competition as means of attracting physicians and patients. Using data from hospitals in 1972 we analyzed the impact of market structure on average hospital costs, measured in terms of both cost per patient and cost per patient day. Under the retrospective reimbursement system in place at the time, hospitals in more competitive environments exhibited significantly higher costs of production than did those in less competitive environments.

  3. Is the Surface Potential Integral of a Dipole in a Volume Conductor Always Zero? A Cloud Over the Average Reference of EEG and ERP.

    PubMed

    Yao, Dezhong

    2017-03-01

    Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the surface potential integral over a volume conductor is zero; thus, the average of scalp potential recordings might approximate the theoretically desired zero reference. However, this zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a volume conductor containing a dipole may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. Reference selection therefore remains a fundamental problem to be addressed in various EEG and ERP studies.
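
    For context, the average-reference operation itself is a one-line transformation, which is why its theoretical justification matters; a minimal sketch on toy data (channel count and values are arbitrary):

```python
# Sketch of re-referencing EEG to the average reference: subtract the mean
# across channels at each sample. This only approximates an ideal zero
# reference if the surface potential integral vanishes, as discussed above.
import numpy as np

def average_reference(eeg):
    """eeg: array of shape (n_channels, n_samples); returns a re-referenced copy."""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
eeg = rng.normal(size=(32, 1000)) + 5.0              # toy data with a common offset
rereferenced = average_reference(eeg)
print(np.allclose(rereferenced.mean(axis=0), 0.0))   # channel mean is now zero
```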

  4. SU-D-213-04: Accounting for Volume Averaging and Material Composition Effects in An Ionization Chamber Array for Patient Specific QA

    SciTech Connect

    Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K

    2015-06-15

    Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly

  5. Identification of myocardial diffuse fibrosis by 11 heartbeat MOLLI T1 mapping: averaging to improve precision and correlation with collagen volume fraction.

    PubMed

    Vassiliou, Vassilios S; Wassilew, Katharina; Cameron, Donnie; Heng, Ee Ling; Nyktari, Evangelia; Asimakopoulos, George; de Souza, Anthony; Giri, Shivraman; Pierce, Iain; Jabbour, Andrew; Firmin, David; Frenneaux, Michael; Gatehouse, Peter; Pennell, Dudley J; Prasad, Sanjay K

    2017-06-12

    Our objectives involved identifying whether repeated averaging at basal and mid left ventricular myocardial levels improves precision and correlation with collagen volume fraction for 11 heartbeat MOLLI T1 mapping versus assessment at a single ventricular level. For assessment of T1 mapping precision, a cohort of 15 healthy volunteers underwent two CMR scans on separate days using an 11 heartbeat MOLLI with a 5(3)3 beat scheme to measure native T1 and a 4(1)3(1)2 beat post-contrast scheme to measure post-contrast T1, allowing calculation of the partition coefficient and ECV. To assess correlation of T1 mapping with collagen volume fraction, a separate cohort of ten aortic stenosis patients scheduled to undergo surgery underwent one CMR scan with this 11 heartbeat MOLLI scheme, followed by intraoperative tru-cut myocardial biopsy. Six models of myocardial diffuse fibrosis assessment were established with incremental inclusion of imaging by averaging of the basal and mid-myocardial left ventricular levels, and each model was assessed for precision and correlation with collagen volume fraction. A model using 11 heartbeat MOLLI imaging of two basal and two mid ventricular level averaged T1 maps provided improved precision (intraclass correlation 0.93 vs 0.84) and correlation with histology (R² = 0.83 vs 0.36) for diffuse fibrosis compared to a single mid-ventricular level alone. ECV was more precise and correlated better than native T1 mapping. T1 mapping sequences with repeated averaging could be considered for applications of 11 heartbeat MOLLI, especially when small changes in native T1/ECV might affect clinical management.
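
    A minimal sketch (not the study's pipeline) of the standard partition-coefficient and ECV calculation from averaged native and post-contrast T1 values; the T1 and haematocrit numbers are illustrative placeholders:

```python
# Sketch of the partition coefficient (lambda) and ECV from T1 values.
import numpy as np

def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, haematocrit):
    """All T1 values in ms; returns (partition coefficient, ECV fraction)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native
    partition_coefficient = d_r1_myo / d_r1_blood
    return partition_coefficient, (1.0 - haematocrit) * partition_coefficient

# averaging T1 over repeated maps at basal and mid-ventricular levels, as above
t1_myo_native = np.mean([1010.0, 1000.0, 1020.0, 1005.0])
t1_myo_post = np.mean([480.0, 470.0, 475.0, 485.0])
lam, ecv_frac = ecv(t1_myo_native, t1_myo_post, 1550.0, 330.0, haematocrit=0.42)
print(f"lambda = {lam:.2f}, ECV = {ecv_frac:.1%}")
```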

  6. Global Inventory of Regional and National Qualifications Frameworks. Volume II: National and Regional Cases

    ERIC Educational Resources Information Center

    UNESCO Institute for Lifelong Learning, 2015

    2015-01-01

    This second volume of the "Global Inventory of Regional and National Qualifications Frameworks" focuses on national and regional cases of national qualifications frameworks for eighty- six countries from Afghanistan to Uzbekistan and seven regional qualifications frameworks. Each country profile provides a thorough review of the main…

  7. Transforming Education: Global Perspectives, Experiences and Implications. Educational Psychology: Critical Pedagogical Perspectives. Volume 24

    ERIC Educational Resources Information Center

    DeVillar, Robert A., Ed.; Jiang, Binbin, Ed.; Cummins, Jim, Ed.

    2013-01-01

    This research-based volume presents a substantive, panoramic view of ways in which Australia and countries in Africa, Asia, Europe, and North and South America engage in educational programs and practices to transform the learning processes and outcomes of their students. It reveals and analyzes national and global trajectories in key areas of…

  9. Plantation Pedagogy: A Postcolonial and Global Perspective. Global Studies in Education. Volume 16

    ERIC Educational Resources Information Center

    Bristol, Laurette S. M.

    2012-01-01

    "Plantation Pedagogy" originates from an Afro-Caribbean primary school teacher's experience. It provides a discourse which extends and illuminates the limitations of current neo-liberal and global rationalizations of the challenges posed to a teacher's practice. Plantation pedagogy is distinguished from critical pedagogy by its historical presence…

  11. Mass and volume contributions to twentieth-century global sea level rise.

    PubMed

    Miller, Laury; Douglas, Bruce C

    2004-03-25

    The rate of twentieth-century global sea level rise and its causes are the subjects of intense controversy. Most direct estimates from tide gauges give 1.5-2.0 mm yr⁻¹, whereas indirect estimates based on the two processes responsible for global sea level rise, namely mass and volume change, fall far below this range. Estimates of the volume increase due to ocean warming give a rate of about 0.5 mm yr⁻¹ (ref. 8) and the rate due to mass increase, primarily from the melting of continental ice, is thought to be even smaller. Therefore, either the tide gauge estimates are too high, as has been suggested recently, or one (or both) of the mass and volume estimates is too low. Here we present an analysis of sea level measurements at tide gauges combined with observations of temperature and salinity in the Pacific and Atlantic oceans close to the gauges. We find that gauge-determined rates of sea level rise, which encompass both mass and volume changes, are two to three times higher than the rates due to volume change derived from temperature and salinity data. Our analysis supports earlier studies that put the twentieth-century rate in the 1.5-2.0 mm yr⁻¹ range, but more importantly it suggests that mass increase plays a larger role than ocean warming in twentieth-century global sea level rise.

  12. Drive-Response Analysis of Global Ice Volume, CO2, and Insolation using Information Transfer

    NASA Astrophysics Data System (ADS)

    Brendryen, J.; Hannisdal, B.

    2014-12-01

    The processes and interactions that drive global ice volume variability and deglaciations are a topic of considerable debate. Here we analyze the drive-response relationships between data sets representing global ice volume, CO2 and insolation over the past 800 000 years using an information theoretic approach. Specifically, we use a non-parametric measure of directional information transfer (IT) based on the construct of transfer entropy to detect the relative strength and directionality of interactions in the potentially chaotic and non-linear glacial-interglacial climate system. Analyses of unfiltered data suggest a tight coupling between CO2 and ice volume, detected as strong, symmetric information flow consistent with a two-way interaction. In contrast, IT from Northern Hemisphere (NH) summer insolation to CO2 is highly asymmetric, suggesting that insolation is an important driver of CO2. Conditional analysis further suggests that CO2 is a dominant influence on ice volume, with the effect of insolation also being significant but limited to smaller-scale variability. However, the strong correlation between CO2 and ice volume renders them information redundant with respect to insolation, confounding further drive-response attribution. We expect this information redundancy to be partly explained by the shared glacial-interglacial "sawtooth" pattern and its overwhelming influence on the transition probability distributions over the target interval. To test this, we filtered out the abrupt glacial terminations from the ice volume and CO2 records to focus on the residual variability. Preliminary results from this analysis confirm insolation as a driver of CO2 and two-way interactions between CO2 and ice volume. However, insolation is reduced to a weak influence on ice volume. Conditional analyses support CO2 as a dominant driver of ice volume, while ice volume and insolation both have a strong influence on CO2. These findings suggest that the effect of orbital

  13. Using global illumination in volume visualization of rheumatoid arthritis CT data.

    PubMed

    Zheng, Lin; Chaudhari, Abhijit J; Badawi, Ramsey D; Ma, Kwan-Liu

    2014-01-01

    Proper lighting in rendering is essential for visualizing 3D objects, but most visualization software tools still employ simple lighting models. The advent of hardware-accelerated advanced lighting suggests that volume visualization can be truly usable for clinical work. Researchers studied how volume rendering incorporating global illumination impacted perception of bone surface features captured by x-ray computed-tomography scanners for clinical monitoring of rheumatoid arthritis patients. The results, evaluated by clinical researchers familiar with the disease and medical-image interpretation, indicate that interactive visualization with global illumination helped the researchers derive more accurate interpretations of the image data. With clinical needs and the recent advancement of volume visualization technology, this study is timely and points the way for further research.

  14. Development of the average lattice phase-strain and global elastic macro-strain in Al/TiC composites

    SciTech Connect

    Shi, N.; Bourke, M.A.M.; Goldstone, J.A.; Allison, J.E.

    1994-02-01

    The development of elastic lattice phase strains and global elastic macro-strain in a 15 vol% TiC particle reinforced 2219-T6 Al composite was modeled by the finite element method (FEM) as a function of tensile uniaxial loading. The numerical predictions are in excellent agreement with strain measurements at a spallation neutron source. Results from the measurements and modeling indicate that the lattice phase-strains go through a "zigzag" increase with the applied load in the direction perpendicular to the load, while the changes of slope in the parallel direction are monotonic. FEM results further showed that it is essential to consider the effect of thermal residual stresses (TRS) in understanding this anomalous behavior. It was demonstrated that, due to TRS, the site of matrix plastic flow initiation changed. On the other hand, the changes of slope of the elastic global macro-strain are solely determined by the phase-stress partition in the composite. An analytical calculation showed that both experimental and numerical slope changes during the elastic global strain response under loading could be accurately reproduced by accounting for the changes of the phase-stress ratio between the matrix and the reinforcement.

  15. Cell volumes of marine phytoplankton from globally distributed coastal data sets

    NASA Astrophysics Data System (ADS)

    Harrison, Paul J.; Zingone, Adriana; Mickelson, Michael J.; Lehtinen, Sirpa; Ramaiah, Nagappa; Kraberg, Alexandra C.; Sun, Jun; McQuatters-Gollop, Abigail; Jakobsen, Hans Henrik

    2015-09-01

    Globally there are numerous long-term time series measuring phytoplankton abundance. With appropriate conversion factors, numerical species abundance can be expressed as biovolume and then converted to phytoplankton carbon. To date, there has been no attempt to analyze globally distributed phytoplankton data sets to determine the most appropriate species-specific mean cell volume. We have determined phytoplankton cell volumes for 214 of the most common species found in globally distributed coastal time series. The cell volume, carbon/cell and cell density of large diatoms are 200,000, 20,000 and 0.1 times, respectively, those of small diatoms. The cell volume, carbon/cell and cell density of large dinoflagellates are 1500, 1000 and 0.7 times, respectively, those of small dinoflagellates. The range in diatom biovolumes is 100 times greater than across dinoflagellates (i.e. >200,000 vs. 1500 times) and within any diatom species, the range in biovolume is up to 10-fold. Variations in diatom cell volumes are the single largest source of uncertainty in community phytoplankton carbon estimates and greatly exceed the uncertainty associated with the different volume to carbon estimates. Small diatoms have 10 times more carbon density than large diatoms and small dinoflagellates have 1.5 times more carbon density than large cells. However, carbon density varies relatively little compared to biovolume. We recommend that monthly biovolumes should be determined on field samples, at least for the most important species in each study area, since these measurements will incorporate the effects of variations in light, temperature, nutrients and life cycles. Since biovolumes of diatoms are particularly variable, the use of size classes will help to capture the percentage of large and small cells for each species at certain times of the year. This summary of global datasets of phytoplankton biovolumes is useful in order to evaluate where locally determined biovolumes lie within the

  16. The Global 2000 Report to the President: Entering the Twenty-First Century. Volume Two--The Technical Report.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This second volume of the Global 2000 study presents a technical report of detailed projections and analyses. It is a U.S. government effort to present a long-term global perspective on population, resources, and environment. The volume has four parts. Approximately half of the report, part one, deals with projections for the future in the areas…

  17. Shifting Tides in Global Higher Education: Agency, Autonomy, and Governance in the Global Network. Global Studies in Education, Volume 9

    ERIC Educational Resources Information Center

    Witt, Mary Allison

    2011-01-01

    The increasing connection among higher education institutions worldwide is well documented. What is less understood is how this connectivity is enacted and manifested on specific levels of the global education network. This book details the planning process of a multi-institutional program in engineering between institutions in the US and…

  19. Numerical study of sea level and kuroshio volume transport change contributed by steric effect due to global warming

    NASA Astrophysics Data System (ADS)

    Lim, C.; Kim, D. H.; Woo, S. B.

    2016-02-01

    For direct consideration of seawater volume change by the steric effect due to global warming, this study uses the MOM (Modular Ocean Model) version 4 oceanic general circulation model, which does not use the Boussinesq approximation. The model was adapted to regional scale by increasing the grid resolution. Global simulation results of CM2.1, HADCM3 and MIROC3.2 provided by the IPCC AR4 (Intergovernmental Panel on Climate Change) were used as initial and boundary conditions, and SRES (Special Report on Emissions Scenarios) A1B was selected as the global warming scenario. The Northwestern Pacific region, which includes the Korean Peninsula, was selected as the study area, and the Yellow Sea, which has a complex coastline, was expressed in detail by increasing grid resolution. By averaging the results of the three numerical experiments, we found that temperature and mean sea level (MSL) increase by approximately 3 °C and 35 cm, respectively, from 2000 to 2100. Interestingly, the East Sea (Japan Sea) appeared to show the largest change of MSL due to the steric effect compared with the Yellow and East China Seas. Numerical results showed that the larger influence on the East/Japan Sea is caused by the temperature and volume transport changes in the Tsushima Warm Current, which passes through the Korea Strait. A direct simulation of the steric effect results in higher sea level rise compared with an indirect simulation of the steric effect. Also, the Kuroshio Current, which is one of the major currents in the Northwestern Pacific, showed a decrease in transport as global warming progressed. Although there were differences between models, the transport was reduced by approximately 4-5 Sv by 2100. However, there was no large change in the transport of the Tsushima Warm Current.

  20. International conference on the role of the polar regions in global change: Proceedings. Volume 2

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

    The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with the polar regions, which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier and permafrost responses and feedbacks; (6) paleoenvironmental studies; and (7) aerosols and trace gases.

  1. International conference on the role of the polar regions in global change: Proceedings. Volume 1

    SciTech Connect

    Weller, G.; Wilson, C.L.; Severin, B.A.B.

    1991-12-01

    The International Conference on the Role of the Polar Regions in Global Change took place on the campus of the University of Alaska Fairbanks on June 11--15, 1990. The goal of the conference was to define and summarize the state of knowledge on the role of the polar regions in global change, and to identify gaps in knowledge. To this purpose experts in a wide variety of relevant disciplines were invited to present papers and hold panel discussions. While there are numerous conferences on global change, this conference dealt specifically with the polar regions, which occupy key positions in the global system. These two volumes of conference proceedings include papers on (1) detection and monitoring of change; (2) climate variability and climate forcing; (3) ocean, sea ice, and atmosphere interactions and processes; (4) effects on biota and biological feedbacks; (5) ice sheet, glacier and permafrost responses and feedbacks; (6) paleoenvironmental studies; and (7) aerosols and trace gases.

  2. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  3. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    PubMed

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  4. Introduction to “Global tsunami science: Past and future, Volume I”

    USGS Publications Warehouse

    Geist, Eric L.; Fritz, Hermann; Rabinovich, Alexander B.; Tanioka, Yuichiro

    2016-01-01

    Twenty-five papers on the study of tsunamis are included in Volume I of the PAGEOPH topical issue “Global Tsunami Science: Past and Future”. Six papers examine various aspects of tsunami probability and uncertainty analysis related to hazard assessment. Three papers relate to deterministic hazard and risk assessment. Five more papers present new methods for tsunami warning and detection. Six papers describe new methods for modeling tsunami hydrodynamics. Two papers investigate tsunamis generated by non-seismic sources: landslides and meteorological disturbances. The final three papers describe important case studies of recent and historical events. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.

  5. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    This paper extends the results I reported at this year's ION International Technical Meeting on multi-constellation GNSS coverage by showing how the use of multi-constellation GNSS improves Geometric Dilution of Precision (GDOP). Originally developed to provide position, navigation, and timing for terrestrial users, GPS has found increasing use in space for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis attitude control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Galileo, and Beidou) and the development of Satellite Based Augmentation Services, it is possible to obtain improved precision by using evolving multi-constellation receivers. The Space Service Volume (SSV) is formally defined as the volume of space between three thousand kilometers altitude and geosynchronous altitude (approximately 36,500 km), with the volume below three thousand kilometers defined as the Terrestrial Service Volume (TSV). The USA has established signal requirements for the SSV as part of the GPS Capability Development Documentation (CDD). Diplomatic efforts are underway to extend Space Service Volume commitments to the other Position, Navigation, and Timing (PNT) service providers in an effort to assure that all space users will benefit from the enhanced capabilities of interoperating GNSS services in the space domain.
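
    A minimal sketch of the GDOP calculation that motivates multi-constellation use (more usable satellites generally lower GDOP); the line-of-sight geometry below is arbitrary and illustrative:

```python
# Sketch of GDOP = sqrt(trace((H^T H)^-1)) from receiver-to-satellite geometry.
import numpy as np

def gdop(unit_vectors):
    """unit_vectors: (n, 3) receiver-to-satellite unit line-of-sight vectors."""
    H = np.hstack([unit_vectors, np.ones((unit_vectors.shape[0], 1))])  # add clock column
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

los = np.array([[ 0.0,  0.0,  1.0],
                [ 0.8,  0.0,  0.6],
                [-0.4,  0.69, 0.6],
                [-0.4, -0.69, 0.6],
                [ 0.0,  0.94, 0.34]])
los = los / np.linalg.norm(los, axis=1, keepdims=True)   # ensure unit vectors
print(f"GDOP with {len(los)} satellites: {gdop(los):.2f}")
print(f"GDOP with {len(los[:4])} satellites: {gdop(los[:4]):.2f}")
```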

  6. Mars: Crustal pore volume, cryospheric depth, and the global occurrence of groundwater

    NASA Technical Reports Server (NTRS)

    Clifford, Stephen M.

    1987-01-01

    It is argued that most of the Martian hydrosphere resides in a porous outer layer of crust that, based on a lunar analogy, appears to extend to a depth of about 10 km. The total pore volume of this layer is sufficient to store the equivalent of a global ocean of water some 500 to 1500 m deep. Thermal modeling suggests that about 300 to 500 m of water could be stored as ice within the crust. Any excess must exist as groundwater.
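
    A quick back-of-the-envelope check of the figures quoted above (not Clifford's model): for a 10 km porous layer, the equivalent global water layer is simply thickness times mean porosity, so mean porosities of roughly 5-15% reproduce the 500-1500 m range.

```python
# Sketch: equivalent global water layer stored in a porous crustal layer.
def equivalent_global_layer_m(layer_thickness_km, mean_porosity):
    # equivalent depth = pore volume / surface area = thickness * porosity
    return layer_thickness_km * 1000.0 * mean_porosity

for phi in (0.05, 0.10, 0.15):
    print(f"porosity {phi:.0%}: ~{equivalent_global_layer_m(10.0, phi):.0f} m")
```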

  7. Size and distribution of the global volume of surgery in 2012

    PubMed Central

    Haynes, Alex B; Molina, George; Lipsitz, Stuart R; Esquivel, Micaela M; Uribe-Leitz, Tarsicio; Fu, Rui; Azad, Tej; Chao, Tiffany E; Berry, William R; Gawande, Atul A

    2016-01-01

    Abstract Objective To estimate global surgical volume in 2012 and compare it with estimates from 2004. Methods For the 194 Member States of the World Health Organization, we searched PubMed for studies and contacted key informants for reports on surgical volumes between 2005 and 2012. We obtained data on population and total health expenditure per capita for 2012 and categorized Member States as very-low, low, middle and high expenditure. Data on caesarean delivery were obtained from validated statistical reports. For Member States without recorded surgical data, we estimated volumes by multiple imputation using data on total health expenditure. We estimated caesarean deliveries as a proportion of all surgery. Findings We identified 66 Member States reporting surgical data. We estimated that 312.9 million operations (95% confidence interval, CI: 266.2–359.5) took place in 2012, an increase from the 2004 estimate of 226.4 million operations. Only 6.3% (95% CI: 1.7–22.9) and 23.1% (95% CI: 14.8–36.7) of operations took place in very-low- and low-expenditure Member States representing 36.8% (2573 million people) and 34.2% (2393 million people) of the global population of 7001 million people, respectively. Caesarean deliveries comprised 29.6% (5.8/19.6 million operations; 95% CI: 9.7–91.7) of the total surgical volume in very-low-expenditure Member States, but only 2.7% (5.1/187.0 million operations; 95% CI: 2.2–3.4) in high-expenditure Member States. Conclusion Surgical volume is large and growing, with caesarean delivery comprising nearly a third of operations in most resource-poor settings. Nonetheless, there remains disparity in the provision of surgical services globally. PMID:26966331

  8. Plio-Pleistocene ice volume, Antarctic climate, and the global δ18O record.

    PubMed

    Raymo, M E; Lisiecki, L E; Nisancioglu, Kerim H

    2006-07-28

    We propose that from approximately 3 to 1 million years ago, ice volume changes occurred in both the Northern and Southern Hemispheres, each controlled by local summer insolation. Because Earth's orbital precession is out of phase between hemispheres, 23,000-year changes in ice volume in each hemisphere cancel out in globally integrated proxies such as ocean δ18O or sea level, leaving the in-phase obliquity (41,000 years) component of insolation to dominate those records. Only a modest ice mass change in Antarctica is required to effectively cancel out a much larger northern ice volume signal. At the mid-Pleistocene transition, we propose that marine-based ice sheet margins replaced terrestrial ice margins around the perimeter of East Antarctica, resulting in a shift to in-phase behavior of northern and southern ice sheets as well as the strengthening of 23,000-year cyclicity in the marine δ18O record.
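
    As a schematic illustration of the cancellation argument above (a sketch, not the authors' derivation; the amplitudes A and B are arbitrary), write each hemisphere's ice-volume response to its local summer insolation with the precession terms out of phase:

```latex
% Hemispheric ice-volume responses with out-of-phase precession components
\begin{align*}
V_{N}(t) &\approx A\cos(\omega_{p} t) + B\cos(\omega_{o} t),\\
V_{S}(t) &\approx A\cos(\omega_{p} t + \pi) + B\cos(\omega_{o} t)
          = -A\cos(\omega_{p} t) + B\cos(\omega_{o} t),\\
V_{N}(t) + V_{S}(t) &\approx 2B\cos(\omega_{o} t),
\end{align*}
```

    where ω_p and ω_o are the precession (~23 kyr) and obliquity (~41 kyr) frequencies. The precession components cancel in globally integrated proxies such as ocean δ18O, leaving the in-phase obliquity signal; even a partial southern amplitude strongly damps the 23-kyr band.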

  9. Modelling the flow of a second order fluid through and over a porous medium using the volume averages. I. The generalized Brinkman's equation

    NASA Astrophysics Data System (ADS)

    Minale, Mario

    2016-02-01

    In this paper, the generalized Brinkman's equation for a viscoelastic fluid is derived using the volume averages. Darcy's generalised equation is consequently obtained neglecting the first and the second Brinkman's correction with respect to the drag term. The latter differs from the Newtonian drag because of an additional term quadratic in the velocity and inversely proportional to a "viscoelastic" permeability defined in the paper. The viscoelastic permeability tensor can be calculated by solving a boundary value problem, but it must be in fact experimentally measured. To isolate the elastic contribution, the constitutive equation of the second order fluid of Coleman and Noll is chosen because, in simple shear at steady state, second order fluids show a constant viscosity and first and second normal stress differences quadratic in the shear rate. The model predictions are compared with data of the literature obtained in a Darcy's experiment and the agreement is good.

  10. Global average concentration and trend for hydroxyl radicals deduced from ALE/GAGE trichloroethane (methyl chloroform) data for 1978-1990

    NASA Technical Reports Server (NTRS)

    Prinn, R.; Cunnold, D.; Simmonds, P.; Alyea, F.; Boldi, R.; Crawford, A.; Fraser, P.; Gutzler, D.; Hartley, D.; Rosen, R.

    1992-01-01

    An optimal estimation inversion scheme is utilized with atmospheric data and emission estimates to determine the globally averaged CH3CCl3 tropospheric lifetime and OH concentration. The data are taken from atmospheric measurements of 1,1,1-trichloroethane at surface stations and show an annual increase of 4.4 +/- 0.2 percent. Industrial emission estimates and a small oceanic loss rate are included, and the OH trend for the same period (1978-1990) is estimated at 1.0 +/- 0.8 percent/yr. The positive OH trend is consistent with theories regarding OH and ozone trends with respect to land use and global warming. Attention is given to the effects of the ENSO on the CH3CCl3 data and the assumption of continuing current industrial anthropogenic emissions. A novel tropical atmospheric tracer-transport mechanism is noted with respect to the CH3CCl3 data.

  12. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    SciTech Connect

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-04-01

    The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. Motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0-12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm of well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within ±0.7%. From the dose area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59-103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.

  13. Global sea level fluctuations and uncertainties through a Wilson cycle based on ocean basin volume reconstructions

    NASA Astrophysics Data System (ADS)

    Wright, Nicky; Seton, Maria; Williams, Simon E.; Dietmar Müller, R.

    2017-04-01

    Variations in the volume of ocean basins are the main driving force for (long-wavelength) changes in eustatic sea level in an ice-free world, i.e. most of the Mesozoic and Cenozoic. The volume of ocean basins is largely dependent on changes in the seafloor spreading history, which can be reconstructed based on an age-depth relationship for oceanic crust and an underlying global plate kinematic model. Ocean basin volume reconstructions need to include: (1) a predicted history of back-arc basin formation, including where geological evidence exists for the opening and closing of back-arc basins within a single Wilson cycle, (2) the emplacement and subsidence of oceanic plateaus (LIPs), (3) variations in sediment thickness through time, and (4) a reconstruction of the depth of continental margins and fragments. Unfortunately, due to subduction of oceanic crust, we must rely on synthetically modelled ocean crust for much of Earth's history, for which it is impossible to ground truth the history of LIPs and sediment thickness. In order to improve reconstructions of sea level on geologic time scales and assess the uncertainty in deriving the volume of ocean basins based on a global plate kinematic model, we investigate the influence of these poorly constrained features (e.g. LIPs, back-arc basins, sediment thickness, passive margins) on ocean basin volume since 230 Ma (i.e. throughout an entire Wilson cycle). We assess the characteristics for each feature at present-day and during well-constrained times during the Cenozoic, and create suites of alternative paleobathymetry grids which incorporate varying degrees of each feature's influence. Further, we derive a global sea level curve based only on the reconstruction of ocean basin volume (i.e. excluding effects such as dynamic topography and glaciation), and present the influence of each component and their uncertainties through time. We find that by incorporating reasonable predictions for these components during times

  14. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, an optimal averaging scheme has previously been derived to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, another approach computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
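
    A minimal sketch of the widely used eigenvector approach to scalar-weighted quaternion averaging, consistent with the formulation summarized above (the example quaternions and weights are arbitrary): the average is taken as the principal eigenvector of the weighted sum of outer products q qᵀ.

```python
# Sketch: average quaternion as the principal eigenvector of sum(w_i * q_i q_i^T).
import numpy as np

def average_quaternion(quaternions, weights=None):
    """quaternions: (n, 4) array of unit quaternions; returns a (4,) unit quaternion."""
    q = np.asarray(quaternions, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
    M = sum(wi * np.outer(qi, qi) for wi, qi in zip(w, q))   # sign of each qi is irrelevant
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argmax(eigvals)]                    # principal eigenvector

# two nearby attitudes, one supplied with flipped sign (q and -q are the same rotation)
qs = np.array([[0.0, 0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0995, 0.9950],
               [0.0, 0.0, -0.0499, -0.9988]])
print(average_quaternion(qs, weights=[1.0, 1.0, 2.0]))
```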

  15. An analysis of the global spatial variability of column-averaged CO2 from SCIAMACHY and its implications for CO2 sources and sinks

    USGS Publications Warehouse

    Zhang, Zhen; Jiang, Hong; Liu, Jinxun; Zhang, Xiuying; Huang, Chunlin; Lu, Xuehe; Jin, Jiaxin; Zhou, Guomo

    2014-01-01

    Satellite observations of carbon dioxide (CO2) are important because of their potential for improving the scientific understanding of global carbon cycle processes and budgets. We present an analysis of the column-averaged dry air mole fractions of CO2 (denoted XCO2) of the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) retrievals, which were derived from a satellite instrument with relatively long-term records (2003–2009) and with measurements sensitive to the near surface. The spatial-temporal distributions of remotely sensed XCO2 have significant spatial heterogeneity with about 6–8% variations (367–397 ppm) during 2003–2009, challenging the traditional view that the spatial heterogeneity of atmospheric CO2 is not significant enough (2 and surface CO2 were found for major ecosystems, with the exception of tropical forest. In addition, when compared with a simulated terrestrial carbon uptake from the Integrated Biosphere Simulator (IBIS) and the Emissions Database for Global Atmospheric Research (EDGAR) carbon emission inventory, the latitudinal gradient of XCO2 seasonal amplitude was influenced by the combined effect of terrestrial carbon uptake, carbon emission, and atmospheric transport, suggesting no direct implications for terrestrial carbon sinks. From the investigation of the growth rate of XCO2 we found that the increase of CO2 concentration was dominated by temperature in the northern hemisphere (20–90°N) and by precipitation in the southern hemisphere (20–90°S), with the major contribution to global average occurring in the northern hemisphere. These findings indicated that the satellite measurements of atmospheric CO2 improve not only the estimations of atmospheric inversion, but also the understanding of the terrestrial ecosystem carbon dynamics and its feedback to atmospheric CO2.

  16. Global end-diastolic volume is associated with the occurrence of delayed cerebral ischemia and pulmonary edema after subarachnoid hemorrhage.

    PubMed

    Watanabe, Akihiro; Tagami, Takashi; Yokobori, Shoji; Matsumoto, Gaku; Igarashi, Yutaka; Suzuki, Go; Onda, Hidetaka; Fuse, Akira; Yokota, Hiroyuki

    2012-11-01

    Predictive variables of delayed cerebral ischemia (DCI) and pulmonary edema following subarachnoid hemorrhage (SAH) remain unknown. We aimed to determine associations between transpulmonary thermodilution-derived variables and DCI and pulmonary edema occurrence after SAH. We reviewed 34 consecutive SAH patients monitored by the PiCCO system. Six patients developed DCI at 7 days after SAH on average; 28 did not (non-DCI). We compared the variable measures for 1 day before DCI occurred (DCI day -1) in the DCI group and 6 days after SAH (non-DCI day -1) in the non-DCI group for control. The mean value of the global end-diastolic volume index (GEDI) for DCI day -1 was lower than that for non-DCI day -1 (676 ± 65 vs. 872 ± 85 mL/m², P = 0.04). Central venous pressure (CVP) was not significantly different (7.8 ± 3.1 vs. 9.4 ± 1.9 cm H2O, P = 0.45). At day -1 for both DCI and non-DCI, 11 patients (32%) had pulmonary edema. Global end-diastolic volume index was significantly higher in patients with pulmonary edema than in those without this condition (947 ± 126 vs. 766 ± 81 mL/m², P = 0.02); CVP was not significantly different (8.7 ± 2.8 vs. 9.2 ± 2.1 cm H2O, P = 0.78). Although significant correlation was found between extravascular lung water (EVLW) measures and GEDI (r = 0.58, P = 0.001), EVLW and CVP were not correlated (r = 0.03, P = 0.88). Thus, GEDI might be associated with DCI occurrence and EVLW accumulation after SAH.

  17. Transports and budgets of volume, heat, and salt from a global eddy-resolving ocean model

    SciTech Connect

    McCann, M.P.; Semtner, A.J. Jr.; Chervin, R.M.

    1994-07-01

    The results from an integration of a global ocean circulation model have been condensed into an analysis of the volume, heat, and salt transports among the major ocean basins. Transports are also broken down between the model's Ekman, thermocline, and deep layers. Overall, the model does well. Horizontal exchanges of mass, heat, and salt between ocean basins have reasonable values, and the volume of North Atlantic Deep Water (NADW) transport is in general agreement with what limited observations exist. On a global basis the zonally integrated meridional heat transport is poleward at all latitudes except for the latitude band 30°S to 45°S. This anomalous transport is most likely a signature of the model's inability to form Antarctic Intermediate Water (AAIW) and Antarctic Bottom Water (AABW) properly. Eddy heat transport is strong at the equator, where its convergence heats the equatorial Pacific about twice as much as it heats the equatorial Atlantic. The greater heating in the Pacific suggests that mesoscale eddies may be a vital mechanism for warming and maintaining an upwelling portion of the global conveyor-belt circulation. The model's fresh water transport compares well with observations. However, in the Atlantic there is an excessive southward transport of fresh water due to the absence of the Mediterranean outflow and weak northward flow of AAIW. Perhaps the model's greatest weakness is the lack of strong AAIW and AABW circulation cells. Accurate thermohaline forcing in the North Atlantic (based on numerous hydrographic observations) helps the model adequately produce NADW. In contrast, the southern ocean is an area of sparse observation. Better thermohaline observations in this area may be needed if models such as this are to produce the deep convection that will achieve more accurate simulations of the global 3-dimensional circulation. 41 refs., 18 figs., 1 tab.

  18. PET imaging of thin objects: measuring the effects of positron range and partial-volume averaging in the leaf of Nicotiana tabacum

    SciTech Connect

    Alexoff, D.L.; Dewey, S.L.; Vaska, P.; Krishnamoorthy, S.; Ferrieri, R.; Schueller, M.; Schlyer, D.; Fowler, J.S.

    2011-03-01

    PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
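
    A hedged sketch of the correction idea described in the last sentence: collapse the blurred activity along the axis perpendicular to the leaf and convert back to a concentration with the measured leaf thickness. The function name, the explicit escape-fraction scaling, and the toy image are assumptions for illustration only (if the integration column already captures the escaped annihilations, the escape term can be set to zero).

    ```python
    import numpy as np

    def leaf_activity_concentration(pet_image, voxel_size_cm, leaf_thickness_cm,
                                    escape_fraction=0.0):
        """Recover activity concentration (Bq/mL) in a thin leaf from a PET volume.

        pet_image         : 3D array of voxel concentrations (Bq/mL), axis 0
                            perpendicular to the leaf surface
        voxel_size_cm     : voxel edge length along axis 0 (cm)
        leaf_thickness_cm : measured leaf thickness (cm)
        escape_fraction   : fraction of annihilations lost outside the integrated
                            column (assumption; ~0.6 if escapes are not captured)
        """
        # Integrate along the axis perpendicular to the leaf: this collapses the
        # partial-volume-blurred profile into an areal activity (Bq/cm^2) that is
        # insensitive to how the activity is smeared across voxels.
        areal_activity = pet_image.sum(axis=0) * voxel_size_cm   # Bq/cm^2 per pixel
        # Convert back to a concentration using the measured leaf thickness and
        # scale up for any annihilations lost from the integration column.
        return areal_activity / leaf_thickness_cm / (1.0 - escape_fraction)

    # Toy example: a 20-voxel-thick slab with the "leaf" smeared over 2 voxels
    img = np.zeros((20, 4, 4))
    img[9:11] = 50.0                          # diluted apparent concentration
    print(leaf_activity_concentration(img, 0.05, 0.03, escape_fraction=0.64)[0, 0])
    ```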

  19. SU-C-304-01: Investigation of Various Detector Response Functions and Their Geometry Dependence in a Novel Method to Address Ion Chamber Volume Averaging Effect

    SciTech Connect

    Barraclough, B; Lebron, S; Li, J; Fan, Qiyong; Liu, C; Yan, G

    2015-06-15

    Purpose: A novel convolution-based approach has been proposed to address ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and its impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of respective IC. The beam model parameters were reoptimized using Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the impact of kernel dependence on field geometry (depth and field size). The profiles calculated with beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. Gaussian function had the best performance with mean penumbra width difference within 0.5 mm. The use of geometry dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. Significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and parabolic functions.
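
    A toy, hedged sketch of the optimization loop described above: a calculated profile is convolved with a Gaussian detector response function and a profile parameter is re-optimized with Nelder-Mead until the convolved profile matches the "measured" one. The profile model and parameter names are invented; this is not the Pinnacle scripting used in the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.ndimage import gaussian_filter1d

    x = np.arange(-50, 50.25, 0.25)          # off-axis position (mm)
    sigma_chamber_mm = 3.0                    # Gaussian DRF width ~ chamber radius

    def calc_profile(slope):
        """Toy 'TPS-calculated' profile: a 30 mm half-field edge with given slope."""
        return 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / slope))

    # 'Measured' profile: a sharp edge blurred by the chamber's volume averaging
    measured = gaussian_filter1d(calc_profile(1.0), sigma_chamber_mm / 0.25)

    def objective(params):
        # Convolve the calculated profile with the DRF before comparing to measurement
        convolved = gaussian_filter1d(calc_profile(params[0]), sigma_chamber_mm / 0.25)
        return np.sum((convolved - measured) ** 2)

    res = minimize(objective, x0=[3.0], method="Nelder-Mead")
    print("recovered edge slope:", res.x[0])   # should approach the 'true' value 1.0
    ```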

  20. Global and local development of gray and white matter volume in normal children and adolescents.

    PubMed

    Wilke, Marko; Krägeloh-Mann, Ingeborg; Holland, Scott K

    2007-04-01

    Over the last decade, non-invasive, high-resolution magnetic resonance imaging has allowed investigating normal brain development. However, much is still not known in this context, especially with regard to regional differences in brain morphology between genders. We conducted a large-scale study utilizing fully automated analysis-approaches, using high-resolution MR-imaging data from 200 normal children and aimed at providing reference data for future neuroimaging studies. Global and local aspects of normal development of gray and white matter volume were investigated as a function of age and gender while covarying for known nuisance variables. Global developmental patterns were apparent in both gray and white matter, with gray matter decreasing and white matter increasing significantly with age. Gray matter loss was most pronounced in the parietal lobes and least in the cingulate and in posterior temporal regions. White matter volume gains with age were almost uniform, with an accentuation of the pyramidal tract. Gender influences were detectable for both gray and white matter. Voxel-based analyses confirmed significant differences in brain morphology between genders, like a larger amygdala in boys or a larger caudate in girls. We could demonstrate profound influences of both age and gender on normal brain morphology, confirming and extending earlier studies. The knowledge of such influence allows for the consideration of age- and gender-effects in future pediatric neuroimaging studies and advances our understanding of normal and abnormal brain development.

  1. A finite-volume module for simulating global all-scale atmospheric flows

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Deconinck, Willem; Hamrud, Mats; Kühnlein, Christian; Mozdzynski, George; Szmelter, Joanna; Wedi, Nils P.

    2016-06-01

    The paper documents the development of a global nonhydrostatic finite-volume module designed to enhance an established spectral-transform based numerical weather prediction (NWP) model. The module adheres to NWP standards, with formulation of the governing equations based on the classical meteorological latitude-longitude spherical framework. In the horizontal, a bespoke unstructured mesh with finite-volumes built about the reduced Gaussian grid of the existing NWP model circumvents the notorious stiffness in the polar regions of the spherical framework. All dependent variables are co-located, accommodating both spectral-transform and grid-point solutions at the same physical locations. In the vertical, a uniform finite-difference discretisation facilitates the solution of intricate elliptic problems in thin spherical shells, while the pliancy of the physical vertical coordinate is delegated to generalised continuous transformations between computational and physical space. The newly developed module assumes the compressible Euler equations as default, but includes reduced soundproof PDEs as an option. Furthermore, it employs semi-implicit forward-in-time integrators of the governing PDE systems, akin to but more general than those used in the NWP model. The module shares the equal regions parallelisation scheme with the NWP model, with multiple layers of parallelism hybridising MPI tasks and OpenMP threads. The efficacy of the developed nonhydrostatic module is illustrated with benchmarks of idealised global weather.
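
    For readers unfamiliar with the finite-volume idea underlying such a module, here is a generic, minimal sketch of a conservative flux-form update (first-order upwind advection on a periodic 1D grid); it is not the module described above, and all numbers are illustrative.

    ```python
    import numpy as np

    n = 200                                   # number of cells
    dx, dt, u = 1.0 / n, 0.002, 1.0           # cell size, time step, constant wind
    x = (np.arange(n) + 0.5) * dx
    q = np.exp(-200 * (x - 0.3) ** 2)         # initial cell-mean tracer

    for _ in range(150):
        # Upwind flux at each cell face (u > 0: take the value from the left cell)
        flux = u * q
        # Conservative update: change of cell mean = flux divergence across faces
        q -= dt / dx * (flux - np.roll(flux, 1))

    initial_mass = (np.exp(-200 * (x - 0.3) ** 2)).sum() * dx
    print("tracer mass conserved:", np.isclose(q.sum() * dx, initial_mass))
    ```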

  2. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    PubMed

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing SAR difference in the head of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the head of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies 900 MHz and 1747 MHz for Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models highly depends on the structure of the antenna and phone model, which suggests that the type of the exposure source is the main parameter in EMF exposure studies to be focused on. Although the previous findings regarding significant role of the anatomy of the head, phone position, frequency, local tissue inhomogeneity and tissue composition specifically in the exposed area on SAR difference were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to the real device exposures. The general conclusion is that from a volume averaged SAR point of view, no systematic differences between child and adult heads were found.

  3. Strategy for harmonized retrieval of column-averaged methane from the mid-infrared NDACC FTS-network and intercomparison with SCIAMACHY satellite data on global scale

    NASA Astrophysics Data System (ADS)

    Forster, Frank; Sussmann, Ralf; Borsdorff, Tobias; Rettinger, Markus

    2010-05-01

    Authorship: Forster, F., Sussmann, R., Borsdorff, T., Rettinger, M., Blumenstock, T., Buchwitz, M., Burrows, J.P., Duchatelet, P., Frankenberg, C., Hannigan, J., Hase, F., Jones, N., Klyft, J., Mahieu, E., De Mazière, M., Mellqvist, J., Notholt, J., Petersen, K., Schneising, O., Strong, K., Taylor, J., Vigouroux, C. Global measurements of column-averaged methane have recently shown a step forward in data quality via year 2003 and 2004 retrievals from two different processors, namely IMAP-DOAS ver. 49 and WFM-DOAS ver. 1.0 (Frankenberg et al., 2008; Schneising et al., 2009). Accuracy and precision have approached the order of 1 %, and can be considered for inverse modelling of sources and sinks. This means at the same time that the quality requirements for ground-based validation data have become higher. In order to guarantee a station-to-station consistency of

  4. Estimating the global volume of deeply recycled continental crust at continental collision zones

    NASA Astrophysics Data System (ADS)

    Scholl, D. W.; Huene, R. V.

    2006-12-01

    CRUSTAL RECYCLING AT OCEAN MARGINS: Large volumes of rock and sediment are missing from the submerged forearcs of ocean margin subduction zones--OMSZs. This observation means that (1) oceanic sediment is transported beneath the margin to either crustally underplate the coastal region or reach mantle depths, and that (2) the crust of the forearc is vertically thinned and horizontally truncated and the removed material transported toward the mantle. Transport of rock and sediment debris occurs in the subduction channel that separates the upper and lower plates. At OMSZs the solid-volume flux of recycling crustal material is estimated to be globally ~2.5 km3/yr (i.e., 2.5 Armstrong units or AU). The corresponding rate of forearc truncation (migration of the trench axis toward a fixed reference on the continent) is a sluggish 2-3 km/Myr (about 1/50th the orthogonal convergence rate). Nonetheless, during the past 2.5 Gyr (i.e., since the beginning of the Proterozoic) a volume of continental material roughly equal to the existing volume (~7 billion cubic km) has been recycled to the mantle at OMSZs. The amount of crust that has been destroyed is so large that recycling must have been a major factor creating the mapped rock pattern and age-fabric of continental crust. RECYCLING AT CONTINENT/ARC COLLISIONS: The rate at which arc magmatism globally adds juvenile crust to OMSZs has commonly been estimated at ~1 AU. But new geophysical and dating information from the Aleutian and IBM arcs implies that the addition rate is at least ~5 AU (equivalent to ~125 km3/Myr/km of arc). If the Armstrong posit is correct that since the early Archean a balance has existed between additions and losses of crust, then a recycling sink for an additional 2-3 AU of continental material must exist. As the exposure of exhumed masses of high P/T blueschist bodies documents that subcrustal streaming of continental material occurs at OMSZs, so does the occurrence of exhumed masses of UHP

  5. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    SciTech Connect

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-06-15

    Purpose: Fourier-based deconvolution approaches used to eliminate ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one dimensional Gamma analysis (1%/1mm) as an objective function for optimization. TPS beam model parameters for focal and extra-focal sources were optimized and loaded back into the TPS for new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, which were the minimal allowed values in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3mm criterion and from 82.6% to 97.2% with a 2%/2mm criterion. Conclusion: We proposed a novel method to account for ion chamber VAE in a commercial TPS through convolution. The reoptimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA
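
    A small, hedged sketch of the one-dimensional Gamma analysis used above as the objective function (brute-force search, global normalization assumed); the profiles and tolerances below are illustrative, not the study's data.

    ```python
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.01, dist_tol_mm=1.0):
        """Brute-force 1D gamma index of an evaluated profile against a reference;
        dose_tol is taken as a fraction of the reference maximum (global norm)."""
        d_max = d_ref.max()
        gamma = np.empty_like(d_ref)
        for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
            dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
            dose2 = ((d_eval - dr) / (dose_tol * d_max)) ** 2
            gamma[i] = np.sqrt(np.min(dist2 + dose2))
        return gamma

    # Toy profiles: evaluated profile shifted by 0.5 mm relative to the reference
    x = np.arange(-50, 50.1, 0.1)
    ref = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 2.0))
    ev = 1.0 / (1.0 + np.exp((np.abs(x - 0.5) - 30.0) / 2.0))
    g = gamma_1d(x, ref, x, ev, dose_tol=0.01, dist_tol_mm=1.0)
    print("passing rate (gamma <= 1):", np.mean(g <= 1.0))
    ```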

  6. Signal Strength-Based Global Navigation Satellite System Performance Assessment in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user in the Space Service Volume (SSV) when a multi-GNSS solution space approach is utilized. The ICG Working Group: Enhancement of GNSS Performance, New Services and Capabilities has started a three phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The second phase of that increasing complexity and fidelity analysis initiative is based on augmenting the Phase 1 pure geometrical approach with signal strength-based limitations to determine if access is valid. The second phase of analysis has been completed, and the results are documented in this paper.

  7. A GPU-enabled Finite Volume solver for global magnetospheric simulations on unstructured grids

    NASA Astrophysics Data System (ADS)

    Lani, Andrea; Yalim, Mehmet Sarp; Poedts, Stefaan

    2014-10-01

    This paper describes an ideal Magnetohydrodynamics (MHD) solver for global magnetospheric simulations based on a B1 +B0 splitting approach, which has been implemented within the COOLFluiD platform and adapted to run on modern heterogeneous architectures featuring General Purpose Graphical Processing Units (GPGPUs). The code is based on a state-of-the-art Finite Volume discretization for unstructured grids and either explicit or implicit time integration, suitable for both steady and time accurate problems. Innovative object-oriented design and coding techniques mixing C++ and CUDA are discussed. Performance results of the modified code on single and multiple processors are presented and compared with those provided by the original solver.

  8. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

    The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of 5 times higher than the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated to be 1.00 ± 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of the within-subject difference between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can

  9. The Global Classroom: A Thematic Multicultural Model for the K-6 and ESL Classroom. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    De Cou-Landberg, Michelle

    This two-volume resource guide is designed to help K-6 and ESL teachers implement multicultural whole language learning through thematic social studies units. The four chapters in Volume 1 address universal themes: (1) "Climates and Seasons: Watching the Weather"; (2) "Trees and Plants: Our Rich, Green World"; (3) "Animals around the World: Tame,…

  10. The Global Classroom: A Thematic Multicultural Model for the K-6 and ESL Classroom. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    De Cou-Landberg, Michelle

    This two-volume resource guide is designed to help K-6 and ESL teachers implement multicultural whole language learning through thematic social studies units. The four chapters in Volume 1 address universal themes: (1) "Climates and Seasons: Watching the Weather"; (2) "Trees and Plants: Our Rich, Green World"; (3) "Animals around the World: Tame,…

  11. A finite-volume module for cloud-resolving simulations of global atmospheric flows

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Kühnlein, Christian; Grabowski, Wojciech W.

    2017-07-01

    The paper extends to moist-precipitating dynamics a recently documented high-performance finite-volume module (FVM) for simulating global all-scale atmospheric flows (Smolarkiewicz et al., 2016) [62]. The thrust of the paper is a seamless coupling of the conservation laws for moist variables engendered by cloud physics with the semi-implicit, non-oscillatory forward-in-time integrators proven for dry dynamics of FVM. The representation of the water substance and the associated processes in weather and climate models can vary widely in formulation details and complexity levels. The representation adopted for this paper assumes a canonical “warm-rain” bulk microphysics parametrisation, recognised for its minimal physical intricacy while accounting for the essential mathematical complexity of cloud-resolving models. A key feature of the presented numerical approach is global conservation of the water substance to machine precision (implied by the local conservativeness and positivity preservation of the numerics) for all water species including water vapour, cloud water, and precipitation. The moist formulation assumes the compressible Euler equations as default, but includes reduced anelastic equations as an option. The theoretical considerations are illustrated with a benchmark simulation of a tornadic thunderstorm on a reduced size planet, supported with a series of numerical experiments addressing the accuracy of the associated water budget.

  12. Global production, use, and emission volumes of short-chain chlorinated paraffins - A minimum scenario.

    PubMed

    Glüge, Juliane; Wang, Zhanyun; Bogdal, Christian; Scheringer, Martin; Hungerbühler, Konrad

    2016-12-15

    Short-chain chlorinated paraffins (SCCPs) show high persistence, bioaccumulation potential, and toxicity (PBT properties). Consequently, restrictions on production and use have been enforced in several countries/regions. The Stockholm Convention on Persistent Organic Pollutants recognized the PBT properties and long-range transport potential of SCCPs in 2015 and is now evaluating a possible global phase-out or restrictions. In this context, it is relevant to know which countries are producing/using SCCPs and in which amounts, and which applications contribute most to their environmental emissions. To provide a first comprehensive overview, we review and integrate all publicly available data on the global production and use of both chlorinated paraffins (CPs) as a whole and specifically SCCPs. Considerable amounts of data on production/use of CPs and SCCPs are missing. Based on the available data and reported emission factors, we estimate the past and current worldwide SCCP emissions from individual applications. Using the available data as a minimum scenario, we conclude: (i) SCCP production and use is increasing, with the current worldwide production volume being at least 165,000 t/year, whereas the global production of total CPs exceeds 1 million t/year. (ii) The worldwide release of SCCPs from their production and use to air, surface water, and soil between 1935 and 2012 has been in the range of 1690-41,400 t, 1660-105,000 t, and 9460-81,000 t, respectively. (iii) The SCCP manufacture and use in PVC, the use in metal working applications and sealants/adhesives, and the use in plastics and rubber contribute most to the emissions to air, surface water, and soil. Thus, a decrease in the environmental emissions of SCCPs requires reduction of SCCP use in (almost) all applications. (iv) Emissions due to the disposal of waste SCCPs cannot be accurately estimated, because relevant information is missing. Instead, we conduct a scenario analysis to provide some insights into

  13. Multi-parallel open technology to enable collaborative volume visualization: how to create global immersive virtual anatomy classrooms.

    PubMed

    Silverstein, Jonathan C; Walsh, Colin; Dech, Fred; Olson, Eric; E, Michael; Parsad, Nigel; Stevens, Rick

    2008-01-01

    Many prototype projects aspire to develop a sustainable model of immersive radiological volume visualization for virtual anatomic education. Some have focused on distributed or parallel architectures. However, very few, if any others, have combined multi-location, multi-directional, multi-stream sharing of video, audio, desktop applications, and parallel stereo volume rendering, to converge on an open, globally scalable, and inexpensive collaborative architecture and implementation method for anatomic teaching using radiological volumes. We have focused our efforts on bringing this all together for several years. We outline here the technology we're making available to the open source community and a system implementation suggestion for how to create global immersive virtual anatomy classrooms. With the releases of Access Grid 3.1 and our parallel stereo volume rendering code, inexpensive globally scalable technology is available to enable collaborative volume visualization upon an award-winning framework. Based upon these technologies, immersive virtual anatomy classrooms that share educational or clinical principles can be constructed with the setup described with moderate technological expertise and global scalability.

  14. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect.

    PubMed

    Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2016-05-01

    To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models were evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Although all three DRFs were found adequate to represent the response of the studied ionization
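
    A minimal sketch of the 20%-80% penumbra width difference (PWD) metric used above to evaluate the beam models, assuming profiles normalized to the central-axis value; the edge profiles below are synthetic.

    ```python
    import numpy as np

    def penumbra_width_mm(x_mm, profile, low=0.2, high=0.8):
        """20%-80% penumbra width of a single falling field edge, found by
        linear interpolation on a profile normalized to the central axis."""
        # Reverse so the interpolation abscissa (dose level) is increasing
        x_high = np.interp(high, profile[::-1], x_mm[::-1])
        x_low = np.interp(low, profile[::-1], x_mm[::-1])
        return abs(x_low - x_high)

    # Toy edge profiles: a 'diode-measured' edge and a slightly broader one
    x = np.arange(20.0, 40.0, 0.1)
    diode = 1.0 / (1.0 + np.exp((x - 30.0) / 1.5))
    computed = 1.0 / (1.0 + np.exp((x - 30.0) / 2.2))
    pwd = penumbra_width_mm(x, computed) - penumbra_width_mm(x, diode)
    print(f"penumbra width difference: {pwd:.2f} mm")
    ```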

  15. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect

    SciTech Connect

    Barraclough, Brendan; Lebron, Sharon; Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2016-05-15

    Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models were evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
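
    A hedged sketch of how a geometry-independent Gaussian DRF width might be extracted as described in the Methods: find the sigma that best maps the diode-measured profile onto the chamber-measured profile. The synthetic profiles and the least-squares objective are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.ndimage import gaussian_filter1d

    dx = 0.1                                             # profile resolution (mm)
    x = np.arange(-50, 50 + dx, dx)
    diode = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 1.5))       # near-'true' profile
    chamber = gaussian_filter1d(diode, 2.4 / dx)                 # chamber-blurred

    def mismatch(sigma_mm):
        # Convolve the diode profile with a Gaussian DRF and compare to the chamber
        return np.sum((gaussian_filter1d(diode, sigma_mm / dx) - chamber) ** 2)

    fit = minimize_scalar(mismatch, bounds=(0.5, 6.0), method="bounded")
    print(f"fitted DRF sigma ~ {fit.x:.2f} mm")          # should recover ~2.4 mm
    ```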

  16. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 2, part 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An assessment of the status of research using Global Weather Experiment (GWE) data and of the progress in meeting the objectives of the GWE, i.e., better knowledge and understanding of the atmosphere in order to provide more useful weather prediction services. Volume Two consists of a compilation of the papers presented during the workshop. These cover studies that addressed GWE research objectives and utilized GWE information. The titles in Part 2 of this volume include General Circulation, Planetary Waves, Interhemispheric Cross-Equatorial Exchange, Global Aspects of Monsoons, Midlatitude-Tropical Interactions During Monsoons, Stratosphere, Southern Hemisphere, Parameterization, Design of Observations, Oceanography, Future Possibilities, and Research Gaps, with an Appendix.

  17. Optimal range of global end-diastolic volume for fluid management after aneurysmal subarachnoid hemorrhage: a multicenter prospective cohort study.

    PubMed

    Tagami, Takashi; Kuwamoto, Kentaro; Watanabe, Akihiro; Unemoto, Kyoko; Yokobori, Shoji; Matsumoto, Gaku; Yokota, Hiroyuki

    2014-06-01

    Limited evidence supports the use of hemodynamic variables that correlate with delayed cerebral ischemia or pulmonary edema after aneurysmal subarachnoid hemorrhage. The aim of this study was to identify those hemodynamic variables that are associated with delayed cerebral ischemia and pulmonary edema after subarachnoid hemorrhage. A multicenter prospective cohort study. Nine university hospitals in Japan. A total of 180 patients with aneurysmal subarachnoid hemorrhage. None. Patients were prospectively monitored using a transpulmonary thermodilution system in the 14 days following subarachnoid hemorrhage. Delayed cerebral ischemia developed in 35 patients (19.4%) and severe pulmonary edema developed in 47 patients (26.1%). Using the Cox proportional hazards model, the mean global end-diastolic volume index (normal range, 680-800 mL/m²) was the independent factor associated with the occurrence of delayed cerebral ischemia (hazard ratio, 0.74; 95% CI, 0.60-0.93; p = 0.008). Significant differences in global end-diastolic volume index were detected between the delayed cerebral ischemia and non-delayed cerebral ischemia groups (783 ± 25 mL/m² vs 870 ± 14 mL/m²; p = 0.007). The global end-diastolic volume index threshold that best correlated with delayed cerebral ischemia was less than 822 mL/m², as determined by receiver operating characteristic curves. Analysis of the Cox proportional hazards model indicated that the mean global end-diastolic volume index was the independent factor associated with the occurrence of pulmonary edema (hazard ratio, 1.31; 95% CI, 1.02-1.71; p = 0.03). Furthermore, a significant positive correlation was identified between global end-diastolic volume index and extravascular lung water (r = 0.46; p < 0.001). The global end-diastolic volume index threshold that best correlated with severe pulmonary edema was greater than 921 mL/m². Our findings suggest that global end-diastolic volume index impacts both delayed cerebral ischemia
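
    A hedged sketch of the kind of receiver operating characteristic threshold selection mentioned above (Youden's J statistic on synthetic data); the study's actual ROC procedure and patient data are not reproduced here.

    ```python
    import numpy as np

    def youden_threshold(values, outcome):
        """Pick the cut-off maximizing sensitivity + specificity - 1 (Youden's J)
        for a marker where LOWER values predict the outcome."""
        best_j, best_t = -1.0, None
        for t in np.unique(values):
            pred = values < t                      # "positive" = below threshold
            tp = np.sum(pred & outcome)
            fn = np.sum(~pred & outcome)
            tn = np.sum(~pred & ~outcome)
            fp = np.sum(pred & ~outcome)
            sens = tp / (tp + fn) if tp + fn else 0.0
            spec = tn / (tn + fp) if tn + fp else 0.0
            if sens + spec - 1.0 > best_j:
                best_j, best_t = sens + spec - 1.0, t
        return best_t, best_j

    # Synthetic example (not the study data): lower GEDI in patients who develop DCI
    rng = np.random.default_rng(1)
    gedi = np.concatenate([rng.normal(780, 60, 35), rng.normal(870, 60, 145)])
    dci = np.concatenate([np.ones(35, bool), np.zeros(145, bool)])
    print(youden_threshold(gedi, dci))
    ```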

  18. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students

    PubMed Central

    Lee, Romeo B.; Baring, Rito V.; Sta. Maria, Madelene A.

    2016-01-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700

  19. Gender Variations in the Effects of Number of Organizational Memberships, Number of Social Networking Sites, and Grade-Point Average on Global Social Responsibility in Filipino University Students.

    PubMed

    Lee, Romeo B; Baring, Rito V; Sta Maria, Madelene A

    2016-02-01

    The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students.

  20. Seasonal cycle of volume transport through Kerama Gap revealed by a 20-year global HYbrid Coordinate Ocean Model reanalysis

    DTIC Science & Technology

    2015-11-10

    Yu, Zhitao; Metzger, E. Joseph; ... The seasonal cycle of volume transport through Kerama Gap (... Island, a part of the Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled ... (Gordon et al., 2014) and then enters the East China Sea (ECS) through the East Taiwan Channel (ETC) between Taiwan and Ishigaki Island; it carries warm

  1. An 800-kyr Record of Global Surface Ocean δ18O and Implications for Ice Volume-Temperature Coupling

    NASA Astrophysics Data System (ADS)

    Shakun, J. D.; Lea, D. W.; Lisiecki, L. E.; Raymo, M. E.

    2015-12-01

    The sequence of feedbacks that characterized 100-kyr glacial cycles of the past million years remains uncertain, hampering an understanding of the interconnections between insolation, ice sheets, greenhouse gas forcing, and climate. Critical to addressing this issue is an accurate interpretation of the marine δ18O record, the main template for the Ice Ages. This study uses a global compilation of 49 paired sea surface temperature-planktonic δ18O records to extract the mean δ18O of surface ocean seawater over the past 800 kyr, which we interpret to dominantly reflect global ice volume. The results indicate that global surface temperature, inferred deep ocean temperature, and atmospheric CO2 decrease early during each glacial cycle in close association with one another, whereas major ice sheet growth occurs later in glacial cycles. These relationships suggest that ice volume may have exhibited a threshold response to global cooling, and that global deglaciations do not occur until after the growth of large ice sheets. This phase sequence also suggests that the ice sheets had relatively little feedback on global cooling. Simple modeling shows that the rate of ice volume change through time is largely determined by the combined influence of insolation, temperature, and ice sheet size, with possible implications for the evolution of glacial cycles over the past three million years.

  2. An 800-kyr Record of Global Surface Ocean δ18Osw and Implications for Ice Volume-Temperature Coupling

    NASA Astrophysics Data System (ADS)

    Shakun, J. D.; Lea, D. W.; Lisiecki, L. E.; Raymo, M. E.

    2014-12-01

    We use 49 paired sea surface temperature (SST)-planktonic δ18O records to extract the mean δ18O of surface ocean seawater (δ18Osw) over the past 800 kyr, which we interpret to dominantly reflect global ice volume, and compare it to SST variability on the same stratigraphy. This analysis suggests that ice volume and temperature contribute to the marine isotope record in ~60/40 proportions, but they show consistently different patterns over glacial cycles. Global temperature cools early during each cycle while major ice sheet growth occurs later, suggesting that ice volume may have exhibited a threshold response to cooling and also had relatively little feedback on it. Multivariate regression analysis suggests that the rate of ice volume change through time is largely determined by the combined influence of orbital forcing, global temperature, and ice volume itself (r2 = 0.70 at zero-lag for 0-400 ka), with sea level rising faster with stronger insolation and warmer temperatures and when there is more ice available to melt. Indeed, cross-spectral analysis indicates that ice volume exhibits a smaller phase lag and larger gain relative to SST at the 41 and 23 kyr periods than at the 100 kyr period, consistent with additional forcing from insolation at the obliquity and precession time scales. Removing the surface ocean δ18Osw signal from the global benthic δ18O stack produces a reconstruction of deep ocean temperature that bears considerable similarity to the Antarctic ice core temperature record (r2 = 0.80 for 0-400 ka), including cooler interglacials before 400 ka. Overall, we find a close association between global surface temperature, deep ocean temperature, and atmospheric CO2. Additionally, we find that rapid cooling precedes the gradual buildup of large continental ice sheets, which may then be instrumental in terminating the cycle.
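
    A heavily hedged sketch of the core step shared by these two records: removing the temperature component from a planktonic δ18O record to isolate the seawater (ice-volume) signal. The 0.22 permil per °C slope is a commonly quoted paleotemperature coefficient and is an assumption here, as are the toy data; the study's own calibration may differ.

    ```python
    import numpy as np

    TEMP_COEFF = 0.22   # assumed permil increase in calcite d18O per degC of cooling

    def d18o_sw_anomaly(d18o_calcite, sst):
        """Seawater d18O anomaly relative to the core-top (index 0) sample:
        the part of the calcite signal not explained by temperature change."""
        d_calcite = d18o_calcite - d18o_calcite[0]
        d_sst = sst - sst[0]
        # Cooling raises calcite d18O, so add back TEMP_COEFF * (T - T0)
        return d_calcite + TEMP_COEFF * d_sst

    # Toy record: glacial samples are colder and isotopically heavier
    d18o_c = np.array([-1.0, -0.2, 0.6, 0.4])   # permil (PDB)
    sst = np.array([27.0, 24.5, 22.0, 23.0])    # degC
    print(d18o_sw_anomaly(d18o_c, sst))
    ```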

  3. Developing a Robust, Interoperable GNSS Space Service Volume (SSV) for the Global Space User Community

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Parker, Joel J. K.; Welch, Bryan; Enderle, Werner

    2017-01-01

    For over two decades, researchers, space users, Global Navigation Satellite System (GNSS) service providers, and international policy makers have been working diligently to expand the space-borne use of the Global Positioning System (GPS) and, most recently, to employ the full complement of GNSS constellations to increase spacecraft navigation performance. Space-borne Positioning, Navigation, and Timing (PNT) applications employing GNSS are now ubiquitous in Low Earth Orbit (LEO). GNSS use in space is quickly expanding into the Space Service Volume (SSV), the signal environment in the volume surrounding the Earth that enables real-time PNT measurements from GNSS systems at altitudes of 3000 km and above. To support the current missions and planned future missions within the SSV, initiatives are being conducted in the United States and internationally to ensure that GNSS signals are available, robust, and yield precise navigation performance. These initiatives include the Interagency Forum for Operational Requirements (IFOR) effort in the United States, to support GPS SSV signal robustness through future design changes, and the United Nations-sponsored International Committee on GNSS (ICG), to coordinate SSV development across all international GNSS constellations and regional augmentations. The results of these efforts have already proven fruitful, enabling new missions through radically improved navigation and timing performance, ensuring quick recovery from trajectory maneuvers, improving space vehicle autonomy and making GNSS signals more resilient from potential disruptions. Missions in the SSV are operational now and have demonstrated outstanding PNT performance characteristics; much better than what was envisioned less than a decade ago. The recent launch of the first in a series of US weather satellites will employ the use of GNSS in the SSV to substantially improve weather prediction and public-safety situational awareness of fast moving events, including

  4. Dependence on solar elevation and the daily sunshine fraction of the correlation between monthly-average-hourly diffuse and global radiation

    SciTech Connect

    Soler, A.

    1992-01-01

    In the present work the authors study, for Uccle, Belgium data (50°48' N, 4°21' E), the dependence on γ̄ and σ of the correlations between K̄d = Īd/Īo and K̄t = Ī/Īo, where Ī, Īd, and Īo are, respectively, the monthly-average-hourly values of global, diffuse, and extraterrestrial radiation (all of them on a horizontal surface), γ̄ is the solar elevation at midhour, and σ is the daily sunshine fraction. The dependence on σ is studied for different ranges of values, from σ = 0 to σ > 0.9. The dependence on γ̄ is studied for γ̄ = 5°, 10°, 15°, 25°-30°, 35°-40°, and 45°-60° (Δγ̄ = 5°). Regarding the dependence on σ, for increasing values of σ (σ ≥ 0) there is an increase in K̄d with the increase in K̄t. For 0.42 < K̄t < 0.52 a maximum is obtained for K̄d. After the maximum, as the skies become clearer, K̄d decreases as K̄t increases. Regarding the dependence on γ̄, for each range of values of σ (σ > 0.2), the slopes of the linear K̄d = f(K̄t) correlations show a tendency to decrease as γ̄ increases. For each value of γ̄ the slopes of the linear K̄d = f(K̄t) correlations tend to decrease when σ increases.
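
    For clarity about the ratios defined above, a tiny sketch (with synthetic numbers) of forming K̄d and K̄t from monthly-average-hourly irradiation values and fitting a linear K̄d = f(K̄t) relation within one (σ, γ̄) bin; the data and the choice of a linear fit are illustrative assumptions.

    ```python
    import numpy as np

    # Synthetic monthly-average-hourly irradiation values for one bin (MJ/m2)
    I_o = np.array([2.8, 3.0, 3.1, 2.9, 2.7])   # extraterrestrial, horizontal surface
    I   = np.array([1.6, 1.9, 2.1, 1.7, 1.3])   # global, horizontal surface
    I_d = np.array([0.9, 0.8, 0.7, 0.85, 0.95]) # diffuse, horizontal surface

    K_t = I / I_o                               # clearness index
    K_d = I_d / I_o                             # diffuse ratio
    slope, intercept = np.polyfit(K_t, K_d, 1)  # linear Kd = f(Kt) within the bin
    print(f"Kd = {intercept:.2f} + {slope:.2f} * Kt")
    ```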

  5. The Global 2000 Report to the President: Entering the Twenty-First Century. Volume One - The Summary Report.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This summary volume presents the conclusions of a United States' Government effort to look at the issues and interdependencies of population, resources, and environment in the long-term global perspective. The report concludes that, if present trends continue, serious stresses of overcrowding, pollution, ecological instability, and vulnerability…

  6. Global and Regional Effects of Type 2 Diabetes on Brain Tissue Volumes and Cerebral Vasoreactivity

    PubMed Central

    Last, David; de Bazelaire, Cedric; Alsop, David C.; Hu, Kun; Abduljalil, Amir M.; Cavallerano, Jerry; Marquis, Robert P.; Novak, Vera

    2007-01-01

    OBJECTIVE— The aim of this study was to evaluate the regional effects of type 2 diabetes and associated conditions on cerebral tissue volumes and cerebral blood flow (CBF) regulation. RESEARCH DESIGN AND METHODS— CBF was examined in 26 diabetic (aged 61.6 ± 6.6 years) and 25 control (aged 60.4 ± 8.6 years) subjects using continuous arterial spin labeling (CASL) imaging during baseline, hyperventilation, and CO2 rebreathing. Regional gray and white matter, cerebrospinal fluid (CSF), and white matter hyperintensity (WMH) volumes were measured on a T1-weighted inversion recovery fast-gradient echo and a fluid attenuation inversion recovery magnetic resonance imaging at 3 Tesla. RESULTS— The diabetic group had smaller global white (P = 0.006) and gray (P = 0.001) matter and larger CSF (36.3%, P < 0.0001) volumes than the control group. Regional differences were observed for white matter (−13.1%, P = 0.0008) and CSF (36.3%, P < 0.0001) in the frontal region, for CSF (20.9%, P = 0.0002) in the temporal region, and for gray matter (−3.0%, P = 0.04) and CSF (17.6%, P = 0.01) in the parieto-occipital region. Baseline regional CBF (P = 0.006) and CO2 reactivity (P = 0.005) were reduced in the diabetic group. Hypoperfusion in the frontal region was associated with gray matter atrophy (P < 0.0001). Higher A1C was associated with lower CBF (P < 0.0001) and greater CSF (P = 0.002) within the temporal region. CONCLUSIONS— Type 2 diabetes is associated with cortical and subcortical atrophy involving several brain regions and with diminished regional cerebral perfusion and vasoreactivity. Uncontrolled diabetes may further contribute to hypoperfusion and atrophy. Diabetic metabolic disturbance and blood flow dysregulation that affects preferentially frontal and temporal regions may have implications for cognition and balance in elderly subjects with diabetes. PMID:17290035

  7. Global and Regional Associations of Smaller Cerebral Gray and White Matter Volumes with Gait in Older People

    PubMed Central

    Phan, Thanh G.; Chen, Jian; Srikanth, Velandai K.

    2014-01-01

    Background Gait impairments increase with advancing age and can lead to falls and loss of independence. Brain atrophy also occurs in older age and may contribute to gait decline. We aimed to investigate global and regional relationships of cerebral gray and white matter volumes with gait speed, and its determinants step length and cadence, in older people. Methods In a population-based study, participants aged >60 years without Parkinson's disease or brain infarcts underwent magnetic resonance imaging and gait measurements using a computerized walkway. Linear regression was used to study associations of total gray and white matter volumes with gait, adjusting for each other, age, sex, height and white matter hyperintensity volume. Other covariates considered in analyses included weight and vascular disease history. Voxel-based morphometry was used to study regional relationships of gray and white matter with gait. Results There were 305 participants, mean age 71.4 (6.9) years, 54% male, mean gait speed 1.16 (0.22) m/s. Smaller total gray matter volume was independently associated with poorer gait speed (p = 0.001) and step length (p<0.001), but not cadence. Smaller volumes of cortical and subcortical gray matter in bilateral regions important for motor control, vision, perception and memory were independently associated with slower gait speed and shorter steps. No global or regional associations were observed between white matter volume and gait independent of gray matter volume, white matter hyperintensity volume and other covariates. Conclusion Smaller gray matter volume in bilaterally distributed brain networks serving motor control was associated with slower gait speed and step length, but not cadence. PMID:24416309

  8. Global and regional associations of smaller cerebral gray and white matter volumes with gait in older people.

    PubMed

    Callisaya, Michele L; Beare, Richard; Phan, Thanh G; Chen, Jian; Srikanth, Velandai K

    2014-01-01

    Gait impairments increase with advancing age and can lead to falls and loss of independence. Brain atrophy also occurs in older age and may contribute to gait decline. We aimed to investigate global and regional relationships of cerebral gray and white matter volumes with gait speed, and its determinants step length and cadence, in older people. In a population-based study, participants aged >60 years without Parkinson's disease or brain infarcts underwent magnetic resonance imaging and gait measurements using a computerized walkway. Linear regression was used to study associations of total gray and white matter volumes with gait, adjusting for each other, age, sex, height and white matter hyperintensity volume. Other covariates considered in analyses included weight and vascular disease history. Voxel-based morphometry was used to study regional relationships of gray and white matter with gait. There were 305 participants, mean age 71.4 (6.9) years, 54% male, mean gait speed 1.16 (0.22) m/s. Smaller total gray matter volume was independently associated with poorer gait speed (p = 0.001) and step length (p<0.001), but not cadence. Smaller volumes of cortical and subcortical gray matter in bilateral regions important for motor control, vision, perception and memory were independently associated with slower gait speed and shorter steps. No global or regional associations were observed between white matter volume and gait independent of gray matter volume, white matter hyperintensity volume and other covariates. Smaller gray matter volume in bilaterally distributed brain networks serving motor control was associated with slower gait speed and step length, but not cadence.

  9. Global grey matter volume in adult bipolar patients with and without lithium treatment: A meta-analysis.

    PubMed

    Sun, Yue Ran; Herrmann, Nathan; Scott, Christopher J M; Black, Sandra E; Khan, Maisha M; Lanctôt, Krista L

    2018-01-01

    The goal of this meta-analysis was to quantitatively summarize the evidence available on the differences in grey matter volume between lithium-treated and lithium-free bipolar patients. A systematic search was conducted in Cochrane Central, Embase, MEDLINE, and PsycINFO databases for original peer-reviewed journal articles that reported on global grey matter volume in lithium-medicated and lithium-free bipolar patients. Standard mean difference and Hedges' g were used to calculate effect size in a random-effects model. Risk of publication bias was assessed using Egger's test and quality of evidence was assessed using standard criteria. There were 15 studies with a total of 854 patients (368 lithium-medicated, 486 lithium-free) included in the meta-analysis. Global grey matter volume was significantly larger in lithium-treated bipolar patients compared to lithium-free patients (SMD: 0.17, 95% CI: 0.01-0.33; z = 2.11, p = 0.035). Additionally, there was a difference in global grey matter volume between groups in studies that employed semi-automated segmentation methods (SMD: 0.66, 95% CI: 0.01-1.31; z = 1.99, p = 0.047), but no significant difference in studies that used fully-automated segmentation. No publication bias was detected (bias coefficient = - 0.65, p = 0.46). Variability in imaging methods and lack of high-quality evidence limits the interpretation of the findings. Results suggest that lithium-treated patients have a greater global grey matter volume than those who were lithium-free. Further study of the relationship between lithium and grey matter volume may elucidate the therapeutic potential of lithium in conditions characterized by abnormal changes in brain structure. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
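
    A hedged sketch of the random-effects pooling of standardized mean differences described above (Hedges' g with DerSimonian-Laird weighting); the three "studies" below are invented numbers, not the meta-analysis data.

    ```python
    import numpy as np

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        """Bias-corrected standardized mean difference and its variance."""
        sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        j = 1 - 3 / (4 * (n1 + n2) - 9)              # small-sample correction
        g = j * d
        var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
        return g, var

    def random_effects_pool(effects, variances):
        """DerSimonian-Laird random-effects pooled estimate and 95% CI."""
        effects, variances = np.asarray(effects), np.asarray(variances)
        w = 1 / variances
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
        w_re = 1 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1 / np.sum(w_re))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Toy data for three hypothetical studies (mean GM volume, SD, n per arm)
    studies = [hedges_g(700, 60, 30, 690, 65, 35),
               hedges_g(712, 55, 25, 702, 58, 40),
               hedges_g(695, 62, 45, 693, 60, 50)]
    print(random_effects_pool([g for g, _ in studies], [v for _, v in studies]))
    ```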

  10. Handbook of solar energy data for south-facing surfaces in the United States. Volume 2: Average hourly and total daily insolation data for 235 localities. Alaska - Montana

    NASA Technical Reports Server (NTRS)

    Smith, J. H.

    1980-01-01

    Average hourly and daily total insolation estimates for 235 United States locations are presented. Values are presented for a selected number of array tilt angles on a monthly basis. All units are in kilowatt hours per square meter.

  11. Estimating the volume and age of water stored in global lakes using a geo-statistical approach

    NASA Astrophysics Data System (ADS)

    Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver

    2016-12-01

    Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively.
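    The abstract does not give the functional form of the geo-statistical model, so the sketch below only illustrates the general idea under an assumed log-log regression of lake volume on surface area and surrounding terrain slope; the training data and fitted coefficients are purely hypothetical.

```python
import numpy as np

# Hypothetical training lakes with known bathymetry: surface area (km^2),
# mean slope of the surrounding terrain (degrees), and measured volume (km^3).
area  = np.array([0.12, 1.5, 10.0, 250.0, 3200.0])
slope = np.array([2.0, 5.0, 1.0, 8.0, 3.0])
vol   = np.array([0.0004, 0.01, 0.05, 9.0, 60.0])

# Fit log10(V) = b0 + b1*log10(A) + b2*log10(slope) by least squares,
# one simple way to relate volume to area and surrounding terrain.
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(slope)])
b, *_ = np.linalg.lstsq(X, np.log10(vol), rcond=None)

def predict_volume(area_km2, slope_deg):
    """Predict lake volume (km^3) from surface area and terrain slope."""
    x = np.array([1.0, np.log10(area_km2), np.log10(slope_deg)])
    return 10 ** (x @ b)

print(predict_volume(10.0, 4.0))   # volume estimate for an unsurveyed lake
```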

  12. Estimating the volume and age of water stored in global lakes using a geo-statistical approach.

    PubMed

    Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver

    2016-12-15

    Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively.

  13. Estimating the volume and age of water stored in global lakes using a geo-statistical approach

    PubMed Central

    Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver

    2016-01-01

    Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively. PMID:27976671

  14. Variations of the earth's magnetic field and rapid climatic cooling: A possible link through changes in global ice volume

    NASA Technical Reports Server (NTRS)

    Rampino, M. R.

    1979-01-01

    A possible relationship between large scale changes in global ice volume, variations in the earth's magnetic field, and short term climatic cooling is investigated through a study of the geomagnetic and climatic records of the past 300,000 years. The calculations suggest that redistribution of the Earth's water mass can cause rotational instabilities which lead to geomagnetic excursions; these magnetic variations in turn may lead to short-term coolings through upper atmosphere effects. Such double coincidences of magnetic excursions and sudden coolings at times of ice volume changes have occurred at 13,500, 30,000, 110,000, and 135,000 YBP.

  15. Citizenship and Citizenship Education in a Global Age: Politics, Policies, and Practices in China. Global Studies in Education. Volume 2

    ERIC Educational Resources Information Center

    Law, Wing-Wah

    2011-01-01

    This book examines issues of citizenship, citizenship education, and social change in China, exploring the complexity of interactions among global forces, the nation-state, local governments, schools, and individuals--including students--in selecting and identifying with elements of citizenship and citizenship education in a multileveled polity.…

  17. Measuring Global Brain Atrophy with the Brain Volume/Cerebrospinal Fluid Index: Normative Values, Cut-Offs and Clinical Associations.

    PubMed

    Orellana, Camila; Ferreira, Daniel; Muehlboeck, J-Sebastian; Mecocci, Patrizia; Vellas, Bruno; Tsolaki, Magda; Kłoszewska, Iwona; Soininen, Hilkka; Lovestone, Simon; Simmons, Andrew; Wahlund, Lars-Olof; Westman, Eric

    2016-01-01

    Global brain atrophy is present in normal aging and different neurodegenerative disorders such as Alzheimer's disease (AD) and is becoming widely used to monitor disease progression. The brain volume/cerebrospinal fluid index (BV/CSF index) is validated in this study as a measurement of global brain atrophy. We tested the ability of the BV/CSF index to detect global brain atrophy, investigated the influence of confounders, provided normative values and cut-offs for mild, moderate and severe brain atrophy, and studied associations with different outcome variables. A total of 1,009 individuals were included [324 healthy controls, 408 patients with mild cognitive impairment (MCI) and 277 patients with AD]. Magnetic resonance images were segmented using FreeSurfer, and the BV/CSF index was calculated and studied both cross-sectionally and longitudinally (1-year follow-up). Both AD patients and MCI patients who progressed to AD showed greater global brain atrophy compared to stable MCI patients and controls. Atrophy was associated with older age, larger intracranial volume, less education and presence of the ApoE ε4 allele. Significant correlations were found with clinical variables, CSF biomarkers and several cognitive tests. The BV/CSF index may be useful for staging individuals according to the degree of global brain atrophy, and for monitoring disease progression. It also shows potential for predicting clinical changes and for being used in the clinical routine. © 2015 S. Karger AG, Basel.

  18. Effects of Meso-Scale and Small-Scale Interactions on Global Climate. Volume I. Orographic Effects on Global Climate

    DTIC Science & Technology

    1975-02-28

    [Abstract not available; indexed text fragments from the report's table of contents and body reference a schematic diagram of mountain wave configuration, coordinate systems, effects on a global atmospheric model arising from atmospheric motions in quite small regions (e.g., mountain lee waves), compressibility and moisture, and the basic HAIFA equations, for which the numerical investigation of mountain waves requires that the effects of inertia be included.]

  19. Excluded volume effect of counterions and water dipoles near a highly charged surface due to a rotationally averaged Boltzmann factor for water dipoles.

    PubMed

    Gongadze, Ekaterina; Iglič, Aleš

    2013-03-01

    Water ordering near a negatively charged electrode is one of the decisive factors determining the interactions of an electrode with the surrounding electrolyte solution or tissue. In this work, the generalized Langevin-Bikerman model (Gongadze-Iglič model), taking into account the cavity field and the excluded volume principle, is used to calculate the space dependency of ion and water number densities in the vicinity of a highly charged surface. It is shown that for high enough surface charge densities the usual trend of increasing counterion number density towards the charged surface may be completely reversed, i.e. a drop in the counterion number density near the charged surface is predicted.

  20. Bibliography on tropical rain forests and the global carbon cycle: Volume 1, An introduction to the literature

    SciTech Connect

    Hall, C.A.S.; Brown, S.; O'Hara, F.M. Jr.; Bogdonoff, P.B.; Barshaw, D.; Kaufman, E.; Underhill, S.

    1988-05-01

    This bibliography covers the world literature on tropical rain forests, tropical deforestation, land-use change in the tropics, tropical forest conversion, and swidden agriculture as related to the global carbon cycle. Historic papers and books are included, but comprehensive coverage was only sought for 1980 through 1987. This compendium of nearly 2000 entries forms the point of departure for a series of bibliographies on this topic. Other works in this series will be on the global carbon cycle and rain forests in specific geographic areas, whereas this volume includes references to literature about the global carbon cycle and rain forests anywhere in the world. The bibliography is ordered alphabetically by author and is indexed by subject and author.

  1. The Partial Molar Volume and Thermal Expansivity of Fe2O3 in Alkali Silicate Liquids: Evidence for the Average Coordination of Fe3+

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Lange, R.

    2003-12-01

    Ferric iron is an important component in magmatic liquids, especially in those formed at subduction zones. Although it has long been known that Fe3+ occurs in four-, five- and six-fold coordination in crystalline compounds, only recently have all three Fe3+ coordination sites been confirmed in silicate glasses utilizing XANES spectroscopy at the Fe K-edge (Farges et al., 2003). Because the density of a magmatic liquid is largely determined by the geometrical packing of its network-forming cations (e.g., Si4+, Al3+, Ti4+, and Fe3+), the capacity of Fe3+ to undergo composition-induced coordination change affects the partial molar volume of the Fe2O3 component, which must be known to calculate how the ferric-ferrous ratio in magmatic liquids changes with pressure. Previous work has shown that the partial molar volume of Fe2O3 (VFe2O3) varies between calcic vs. sodic silicate melts (Mo et al., 1982; Dingwell and Brearley, 1988; Dingwell et al., 1988). The purpose of this study is to extend the data set in order to search for systematic variations in VFe2O3 with melt composition. High temperature (867-1534° C) density measurements were performed on eleven liquids in the Na2O-Fe2O3-FeO-SiO2 (NFS) system and five liquids in the K2O-Fe2O3-FeO-SiO2 (KFS) system using the Pt double-bob Archimedean method. The ferric-ferrous ratios in the sodic and potassic liquids at each temperature of density measurement were calculated from the experimentally calibrated models of Lange and Carmichael (1989) and Tangeman et al. (2001), respectively. Compositions range (in mol%) from 4-18 Fe2O3, 0-3 FeO, 12-39 Na2O, 25-37 K2O, and 43-78 SiO2. Our density data are consistent with those of Dingwell et al. (1988) on similar sodic liquids. Our results indicate that for all five KFS liquids and for eight of eleven NFS liquids, the partial molar volume of the Fe2O3 component is a constant (41.57 ± 0.14 cm3/mol) and exhibits zero thermal expansivity (similar to that for the SiO2 component). This value
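    A minimal sketch of the fitting step implied by such density studies, assuming ideal mixing of oxide components (V_liq = Σ X_i·V̄_i) and using made-up compositions and molar volumes; the actual study fits temperature-dependent partial molar volumes across a much larger data set.

```python
import numpy as np

# Each row: mole fractions of (SiO2, Na2O, Fe2O3, FeO) for one liquid.
# Values are illustrative, not the measured compositions.
X = np.array([
    [0.70, 0.20, 0.08, 0.02],
    [0.60, 0.28, 0.10, 0.02],
    [0.55, 0.30, 0.14, 0.01],
    [0.50, 0.32, 0.17, 0.01],
    [0.65, 0.22, 0.11, 0.02],
])
# Molar volume of each liquid (cm^3/mol) from density measurements at a
# common reference temperature (illustrative numbers).
V_liq = np.array([28.2, 28.6, 29.4, 29.9, 28.7])

# Ideal-mixing model: V_liq = sum_i X_i * Vbar_i.  Solving the
# overdetermined system by least squares gives the partial molar volumes.
Vbar, *_ = np.linalg.lstsq(X, V_liq, rcond=None)
for oxide, v in zip(["SiO2", "Na2O", "Fe2O3", "FeO"], Vbar):
    print(f"partial molar volume of {oxide}: {v:.2f} cm^3/mol")
```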

  2. Global Sentry: NASA/USRA high altitude reconnaissance aircraft design, volume 2

    NASA Technical Reports Server (NTRS)

    Alexandru, Mona-Lisa; Martinez, Frank; Tsou, Jim; Do, Henry; Peters, Ashish; Chatsworth, Tom; Yu, YE; Dhillon, Jaskiran

    1990-01-01

    The Global Sentry is a high altitude reconnaissance aircraft design for the NASA/USRA design project. The Global Sentry uses proven technologies and light-weight composites, and meets the R.F.P. requirements. The mission requirements for the Global Sentry are described. The configuration options are discussed and a description of the final design is given. Preliminary sizing analyses and the mass properties of the design are presented. The aerodynamic features of the Global Sentry are described along with the stability and control characteristics designed into the flight control system. The performance characteristics are discussed, as is the propulsion installation and system layout. The Global Sentry structural design is examined, including a wing structural analysis. The cockpit, controls, and display layouts are covered. Manufacturing and life-cycle cost estimation are covered, and reliability is discussed. Conclusions about the current Global Sentry design are presented, along with suggested areas for future engineering work.

  3. The role of global and regional gray matter volume decrease in multiple sclerosis.

    PubMed

    Grothe, Matthias; Lotze, Martin; Langner, Sönke; Dressel, Alexander

    2016-06-01

    Disability in multiple sclerosis (MS) patients is associated with white matter (WM) and gray matter (GM) pathology, and both processes contribute differently over the disease course. Total and regional GM volume loss can be imaged via voxel-based morphometry (VBM). Here, we retrospectively analyzed a group of 213 MS patients [163 relapsing remitting (RR) and 50 secondary progressive (SP)] using semi-automated WM lesion mapping and VBM. Our aim was to assess the association of increasing disability with decreasing total and regional GM volume. As expected, total GM volume and WM lesion load were associated with patients' disability, measured with the Expanded Disability Status Scale (EDSS). The more impaired the patients, the greater the statistical association to the total GM volume. Regional volume loss in the cerebellar gray matter was associated with increasing EDSS and WM lesion volume. Furthermore, SPMS patients had significantly more gray matter volume loss in the cerebellum and the hippocampus compared to RRMS patients. Our results confirm histopathological studies emphasizing the important role of the cerebellum and the hippocampus in MS patients' disability.

  4. Infusing a Global Perspective into the Study of Agriculture: Student Activities Volume II.

    ERIC Educational Resources Information Center

    Martin, Robert A., Ed.

    These student activities are designed to be used in a variety of places in the curriculum to provide a global perspective for students as they study agriculture. This document is not a unit of instruction; rather, teachers are encouraged to study the materials and decide which will be helpful in adding a global perspective to the learning…

  5. Global and regional brain volumes normalization in weight-recovered adolescents with anorexia nervosa: preliminary findings of a longitudinal voxel-based morphometry study.

    PubMed

    Bomba, Monica; Riva, Anna; Morzenti, Sabrina; Grimaldi, Marco; Neri, Francesca; Nacinovich, Renata

    2015-01-01

    The recent literature on anorexia nervosa (AN) suggests that functional and structural abnormalities of cortico-limbic areas might play a role in the evolution of the disease. We explored global and regional brain volumes in a cross-sectional and follow-up study on adolescents affected by AN. Eleven adolescents with AN underwent a voxel-based morphometry study at time of diagnosis and immediately after weight recovery. Data were compared to volumes obtained from eight healthy, age- and sex-matched controls. Subjects with AN showed increased cerebrospinal fluid volumes and decreased white and gray matter volumes, when compared to controls. Moreover, significant regional gray matter decrease in insular cortex and cerebellum was found at time of diagnosis. No regional white matter decrease was found between samples and controls. Correlations between psychological evaluation and insular volumes were explored. After weight recovery, gray matter volumes normalized, while the reduction in global white matter volume persisted.

  6. The computational structural mechanics testbed architecture. Volume 4: The global-database manager GAL-DBM

    NASA Technical Reports Server (NTRS)

    Wright, Mary A.; Regelbrugge, Marc E.; Felippa, Carlos A.

    1989-01-01

    This is the fourth of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 4 describes the nominal-record data management component of the NICE software. It is intended for all users.

  7. Response of the global hydrate stability zone volume and hydrate inventory to IPCC AR5 RCP future scenarios.

    NASA Astrophysics Data System (ADS)

    Hunter, S. J.; Goldobin, D.; Haywood, A.; Ridgwell, A.; Rees, J.

    2012-04-01

    We present results from a multi-model study investigating how the global Hydrate Stability Zone (HSZ) volume and methane hydrate inventory will respond to the four Representative Concentration Pathways (RCPs) modelled within CMIP5. We begin by evaluating GCM model performance against WOA05 bottom water conditions and generate model weights to guide our multi-model mean. From initial pre-industrial conditions we model the propagation through the sediment column of bottom water temperatures through the historical and RCP scenarios to 10 kyr into the future (with conditions held fixed from the end of the RCP). Incorporating models of potential sea-level change we then model the temporal evolution of the extent of the HSZ on a global scale. Preliminary results suggest that for the RCP85 scenario (business as usual) the fractional change in global HSZ volume will exceed the envelope of modelled global change felt during the last glacial cycle (120 kyr) within as little as 2-3 kyrs depending upon the sea-level scenario. Modelling global hydrate evolution is more speculative. Starting from a mean equilibrium state derived from pre-industrial conditions we will model the first-order transient behaviour of the hydrate inventory using a 1-D model adapted from Davie and Buffett (2003). We will present results of a sensitivity analysis and describe the caveats associated with this work. M. K. Davie, B. A. Buffett, (2003) Sources of methane for marine gas hydrate: inferences from a comparison of observations and numerical models, EPSL v. 206, p. 51-63

  8. How obliquity cycles powered early Pleistocene global ice-volume variability

    NASA Astrophysics Data System (ADS)

    Tabor, Clay R.; Poulsen, Christopher J.; Pollard, David

    2015-03-01

    Milankovitch theory proposes that the magnitude of high-latitude summer insolation dictates the continental ice-volume response by controlling summer snow melt, thus anticipating a substantial ice-volume contribution from the strong summer insolation signal of precession. Yet almost all of the early Pleistocene δ18O records' signal strength resides at the frequency of obliquity. Here we explore this discrepancy using a climate-vegetation-ice sheet model to simulate climate-ice sheet response to transient orbits of varying obliquity and precession. Spectral analysis of our results shows that despite contributing significantly less to the summer insolation signal, almost 60% of the ice-volume power exists at the frequency of obliquity due to a combination of albedo feedbacks, seasonal offsets, and orbital cycle duration differences. Including eccentricity modulation of the precession ice-volume component and assuming a small Antarctic ice response to orbital forcing produce a signal that agrees with the δ18O ice-volume proxy records.
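    A minimal sketch of the kind of spectral bookkeeping behind the "almost 60% of the ice-volume power" statement: build a synthetic obliquity-plus-precession series, take a periodogram, and report the fraction of variance in a band around the 41 kyr obliquity period. The series, band edges, and amplitudes are assumptions for illustration only.

```python
import numpy as np

# Synthetic "ice volume" record sampled every 1 kyr for 1,000 kyr: a strong
# 41-kyr (obliquity) component, a weaker 23-kyr (precession) component, and
# noise.  A real analysis would use the benthic d18O proxy records instead.
dt = 1.0                                     # sampling interval (kyr)
t = np.arange(0, 1000, dt)
rng = np.random.default_rng(0)
x = (1.0 * np.sin(2 * np.pi * t / 41.0)
     + 0.5 * np.sin(2 * np.pi * t / 23.0)
     + 0.3 * rng.standard_normal(t.size))

# Periodogram from the FFT of the de-meaned series.
freq = np.fft.rfftfreq(t.size, d=dt)         # cycles per kyr
power = np.abs(np.fft.rfft(x - x.mean()))**2

# Fraction of variance in a narrow band around the obliquity frequency.
band = (freq > 1 / 45) & (freq < 1 / 37)
print("fraction of power in obliquity band:",
      power[band].sum() / power[1:].sum())
```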

  9. The Genetic Association Between Neocortical Volume and General Cognitive Ability Is Driven by Global Surface Area Rather Than Thickness.

    PubMed

    Vuoksimaa, Eero; Panizzon, Matthew S; Chen, Chi-Hua; Fiecas, Mark; Eyler, Lisa T; Fennema-Notestine, Christine; Hagler, Donald J; Fischl, Bruce; Franz, Carol E; Jak, Amy; Lyons, Michael J; Neale, Michael C; Rinker, Daniel A; Thompson, Wesley K; Tsuang, Ming T; Dale, Anders M; Kremen, William S

    2015-08-01

    Total gray matter volume is associated with general cognitive ability (GCA), an association mediated by genetic factors. It is expectable that total neocortical volume should be similarly associated with GCA. Neocortical volume is the product of thickness and surface area, but global thickness and surface area are unrelated phenotypically and genetically in humans. The nature of the genetic association between GCA and either of these 2 cortical dimensions has not been examined. Humans possess greater cognitive capacity than other species, and surface area increases appear to be the primary driver of the increased size of the human cortex. Thus, we expected neocortical surface area to be more strongly associated with cognition than thickness. Using multivariate genetic analysis in 515 middle-aged twins, we demonstrated that both the phenotypic and genetic associations between neocortical volume and GCA are driven primarily by surface area rather than thickness. Results were generally similar for each of 4 specific cognitive abilities that comprised the GCA measure. Our results suggest that emphasis on neocortical surface area, rather than thickness, could be more fruitful for elucidating neocortical-GCA associations and identifying specific genes underlying those associations. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Global end-diastolic volume an emerging preload marker vis-a-vis other markers - Have we reached our goal?

    PubMed Central

    Kapoor, P. M; Bhardwaj, Vandana; Sharma, Amita; Kiran, Usha

    2016-01-01

    A reliable estimation of cardiac preload is helpful in the management of severe circulatory dysfunction. The estimation of cardiac preload has evolved from nuclear angiography, pulmonary artery catheterization to echocardiography, and transpulmonary thermodilution (TPTD). Global end-diastolic volume (GEDV) is the combined end-diastolic volumes of all the four cardiac chambers. GEDV has been demonstrated to be a reliable preload marker in comparison with traditionally used pulmonary artery catheter-derived pressure preload parameters. Recently, a new TPTD system called EV1000™ has been developed and introduced into the expanding field of advanced hemodynamic monitoring. GEDV has emerged as a better preload marker than its previous conventional counterparts. The advantage of it being measured by minimum invasive methods such as PiCCO™ and newly developed EV1000™ system makes it a promising bedside advanced hemodynamic parameter. PMID:27716702

  11. Comparison of average global exposure of population induced by a macro 3G network in different geographical areas in France and Serbia.

    PubMed

    Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe

    2016-09-01

    This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time (the traffic distribution over 24 h was found to be highly variable) and in space (the exposure to 3G networks in France was found to be roughly two times higher than in Serbia). Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc.

  12. Tectonics, orbital forcing, global climate change, and human evolution in Africa: introduction to the African paleoclimate special volume.

    PubMed

    Maslin, Mark A; Christensen, Beth

    2007-11-01

    The late Cenozoic climate of Africa is a critical component for understanding human evolution. African climate is controlled by major tectonic changes, global climate transitions, and local variations in orbital forcing. We introduce the special African Paleoclimate Issue of the Journal of Human Evolution by providing a background for and synthesis of the latest work relating to the environmental context for human evolution. Records presented in this special issue suggest that the regional tectonics, appearance of C4 plants in East Africa, and late Cenozoic global cooling combined to produce a long-term drying trend in East Africa. Of particular importance is the uplift associated with the East African Rift Valley formation, which altered wind flow patterns from a more zonal to a more meridional direction. Results in this volume suggest a marked difference in the climate history of southern and eastern Africa, though both are clearly influenced by the major global climate thresholds crossed in the last 3 million years. Papers in this volume present lake, speleothem, and marine paleoclimate records showing that the East African long-term drying trend is punctuated by episodes of short, alternating periods of extreme wetness and aridity. These periods of extreme climate variability are characterized by the precession-forced appearance and disappearance of large, deep lakes in the East African Rift Valley and paralleled by low and high wind-driven dust loads reaching the adjacent ocean basins. Dating of these records shows that over the last 3 million years such periods only occur at the times of major global climatic transitions, such as the intensification of Northern Hemisphere Glaciation (2.7-2.5 Ma), intensification of the Walker Circulation (1.9-1.7 Ma), and the Mid-Pleistocene Revolution (1-0.7 Ma). Authors in this volume suggest this onset occurs as high latitude forcing in both Hemispheres compresses the Intertropical Convergence Zone so that East Africa

  13. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earths Surface through the Terrestrial Service Volume and the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV), when a multi-GNSS solution space approach is utilized. The ICG Working Group: Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that analysis initiative, which increases in complexity and fidelity with each phase, was recently expanded to compare nadir-facing and zenith-facing hemispherical user antenna coverage with omnidirectional antenna coverage at altitudes of 8,000 km and 36,000 km. This report summarizes the performance of these antenna coverage techniques at altitudes ranging from 100 km to 36,000 km, as well as the volumetrically-derived system availability metrics.

  14. Mars Global Digital Dune Database (MGD3): North polar region (MC-1) distribution, applications, and volume estimates

    USGS Publications Warehouse

    Hayward, R.K.

    2011-01-01

    The Mars Global Digital Dune Database (MGD3) now extends from 90°N to 65°S. The recently released north polar portion (MC-1) of MGD3 adds ~844,000 km² of moderate- to large-size dark dunes to the previously released equatorial portion (MC-2 to MC-29) of the database. The database, available in GIS- and tabular-format in USGS Open-File Reports, makes it possible to examine global dune distribution patterns and to compare dunes with other global data sets (e.g. atmospheric models). MGD3 can also be used by researchers to identify areas suitable for more focused studies. The utility of MGD3 is demonstrated through three example applications. First, the uneven geographic distribution of the dunes is discussed and described. Second, dune-derived wind direction and its role as ground truth for atmospheric models is reviewed. Comparisons between dune-derived winds and global and mesoscale atmospheric models suggest that local topography may have an important influence on dune-forming winds. Third, the methods used here to estimate north polar dune volume are presented and these methods and estimates (1130 km³ to 3250 km³) are compared with those of previous researchers (1158 km³ to 15,000 km³). In the near future, MGD3 will be extended to include the south polar region. © 2011 by John Wiley and Sons, Ltd.

  15. Impact of Depression, Fatigue, and Global Measure of Cortical Volume on Cognitive Impairment in Multiple Sclerosis

    PubMed Central

    De Cola, Maria Cristina; D'Aleo, Giangaetano; Sessa, Edoardo; Marino, Silvia

    2015-01-01

    Objective. To investigate the influence of demographic and clinical variables, such as depression, fatigue, and quantitative MRI markers, on cognitive performance in a sample of patients affected by multiple sclerosis (MS). Methods. 60 MS patients (52 relapsing remitting and 8 primary progressive) underwent neuropsychological assessments using Rao's Brief Repeatable Battery of Neuropsychological Tests (BRB-N), the Beck Depression Inventory-second edition (BDI-II), and the Fatigue Severity Scale (FSS). We performed magnetic resonance imaging on all subjects using a 3 T scanner and obtained tissue-specific volumes (normalized brain volume and cortical brain volume). We used Student's t-test to compare depressed and nondepressed MS patients. Finally, we performed a multivariate regression analysis in order to assess possible predictors of patients' cognitive outcome among demographic and clinical variables. Results. 27.12% of the sample (16/59) was cognitively impaired, especially in tasks requiring attention and information processing speed. In the between-group comparison, depressed patients had worse performance on the BRB-N, greater disability, longer disease duration, and greater brain volume decrease. According to the multiple regression analysis, the BDI-II score was a significant predictor for most of the neuropsychological tests. Conclusions. Our findings suggest that the presence of depressive symptoms is an important determinant of cognitive performance in MS patients. PMID:25861633

  16. Global Trends in Educational Policy. International Perspectives on Education and Society. Volume 6

    ERIC Educational Resources Information Center

    Baker, David, Ed.; Wiseman, Alex, Ed.

    2005-01-01

    This volume of International Perspectives on Education and Society highlights the valuable role that educational policy plays in the development of education and society around the world. The role of policy in the development of education is crucial. Much rests on the decisions, support, and most of all resources that policymakers can either give…

  17. Navigation Performance of Global Navigation Satellite Systems in the Space Service Volume

    NASA Technical Reports Server (NTRS)

    Force, Dale A.

    2013-01-01

    GPS has been used for spacecraft navigation for many years. In support of this, the US has committed that future GPS satellites will continue to provide signals in the Space Service Volume, and NASA is working with international agencies to obtain similar commitments from other providers. In support of this effort, I simulated multi-constellation navigation in the Space Service Volume. In this presentation, I extend the work to examine the navigational benefits and drawbacks of the new constellations. A major benefit is the reduced geometric dilution of precision (GDOP); I show that there is a substantial reduction in GDOP by using all of the GNSS constellations. The increased number of GNSS satellites broadcasting does produce mutual interference, raising the noise floor, and a near/far signal problem can also occur where a nearby satellite drowns out satellites that are far away; in these simulations, no major effect was observed. Typically, the use of multi-constellation GNSS navigation improves GDOP by a factor of two or more over GPS alone. In addition, at the higher altitudes, four-satellite solutions can be obtained much more often. This shows the value of having commitments to provide signals in the Space Service Volume. Besides a commitment to provide a minimum signal in the Space Service Volume, detailed signal gain information is useful for mission planning, and knowledge of group and phase delay over the antenna pattern would also reduce the navigational uncertainty.
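    A minimal sketch of the GDOP metric referred to above, computed from the receiver-to-satellite geometry matrix; the satellite positions are toy values chosen only to show that adding well-spread satellites from additional constellations lowers GDOP.

```python
import numpy as np

def gdop(user_pos, sat_positions):
    """Geometric dilution of precision for a set of visible satellites.

    user_pos: (3,) receiver position (m); sat_positions: (N, 3), N >= 4.
    """
    rho = sat_positions - user_pos
    unit = rho / np.linalg.norm(rho, axis=1, keepdims=True)
    # Geometry matrix: line-of-sight components plus a clock-bias column.
    G = np.hstack([unit, np.ones((unit.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)          # covariance shape matrix
    return np.sqrt(np.trace(Q))

# Toy example: a receiver at the origin with a few "GPS" satellites, then the
# same set augmented by two satellites from other constellations.
user = np.zeros(3)
sats_gps = 2.0e7 * np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1.2]])
sats_all = np.vstack([sats_gps,
                      2.0e7 * np.array([[1, 1, -0.2], [-1, -1, 0.5]])])
print("GPS-only GDOP:      ", gdop(user, sats_gps))
print("multi-GNSS GDOP:    ", gdop(user, sats_all))
```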

  19. Proceedings of Eco-Informa `96 - global networks for environmental information. Volume 10 and 11

    SciTech Connect

    1996-12-31

    This fourth Eco-Informa forum has been designed to bridge the gap between scientific knowledge and real world applications. Enhancement of the international exchange of global environmental technology among scientific, governmental, and commercial communities is the goal. Researchers, policy makers, and information managers presented papers that integrate scientific and technical issues with the global needs for expanded networks, effective communication, and responsible decision making. Special emphasis was given to environmental information management and decision support systems, including environmental computing and modeling, data banks, and environmental education. In addition, fields such as waste management and remediation, sustainable food production, life-cycle analysis, and auditing were also addressed.

  20. The deep sea oxygen isotopic record: Significance for tertiary global ice volume history, with emphasis on the latest Miocene/early Pliocene

    SciTech Connect

    Prentice, M.L.

    1988-01-01

    Planktonic and benthic isotopic records as well as carbonate sedimentation records extending from 6.1 to 4.1 Ma for eastern South Atlantic Holes 526A and 525B are presented. These data suggest ice volume variations about a constant mean sufficient to drive sea level between 10 m and 75 m below present. Isotopic records at the deeper (2500 m) site have been enriched by up to 0.5% by dissolution. Carbonate accumulation rates at both sites quadrupled at 4.6 Ma primarily because of increased production and, secondarily, decreased dissolution. The second part presents a Cenozoic-long composite δ18O curve for tropical shallow-dwelling planktonic foraminifers and the benthic foraminifer Cibicides at 2-4 km depths. Surface δ18O gradients between various low- and mid-latitude sites reflect: (1) widespread SST stability through the Cenozoic and (2) significant change in Tasman Sea SST through the Tertiary. Assuming average SST for tropical non-upwelling areas was constant, the planktonic composite suggests that global ice volume for the last 40 my has not been significantly less than today. Residual benthic δ18O values reflect relatively warm and saline deep water until the early Miocene, after which time deep water progressively cooled. The third part presents δ18O for Recent Orbulina universa from 44 core-tops distributed through the Atlantic and Indian Oceans. The purpose was to test the hypothesis that Orbulina calcifies at constant temperature and so records only ice volume changes. Orbulina commonly calcifies at intermediate depths over a wide range of temperatures, salinities, and densities. These physical factors are not the primary controls on the spatial and vertical distribution of Orbulina.

  1. Technical Report Series on Global Modeling and Data Assimilation, Volume 41 : GDIS Workshop Report

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Schubert, Siegfried; Pozzi, Will; Mo, Kingtse; Wood, Eric F.; Stahl, Kerstin; Hayes, Mike; Vogt, Juergen; Seneviratne, Sonia; Stewart, Ron; Pulwarty, Roger; Stefanski, Robert

    2015-01-01

    The workshop "An International Global Drought Information System Workshop: Next Steps" was held on 10-13 December 2014 in Pasadena, California. The more than 60 participants from 15 countries spanned the drought research community and included select representatives from applications communities as well as providers of regional and global drought information products. The workshop was sponsored and supported by the US National Integrated Drought Information System (NIDIS) program, the World Climate Research Program (WCRP: GEWEX, CLIVAR), the World Meteorological Organization (WMO), the Group on Earth Observations (GEO), the European Commission Joint Research Centre (JRC), the US Climate Variability and Predictability (CLIVAR) program, and the US National Oceanic and Atmospheric Administration (NOAA) programs on Modeling, Analysis, Predictions and Projections (MAPP) and Climate Variability & Predictability (CVP). NASA/JPL hosted the workshop with logistical support provided by the GEWEX program office. The goal of the workshop was to build on past Global Drought Information System (GDIS) progress toward developing an experimental global drought information system. Specific goals were threefold: (i) to review recent research results focused on understanding drought mechanisms and their predictability on a wide range of time scales and to identify gaps in understanding that could be addressed by coordinated research; (ii) to help ensure that WRCP research priorities mesh with efforts to build capacity to address drought at the regional level; and (iii) to produce an implementation plan for a short duration pilot project to demonstrate current GDIS capabilities. See http://www.wcrp-climate.org/gdis-wkshp-2014-objectives for more information.

  2. Transforming America: Cultural Cohesion, Educational Achievement, and Global Competitiveness. Educational Psychology. Volume 7

    ERIC Educational Resources Information Center

    DeVillar, Robert A.; Jiang, Binbin

    2011-01-01

    Creatively and rigorously blending historical research and contemporary data from various disciplines, this book cogently and comprehensively illustrates the problems and opportunities the American nation faces in education, economics, and the global arena. The authors propose a framework of transformation that would render American culture no…

  3. Global Journal of Computer Science and Technology. Volume 1.2

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2009-01-01

    Articles in this issue of "Global Journal of Computer Science and Technology" include: (1) Input Data Processing Techniques in Intrusion Detection Systems--Short Review (Suhair H. Amer and John A. Hamilton, Jr.); (2) Semantic Annotation of Stock Photography for CBIR Using MPEG-7 standards (R. Balasubramani and V. Kannan); (3) An Experimental Study…

  4. Global Journal of Computer Science and Technology. Volume 9, Issue 5 (Ver. 2.0)

    ERIC Educational Resources Information Center

    Dixit, R. K.

    2010-01-01

    This is a special issue published in version 1.0 of "Global Journal of Computer Science and Technology." Articles in this issue include: (1) [Theta] Scheme (Orthogonal Milstein Scheme), a Better Numerical Approximation for Multi-dimensional SDEs (Klaus Schmitz Abe); (2) Input Data Processing Techniques in Intrusion Detection…

  5. Global Positioning System Control/User Segments. Volume II. System Error Performance.

    DTIC Science & Technology

    RADIO NAVIGATION, *NAVIGATION SATELLITES, *POSITION FINDING, *NAVIGATION COMPUTERS, *IONOSPHERIC PROPAGATION, GLOBAL, EPHEMERIDES, TRADE OFF ANALYSIS, SPACEBORNE, ERRORS, VELOCITY, SYSTEMS ENGINEERING, DIGITAL COMPUTERS, MEMORY DEVICES, TIME SIGNALS, SITE SELECTION, GROUND STATIONS, MOTION, MATHEMATICAL MODELS, ALGORITHMS, PERFORMANCE(ENGINEERING), USER NEEDS, S BAND, L BAND.

  6. Global Inventory of Regional and National Qualifications Frameworks. Volume I: Thematic Chapters

    ERIC Educational Resources Information Center

    Deij, Arjen; Graham, Michael; Bjornavold, Jens; Grm, Slava Pevec; Villalba, Ernesto; Christensen, Hanne; Chakroun, Borhene; Daelman, Katrien; Carlsen, Arne; Singh, Madhu

    2015-01-01

    The "Global Inventory of Regional and National Qualifications Frameworks," the result of collaborative work between the European Training Foundation (ETF), the European Centre for the Development of Vocational Training (Cedefop), UNESCO [United Nations Educational, Scientific and Cultural Organization] and UIL [UNESCO Institute for…

  7. Impact of case volume on outcomes of ureteroscopy for ureteral stones: the clinical research office of the endourological society ureteroscopy global study.

    PubMed

    Kandasami, Sangam V; Mamoulakis, Charalampos; El-Nahas, Ahmed R; Averch, Timothy; Tuncay, O Levent; Rawandale-Patil, Ashish; Cormio, Luigi; de la Rosette, Jean J

    2014-12-01

    The Clinical Research Office of the Endourological Society (CROES) undertook the Ureteroscopy Global Study to establish a prospective global database to examine the worldwide use of ureteroscopy (URS) and to determine factors affecting outcome. To investigate the influence of case volume on the outcomes of URS for ureteral stones. The URS Global Study collected prospective data on consecutive patients with urinary stones treated with URS at 114 centres worldwide for 1 yr. Centres were identified as low or high volume based on the median overall annual case volume. Pre- and intraoperative characteristics, and postoperative outcomes in patients at low- and high-volume centres were compared. The relationships between case volume and stone-free rate (SFR), stone burden, complications, and hospital stay were explored using multivariate regression analysis. Across all centres, the median case volume was 67; 58 and 56 centres were designated as low volume and high volume, respectively. URS procedures at high-volume centres took significantly less time to conduct. Mean SFR was 91.9% and 86.3% at high- and low-volume centres, respectively (p<0.001); the adjusted probability of a stone-free outcome increased with increasing case volume (p<0.001). Patients treated at a high-volume centre were less likely to need retreatment, had shorter postoperative hospital stay, were less likely to be readmitted within 3 mo, and had fewer and less severe complications. At case volumes approximately >200, the probability of complications decreased with increasing case volume (p=0.02). The study is limited by the heterogeneity of participating centres and surgeons and the inclusion of patients treated by more than one approach. In the treatment of ureteral stones with URS, high-volume centres achieve better outcomes than low-volume centres. Several outcome measures for URS improve with an increase in case volume. Outcomes following treatment of ureteral stones by ureteroscopy (URS) were

  8. Age Differences in Big Five Behavior Averages and Variabilities Across the Adult Lifespan: Moving Beyond Retrospective, Global Summary Accounts of Personality

    PubMed Central

    Noftle, Erik E.; Fleeson, William

    2009-01-01

    In three intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the lifespan? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: younger adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In two experience-sampling studies, participants reported their current behavior multiple times per day for one or two week spans. In a third study, participants interacted in standardized laboratory activities on eight separate occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with standard deviations ranging from about half a point to well over one point on 6-point scales. Second, older adults were most variable in Openness whereas younger adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were both more greatly pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between younger adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives. PMID:20230131
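    A minimal sketch of how intraindividual variability is typically derived from experience-sampling reports (per-person standard deviation across reports, then averaged by age group); the data frame, column names, and ratings below are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical experience-sampling data: one row per momentary report, with
# participant id, age group, and a 1-6 rating of trait-relevant behavior.
reports = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "age_group":   ["young"] * 3 + ["middle"] * 3 + ["older"] * 3,
    "openness":    [4.0, 5.5, 3.0, 4.5, 4.0, 4.5, 2.0, 5.0, 6.0],
})

# Per-person behavior average and intraindividual variability (SD of reports).
per_person = (reports
              .groupby(["participant", "age_group"])["openness"]
              .agg(mean_level="mean", variability="std")
              .reset_index())

# Age-group comparison of both the average level and the variability.
print(per_person.groupby("age_group")[["mean_level", "variability"]].mean())
```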

  9. Evaluation of the skill of North-American Multi-Model Ensemble (NMME) Global Climate Models in predicting average and extreme precipitation and temperature over the continental USA

    NASA Astrophysics Data System (ADS)

    Slater, Louise J.; Villarini, Gabriele; Bradley, Allen A.

    2016-08-01

    This paper examines the forecasting skill of eight Global Climate Models from the North-American Multi-Model Ensemble project (CCSM3, CCSM4, CanCM3, CanCM4, GFDL2.1, FLORb01, GEOS5, and CFSv2) over seven major regions of the continental United States. The skill of the monthly forecasts is quantified using the mean square error skill score. This score is decomposed to assess the accuracy of the forecast in the absence of biases (potential skill) and in the presence of conditional (slope reliability) and unconditional (standardized mean error) biases. We summarize the forecasting skill of each model according to the initialization month of the forecast and lead time, and test the models' ability to predict extended periods of extreme climate conducive to eight `billion-dollar' historical flood and drought events. Results indicate that the most skillful predictions occur at the shortest lead times and decline rapidly thereafter. Spatially, potential skill varies little, while actual model skill scores exhibit strong spatial and seasonal patterns primarily due to the unconditional biases in the models. The conditional biases vary little by model, lead time, month, or region. Overall, we find that the skill of the ensemble mean is equal to or greater than that of any of the individual models. At the seasonal scale, the drought events are better forecast than the flood events, and are predicted equally well in terms of high temperature and low precipitation. Overall, our findings provide a systematic diagnosis of the strengths and weaknesses of the eight models over a wide range of temporal and spatial scales.
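    The decomposition into potential skill, slope reliability, and standardized mean error matches the standard Murphy-style breakdown of the mean square error skill score; the sketch below computes those terms for a synthetic forecast series and illustrates that general formulation, not the paper's exact code.

```python
import numpy as np

def msess_decomposition(forecast, obs):
    """Murphy-style decomposition of the MSE skill score (climatology reference):
    MSESS = r^2 - (r - s_f/s_o)^2 - ((fbar - obar)/s_o)^2
    """
    f, o = np.asarray(forecast, float), np.asarray(obs, float)
    r = np.corrcoef(f, o)[0, 1]
    s_f, s_o = f.std(), o.std()
    potential_skill = r**2                                   # skill absent all biases
    conditional_bias = (r - s_f / s_o)**2                    # slope reliability term
    unconditional_bias = ((f.mean() - o.mean()) / s_o)**2    # standardized mean error
    return {"MSESS": potential_skill - conditional_bias - unconditional_bias,
            "potential_skill": potential_skill,
            "conditional_bias": conditional_bias,
            "unconditional_bias": unconditional_bias}

# Example with synthetic monthly forecast/observation pairs.
rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 240)
fcst = 0.6 * obs + 0.4 + rng.normal(0.0, 0.8, 240)   # correlated, biased forecast
print(msess_decomposition(fcst, obs))
```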

  10. Temperature minima in the average thermal structure of the middle mesosphere (70 - 80 km) from analysis of 40- to 92-km SME global temperature profiles

    NASA Technical Reports Server (NTRS)

    Clancy, R. Todd; Rusch, David W.; Callan, Michael T.

    1994-01-01

    Global temperatures have been derived for the upper stratosphere and mesosphere from analysis of Solar Mesosphere Explorer (SME) limb radiance profiles. The SME temperature represent fixed local time observations at 1400 - 1500 LT, with partial zonal coverage of 3 - 5 longitudes per day over the 1982-1986 period. These new SME temperatures are compared to the COSPAR International Ionosphere Reference Atmosphere 86 (CIRA 86) climatology (Fleming et al., 1990) as well as stratospheric and mesospheric sounder (SAMS); Barnett and Corney, 1984), National Meteorological Center (NMC); (Gelman et al., 1986), and individual lidar and rocket observations. Significant areas of disagreement between the SME and CIRA 86 mesospheric temperatures are 10 K warmer SME temperatures at altitudes above 80 km. The 1981-1982 SAMS temperatures are in much closer agreement with the SME temperatures between 40 and 75 km. Although much of the SME-CIRA 86 disagreement probably stems from the poor vertical resolution of the observations comprising the CIRA 86 modelm, some portion of the differences may reflect 5- to 10-year temporal variations in mesospheric temperatures. The CIRA 86 climatology is based on 1973-1978 measurements. Relatively large (1 K/yr) 5- to 10-year trends in temperatures as functions of longitude, latitude, and altitude have been observed for both the upper stratosphere (Clancy and Rusch, 1989a) and mesosphere (Clancy and Rusch, 1989b; Hauchecorne et al., 1991). The SME temperatures also exhibit enhanced amplitudes for the semiannual oscillation (SAO) of upper mesospheric temperatures at low latitudes, which are not evident in the CIRA 86 climatology. The so-called mesospheric `temperature inversions' at wintertime midlatitudes, which have been observed by ground-based lidar (Hauschecorne et al., 1987) and rocket in situ measurements (Schmidlin, 1976), are shown to be a climatological aspect of the mesosphere, based on the SME observations.

  11. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.
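    A minimal sketch of the conservative flux-form update that such finite-volume transport schemes are built on, here in one dimension with a first-order upwind flux rather than the semi-Lagrangian operators used in the actual dynamical core; grid size, wind, and initial tracer are arbitrary.

```python
import numpy as np

def advect_flux_form(q, u, dx, dt):
    """One conservative update of cell-mean tracer q on a periodic 1-D grid.

    Flux form guarantees sum(q)*dx is conserved to round-off:
        q_i^{n+1} = q_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2})
    First-order upwind fluxes are used here for brevity (u > 0 assumed).
    """
    flux = u * q                          # F_{i+1/2} carried by cell i
    return q - dt / dx * (flux - np.roll(flux, 1))

# Advect a bump once around a periodic domain; total mass is unchanged.
n, u = 200, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
q = np.exp(-200 * (x - 0.3) ** 2)
dt = 0.4 * dx / u                         # CFL-limited time step
mass0 = q.sum() * dx
for _ in range(int(1.0 / (u * dt))):
    q = advect_flux_form(q, u, dx, dt)
print("mass change after one revolution:", q.sum() * dx - mass0)
```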

  12. Remedial Investigation/Feasibility Study (RI/FS) Report, Davis Global Communications Site. Volume 2

    DTIC Science & Technology

    1994-02-23

    [Abstract not available; OCR fragments of the report reference air-stripper treatment, recommendations and conclusions, Case 1: treatment of SVE gases, contamination migrating downward from the B to the C zone near the MW(X)-3 cluster, inclusion of the 7/93 data for MWE-3, revision of Figure 4-16, and corrections to citations referencing 40 CFR in Volume II, Appendix X.]

  13. Remedial Investigation/Feasibility Study Report, Davis Global Communications Site. Volume 1

    DTIC Science & Technology

    1994-02-23

    [Abstract not available; OCR fragments of the report cover and distribution pages (accession number AD-A277 523) reference McClellan Air Force Base, the Davis Global Communications Site, the Environmental Restoration Division of the Environmental Management Directorate, a draft proposed plan distribution list and interpretive reports (coordination), and a note that the content does not represent the official position of the Air Force.]

  14. Weighted south-wide average pulpwood prices

    Treesearch

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
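    The weighting described reduces to a production-weighted mean; the sketch below contrasts it with the unweighted average using hypothetical state-level prices and pulpwood production volumes.

```python
# Production-weighted South-wide average pulpwood price for one year.
# Prices ($/cord) and production volumes (thousand cords) are hypothetical.
state_prices     = {"AL": 21.5, "GA": 24.0, "MS": 19.0, "TX": 17.5}
state_production = {"AL": 5200, "GA": 8800, "MS": 3100, "TX": 2400}

unweighted = sum(state_prices.values()) / len(state_prices)
weighted = (sum(state_prices[s] * state_production[s] for s in state_prices)
            / sum(state_production.values()))

print(f"unweighted average:          ${unweighted:.2f}/cord")
print(f"production-weighted average: ${weighted:.2f}/cord")
```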

  15. A planet under siege: Are we changing earth's climate? Global Systems Science, Teacher's guide to Volume 1

    SciTech Connect

    Sneider, C.; Golden, R.

    1993-01-01

    Global Systems Science is an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand problems of global impact. The "big ideas" of science are stressed, such as the concept of an interacting system, co-evolution of the atmosphere and life, and the important role that individuals can play in both affecting and protecting our vulnerable global environment. The target audience for this course encompasses the entire range of high school students from grades nine through twelve. The course involves students actively in learning. Global Systems Science is divided into five volumes. Each volume contains laboratory experiments; home investigations; descriptions of recent scientific work; historical background; and consideration of the political, economic, and ethical issues associated with each problem area. Collectively, these volumes constitute a unique combination of studies in the natural and social sciences through which high school students may view the global environmental problems that they will confront within their lifetimes. The five volumes are: A Planet Under Siege: Are We Changing Earth's Climate?; A History of Fire and Ice: The Earth's Climate System; Energy Paths: Use and Conservation of Energy; Ecological Systems: Evolution and Interdependence of Life; and The Case of the Missing Ozone: Chemistry of the Earth's Atmosphere.

  16. The balanced-force volume tracking algorithm and global embedded interface formulation for droplet dynamics with mass transfer

    SciTech Connect

    Francois, Marianne M; Carlson, Neil N

    2010-01-01

    Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics need to be predicted as part of the solution of the flow equations. In addition, the discontinuity in fluid density, viscosity and species concentration at the interface presents additional numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling surface tension force (Francois et al., 2006) and we propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one and two dimensions. Then we present two- and three-dimensional simulations of a single droplet rising by buoyancy with mass transfer.

  17. Bibliography on tropical rain forests and the global carbon cycle: Volume 2, South Asia

    SciTech Connect

    Flint, E.P.; Richards, J.F.

    1989-02-01

    This bibliography covers the literature on tropical rain forests, tropical deforestation, land-use change, tropical forest conversion, and shifting cultivation in South Asia (predominantly India, Pakistan, and Bangladesh but also including contributions in Burma, Ceylon, Malaysia, and Sri Lanka). It covers not only rain-forest ecosystems but also other ecosystems that border, derive from, or influence rain forests. The literature included was selected because of its contribution to understanding the global carbon cycle, changes in that cycle, and rain forests' role in that cycle. Journal articles, books, and reports from 1880 to 1988 are included. The more than 4200 entries of this bibliography are ordered alphabetically by author and are indexed by subject and author.

  18. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  19. Late-life obesity is associated with smaller global and regional gray matter volumes: a voxel-based morphometric study

    PubMed Central

    Brooks, S J; Benedict, C; Burgos, J; Kempton, M J; Kullberg, J; Nordenskjöld, R; Kilander, L; Nylander, R; Larsson, E-M; Johansson, L; Ahlström, H; Lind, L; Schiöth, H B

    2013-01-01

    OBJECTIVE: Obesity adversely affects frontal lobe brain structure and function. Here we sought to show that people who are obese versus those who are of normal weight over a 5-year period have differential global and regional brain volumes. DESIGN: Using voxel-based morphometry, contrasts were done between those who were recorded as being either obese or of normal weight over two time points in the 5 years prior to the brain scan. In a post-hoc preliminary analysis, we compared scores for obese and normal weight people who completed the trail-making task. SUBJECTS: A total of 292 subjects were examined following exclusions (for example, owing to dementia, stroke and cortical infarcts) from the Prospective Investigation of the Vasculature in Uppsala Seniors cohort with a body mass index of normal weight (<25 kg m−2) or obese (⩾30 kg m−2). RESULTS: People who were obese had significantly smaller total brain volumes and specifically, significantly reduced total gray matter (GM) volume (GMV) (with no difference in white matter or cerebrospinal fluid). Initial exploratory whole brain uncorrected analysis revealed that people who were obese had significantly smaller GMV in the bilateral supplementary motor area, bilateral dorsolateral prefrontal cortex (DLPFC), left inferior frontal gyrus and left postcentral gyrus. Secondary more stringent corrected analyses revealed a surviving cluster of GMV difference in the left DLPFC. Finally, post-hoc contrasts of scores on the trail-making task, which is linked to DLPFC function, revealed that obese people were significantly slower than those of normal weight. CONCLUSION: These findings suggest that in comparison with normal weight, people who are obese have smaller GMV, particularly in the left DLPFC. Our results may provide evidence for a potential working memory mechanism for the cognitive suppression of appetite that may lower the risk of developing obesity in later life. PMID:22290540

  20. Late-life obesity is associated with smaller global and regional gray matter volumes: a voxel-based morphometric study.

    PubMed

    Brooks, S J; Benedict, C; Burgos, J; Kempton, M J; Kullberg, J; Nordenskjöld, R; Kilander, L; Nylander, R; Larsson, E-M; Johansson, L; Ahlström, H; Lind, L; Schiöth, H B

    2013-02-01

    Obesity adversely affects frontal lobe brain structure and function. Here we sought to show that people who are obese versus those who are of normal weight over a 5-year period have differential global and regional brain volumes. Using voxel-based morphometry, contrasts were done between those who were recorded as being either obese or of normal weight over two time points in the 5 years prior to the brain scan. In a post-hoc preliminary analysis, we compared scores for obese and normal weight people who completed the trail-making task. A total of 292 subjects were examined following exclusions (for example, owing to dementia, stroke and cortical infarcts) from the Prospective Investigation of the Vasculature in Uppsala Seniors cohort with a body mass index of normal weight (<25 kg m(-2)) or obese (≥30 kg m(-2)). People who were obese had significantly smaller total brain volumes and specifically, significantly reduced total gray matter (GM) volume (GMV) (with no difference in white matter or cerebrospinal fluid). Initial exploratory whole brain uncorrected analysis revealed that people who were obese had significantly smaller GMV in the bilateral supplementary motor area, bilateral dorsolateral prefrontal cortex (DLPFC), left inferior frontal gyrus and left postcentral gyrus. Secondary more stringent corrected analyses revealed a surviving cluster of GMV difference in the left DLPFC. Finally, post-hoc contrasts of scores on the trail-making task, which is linked to DLPFC function, revealed that obese people were significantly slower than those of normal weight. These findings suggest that in comparison with normal weight, people who are obese have smaller GMV, particularly in the left DLPFC. Our results may provide evidence for a potential working memory mechanism for the cognitive suppression of appetite that may lower the risk of developing obesity in later life.

  1. Seasonal cycle of volume transport through Kerama Gap revealed by a 20-year global HYbrid Coordinate Ocean Model reanalysis

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Metzger, E. Joseph; Thoppil, Prasad; Hurlburt, Harley E.; Zamudio, Luis; Smedstad, Ole Martin; Na, Hanna; Nakamura, Hirohiko; Park, Jae-Hun

    2015-12-01

    The temporal variability of volume transport from the North Pacific Ocean to the East China Sea (ECS) through Kerama Gap (between Okinawa Island and Miyakojima Island - a part of Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled Ocean Data Assimilation from 1993 to 2012. The HYCOM mean transport is 2.1 Sv (positive into the ECS, 1 Sv = 10^6 m^3/s) from June 2009 to June 2011, in good agreement with the observed 2.0 Sv transport during the same period. This is similar to the 20-year mean Kerama Gap transport of 1.95 ± 4.0 Sv. The 20-year monthly mean volume transport (transport seasonal cycle) is maximum in October (3.0 Sv) and minimum in November (0.5 Sv). The annual variation component (345-400 days), mesoscale eddy component (70-345 days), and Kuroshio meander component (< 70 days) are separated to determine their contributions to the transport seasonal cycle. The annual variation component has a close relation with the local wind field and increases (decreases) transport into the ECS through Kerama Gap in summer (winter). Most of the variations in the transport seasonal cycle come from the mesoscale eddy component. The impinging mesoscale eddies increase the transport into the ECS during January, February, May, and October, and decrease it in March, April, November, and December, but have little effect in summer (June-September). The Kuroshio meander components cause smaller transport variations in summer than in winter.

  2. Seasonal Cycle of Volume Transport through Kerama Gap Revealed by a 20-year Global HYbrid Coordinate Ocean Model Reanalysis

    NASA Astrophysics Data System (ADS)

    Yu, Z.; Metzger, E. J.; Thoppil, P.; Hurlburt, H. E.; Zamudio, L.; Smedstad, O. M.; Na, H.; Nakamura, H.; Park, J. H.

    2016-02-01

    The temporal variability of volume transport from the North Pacific Ocean to the East China Sea (ECS) through Kerama Gap (between Okinawa Island and Miyakojima Island - a part of Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled Ocean Data Assimilation from 1993 to 2012. The HYCOM mean transport is 2.1 Sv (positive into the ECS, 1 Sv = 10^6 m^3/s) from June 2009 to June 2011, in good agreement with the observed 2.0 Sv transport during the same period. This is similar to the 20-year mean Kerama Gap transport of 1.95 ± 4.0 Sv. The 20-year monthly mean volume transport (transport seasonal cycle) is maximum in October (3.0 Sv) and minimum in November (0.5 Sv). The annual variation component (345-400 days), mesoscale eddy component (70-345 days), and Kuroshio meander component (< 70 days) are separated to determine their contributions to the transport seasonal cycle. The annual variation component has a close relation with the local wind field and increases (decreases) transport into the ECS through Kerama Gap in the summer (winter). Most of the variations in the transport seasonal cycle come from the mesoscale eddy component. The impinging mesoscale eddies cause an increase of the transport into the ECS in January, February, May, and October, and a decrease in March, April, November, and December, but little change in summer from June to September. The Kuroshio meander components cause smaller transport variations in summer than in winter.
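
    The decomposition into annual, mesoscale-eddy, and Kuroshio-meander components quoted above is a split of the transport time series into frequency bands. The Python sketch below shows one plausible way to make such a split with zero-phase Butterworth band-pass filters, using the band limits quoted in the abstract; the filter type and order, and the synthetic series, are assumptions for illustration, not the authors' exact procedure.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def bandpass(x, dt_days, short_period, long_period, order=2):
            """Zero-phase band-pass between two periods in days (illustrative only)."""
            nyquist = 0.5 / dt_days                       # cycles per day
            low, high = 1.0 / long_period, 1.0 / short_period
            sos = butter(order, [low / nyquist, high / nyquist],
                         btype="bandpass", output="sos")
            return sosfiltfilt(sos, x)

        # Synthetic daily transport series standing in for a 20-year record (Sv).
        t = np.arange(20 * 365)
        rng = np.random.default_rng(0)
        transport = (2.0
                     + 1.0 * np.sin(2 * np.pi * t / 365.0)    # annual cycle
                     + 0.8 * np.sin(2 * np.pi * t / 120.0)    # eddy-band signal
                     + 0.4 * np.sin(2 * np.pi * t / 40.0)     # meander-band signal
                     + 0.2 * rng.standard_normal(t.size))     # noise

        annual = bandpass(transport, 1.0, 345.0, 400.0)       # 345-400 day band
        eddy = bandpass(transport, 1.0, 70.0, 345.0)          # 70-345 day band
        meander = transport - transport.mean() - annual - eddy  # residual, < 70 days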

  3. Global ice volume during MIS 3 inferred from a sea-level analysis of sedimentary core records in the Yellow River Delta

    NASA Astrophysics Data System (ADS)

    Pico, Tamara; Mitrovica, Jerry X.; Ferrier, Ken L.; Braun, Jean

    2016-11-01

    Estimates of global ice volume during the glacial phase of the most recent ice age cycle are characterized by significant uncertainty, reflecting the relative paucity of geological constraints on sea level relevant to this time interval. For example, during the middle stages of Marine Isotope Stage 3, published estimates of peak global mean sea level (GMSL) relative to the present range from -25 m to -87 m. The large uncertainty in GMSL at MIS 3 has significant implications for estimates of the rate of ice growth in the period leading to the Last Glacial Maximum (∼26 ka). We refine estimates of global ice volume during MIS 3 by employing sediment cores in the Bohai and Yellow Sea that record a migration of the paleoshoreline at ∼50-37 ka through a transition from marine to brackish conditions. In particular, we correct relative sea level at these sites for contamination due to glacial isostatic adjustment using a sea-level calculation that includes a gravitationally self-consistent treatment of sediment redistribution and compaction, and estimate a peak global mean sea level of -38 ± 7 m during the interval 50-37 ka. With suitable sedimentary core records, the approach described herein can be extended to refine existing constraints on global ice volume across the entire glacial period.

  4. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function

    PubMed Central

    Yotti, Raquel; del Villar, Candelas Pérez; del Álamo, Juan C.; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Carlos Antoranz, J.; Mar Desco, M.; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-01-01

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart. PMID:23743396

  5. Diastolic chamber properties of the left ventricle assessed by global fitting of pressure-volume data: improving the gold standard of diastolic function.

    PubMed

    Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco

    2013-08-15

    In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (derivative of left ventricular pressure with respect to volume at end-diastolic volume intraclass correlation coefficient = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
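
    The central idea above is to fit all diastolic pressure-volume samples, pooled across beats, in a single optimization instead of fitting relaxation and passive stiffness separately. The Python sketch below illustrates that kind of global nonlinear least-squares fit with a deliberately simplified pressure model (an exponentially decaying relaxation term plus an exponential passive stiffness term); the model form, parameter names, and synthetic data are assumptions for illustration and do not reproduce the authors' formulation.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, t, v):
            """Simplified diastolic pressure: relaxation decay + passive stiffness.
            p0, tau  - amplitude and time constant of the relaxation term
            s, k, v0 - passive exponential stiffness parameters (v0 ~ equilibrium volume)
            """
            p0, tau, s, k, v0 = params
            return p0 * np.exp(-t / tau) + s * (np.exp(k * (v - v0)) - 1.0)

        def residuals(params, t, v, p):
            return model(params, t, v) - p

        # Synthetic pooled diastolic samples from several beats:
        # time since onset of relaxation (s), volume (ml), pressure (mmHg).
        rng = np.random.default_rng(0)
        t = np.tile(np.linspace(0.0, 0.4, 40), 5)
        v = 60.0 + 50.0 * np.tile(np.linspace(0.1, 1.0, 40), 5) + rng.normal(0, 1, t.size)
        true_params = (25.0, 0.05, 2.0, 0.03, 70.0)
        p = model(true_params, t, v) + rng.normal(0, 0.3, t.size)

        # One global fit over all samples recovers both relaxation and stiffness terms.
        fit = least_squares(residuals, x0=(10.0, 0.1, 1.0, 0.01, 80.0), args=(t, v, p))
        print("fitted parameters:", fit.x)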

  6. Global fractional anisotropy and mean diffusivity together with segmented brain volumes assemble a predictive discriminant model for young and elderly healthy brains: a pilot study at 3T

    PubMed Central

    Garcia-Lazaro, Haydee Guadalupe; Becerra-Laparra, Ivonne; Cortez-Conradis, David; Roldan-Valadez, Ernesto

    2016-01-01

    Several parameters of brain integrity can be derived from diffusion tensor imaging. These include fractional anisotropy (FA) and mean diffusivity (MD). Combination of these variables using multivariate analysis might result in a predictive model able to detect the structural changes of human brain aging. Our aim was to discriminate between young and older healthy brains by combining structural and volumetric variables from brain MRI: FA, MD, and white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) volumes. This was a cross-sectional study in 21 young (mean age, 25.71±3.04 years; range, 21–34 years) and 10 elderly (mean age, 70.20±4.02 years; range, 66–80 years) healthy volunteers. Multivariate discriminant analysis, with age as the dependent variable and WM, GM and CSF volumes, global FA and MD, and gender as the independent variables, was used to assemble a predictive model. The resulting model was able to differentiate between young and older brains: Wilks’ λ = 0.235, χ²(6) = 37.603, p = .000001. Only global FA, WM volume and CSF volume significantly discriminated between groups. The total accuracy was 93.5%; the sensitivity, specificity and positive and negative predictive values were 91.30%, 100%, 100% and 80%, respectively. Global FA, WM volume and CSF volume are parameters that, when combined, reliably discriminate between young and older brains. A decrease in FA is the strongest predictor of membership of the older brain group, followed by an increase in WM and CSF volumes. Brain assessment using a predictive model might allow the follow-up of selected cases that deviate from normal aging. PMID:27027893
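
    As a rough illustration of the kind of discriminant model described above, the Python sketch below fits a linear discriminant to global FA, MD and tissue-volume features and reports cross-validated classification accuracy. The feature names follow the abstract, but the data and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions for illustration; the study used multivariate discriminant analysis but not necessarily this implementation.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for the study's variables: global FA, global MD,
        # WM, GM and CSF volumes (ml). Label 0 = young, 1 = elderly.
        n_young, n_old = 21, 10
        X_young = np.column_stack([
            rng.normal(0.46, 0.02, n_young),   # FA (higher in young)
            rng.normal(0.80, 0.03, n_young),   # MD (x1e-3 mm^2/s)
            rng.normal(500, 40, n_young),      # WM volume
            rng.normal(650, 50, n_young),      # GM volume
            rng.normal(250, 30, n_young),      # CSF volume
        ])
        X_old = np.column_stack([
            rng.normal(0.42, 0.02, n_old),
            rng.normal(0.88, 0.03, n_old),
            rng.normal(530, 40, n_old),        # WM larger, per the abstract
            rng.normal(600, 50, n_old),
            rng.normal(320, 30, n_old),        # CSF larger in the elderly
        ])
        X = np.vstack([X_young, X_old])
        y = np.array([0] * n_young + [1] * n_old)

        lda = LinearDiscriminantAnalysis()
        scores = cross_val_score(lda, X, y, cv=5)
        print("cross-validated accuracy: %.2f" % scores.mean())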

  7. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  8. Relation between meandering of subpolar front of the Japan Sea and volume transport of the Tsushima Warm Current found in numerical experiments based on global warming

    NASA Astrophysics Data System (ADS)

    Igeta, Y.; Fukudome, K.; Kuga, M.; Watanabe, T.; Kidokoro, H.

    2014-12-01

    To understand the relation between the volume transport of the Tsushima Warm Current (TWC) and meandering of the subpolar front (SPF) of the Japan Sea (JS), numerical experiments were performed using an ocean general circulation model under global warming conditions. The model domain included the whole of the JS with 4 shallow straits, and its resolution was 1/12 degree. Numerical integrations were performed from 2006 to 2100 using atmospheric conditions from the results of the CGCM3 global climate model under the IPCC-AR5 rcp4.5 scenario. We used the monthly climatological volume transport of the TWC through the Tsushima Strait based on observations as inflow and outflow conditions at the 4 open boundaries. The model results showed that large meanderings of the SPF intermittently occurred around 133°E (the western region) and 137°E (the eastern region). Interannual variations of the eastward volume transport of the TWC were characterized by three periods: period 1, large volume transport (about 6 Sv) from 2006 to 2030; period 2, decrease of the volume transport from 6 Sv to 4 Sv from 2030 to 2060; period 3, small volume transport accompanied by large decadal fluctuations from 2060 to 2100. During period 1, the SPF continued to meander with large amplitudes in the western and eastern regions of the JS. The meanderings in period 2 were smaller than those in period 1, which resulted in a straight path of the SPF. Large meanderings were sometimes found only in the western region, while small meanderings frequently occurred anywhere during period 3. These results suggest that large meanderings of the SPF are induced by large volume transport and small meanderings are caused by interannual fluctuation of the volume transport of the TWC.

  9. PCK and Average

    ERIC Educational Resources Information Center

    Watson, Jane; Callingham, Rosemary

    2013-01-01

    This paper considers the responses of 26 teachers to items exploring their pedagogical content knowledge (PCK) about the concept of average. The items explored teachers' knowledge of average, their planning of a unit on average, and their understanding of students as learners in devising remediation for two student responses to a problem. Results…

  10. Areal Average Albedo (AREALAVEALB)

    DOE Data Explorer

    Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni

    2008-01-01

    The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo under fully overcast conditions.

  11. Global and regional assessment of sustained inflation pressure-volume curves in patients with acute respiratory distress syndrome.

    PubMed

    Becher, Tobias; Rostalski, Philipp; Kott, Matthias; Adler, Andy; Schadler, Dirk; Weiler, Norbert; Frerichs, Inez

    2017-03-24

    Static or quasi-static pressure-volume (P-V) curves can be used to determine the lung mechanical properties of patients suffering from acute respiratory distress syndrome (ARDS). According to the traditional interpretation, lung recruitment occurs mainly below the lower point of maximum curvature (LPMC) of the inflation P-V curve. Although some studies have questioned this assumption, setting of positive end-expiratory pressure 2 cmH2O above the LPMC was part of a "lung-protective" ventilation strategy successfully applied in several clinical trials. The aim of our study was to quantify the amount of unrecruited lung at different clinically relevant points of the P-V curve. P-V curves and electrical impedance tomography (EIT) data from 30 ARDS patients were analysed. We determined the regional opening pressures for every EIT image pixel and fitted the global P-V curves to five sigmoid model equations to determine the LPMC, inflection point (IP) and upper point of maximal curvature (UPMC). Points of maximal curvature and IP were compared between the models by one-way analysis of variance (ANOVA). The percentages of lung pixels remaining closed ("unrecruited lung") at LPMC, IP and UPMC were calculated from the number of lung pixels exhibiting regional opening pressures higher than LPMC, IP and UPMC and were also compared by one-way ANOVA. We found a high variability of LPMC values among the models and a smaller variability of IP and UPMC values. We found a high percentage of unrecruited lung at LPMC, a small percentage of unrecruited lung at IP and no unrecruited lung at UPMC. Our results confirm the notion of ongoing lung recruitment at pressure levels above LPMC for all investigated model equations and highlight the importance of a regional assessment of lung recruitment in patients with ARDS.
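
    The points of maximal curvature referred to above can be located numerically once a sigmoid has been fitted to the inflation P-V data. The Python sketch below fits one common sigmoid form and finds the LPMC, IP and UPMC from the curvature of the fitted curve; the particular sigmoid equation, the normalization convention, and the synthetic data are assumptions for illustration, since the study compared five different model equations.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(p, a, b, c, d):
            """Sigmoidal P-V model: volume as a function of airway pressure."""
            return a + b / (1.0 + np.exp(-(p - c) / d))

        # Synthetic quasi-static inflation data (pressure in cmH2O, volume in ml).
        rng = np.random.default_rng(2)
        pressure = np.linspace(0, 40, 41)
        volume = sigmoid(pressure, 200, 1400, 18, 5) + rng.normal(0, 15, pressure.size)

        popt, _ = curve_fit(sigmoid, pressure, volume, p0=(100, 1000, 20, 4))

        # Curvature is unit-dependent, so normalize both axes to [0, 1] first
        # (one simple convention), then locate LPMC, IP and UPMC numerically.
        p_fine = np.linspace(0, 40, 4001)
        v_fine = sigmoid(p_fine, *popt)
        pn = (p_fine - p_fine.min()) / (p_fine.max() - p_fine.min())
        vn = (v_fine - v_fine.min()) / (v_fine.max() - v_fine.min())
        v1 = np.gradient(vn, pn)
        v2 = np.gradient(v1, pn)
        kappa = v2 / (1.0 + v1 ** 2) ** 1.5        # signed curvature

        i_ip = int(np.argmax(v1))                  # inflection point: maximum slope
        ip = p_fine[i_ip]
        lpmc = p_fine[:i_ip][np.argmax(kappa[:i_ip])]   # max +curvature below IP
        upmc = p_fine[i_ip:][np.argmin(kappa[i_ip:])]   # max -curvature above IP
        print(f"LPMC = {lpmc:.1f}, IP = {ip:.1f}, UPMC = {upmc:.1f} cmH2O")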

  12. Dependence on latitude of the relation between the diffuse fraction of solar radiation and the ratio of global-to-extraterrestrial radiation for monthly average daily values

    SciTech Connect

    Soler, A. )

    1990-01-01

    An approach for the prediction of the monthly average daily diffuse radiation, H̄d, was proposed by Page in 1961. The Page method is based on the use of the linear correlation H̄d/H̄ = c + d·(H̄/H̄o), where H̄ and H̄o are, respectively, the monthly average daily values of global and extraterrestrial radiation, both on a horizontal surface. The values of c and d are a function of atmospheric conditions, cloud cover conditions/types, as well as latitude. In the present work the author studies the dependence on latitude of c and d for European locations with 36°N < γ < 61°N (longitudes between 29°E and 11°W). The dependence is first studied using 28 values of c and d obtained using experimental values of H̄ and H̄d. Next, the dependence on γ is studied using experimental values of H̄ for 64 European locations, obtained for the period 1966-1975, and corresponding values of H̄d estimated using the European Community Solar Radiation Model (E.C.S.R.M.). In both cases a minimum for c vs. γ and a maximum for d vs. γ are obtained for similar values of γ. Using the E.C.S.R.M. it is shown that both the minimum and the maximum can be explained by the way H̄d/H̄ and H̄/H̄o vary with γ.
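
    As a concrete illustration of the Page correlation above, the short Python sketch below estimates the monthly average daily diffuse radiation from assumed values of the regression coefficients and of the global and extraterrestrial radiation; the numerical values of c, d, H and Ho are hypothetical and chosen only to show how the formula is applied.

        # Page (1961) correlation: Hd/H = c + d * (H/Ho)
        # All numerical values below are hypothetical, for illustration only.
        c, d = 1.00, -1.13          # example regression coefficients
        H  = 15.0                   # monthly average daily global radiation, MJ/m^2
        Ho = 30.0                   # monthly average daily extraterrestrial radiation, MJ/m^2

        clearness_index = H / Ho                     # H/Ho = 0.5
        diffuse_fraction = c + d * clearness_index   # Hd/H = 1.00 - 1.13*0.5 = 0.435
        Hd = diffuse_fraction * H                    # ~6.5 MJ/m^2 of diffuse radiation

        print(f"diffuse fraction = {diffuse_fraction:.3f}, Hd = {Hd:.2f} MJ/m^2")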

  13. Technical Papers Presented at the Defense Nuclear Agency Global Effects Review. Held at Moffett Field, California on 25-29 February 1986. Volume 2.

    DTIC Science & Technology

    1986-05-15

    [Abstract not available: the scanned record contains only cover-page and report-documentation fragments identifying the report as DASIAC-TN-86-29-V2, Technical Papers Presented at the Defense Nuclear Agency Global Effects Review, Volume 2, prepared by the DoD Nuclear Information and Analysis Center, Santa Barbara.]

  14. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  15. The Search for Eight Glacial Cycles of Deep-Water Temperatures and Global ice Volume From the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Ferretti, P.; Elderfield, H.; Greaves, M.; McCave, N.

    2007-12-01

    It has recently been suggested that "a substantial portion of the marine 100-ky cycle that has been object of so much attention over the past quarter of a century is, in reality, a deep-water temperature signal and not an ice volume signal" (Shackleton, 2000). There are currently few records available of deep-water temperature variations during the Pleistocene and most of our understanding is inferred from the oxygen isotopic composition (δ18O) of benthic foraminifera from deep-sea sediments. However, variations in benthic δ18O reflect some combination of local to regional changes in water mass properties (largely deep-water temperature) as well as global changes in seawater δ18O (δ18Osw) resulting from the growth and decay of continental ice. Recent studies suggest that benthic foraminiferal Mg/Ca may be useful in reconstructing deep-water temperature changes, but the application of this method to benthic species has been hampered by a number of unresolved issues, such as uncertainties related to the calibration for benthic Mg at the coldest temperatures. Here we present deep-sea Mg/Ca and δ18O records for the past eight glacial cycles in benthic foraminiferal (Uvigerina spp.) calcite from a marine sediment core recovered in the mid Southern latitudes. Ocean Drilling Program Site 1123 was retrieved from Chatham Rise, east of New Zealand in the Southwest Pacific Ocean (3290 m water depth). This site lies under the Deep Western Boundary Current (DWBC) that flows into the Pacific Ocean, and is responsible for most of the deep water in that ocean; DWBC strength is directly related to processes occurring around Antarctica. Temperatures derived via pore fluid modeling of the last glacial maximum are available from Site 1123 and represent an important tool to constrain deep-water temperatures estimates using Mg/Ca. In selected time slices, we measured B/Ca ratios in Uvigerina in order to gain information on the deep-water carbonate saturation state and have data of Mg

  16. Differential Associations of Socioeconomic Status With Global Brain Volumes and White Matter Lesions in African American and White Adults: the HANDLS SCAN Study.

    PubMed

    Waldstein, Shari R; Dore, Gregory A; Davatzikos, Christos; Katzel, Leslie I; Gullapalli, Rao; Seliger, Stephen L; Kouo, Theresa; Rosenberger, William F; Erus, Guray; Evans, Michele K; Zonderman, Alan B

    2017-04-01

    The aim of the study was to examine interactive relations of race and socioeconomic status (SES) to magnetic resonance imaging (MRI)-assessed global brain outcomes with previously demonstrated prognostic significance for stroke, dementia, and mortality. Participants were 147 African Americans (AAs) and whites (ages 33-71 years; 43% AA; 56% female; 26% below poverty) in the Healthy Aging in Neighborhoods of Diversity across the Life Span SCAN substudy. Cranial MRI was conducted using a 3.0 T unit. White matter (WM) lesion volumes and total brain, gray matter, and WM volumes were computed. An SES composite was derived from education and poverty status. Significant interactions of race and SES were observed for WM lesion volume (b = 1.38; η = 0.036; p = .028), total brain (b = 86.72; η = 0.042; p < .001), gray matter (b = 40.16; η = 0.032; p = .003), and WM (b = 46.56; η = 0.050; p < .001). AA participants with low SES exhibited significantly greater WM lesion volumes than white participants with low SES. White participants with higher SES had greater brain volumes than all other groups (albeit within normal range). Low SES was associated with greater WM pathology-a marker for increased stroke risk-in AAs. Higher SES was associated with greater total brain volume-a putative global indicator of brain health and predictor of mortality-in whites. Findings may reflect environmental and interpersonal stressors encountered by AAs and those of lower SES and could relate to disproportionate rates of stroke, dementia, and mortality.

  17. Genetic influences on individual differences in longitudinal changes in global and subcortical brain volumes: Results of the ENIGMA plasticity working group.

    PubMed

    Brouwer, Rachel M; Panizzon, Matthew S; Glahn, David C; Hibar, Derrek P; Hua, Xue; Jahanshad, Neda; Abramovic, Lucija; de Zubicaray, Greig I; Franz, Carol E; Hansell, Narelle K; Hickie, Ian B; Koenis, Marinka M G; Martin, Nicholas G; Mather, Karen A; McMahon, Katie L; Schnack, Hugo G; Strike, Lachlan T; Swagerman, Suzanne C; Thalamuthu, Anbupalam; Wen, Wei; Gilmore, John H; Gogtay, Nitin; Kahn, René S; Sachdev, Perminder S; Wright, Margaret J; Boomsma, Dorret I; Kremen, William S; Thompson, Paul M; Hulshoff Pol, Hilleke E

    2017-09-01

    Structural brain changes that occur during development and ageing are related to mental health and general cognitive functioning. Individuals differ in the extent to which their brain volumes change over time, but whether these differences can be attributed to differences in their genotypes has not been widely studied. Here we estimate heritability (h²) of changes in global and subcortical brain volumes in five longitudinal twin cohorts from across the world and in different stages of the lifespan (N = 861). Heritability estimates of brain changes were significant and ranged from 16% (caudate) to 42% (cerebellar gray matter) for all global and most subcortical volumes (with the exception of thalamus and pallidum). Heritability estimates of change rates were generally higher in adults than in children suggesting an increasing influence of genetic factors explaining individual differences in brain structural changes with age. In children, environmental influences in part explained individual differences in developmental changes in brain structure. Multivariate genetic modeling showed that genetic influences of change rates and baseline volume significantly overlapped for many structures. The genetic influences explaining individual differences in the change rate for cerebellum, cerebellar gray matter and lateral ventricles were independent of the genetic influences explaining differences in their baseline volumes. These results imply the existence of genetic variants that are specific for brain plasticity, rather than brain volume itself. Identifying these genes may increase our understanding of brain development and ageing and possibly have implications for diseases that are characterized by deviant developmental trajectories of brain structure. Hum Brain Mapp 38:4444-4458, 2017. © 2017 Wiley Periodicals, Inc.

  18. Aggregation and Averaging.

    ERIC Educational Resources Information Center

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  19. Averaging Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Tegai, S. Ph.; Drobov, I. V.

    2017-07-01

    We tried to average the Schwarzschild solution for the gravitational point source by analogy with the same problem in Newtonian gravity or electrostatics. We expected to get a similar result, consisting of two parts: the smoothed interior part being a sphere filled with some matter content and an empty exterior part described by the original solution. We considered several variants of generally covariant averaging schemes. The averaging of the connection in the spirit of Zalaletdinov's macroscopic gravity gave unsatisfactory results. With the transport operators proposed in the literature it did not give the expected Schwarzschild solution in the exterior part of the averaged spacetime. We were able to construct a transport operator that preserves the Newtonian analogy for the outward region but such an operator does not have a clear geometrical meaning. In contrast, using the curvature as the primary averaged object instead of the connection does give the desired result for the exterior part of the problem in a fine way. However for the interior part, this curvature averaging does not work because the Schwarzschild curvature components diverge as 1/r³ near the center and therefore are not integrable.

  20. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 6. Aerospace Ground Equipment Plan

    DTIC Science & Technology

    1976-03-01

    [Abstract not available: the scanned record contains only table-of-contents fragments. Companion volumes listed include Volume 3, Classified Requirements Topics (Secret); Volume 4, Systems Analysis and Trade Studies; and Volume 5, System Description. Section fragments refer to end item descriptions and types of functions, factors affecting operating AGE, operational complex operating functions, and maintenance aspects of the system.]

  1. Educational Policy Transfer in an Era of Globalization: Theory--History--Comparison. Comparative Studies Series. Volume 23

    ERIC Educational Resources Information Center

    Rappleye, Jeremy

    2012-01-01

    As education becomes increasingly global, the processes and politics of transfer have become a central focus of research. This study provides a comprehensive analysis of contemporary theoretical and analytical work aimed at exploring international educational reform and reveals the myriad ways that globalization is now fundamentally altering our…

  2. Educational Policy Transfer in an Era of Globalization: Theory--History--Comparison. Comparative Studies Series. Volume 23

    ERIC Educational Resources Information Center

    Rappleye, Jeremy

    2012-01-01

    As education becomes increasingly global, the processes and politics of transfer have become a central focus of research. This study provides a comprehensive analysis of contemporary theoretical and analytical work aimed at exploring international educational reform and reveals the myriad ways that globalization is now fundamentally altering our…

  3. Effects of critical coronary stenosis on global systolic left ventricular function quantified by pressure-volume relations during dobutamine stress in the canine heart.

    PubMed

    Steendijk, P; Baan, J; Van der Velde, E T; Baan, J

    1998-09-01

    In this study we quantified the effects of a critical coronary stenosis on global systolic function using pressure-volume relations at baseline and during incremental dobutamine stress. The effects of coronary stenosis have previously been analyzed mainly in terms of regional (dys)function. Global hemodynamics are generally considered normal until coronary flow is substantially reduced. However, pressure-volume analysis might reveal mechanisms not fully exposed by potentially load-dependent single-beat parameters. Moreover, no systematic analysis by pressure-volume relations of the effects of dobutamine over a wide dose range has previously been presented. In 14 dogs left ventricular volume and pressure were measured by conductance and micromanometer catheters, and left circumflex coronary flow by Doppler probes. Measurements in control and with left circumflex stenosis were performed at baseline and at five levels of dobutamine (2.5 to 20 microg/kg/min). The end-systolic pressure-volume relation (ESPVR), the relation between dP/dtMAX and end-diastolic volume (dP/dtMAX - V(ED)), and the relation between stroke work and end-diastolic volume (preload recruitable stroke work [PRSW]) were derived from data obtained during gradual caval occlusion. In control, dobutamine gradually increased heart rate up to 20 microg/kg/min; the inotropic effect blunted at 15 microg/kg/min. With stenosis, the chronotropic effect was similar; however, contractile state was optimal at approximately 10 microg/kg/min and tended to go down at higher levels. At baseline, the positions of ESPVR and PRSW, but not of dP/dtMAX - V(ED), showed a significant decrease in function with stenosis. No differences between control and stenosis were present at 2.5 microg/kg/min; the differences were largest at 15 microg/kg/min. Pressure-volume relations and incremental dobutamine may be used to quantify the effects of critical coronary stenosis. The positions of these relations are more consistent and more useful indices than the

  4. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  5. The average enzyme principle

    PubMed Central

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-01-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
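
    A small numerical experiment makes the principle above concrete: for an irreversible Michaelis-Menten reaction, two enzyme time-courses with the same time average deplete the same amount of substrate over the interval. The Python sketch below integrates the rate equation for a constant and an oscillating enzyme profile with equal means; the rate constants and concentrations are arbitrary illustrative values.

        import numpy as np
        from scipy.integrate import solve_ivp

        kcat, Km, S0 = 10.0, 5.0, 100.0   # illustrative constants and initial substrate

        def depletion(enzyme_of_t, t_end=10.0):
            """Integrate dS/dt = -kcat * E(t) * S / (Km + S); return substrate used."""
            rhs = lambda t, s: [-kcat * enzyme_of_t(t) * s[0] / (Km + s[0])]
            sol = solve_ivp(rhs, (0.0, t_end), [S0], rtol=1e-9, atol=1e-12)
            return S0 - sol.y[0, -1]

        constant = lambda t: 1.0                                    # E(t) = 1
        oscillating = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t)   # same mean enzyme level

        print("substrate depleted, constant E   :", depletion(constant))
        print("substrate depleted, oscillating E:", depletion(oscillating))
        # The two depletions agree (up to integration error), illustrating that only
        # the average enzyme concentration over the interval matters.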

  6. Average Revisited in Context

    ERIC Educational Resources Information Center

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Averaging of TNTC counts.

    PubMed Central

    Haas, C N; Heller, B

    1988-01-01

    When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211

  8. Determining average yarding distance.

    Treesearch

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  9. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
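
    In schematic form, the improved estimator in all-mode averaging combines a few exact measurements with many cheap approximate ones related by a covariant symmetry transformation g. One common way to write the general construction (a hedged sketch, not a transcription of the paper's own equations) is:

        % AMA improved estimator (schematic): O is the exact observable, O^{(appx)} a
        % cheap covariant approximation, and g runs over N_G symmetry transformations
        % (e.g. shifted source positions). The construction is unbiased provided the
        % approximation is covariant, so that <O^{(appx),g}> = <O^{(appx)}>.
        \mathcal{O}^{\mathrm{(imp)}}
          = \mathcal{O} - \mathcal{O}^{\mathrm{(appx)}}
          + \frac{1}{N_G} \sum_{g \in G} \mathcal{O}^{\mathrm{(appx)},\,g}

    The statistical gain comes from the last term: the cheap approximation is averaged over many symmetry-equivalent measurements, while the expensive exact correction is computed only once.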

  10. Technical Report Series on Global Modeling and Data Assimilation. Volume 31; Global Surface Ocean Carbon Estimates in a Model Forced by MERRA

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; Rousseaux, Cecile S.

    2013-01-01

    MERRA products were used to force an established ocean biogeochemical model to estimate surface carbon inventories and fluxes in the global oceans. The results were compared to public archives of in situ carbon data and estimates. The model exhibited skill for ocean dissolved inorganic carbon (DIC), partial pressure of ocean CO2 (pCO2) and air-sea fluxes (FCO2). The MERRA-forced model produced global mean differences of 0.02% (approximately 0.3 microns) for DIC, -0.3% (about -1.2 (micro) atm; model lower) for pCO2, and -2.3% (-0.003 mol C/sq m/y) for FCO2 compared to in situ estimates. Basin-scale distributions were significantly correlated with observations for all three variables (r=0.97, 0.76, and 0.73, P<0.05, respectively for DIC, pCO2, and FCO2). All major oceanographic basins were represented as sources to the atmosphere or sinks in agreement with in situ estimates. However, there were substantial basin-scale and local departures.

  11. Drilling and dating New Jersey oligocene-miocene sequences: Ice volume, global sea level, and Exxon records

    SciTech Connect

    Miller, K.G.; Mountain, G.S.

    1996-02-23

    Oligocene to middle Miocene sequence boundaries on the New Jersey coastal plain (Ocean Drilling Program Leg 150X) and continental slope (Ocean Drilling Program Leg 150) were dated by integrating strontium isotopic stratigraphy, magnetostratigraphy, and biostratigraphy (planktonic foraminifera, nannofossils, dinocysts, and diatoms). The ages of coastal plain unconformities and slope seismic reflectors (unconformities or stratal breaks with no discernible hiatuses) match the ages of global δ¹⁸O increases (inferred glacioeustatic lowerings) measured in deep-sea sites. These correlations confirm a causal link between coastal plain and slope sequence boundaries: both formed during global sea-level lowerings. The ages of New Jersey sequence boundaries and global δ¹⁸O increases also correlate well with the Exxon Production Research sea-level records of Haq et al. and Vail et al., validating and refining their compilations. 33 refs., 2 figs., 1 tab.

  12. Cell averaging Chebyshev methods for hyperbolic problems

    NASA Technical Reports Server (NTRS)

    Wei, Cai; Gottlieb, David; Harten, Ami

    1990-01-01

    A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations and present numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods.
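
    The transformation from pointwise values at Chebyshev collocation points to cell averages can be illustrated with standard polynomial machinery: interpolate the point values by a Chebyshev series, then integrate the interpolant over each cell. The Python sketch below does this with NumPy's Chebyshev utilities; it illustrates the idea of the transformation only and does not reproduce the specific formulas of the paper.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        N = 16
        # Chebyshev-Gauss-Lobatto collocation points on [-1, 1] and sample point values.
        x = np.cos(np.pi * np.arange(N + 1) / N)
        u = np.sin(np.pi * x)                      # example smooth field, sampled pointwise

        # Interpolating Chebyshev series through the collocation values.
        coeffs = C.chebfit(x, u, deg=N)

        # Cell averages over a set of cells covering [-1, 1]:
        # average over [a, b] = (1/(b-a)) * integral of the interpolant over [a, b].
        edges = np.linspace(-1.0, 1.0, 9)          # 8 cells
        antideriv = C.chebint(coeffs)              # indefinite integral of the series
        cell_avg = (C.chebval(edges[1:], antideriv) - C.chebval(edges[:-1], antideriv)) \
                   / np.diff(edges)
        print(cell_avg)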

  13. Educating American Students for Life in a Global Society. Policy Briefs: Education Reform. Volume 2, Number 4

    ERIC Educational Resources Information Center

    Lansford, Jennifer E.

    2002-01-01

    Progress in travel, technology, and other domains has contributed to the breaking down of barriers between countries and allowed for the development of an increasingly global society. International cooperation and competition are now pervasive in areas as diverse as business, science, arts, politics, and athletics. Educating students to navigate…

  14. Proceedings of the First National Workshop on the Global Weather Experiment: Current Achievements and Future Directions, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A summary of the proceedings in which the most important findings stemming from the Global Weather Experiment (GWE) are highlighted, additional key results and recommendations are covered, and the presentations and discussion are summarized. Detailed achievements, unresolved problems, and recommendations are included.

  15. Technical Report Series on Global Modeling and Data Assimilation, Volume 43. MERRA-2; Initial Evaluation of the Climate

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Bosilovich, Michael G.; Akella, Santha; Lawrence, Coy; Cullather, Richard; Draper, Clara; Gelaro, Ronald; Kovach, Robin; Liu, Qing; Molod, Andrea; Norris, Peter; Wargan, Krzysztof; Chao, Winston; Reichle, Rolf; Takacs, Lawrence; Todling, Ricardo; Vikhliaev, Yury; Bloom, Steve; Collow, Allison; Partyka, Gary; Labow, Gordon; Pawson, Steven; Reale, Oreste; Schubert, Siegfried; Suarez, Max

    2015-01-01

    The years since the introduction of MERRA have seen numerous advances in the GEOS-5 Data Assimilation System as well as a substantial decrease in the number of observations that can be assimilated into the MERRA system. To allow continued data processing into the future, and to take advantage of several important innovations that could improve system performance, a decision was made to produce MERRA-2, an updated retrospective analysis of the full modern satellite era. One of the many advances in MERRA-2 is a constraint on the global dry mass balance; this allows the global changes in water by the analysis increment to be near zero, thereby minimizing abrupt global interannual variations due to changes in the observing system. In addition, MERRA-2 includes the assimilation of interactive aerosols into the system, a feature of the Earth system absent from previous reanalyses. Also, in an effort to improve land surface hydrology, observations-corrected precipitation forcing is used instead of model-generated precipitation. Overall, MERRA-2 takes advantage of numerous updates to the global modeling and data assimilation system. In this document, we summarize an initial evaluation of the climate in MERRA-2, from the surface to the stratosphere and from the tropics to the poles. Strengths and weaknesses of the MERRA-2 climate are accordingly emphasized.

  16. Global end-diastolic volume index vs CVP goal-directed fluid resuscitation for COPD patients with septic shock: a randomized controlled trial.

    PubMed

    Yu, Jiangquan; Zheng, Ruiqiang; Lin, Hua; Chen, Qihong; Shao, Jun; Wang, Daxin

    2017-01-01

    This study aimed to investigate the clinical effects of early goal-directed therapy according to the global end-diastolic volume index (GEDI) on chronic obstructive pulmonary disease (COPD) patients with septic shock. A total of 71 COPD patients with septic shock were randomly assigned to 2 groups. In the control group (n = 37), fluid resuscitation was performed based on the central venous pressure. In the study group (n = 34), fluid resuscitation was performed until GEDI reached 800 mL/m(2). The following indices were observed for the 2 groups: 6- and 24-hour fluid volumes, norepinephrine dosage, 24-hour blood lactate clearance rate, duration of mechanical ventilation, intensive care unit (ICU) length of stay, ICU mortality, and 90-day survival rate. At both 6- and 24-hour measurements, the fluid volume was lower and norepinephrine dosage was higher in the control group than in the study group (P < .05). The blood lactate clearance rate was lower, the duration of mechanical ventilation was longer, and the length of stay in the ICU was longer in the control group than in the study group (P < .05). No significant difference in mortality or 90-day survival rate was found between the 2 groups. The GEDI goal-directed fluid resuscitation shows better clinical effects than those achieved with central venous pressure guidance for COPD patients with septic shock; however, it cannot reduce the mortality rate. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  18. Temperature averaging thermal probe

    NASA Technical Reports Server (NTRS)

    Kalil, L. F.; Reinhardt, V. (Inventor)

    1985-01-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  19. Temperature averaging thermal probe

    NASA Astrophysics Data System (ADS)

    Kalil, L. F.; Reinhardt, V.

    1985-12-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  20. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
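
    The abstract above describes a nested sequence of time and volume averaging operations, with correlations computed at the fine scale passed to the coarser scale as source terms. The Python sketch below outlines that control flow for a single coarsening step; the toy one-dimensional "solver", the block volume-averaging, and the function names are hypothetical stand-ins for illustration, not the actual DNS and coarse-grid machinery.

        import numpy as np

        def volume_average(field, factor=2):
            """Block-average a 1-D field onto a grid coarser by 'factor' (illustrative)."""
            n = field.size // factor * factor
            return field[:n].reshape(-1, factor).mean(axis=1)

        def dma_level(u, n_steps, dt, advance):
            """Run a fine-scale solver for a short time with running time averaging,
            then return the volume-averaged field and the correlation u'u' that would
            act as a source term on the next coarser level."""
            u_sum = np.zeros_like(u)
            uu_sum = np.zeros_like(u)
            for _ in range(n_steps):
                u = advance(u, dt)            # one fine-scale time step (DNS stand-in)
                u_sum += u
                uu_sum += u * u
            u_avg = u_sum / n_steps
            correlation = uu_sum / n_steps - u_avg * u_avg   # time-averaging correlation
            return volume_average(u_avg), volume_average(correlation)

        # Toy "solver": explicit diffusion of a periodic 1-D field, just to make the
        # skeleton executable; the real method would use a 3-D Navier-Stokes DNS.
        def advance(u, dt):
            return u + dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

        u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)) + 0.1 * np.random.randn(64)
        coarse_u, coarse_corr = dma_level(u, n_steps=200, dt=0.2, advance=advance)
        # 'coarse_corr' would be added as a source term on the next (coarser, larger-dt) level.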

  1. Validation of Noninvasive Indices Of Global Systolic Function in Patients with Normal and Abnormal Loading Conditions: A Simultaneous Echocardiography Pressure-Volume Catheterization Study

    PubMed Central

    Yotti, Raquel; Bermejo, Javier; Benito, Yolanda; Sanz, Ricardo; Ripoll, Cristina; Martínez-Legazpi, Pablo; Pérez del Villar, Candela; Elízaga, Jaime; González-Mansilla, Ana; Barrio, Alicia; Bañares, Rafael; Fernández-Avilés, Francisco

    2014-01-01

    Background: Noninvasive indices based on Doppler-echocardiography are increasingly used in clinical cardiovascular research to evaluate LV global systolic chamber function. Our objectives were 1) to clinically validate ultrasound-based methods of global systolic chamber function to account for differences between patients in conditions of abnormal load, and 2) to assess their sensitivity to load confounders. Methods and Results: Twenty-seven patients (8 dilated cardiomyopathy, 10 normal ejection fraction [EF], and 9 end-stage liver disease) underwent simultaneous echocardiography and left heart catheterization with pressure-conductance instrumentation. The reference index, maximal elastance (Emax), was calculated from pressure-volume loop data obtained during acute inferior vena cava occlusion. A wide range of values was observed for LV systolic chamber function (Emax: 2.8 ± 1.0 mmHg/ml), preload, and afterload. Amongst the noninvasive indices tested, the peak ejection intraventricular pressure difference (peak-EIVPD) showed the best correlation with Emax (R=0.75). A significant but weaker correlation with Emax was observed for EF (R=0.41), mid-wall fractional shortening (R=0.51), global circumferential strain (R=−0.53), and strain-rate (R=−0.46). Longitudinal strain and strain-rate failed to correlate with Emax, as did noninvasive single-beat estimations of this index. Principal component and multiple regression analyses demonstrated that peak-EIVPD was less sensitive to load, whereas EF and longitudinal strain and strain-rate were heavily influenced by afterload. Conclusions: Current ultrasound methods have limited accuracy to characterize global LV systolic chamber function in a given patient. The Doppler-derived peak-EIVPD should be preferred for this purpose because it best correlates with the reference index and is more robust in conditions of abnormal load. PMID:24173273

  2. Technical Report Series on Global Modeling and Data Assimilation. Volume 20; The Climate of the FVCCM-3 Model

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chang, Yehui; Schubert, Siegfried D.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2001-01-01

    This document describes the climate of version 1 of the NASA-NCAR model developed at the Data Assimilation Office (DAO). The model consists of a new finite-volume dynamical core and an implementation of the NCAR climate community model (CCM-3) physical parameterizations. The version of the model examined here was integrated at a resolution of 2 degrees latitude by 2.5 degrees longitude and 32 levels. The results are based on assimilation that was forced with observed sea surface temperature and sea ice for the period 1979-1995, and are compared with NCEP/NCAR reanalyses and various other observational data sets. The results include an assessment of seasonal means, subseasonal transients including the Madden Julian Oscillation, and interannual variability. The quantities include zonal and meridional winds, temperature, specific humidity, geopotential height, stream function, velocity potential, precipitation, sea level pressure, and cloud radiative forcing.

  3. Technical Report Series on Global Modeling and Data Assimilation. Volume 13; Interannual Variability and Potential Predictability in Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Min, Wei; Schubert, Siegfried D.; Suarez, Max J. (Editor)

    1997-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center and the National Center for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) have produced multi-year global assimilations of historical data employing fixed analysis systems. These "reanalysis" products are ideally suited for studying short-term climatic variations. The availability of multiple reanalysis products also provides the opportunity to examine the uncertainty in the reanalysis data. The purpose of this document is to provide an updated estimate of seasonal and interannual variability based on the DAO and NCEP/NCAR reanalyses for the 15-year period 1980-1995. Intercomparisons of the seasonal means and their interannual variations are presented for a variety of prognostic and diagnostic fields. In addition, atmospheric potential predictability is re-examined employing selected DAO reanalysis variables.

  4. Surgical volume and postoperative mortality rate at a referral hospital in Western Uganda: Measuring the Lancet Commission on Global Surgery indicators in low-resource settings.

    PubMed

    Anderson, Geoffrey A; Ilcisin, Lenka; Abesiga, Lenard; Mayanja, Ronald; Portal Benetiz, Noralis; Ngonzi, Joseph; Kayima, Peter; Shrime, Mark G

    2017-06-01

    The Lancet Commission on Global Surgery recommends that every country report its surgical volume and postoperative mortality rate. Little is known, however, about the numbers of operations performed and the associated postoperative mortality rate in low-income countries or how to best collect these data. For one month, every patient who underwent an operation at a referral hospital in western Uganda was observed. These patients and their outcomes were followed until discharge. Prospective data were compared with data obtained from logbooks and patient charts to determine the validity of using retrospective methods for collecting these metrics. Surgical volume at this regional hospital in Uganda is 8,515 operations/y, compared to 4,000 operations/y reported in the only other published data. The postoperative mortality rate at this hospital is 2.4%, similar to other hospitals in low-income countries. Finding patient files in the medical records department was time consuming and yielded only 62% of the files. Furthermore, a comparison of missing versus found charts revealed that the missing charts were significantly different from the found charts. Logbooks, on the other hand, captured 99% of the operations and 94% of the deaths. Our results describe a simple, reproducible, accurate, and inexpensive method for collection of the Lancet Commission on Global Surgery variables using logbooks that already exist in most hospitals in low-income countries. While some have suggested using risk-adjusted postoperative mortality rate as a more equitable variable, our data suggest that only a limited amount of risk adjustment is possible given the limited available data. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Variation of left ventricular outflow tract velocity and global end-diastolic volume index reliably predict fluid responsiveness in cardiac surgery patients.

    PubMed

    Broch, Ole; Renner, Jochen; Gruenewald, Matthias; Meybohm, Patrick; Höcker, Jan; Schöttler, Jan; Steinfath, Markus; Bein, Berthold

    2012-06-01

    The ability of the global end-diastolic volume index (GEDVI) and respiratory variations in left ventricular outflow tract velocity (ΔVTI(LVOT)) for prediction of fluid responsiveness is still under debate. The aim of the present study was to challenge the predictive power of GEDVI and ΔVTI(LVOT) compared with pulse pressure variation (PPV) and stroke volume variation (SVV) in a large patient population. Ninety-two patients were studied before coronary artery surgery. Each patient was monitored with central venous pressure (CVP), the PiCCO system (Pulsion Medical Systems, Munich, Germany), and transesophageal echocardiography. Responders were defined as those who increased their stroke volume index by greater than 15% (ΔSVI(TPTD) >15%) during passive leg raising. Central venous pressure showed no significant correlation with ΔSVI(TPTD) (r = -0.06, P = .58), in contrast to PPV (r = 0.71, P < .0001), SVV (r = 0.61, P < .0001), GEDVI (r = -0.54, P < .0001), and ΔVTI(LVOT) (r = 0.54, P < .0001). The best area under the receiver operating characteristic curve (AUC) predicting ΔSVI(TPTD) greater than 15% was found for PPV (AUC, 0.82; P < .0001) and SVV (AUC, 0.77; P < .0001), followed by ΔVTI(LVOT) (AUC, 0.74; P < .0001) and GEDVI (AUC, 0.71; P = .0006), whereas CVP was not able to predict fluid responsiveness (AUC, 0.58; P = .18). In contrast to CVP, GEDVI and ΔVTI(LVOT) reliably predicted fluid responsiveness under closed-chest conditions. Pulse pressure variation and SVV showed the highest accuracy. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Global transcriptomic profiling using small volumes of whole blood: a cost-effective method for translational genomic biomarker identification in small animals.

    PubMed

    Fricano, Meagan M; Ditewig, Amy C; Jung, Paul M; Liguori, Michael J; Blomme, Eric A G; Yang, Yi

    2011-01-01

    Blood is an ideal tissue for the identification of novel genomic biomarkers for toxicity or efficacy. However, using blood for transcriptomic profiling presents significant technical challenges due to the transcriptomic changes induced by ex vivo handling and the interference of highly abundant globin mRNA. Most whole blood RNA stabilization and isolation methods also require significant volumes of blood, limiting their effective use in small animal species, such as rodents. To overcome these challenges, a QIAzol-based RNA stabilization and isolation method (QSI) was developed to isolate sufficient amounts of high quality total RNA from 25 to 500 μL of rat whole blood. The method was compared to the standard PAXgene Blood RNA System using blood collected from rats exposed to saline or lipopolysaccharide (LPS). The QSI method yielded an average of 54 ng total RNA per μL of rat whole blood with an average RNA Integrity Number (RIN) of 9, a performance comparable with the standard PAXgene method. Total RNA samples were further processed using the NuGEN Ovation Whole Blood Solution system and cDNA was hybridized to Affymetrix Rat Genome 230 2.0 Arrays. The microarray QC parameters using RNA isolated with the QSI method were within the acceptable range for microarray analysis. The transcriptomic profiles were highly correlated with those using RNA isolated with the PAXgene method and were consistent with expected LPS-induced inflammatory responses. The present study demonstrated that the QSI method coupled with NuGEN Ovation Whole Blood Solution system is cost-effective and particularly suitable for transcriptomic profiling of minimal volumes of whole blood, typical of those obtained with small animal species.

  7. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  8. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP ENTITLED "GLOBAL ANALYSIS OF POLARIZED PARTON DISTRIBUTIONS IN THE RHIC ERA" (VOLUME 86).

    SciTech Connect

    DESHPANDE,A.; VOGELSANG, W.

    2007-10-08

    The determination of the polarized gluon distribution is a central goal of the RHIC spin program. Recent achievements in polarization and luminosity of the proton beams in RHIC have enabled the RHIC experiments to acquire substantial amounts of high quality data with polarized proton beams at 200 and 62.4 GeV center of mass energy, allowing a first glimpse of the polarized gluon distribution at RHIC. Short test operation at 500 GeV center of mass energy has also been successful, indicating the absence of any fundamental roadblocks for measurements of polarized quark and anti-quark distributions planned at that energy in a couple of years. With this background, it has now become high time to consider how all these data sets may be employed most effectively to determine the polarized parton distributions in the nucleon, in general, and the polarized gluon distribution, in particular. A global analysis of the polarized DIS data from the past and present fixed target experiments jointly with the present and anticipated RHIC Spin data is needed.

  9. GLOBECOM '87 - Global Telecommunications Conference, Tokyo, Japan, Nov. 15-18, 1987, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on global telecommunications discusses topics in the fields of Integrated Services Digital Network (ISDN) technology field trial planning and results to date, motion video coding, ISDN networking, future network communications security, flexible and intelligent voice/data networks, Asian and Pacific lightwave and radio systems, subscriber radio systems, the performance of distributed systems, signal processing theory, satellite communications modulation and coding, and terminals for the handicapped. Also discussed are knowledge-based technologies for communications systems, future satellite transmissions, high quality image services, novel digital signal processors, broadband network access interface, traffic engineering for ISDN design and planning, telecommunications software, coherent optical communications, multimedia terminal systems, advanced speed coding, portable and mobile radio communications, multi-Gbit/second lightwave transmission systems, enhanced capability digital terminals, communications network reliability, advanced antimultipath fading techniques, undersea lightwave transmission, image coding, modulation and synchronization, adaptive signal processing, integrated optical devices, VLSI technologies for ISDN, field performance of packet switching, CSMA protocols, optical transport system architectures for broadband ISDN, mobile satellite communications, indoor wireless communication, echo cancellation in communications, and distributed network algorithms.

  10. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In the last years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which has been originally developed for Darcy-scale application. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  11. The ACS Nearby Galaxy Survey Treasury. VIII. The Global Star Formation Histories of 60 Dwarf Galaxies in the Local Volume

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Dalcanton, Julianne J.; Williams, Benjamin F.; Gilbert, Karoline M.; Skillman, Evan D.; Seth, Anil C.; Dolphin, Andrew E.; McQuinn, Kristen B. W.; Gogarten, Stephanie M.; Holtzman, Jon; Rosema, Keith; Cole, Andrew; Karachentsev, Igor D.; Zaritsky, Dennis

    2011-09-01

    We present uniformly measured star formation histories (SFHs) of 60 nearby (D <~ 4 Mpc) dwarf galaxies based on color-magnitude diagrams of resolved stellar populations from images taken with the Hubble Space Telescope and analyzed as part of the ACS Nearby Galaxy Survey Treasury program (ANGST). This volume-limited sample contains 12 dwarf spheroidal (dSph)/dwarf elliptical (dE), 5 dwarf spiral, 28 dwarf irregular (dI), 12 dSph/dI (transition), and 3 tidal dwarf galaxies. The sample spans a range of ~10 mag in MB and covers a wide range of environments, from highly interacting to truly isolated. From the best-fit SFHs, we find three significant results for dwarf galaxies in the ANGST volume: (1) the majority of dwarf galaxies formed the bulk of their mass prior to z ~ 1, regardless of current morphological type; (2) the mean SFHs of dIs, transition dwarf galaxies (dTrans), and dSphs are similar over most of cosmic time, and only begin to diverge a few Gyr ago, with the clearest differences between the three appearing during the most recent 1 Gyr and (3) the SFHs are complex and the mean values are inconsistent with simple SFH models, e.g., single bursts, constant star formation rates (SFRs), or smooth, exponentially declining SFRs. The mean SFHs show clear divergence from the cosmic SFH at z <~ 0.7, which could be evidence that low-mass systems have experienced delayed star formation relative to more massive galaxies. The sample shows a strong density-morphology relationship, such that the dSphs in the sample are less isolated than the dIs. We find that the transition from a gas-rich to gas-poor galaxy cannot be solely due to internal mechanisms such as stellar feedback, and instead is likely the result of external mechanisms, e.g., ram pressure and tidal stripping and tidal forces. In terms of their environments, SFHs, and gas fractions, the majority of the dTrans appear to be low-mass dIs that simply lack Hα emission, similar to Local Group (LG) dTrans DDO 210

  12. Technical Report Series on Global Modeling and Data Assimilation. Volume 14; A Comparison of GEOS Assimilated Data with FIFE Observations

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Suarez, Max J. (Editor); Schubert, Siegfried D.

    1998-01-01

    First ISLSCP Field Experiment (FIFE) observations have been used to validate the near-surface properties of various versions of the Goddard Earth Observing System (GEOS) Data Assimilation System. The site-averaged FIFE data set extends from May 1987 through November 1989, allowing the investigation of several time scales, including the annual cycle, daily means and diurnal cycles. Furthermore, the development of the daytime convective planetary boundary layer is presented for several days. Monthly variations of the surface energy budget during the summer of 1988 demonstrate the effect of the prescribed surface soil wetness boundary conditions. GEOS data comes from the first frozen version of the assimilation system (GEOS-1 DAS) and two experimental versions of GEOS (v. 2.0 and 2.1) with substantially greater vertical resolution and other changes that influence the boundary layer. This report provides a baseline for future versions of the GEOS data assimilation system that will incorporate a state-of-the-art land surface parameterization. Several suggestions are proposed to improve the generality of future comparisons. These include the use of more diverse field experiment observations and an estimate of gridpoint heterogeneity from the new land surface parameterization.

  13. Incorporating global warming risks in power sector planning: A case study of the New England region. Volume 1

    SciTech Connect

    Krause, F.; Busch, J.; Koomey, J.

    1992-11-01

    Growing international concern over the threat of global climate change has led to proposals to buy insurance against this threat by reducing emissions of carbon (short for carbon dioxide) and other greenhouse gases below current levels. Concern over these and other, non-climatic environmental effects of electricity generation has led a number of states to adopt or explore new mechanisms for incorporating environmental externalities in utility resource planning. For example, the New York and Massachusetts utility commissions have adopted monetized surcharges (or adders) to induce emission reductions of federally regulated air pollutants (notably, SO2, NOx, and particulates) beyond federally mandated levels. These regulations also include preliminary estimates of the cost of reducing carbon emissions, for which no federal regulations exist at this time. Within New England, regulators and utilities have also held several workshops and meetings to discuss alternative methods of incorporating externalities as well as the feasibility of regional approaches. This study examines the potential for reduced carbon emissions in the New England power sector as well as the cost and rate impacts of two policy approaches: environmental externality surcharges and a target-based approach. We analyze the following questions: Does New England have sufficient low-carbon resources to achieve significant reductions (10% to 20% below current levels) in fossil carbon emissions in its utility sector? What reductions could be achieved at a maximum? What is the expected cost of carbon reductions as a function of the reduction goal? How would carbon reduction strategies affect electricity rates? How effective are environmental externality cost surcharges as an instrument in bringing about carbon reductions? To what extent could the minimization of total electricity costs alone result in carbon reductions relative to conventional resource plans?

  14. Impact of the seasonal cycle on the decadal predictability of the North Atlantic volume and heat transport under global warming

    NASA Astrophysics Data System (ADS)

    Fischer, Matthias; Müller, Wolfgang A.; Domeisen, Daniela I. V.; Baehr, Johanna

    2016-04-01

    latitude dependence is similar to changes in the seasonal cycle that shows continuous and more robust changes until the 23rd century in RCP8.5. Longterm changes in the seasonal cycle can be related to changes in the surface wind stress and the associated Ekman transport, that is the main driver of the AMOC's and OHT's seasonal variability. Overall, the results show an impact of changes in the seasonal cycle on the decadal predictability of the AMOC and the OHT under global warming.

  15. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaitre-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

  16. Improvements in Dynamic GPS Positions Using Track Averaging

    DTIC Science & Technology

    1999-08-01

    Improvement of the Global Positioning System (GPS) Precise Positioning System (PPS) solution under dynamic conditions through averaging is investigated.

  17. Averaging and Globalising Quotients of Informetric and Scientometric Data.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1996-01-01

    Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
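
    The distinction drawn above between an average of quotients and a global average (a quotient of averages) is easy to make concrete; the numbers below are invented purely for illustration.

        # Toy illustration (invented numbers): impact factors for one subject
        # category, computed two ways.
        citations = [900, 120, 30]     # citations to each journal's recent articles
        articles  = [300, 100, 60]     # citable articles in each journal

        # Average of quotients: mean of the individual journal impact factors.
        avg_of_quotients = sum(c / a for c, a in zip(citations, articles)) / len(articles)

        # Global average (quotient of averages): pool citations and articles first.
        quotient_of_averages = sum(citations) / sum(articles)

        print(round(avg_of_quotients, 3))      # 1.567
        print(round(quotient_of_averages, 3))  # 2.283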

  19. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 2. Requirements Compilation and Analysis. Part 2. Functional Description

    DTIC Science & Technology

    1976-03-01

    Final System/Subsystem Summary Report, Volume 2: Requirements Compilation and Analysis, Part 2 - Functional Description. Companion volumes include Volume 4 - Systems Analysis and Trade Studies; Volume 5 - System Description; Volume 6 - Aerospace Ground Equipment Plan.

  20. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
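
    As a point of reference, a minimal sketch of the default estimator mentioned above, a trailing 5-year moving average applied to annual inventory values (the values themselves are hypothetical):

        # Sketch of a 5-year moving-average estimator for annual inventory data
        # (illustrative values; a real application would weight by plot counts, etc.).
        def moving_average(values, window=5):
            """Trailing moving average; returns None until a full window is available."""
            out = []
            for i in range(len(values)):
                if i + 1 < window:
                    out.append(None)
                else:
                    out.append(sum(values[i + 1 - window:i + 1]) / window)
            return out

        annual_volume = [102.0, 98.5, 101.2, 99.8, 103.4, 104.1, 102.9]  # hypothetical
        print(moving_average(annual_volume))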

  1. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  2. Volcanoes and global catastrophes

    NASA Technical Reports Server (NTRS)

    Simkin, Tom

    1988-01-01

    The search for a single explanation for global mass extinctions has led to polarization and the controversies that are often fueled by widespread media attention. The historic record shows a roughly linear log-log relation between the frequency of explosive volcanic eruptions and the volume of their products. Eruptions such as Mt. St. Helens 1980 produce on the order of 1 cu km of tephra, destroying life over areas in the 10 to 100 sq km range, and take place, on the average, once or twice a decade. Eruptions producing 10 cu km take place several times a century and, like Krakatau 1883, destroy life over 100 to 1000 sq km areas while producing clear global atmospheric effects. Eruptions producing 10,000 cu km are known from the Quaternary record, and extrapolation from the historic record suggests that they occur perhaps once in 20,000 years, but none has occurred in historic time and little is known of their biologic effects. Even larger eruptions must also exist in the geologic record, but documentation of their volume becomes increasingly difficult as their age increases. The conclusion is inescapable that prehistoric eruptions have produced catastrophes on a global scale: only the magnitude of the associated mortality is in question. Differentiation of large magma chambers is on a time scale of thousands to millions of years, and explosive volcanoes are clearly concentrated in narrow belts near converging plate margins. Volcanism cannot be dismissed as a producer of global catastrophes. Its role in major extinctions is likely to be at least contributory and may well be large. More attention should be paid to global effects of the many huge eruptions in the geologic record that dwarf those known in historic time.
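
    The roughly linear log-log frequency-volume relation described above can be checked against the order-of-magnitude anchor points quoted in the abstract; the short fit below treats them as three data points and nothing more.

        import math

        # Order-of-magnitude anchor points read off the abstract:
        # ~1 km^3 eruptions once or twice a decade, ~10 km^3 several times a
        # century, ~10,000 km^3 perhaps once in 20,000 years.
        volume_km3    = [1.0, 10.0, 1.0e4]
        events_per_yr = [1.5 / 10, 3.0 / 100, 1.0 / 20000]

        x = [math.log10(v) for v in volume_km3]
        y = [math.log10(f) for f in events_per_yr]

        # Least-squares slope/intercept of log10(frequency) vs log10(volume).
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
        intercept = ym - slope * xm
        # Slope of about -0.9: frequency falls off roughly as a power of the volume.
        print(round(slope, 2), round(intercept, 2))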

  3. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
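
    The situation described, where the arithmetic average of rates differs from the true average rate (the harmonic mean), is captured by the classic equal-distance example below; the numbers are invented.

        # Classic example of the ideas described above (numbers invented):
        # a car drives two equal 60 km legs, one at 30 km/h and one at 60 km/h.
        speeds = [30.0, 60.0]
        leg_km = 60.0

        arithmetic_mean = sum(speeds) / len(speeds)                # 45 km/h (naive average of rates)

        total_distance = leg_km * len(speeds)                      # 120 km
        total_time = sum(leg_km / s for s in speeds)               # 2 h + 1 h = 3 h
        average_rate = total_distance / total_time                 # 40 km/h (true average rate)

        harmonic_mean = len(speeds) / sum(1.0 / s for s in speeds) # also 40 km/h
        print(arithmetic_mean, average_rate, harmonic_mean)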

  5. Global Health Observatory (GHO): Life Expectancy

    MedlinePlus

    The global population aged 60 years could expect to live another 20 years on average in 2015.

  6. High average power Pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  7. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  8. Influence of Type 2 Diabetes on Brain Volumes and Changes in Brain Volumes

    PubMed Central

    Espeland, Mark A.; Bryan, R. Nick; Goveas, Joseph S.; Robinson, Jennifer G.; Siddiqui, Mustafa S.; Liu, Simin; Hogan, Patricia E.; Casanova, Ramon; Coker, Laura H.; Yaffe, Kristine; Masaki, Kamal; Rossom, Rebecca; Resnick, Susan M.

    2013-01-01

    OBJECTIVE To study how type 2 diabetes adversely affects brain volumes, changes in volume, and cognitive function. RESEARCH DESIGN AND METHODS Regional brain volumes and ischemic lesion volumes in 1,366 women, aged 72–89 years, were measured with structural brain magnetic resonance imaging (MRI). Repeat scans were collected an average of 4.7 years later in 698 women. Cross-sectional differences and changes with time between women with and without diabetes were compared. Relationships that cognitive function test scores had with these measures and diabetes were examined. RESULTS The 145 women with diabetes (10.6%) at the first MRI had smaller total brain volumes (0.6% less; P = 0.05) and smaller gray matter volumes (1.5% less; P = 0.01) but not white matter volumes, both overall and within major lobes. They also had larger ischemic lesion volumes (21.8% greater; P = 0.02), both overall and in gray matter (27.5% greater; P = 0.06), in white matter (18.8% greater; P = 0.02), and across major lobes. Overall, women with diabetes had slightly (nonsignificant) greater loss of total brain volumes (3.02 cc; P = 0.11) and significant increases in total ischemic lesion volumes (9.7% more; P = 0.05) with time relative to those without diabetes. Diabetes was associated with lower scores in global cognitive function and its subdomains. These relative deficits were only partially accounted for by brain volumes and risk factors for cognitive deficits. CONCLUSIONS Diabetes is associated with smaller brain volumes in gray but not white matter and increasing ischemic lesion volumes throughout the brain. These markers are associated with but do not fully account for diabetes-related deficits in cognitive function. PMID:22933440

  9. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
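
    The non-uniformity noted above (a GPS satellite being several times as likely to be directly overhead near the +/-55 degree inclination limit as near the equator) can be reproduced with a quick Monte Carlo of a single circular orbit. This is a back-of-the-envelope sketch, not the semi-analytic dwell-time tool described in the record.

        import math
        import random

        # A satellite in a circular orbit inclined at 55 deg (the nominal GPS
        # inclination) has a uniform-in-time argument of latitude u, and its
        # latitude satisfies sin(lat) = sin(i) * sin(u).
        random.seed(1)
        inclination = math.radians(55.0)
        samples = 500_000
        lats = []
        for _ in range(samples):
            u = random.uniform(0.0, 2.0 * math.pi)
            lats.append(math.degrees(math.asin(math.sin(inclination) * math.sin(u))))

        def overhead_likelihood(lat_deg, half_width=2.5):
            """Dwell fraction in a latitude band, divided by the band's share of
            the sphere's surface area (proportional to cos(latitude))."""
            lo, hi = lat_deg - half_width, lat_deg + half_width
            frac = sum(lo <= lat <= hi for lat in lats) / samples
            return frac / math.cos(math.radians(lat_deg))

        # Tierra del Fuego (~54 S) comes out roughly four to five times higher
        # than Hawaii (~21 N) or Singapore (~1 N).
        for name, lat in (("Tierra del Fuego", -54.0), ("Hawaii", 21.0), ("Singapore", 1.0)):
            print(f"{name:17s} {overhead_likelihood(lat):.3f}")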

  10. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.

  13. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  14. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  17. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  18. Infusing a Global Perspective into the Study of Agriculture: Student Activities. Volume 1. Developed by the National Task Force on International Agricultural Education.

    ERIC Educational Resources Information Center

    Martin, Robert A., Ed.

    The need to develop an awareness of the global nature of the agriculture industry is one of the major issues that students must begin to understand. A packet of instructional materials was developed to help teachers infuse a global perspective into units of instruction about agriculture and related topics. This document offers a series of…

  19. Determining water reservoir characteristics with global elevation data

    NASA Astrophysics Data System (ADS)

    Bemmelen, C. W. T.; Mann, M.; Ridder, M. P.; Rutten, M. M.; Giesen, N. C.

    2016-11-01

    Quantification of human impact on water, sediment, and nutrient fluxes at the global scale demands characterization of reservoirs with an accuracy that is presently unavailable. This letter presents a new method, based on virtual dam placement, to make accurate estimations of area-volume relationships of large reservoirs, using solely readily available elevation data. The new method is based on regional similarity of area-volume relationships. The essence of the method is that virtual reservoirs are created in the vicinity of an existing reservoir to derive area-volume relationships for the existing reservoir. The derived area-volume relationships reproduced in situ bathymetric data well. An intercomparison for twelve reservoirs resulted in an average R2 = 0.93. This is a significant improvement on estimates using the best existing global regression model, which gives R2 = 0.54 for the same set of reservoirs.
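
    The core of such an area-volume estimate, flooding a digital elevation model behind a dam crest and integrating depth over the flooded cells, can be sketched as below. The DEM is synthetic, and the basin-delineation and virtual-dam-placement steps of the published method are omitted.

        import numpy as np

        # Sketch: area-volume curve for a reservoir behind a (virtual) dam, from a
        # digital elevation model. Toy DEM; the published method additionally places
        # many virtual reservoirs nearby and pools their curves, which is omitted here.
        rng = np.random.default_rng(0)
        cell_area_m2 = 30.0 * 30.0                      # e.g. 30 m grid cells
        dem = 100.0 + 40.0 * rng.random((200, 200))     # elevations in metres (toy data)

        def area_volume(dem, water_level, cell_area_m2):
            flooded = dem < water_level
            depth = np.where(flooded, water_level - dem, 0.0)
            area_m2 = flooded.sum() * cell_area_m2
            volume_m3 = depth.sum() * cell_area_m2
            return area_m2, volume_m3

        for level in (110.0, 120.0, 130.0):
            a, v = area_volume(dem, level, cell_area_m2)
            print(level, round(a / 1e6, 2), "km2", round(v / 1e6, 2), "hm3")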

  20. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
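
    The disk-averaging step itself (as opposed to the radiative transfer that produces the resolved spectra) amounts to a weighted mean of the spatially resolved spectra, with each surface element weighted by its projected area toward the observer. A hedged sketch with stand-in numbers:

        import numpy as np

        # Sketch of disk averaging: combine spatially resolved spectra into a single
        # disk-averaged spectrum, weighting each surface element by its projected
        # area toward the observer (cosine of the emission angle). Toy inputs only.
        rng = np.random.default_rng(0)
        n_pixels, n_wavelengths = 500, 50
        resolved_spectra = rng.random((n_pixels, n_wavelengths))   # stand-in radiances
        emission_angle = np.deg2rad(rng.uniform(0.0, 90.0, n_pixels))

        weights = np.cos(emission_angle)            # projected-area weighting
        weights /= weights.sum()

        disk_averaged = weights @ resolved_spectra  # shape: (n_wavelengths,)
        print(disk_averaged.shape)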

  1. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  2. Disk-Averaged Synthetic Spectra of Mars

    NASA Astrophysics Data System (ADS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  4. Spatial moving average risk smoothing.

    PubMed

    Botella-Rocamora, P; López-Quílez, A; Martinez-Beneito, M A

    2013-07-10

    This paper introduces spatial moving average risk smoothing (SMARS) as a new way of carrying out disease mapping. This proposal applies the moving average ideas of time series theory to the spatial domain, making use of a spatial moving average process of unknown order to define dependence on the risk of a disease occurring. Correlation of the risks for different locations will be a function of m values (m being unknown), providing a rich class of correlation functions that may be reproduced by SMARS. Moreover, the distance (in terms of neighborhoods) that should be covered for two units to be found to make the correlation of their risks 0 is a quantity to be fitted by the model. This way, we reproduce patterns that range from spatially independent to long-range spatially dependent. We will also show a theoretical study of the correlation structure induced by SMARS, illustrating the wide variety of correlation functions that this proposal is able to reproduce. We will also present three applications of SMARS to both simulated and real datasets. These applications will show SMARS to be a competitive disease mapping model when compared with alternative proposals that have already appeared in the literature. Finally, the application of SMARS to the study of mortality for 21 causes of death in the Comunitat Valenciana will allow us to identify some qualitative differences in the patterns of those diseases. Copyright © 2012 John Wiley & Sons, Ltd.
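
    A construction in the spirit of a spatial moving-average risk model can be sketched as follows: each area's log relative risk is an average of independent latent effects on areas within m neighbourhood orders of it, so two areas are correlated only while they still share latent effects. The lattice, weights and order used here are invented and are not the published SMARS specification.

        import numpy as np

        rng = np.random.default_rng(0)
        side, m = 10, 2
        n = side * side

        # Rook adjacency of a side x side lattice of areas.
        adj = np.zeros((n, n), dtype=bool)
        for r in range(side):
            for c in range(side):
                i = r * side + c
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < side and 0 <= cc < side:
                        adj[i, rr * side + cc] = True

        # Neighbourhood order between every pair of areas (0 = same area,
        # 1 = adjacent, 2 = neighbour of a neighbour, ...), up to order m.
        order = np.full((n, n), m + 1)               # sentinel: "further than m"
        reach = np.eye(n, dtype=bool)
        for k in range(m + 1):
            order[reach & (order > k)] = k
            reach = reach | ((reach.astype(int) @ adj.astype(int)) > 0)

        # Equal weights over all areas within order m (a simple choice), then
        # smooth independent latent effects with them.
        weights = (order <= m).astype(float)
        weights /= weights.sum(axis=1, keepdims=True)
        latent = rng.normal(size=n)
        log_relative_risk = weights @ latent
        print(np.exp(log_relative_risk)[:5])         # smoothed relative risks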

  5. Volcanic Signatures in Estimates of Stratospheric Aerosol Size, Distribution Width, Surface Area, and Volume Deduced from Global Satellite-Based Observations

    NASA Technical Reports Server (NTRS)

    Bauman, J. J.; Russell, P. B.

    2000-01-01

    Volcanic signatures in the stratospheric aerosol layer are revealed by two independent techniques which retrieve aerosol information from global satellite-based observations of particulate extinction. Both techniques combine the 4-wavelength Stratospheric Aerosol and Gas Experiment (SAGE) II extinction measurements (0.385 ≤ λ ≤ 1.02 µm) with the 7.96 µm and 12.82 µm extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument. The algorithms use the SAGE II/CLAES composite extinction spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R_eff, surface area S, volume V and size distribution width σ_g. The first technique is a multi-wavelength Look-Up-Table (LUT) algorithm which retrieves values and uncertainties of R_eff by comparing ratios of extinctions from SAGE II and CLAES (e.g., E_λ/E_1.02) to pre-computed extinction ratios which are based on a range of unimodal lognormal size distributions. The pre-computed ratios are presented as a function of R_eff for a given σ_g; thus the comparisons establish the range of R_eff consistent with the measured spectra for that σ_g. The fact that no solutions are found for certain σ_g values provides information on the acceptable range of σ_g, which is found to evolve in response to volcanic injections and removal periods. Analogous comparisons using absolute extinction spectra and error bars establish the range of S and V. The second technique is a Parameter Search Technique (PST) which estimates R_eff and σ_g within a month-latitude-altitude bin by minimizing the chi-squared values obtained by comparing the SAGE II/CLAES extinction spectra and error bars with spectra calculated by varying the lognormal fitting parameters: R_eff, σ_g, and the total number of particles N_0. For both techniques, possible biases in
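
    The look-up-table comparison at the centre of the first technique reduces to a simple interval search once the table exists: find the effective radii whose predicted extinction ratio falls within the measured ratio plus or minus its uncertainty, and report no solution otherwise. The sketch below uses a made-up monotonic table in place of the Mie-based one, and invented measurement values.

        import numpy as np

        # Sketch of the look-up-table (LUT) matching step. The table of extinction
        # ratios vs effective radius for a fixed distribution width would normally
        # come from Mie calculations; here it is a made-up monotonic curve purely
        # to show the retrieval logic.
        r_eff_grid = np.linspace(0.05, 1.0, 200)             # microns
        lut_ratio = 0.2 + 1.5 * r_eff_grid                   # placeholder ratio curve

        measured_ratio, measured_err = 0.95, 0.08            # illustrative values

        consistent = np.abs(lut_ratio - measured_ratio) <= measured_err
        if consistent.any():
            r_lo, r_hi = r_eff_grid[consistent][[0, -1]]
            print(f"R_eff consistent with the measurement: {r_lo:.2f}-{r_hi:.2f} um")
        else:
            # No solution for this width: this sigma_g is excluded by the data.
            print("no R_eff solution for this distribution width")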

  7. Global distributions of CO2 volume mixing ratio in the middle and upper atmosphere from daytime MIPAS high-resolution spectra

    NASA Astrophysics Data System (ADS)

    Aythami Jurado-Navarro, Á.; López-Puertas, Manuel; Funke, Bernd; García-Comas, Maya; Gardini, Angela; González-Galindo, Francisco; Stiller, Gabriele P.; von Clarmann, Thomas; Grabowski, Udo; Linden, Andrea

    2016-12-01

    Global distributions of the CO2 vmr (volume mixing ratio) in the mesosphere and lower thermosphere (from 70 up to ˜ 140 km) have been derived from high-resolution limb emission daytime MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) spectra in the 4.3 µm region. This is the first time that the CO2 vmr has been retrieved in the 120-140 km range. The data set spans from January 2005 to March 2012. The retrieval of CO2 has been performed jointly with the elevation pointing of the line of sight (LOS) by using a non-local thermodynamic equilibrium (non-LTE) retrieval scheme. The non-LTE model incorporates the new vibrational-vibrational and vibrational-translational collisional rates recently derived from the MIPAS spectra by Jurado-Navarro et al. (2015). It also takes advantage of simultaneous MIPAS measurements of other atmospheric parameters (retrieved in previous steps), such as the kinetic temperature (derived up to ˜ 100 km from the CO2 15 µm region of MIPAS spectra and from 100 up to 170 km from the NO 5.3 µm emission of the same MIPAS spectra) and the O3 measurements (up to ˜ 100 km). The latter is very important for calculations of the non-LTE populations because it strongly constrains the O(3P) and O(1D) concentrations below ˜ 100 km. The estimated precision of the retrieved CO2 vmr profiles varies with altitude, ranging from ˜ 1 % below 90 km to 5 % around 120 km and larger than 10 % above 130 km. There are some latitudinal and seasonal variations of the precision, which are mainly driven by the solar illumination conditions. The retrieved CO2 profiles have a vertical resolution of about 5-7 km below 120 km and between 10 and 20 km at 120-140 km. We have shown that the inclusion of the LOS as joint fit parameter improves the retrieval of CO2, allowing for a clear discrimination between the information on CO2 concentration and the LOS and also leading to significantly smaller systematic errors. The retrieved CO2 has an improved

  8. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane (E-mail: G.Robbers@thphys.uni-heidelberg.de)

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
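
    For orientation, the scalar spatial averaging referred to here is usually written, in the simplest single-fluid (irrotational dust) case, in the Buchert form below, where the kinematical backreaction Q_D sources the averaged expansion; the paper's linear, multi-fluid treatment generalizes this structure. A standard statement of the averaged equations is

        3\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}} = -4\pi G \langle \rho \rangle_{\mathcal{D}} + \mathcal{Q}_{\mathcal{D}} + \Lambda,
        \qquad
        3\left(\frac{\dot{a}_{\mathcal{D}}}{a_{\mathcal{D}}}\right)^{2} = 8\pi G \langle \rho \rangle_{\mathcal{D}} - \frac{1}{2}\langle \mathcal{R} \rangle_{\mathcal{D}} - \frac{1}{2}\mathcal{Q}_{\mathcal{D}} + \Lambda,

        \mathcal{Q}_{\mathcal{D}} = \frac{2}{3}\left(\langle \theta^{2} \rangle_{\mathcal{D}} - \langle \theta \rangle_{\mathcal{D}}^{2}\right) - 2\langle \sigma^{2} \rangle_{\mathcal{D}},

    with ⟨·⟩_D a volume average over the domain D, a_D the volume scale factor, θ the expansion scalar, σ² the shear scalar, and ⟨R⟩_D the averaged spatial curvature.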

  9. Subdiffusion in time-averaged, confined random walks

    NASA Astrophysics Data System (ADS)

    Neusius, Thomas; Sokolov, Igor M.; Smith, Jeremy C.

    2009-07-01

    Certain techniques characterizing diffusive processes, such as single-particle tracking or molecular dynamics simulation, provide time averages rather than ensemble averages. Whereas the ensemble-averaged mean-squared displacement (MSD) of an unbounded continuous time random walk (CTRW) with a broad distribution of waiting times exhibits subdiffusion, the time-averaged MSD, δ2¯, does not. We demonstrate that, in contrast to the unbounded CTRW, in which δ2¯ is linear in the lag time Δ, the time-averaged MSD of the CTRW of a walker confined to a finite volume is sublinear in Δ, i.e., for long lag times δ2¯ ~ Δ^(1-α). The present results permit the application of CTRW to interpret time-averaged experimental quantities.
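
    A minimal numerical sketch of the effect described here, assuming a simple 1-D lattice CTRW with Pareto waiting times and reflecting walls (all parameter values are illustrative): the time-averaged MSD of the confined walk keeps growing sublinearly with lag time instead of saturating.

        import numpy as np

        rng = np.random.default_rng(1)

        def confined_ctrw(n_jumps, alpha=0.5, box=10):
            # 1-D lattice CTRW confined to [-box, box] with heavy-tailed waiting
            # times, P(tau > t) ~ t**(-alpha), 0 < alpha < 1 (Pareto draws).
            waits = rng.pareto(alpha, n_jumps) + 1.0
            times = np.cumsum(waits)
            steps = rng.choice((-1, 1), size=n_jumps)
            pos = np.empty(n_jumps)
            x = 0
            for i, s in enumerate(steps):
                x = max(-box, min(box, x + s))      # reflecting confinement
                pos[i] = x
            return times, pos

        def time_averaged_msd(times, pos, lag, t_max):
            # Time average over a single trajectory, sampled on a unit time grid
            # by piecewise-constant interpolation of the jump process.
            grid = np.arange(0.0, t_max, 1.0)
            idx = np.searchsorted(times, grid, side="right") - 1
            x_grid = np.where(idx >= 0, pos[np.clip(idx, 0, None)], 0.0)
            disp = x_grid[int(lag):] - x_grid[:-int(lag)]
            return np.mean(disp ** 2)

        times, pos = confined_ctrw(200_000, alpha=0.5, box=10)
        t_obs = min(times[-1], 1.0e6)               # finite measurement time
        for lag in (10, 100, 1_000, 10_000):
            print(lag, round(time_averaged_msd(times, pos, lag, t_obs), 3))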

  10. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  11. Achronal averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2007-09-15

    The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
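
    In the standard notation, the condition discussed here is the requirement that, along every complete null geodesic with affine parameter λ and tangent vector k^μ,

        \int_{-\infty}^{\infty} T_{\mu\nu}\, k^{\mu} k^{\nu} \, d\lambda \;\ge\; 0 ,

    and the achronal version demands this only on complete null geodesics no two of whose points can be joined by a timelike curve.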

  12. Root Cause Analyses of Nunn-McCurdy Breaches, Volume 1: Zumwalt-Class Destroyer, Joint Strike Fighter, Longbow Apache and Wideband Global Satellite

    DTIC Science & Technology

    2011-01-01

    Front-matter excerpts recovered from the report: Section 1.2, PARCA Root Cause Matrix Framework; an acronym list (MFR, multifunction radar; MS B, Milestone B; MSE, mission system equipment; NAVAIR, Naval Air Systems Command; NAVSEA, Naval Sea Systems Command; NGSS); and a passage noting that each program under RAND's purview is portrayed in a chart similar to Table 1.2, which illustrates the framework provided by the PARCA office.

  13. The German Skills Machine: Sustaining Comparative Advantage in a Global Economy. Policies and Institutions: Germany, Europe, and Transatlantic Relations, Volume 3.

    ERIC Educational Resources Information Center

    Culpepper, Pepper D., Ed.; Finegold, David, Ed.

    This book examines the effectiveness and distributive ramifications of the institutions of German skill provision as they functioned at home in the 1990s and as they served as a template for reform in other industrialized countries. The volume relies on multiple sources of data, including in-firm case studies, larger-scale surveys of companies,…

  14. The German Skills Machine: Sustaining Comparative Advantage in a Global Economy. Policies and Institutions: Germany, Europe, and Transatlantic Relations, Volume 3.

    ERIC Educational Resources Information Center

    Culpepper, Pepper D., Ed.; Finegold, David, Ed.

    This book examines the effectiveness and distributive ramifications of the institutions of German skill provision as they functioned at home in the 1990s and as they served as a template for reform in other industrialized countries. The volume relies on multiple sources of data, including in-firm case studies, larger-scale surveys of companies,…

  15. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
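
    As a point of reference for the conventional method the authors improve upon, the sketch below implements classical fixed-period time domain averaging: the record is cut into segments of one (assumed known, integer-sample) period and averaged, which suppresses components that are not synchronous with that period. The synthetic signal and sample rate are invented for illustration; when the period is not an integer number of samples, the period cutting error appears, which is what the FTDA (via the chirp Z-transform) is designed to avoid.

        import numpy as np

        def time_domain_average(signal, period_samples):
            # Classical TDA: reshape the record into whole revolutions and average
            # them; non-synchronous components and broadband noise are attenuated.
            n_rev = len(signal) // period_samples
            segments = signal[:n_rev * period_samples].reshape(n_rev, period_samples)
            return segments.mean(axis=0)

        fs, f0 = 5000.0, 25.0                        # sample rate and shaft frequency [Hz]
        t = np.arange(0.0, 10.0, 1.0 / fs)
        clean = np.sin(2 * np.pi * 8 * f0 * t) + 0.3 * np.sin(2 * np.pi * f0 * t)
        noisy = clean + np.random.default_rng(2).normal(0.0, 1.0, t.size)

        averaged = time_domain_average(noisy, period_samples=int(round(fs / f0)))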

  16. Sea Level Change: Is the Volume of the Ocean Changing or Is It Redistributing?

    NASA Astrophysics Data System (ADS)

    Mitchum, G. T.; Thompson, P. R.; Merrifield, M. A.

    2012-12-01

    Global sea level change is due to changes in the ocean volume, which are in turn primarily due to changes in the globally averaged density of the ocean and ice melt from the land. Regional to local sea level changes reflect global changes as well as redistributions of volume due to ocean dynamics and land motions. Determining whether global sea level change is accelerating requires that we disentangle these regional and local signals from the true global volume changes. Given the current length of our time series, determining acceleration is problematic, largely because of substantial spatial and temporal changes in the global sea level field due to ocean-atmosphere dynamics. We will review our work showing that global sea level reconstructions are sensitive to the weightings applied to the tide gauge data. We will also review basin-scale changes in sea level in the North Atlantic and the Tropical Pacific that are clearly wind-driven. These are important for two reasons. First, these signals mask the (presently) small global signal, and second, these signals present a statistical challenge for determining the acceleration of the global volume change rate. The obvious question is how well we can expect to estimate the sea level rise acceleration rate given the observed red noise character, in time and space, of the volume redistribution signals. We will end the presentation with various simulations of our ability to determine global sea level change acceleration that take into account reasonable estimates of decadal redistributions of ocean volume. The net result is that most recent attempts to determine acceleration are seriously flawed. On the positive side, we will provide estimates of how long it might take to make more reliable estimates.
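
    As a small illustration of why the acceleration estimate is so sensitive to low-frequency noise, the sketch below uses one common convention (fit a quadratic to the global-mean record; the acceleration is twice the quadratic coefficient) on a synthetic series with a crudely autocorrelated noise term. All numbers are invented for illustration, not taken from the presentation; with red noise of realistic amplitude, repeated draws scatter the recovered acceleration substantially.

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1900, 2013, dtype=float)
        tc = years - years.mean()

        trend, accel = 1.7, 0.01                     # mm/yr and mm/yr^2, illustrative only
        red_noise = np.convolve(rng.normal(0.0, 15.0, years.size),
                                np.ones(8) / 8.0, mode="same")   # crude autocorrelated noise
        gmsl = trend * tc + 0.5 * accel * tc ** 2 + red_noise

        c2, c1, c0 = np.polyfit(tc, gmsl, deg=2)     # quadratic fit to the record
        print(f"rate ~ {c1:.2f} mm/yr, acceleration ~ {2 * c2:.3f} mm/yr^2")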

  17. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  18. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  19. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  20. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  1. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  2. Higher Education in a Global Society Achieving Diversity, Equity and Excellence (Advances in Education in Diverse Communities: Research Policy and Praxis, Volume 5)

    ERIC Educational Resources Information Center

    Elsevier, 2006

    2006-01-01

    The "problem of the 21st century" is rapidly expanding diversity alongside stubbornly persistent status and power inequities by race, ethnicity, gender, class, language, citizenship and region. Extensive technological, economic, political and social changes, along with immigration, combine to produce a global community of great diversity…

  3. A Salzburg Global Seminar: "Optimizing Talent: Closing Education and Social Mobility Gaps Worldwide." Policy Notes. Volume 20, Number 3, Fall 2012

    ERIC Educational Resources Information Center

    Schwartz, Robert

    2012-01-01

    This issue of ETS Policy Notes (Vol. 20, No. 3) provides highlights from the Salzburg Global Seminar in December 2011. The seminar focused on bettering the educational and life prospects of students up to age 18 worldwide. [This article was written with the assistance of Beth Brody.]

  4. Higher Education in a Global Society Achieving Diversity, Equity and Excellence (Advances in Education in Diverse Communities: Research Policy and Praxis, Volume 5)

    ERIC Educational Resources Information Center

    Elsevier, 2006

    2006-01-01

    The "problem of the 21st century" is rapidly expanding diversity alongside stubbornly persistent status and power inequities by race, ethnicity, gender, class, language, citizenship and region. Extensive technological, economic, political and social changes, along with immigration, combine to produce a global community of great diversity…

  5. The average Indian female nose.

    PubMed

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  6. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
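
    A compact way to realize a harmonic vector average numerically is sketched below, assuming the inversion construction (invert each local velocity in the unit circle, take the ordinary vector mean, invert back); this is an illustration, not the paper's code. For an unbiased sample of local normal velocities generated by a single global motion, it returns the global velocity, while the plain vector average underestimates the speed.

        import numpy as np

        def harmonic_vector_average(velocities):
            # Invert each 2-D velocity in the unit circle (v -> v / |v|^2), take the
            # ordinary vector mean, and invert the result back.
            v = np.asarray(velocities, dtype=float)
            inv = v / np.sum(v ** 2, axis=1, keepdims=True)
            m = inv.mean(axis=0)
            return m / np.sum(m ** 2)

        # Local normal velocities consistent with a rightward global motion of speed 2:
        # local speed varies as the cosine of the angle between the local normal and
        # the global direction, as in the abstract.
        speed, theta_g = 2.0, 0.0
        thetas = np.deg2rad([-60.0, -30.0, 0.0, 30.0, 60.0])
        local = np.stack([speed * np.cos(thetas - theta_g) * np.cos(thetas),
                          speed * np.cos(thetas - theta_g) * np.sin(thetas)], axis=1)

        print(harmonic_vector_average(local))   # ~ [2, 0]: the global (IOC) velocity
        print(local.mean(axis=0))               # plain vector average: too slow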

  7. OAST Space Theme Workshop. Volume 2: Theme summary. 5: Global service (no. 11). A. Statement. B. 26 April 1976 presentation. C. Summary

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The benefits to be obtained from cost-effective global observation of the earth, its environment, and its natural and man-made features are examined using typical spacecraft and missions which could enhance the benefits of space operations. The technology needs and areas of interest include: (1) a ten-fold increase in the dimensions of deployable and erectable structures to provide booms, antennas, and platforms for global sensor systems; (2) control and stabilization systems capable of pointing accuracies of 1 arc second or less to locate targets of interest and maintain platform or sensor orientation during operations; (3) a factor-of-five improvement in spacecraft power capacity to support payloads and supporting electronics; (4) auxiliary propulsion systems capable of 5 to 10 years of on-orbit operation; (5) multipurpose sensors; and (6) end-to-end data management and an information system configured to accept new components or concepts as they develop.

  8. Morphometry and average temperature affect lake stratification responses to climate change

    NASA Astrophysics Data System (ADS)

    Kraemer, Benjamin M.; Anneville, Orlane; Chandra, Sudeep; Dix, Margaret; Kuusisto, Esko; Livingstone, David M.; Rimmer, Alon; Schladow, S. Geoffrey; Silow, Eugene; Sitoki, Lewis M.; Tamatamah, Rashid; Vadeboncoeur, Yvonne; McIntyre, Peter B.

    2015-06-01

    Climate change is affecting lake stratification with consequences for water quality and the benefits that lakes provide to society. Here we use long-term temperature data (1970-2010) from 26 lakes around the world to show that climate change has altered lake stratification globally and that the magnitudes of lake stratification changes are primarily controlled by lake morphometry (mean depth, surface area, and volume) and mean lake temperature. Deep lakes and lakes with high average temperatures have experienced the largest changes in lake stratification even though their surface temperatures tend to be warming more slowly. These results confirm that the nonlinear relationship between water density and water temperature and the strong dependence of lake stratification on lake morphometry makes lake temperature trends relatively poor predictors of lake stratification trends.

  9. NASA University Research Centers Technical Advances in Aeronautics, Space Sciences and Technology, Earth Systems Sciences, Global Hydrology, and Education. Volumes 2 and 3

    NASA Technical Reports Server (NTRS)

    Coleman, Tommy L. (Editor); White, Bettie (Editor); Goodman, Steven (Editor); Sakimoto, P. (Editor); Randolph, Lynwood (Editor); Rickman, Doug (Editor)

    1998-01-01

    This volume chronicles the proceedings of the 1998 NASA University Research Centers Technical Conference (URC-TC '98), held on February 22-25, 1998, in Huntsville, Alabama. The University Research Centers (URCs) are multidisciplinary research units established by NASA at 11 Historically Black Colleges or Universities (HBCUs) and 3 Other Minority Universities (OMUs) to conduct research work in areas of interest to NASA. The URC Technical Conferences bring together the faculty members and students from the URCs with representatives from other universities, NASA, and the aerospace industry to discuss recent advances in their fields.

  10. Technical report series on global modeling and data assimilation. Volume 2: Direct solution of the implicit formulation of fourth order horizontal diffusion for gridpoint models on the sphere

    NASA Technical Reports Server (NTRS)

    Li, Yong; Moorthi, S.; Bates, J. Ray; Suarez, Max J.

    1994-01-01

    High order horizontal diffusion of the form K Delta(exp 2m) is widely used in spectral models as a means of preventing energy accumulation at the shortest resolved scales. In the spectral context, an implicit formulation of such diffusion is trivial to implement. The present note describes an efficient method of implementing implicit high order diffusion in global finite difference models. The method expresses the high order diffusion equation as a sequence of equations involving Delta(exp 2). The solution is obtained by combining fast Fourier transforms in longitude with a finite difference solver for the second order ordinary differential equation in latitude. The implicit diffusion routine is suitable for use in any finite difference global model that uses a regular latitude/longitude grid. The absence of a restriction on the timestep makes it particularly suitable for use in semi-Lagrangian models. The scale selectivity of the high order diffusion gives it an advantage over the uncentering method that has been used to control computational noise in two-time-level semi-Lagrangian models.
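
    The note's solver works on a latitude/longitude sphere (FFT in longitude plus a second-order ODE solve in latitude). As a simplified stand-in, the sketch below applies one implicit step of fourth-order (m = 2) hyperdiffusion on a periodic 1-D grid, where the Fourier treatment makes the implicit solve exact; it shows the two properties emphasized in the abstract, namely no time-step restriction and strong scale selectivity (grid-scale noise is damped by orders of magnitude while a long wave is barely touched). All parameters are illustrative.

        import numpy as np

        def implicit_hyperdiffusion_step(u, dt, kappa, m, dx):
            # One implicit step of du/dt = -kappa * (-d2/dx2)^m u on a periodic 1-D
            # grid, solved exactly in Fourier space: each mode is damped by
            # 1 / (1 + dt * kappa * k**(2m)), so the shortest scales are hit hardest.
            k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
            u_hat = np.fft.fft(u) / (1.0 + dt * kappa * k ** (2 * m))
            return np.real(np.fft.ifft(u_hat))

        n, dx = 256, 1.0
        x = np.arange(n) * dx
        wave = np.sin(2.0 * np.pi * x / (n * dx))
        u0 = wave + 0.2 * (-1.0) ** np.arange(n)          # long wave plus grid-scale noise
        u1 = implicit_hyperdiffusion_step(u0, dt=1.0, kappa=1.0, m=2, dx=dx)
        print(np.max(np.abs(u1 - wave)))                  # residual noise is tiny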

  11. Technical report series on global modeling and data assimilation. Volume 6: A multiyear assimilation with the GEOS-1 system: Overview and results

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Schubert, Siegfried; Rood, Richard; Park, Chung-Kyu; Wu, Chung-Yu; Kondratyeva, Yelena; Molod, Andrea; Takacs, Lawrence; Seablom, Michael; Higgins, Wayne

    1995-01-01

    The Data Assimilation Office (DAO) at Goddard Space Flight Center has produced a multiyear global assimilated data set with version 1 of the Goddard Earth Observing System Data Assimilation System (GEOS-1 DAS). One of the main goals of this project, in addition to benchmarking the GEOS-1 system, was to produce a research quality data set suitable for the study of short-term climate variability. The output, which is global and gridded, includes all prognostic fields and a large number of diagnostic quantities such as precipitation, latent heating, and surface fluxes. Output is provided four times daily with selected quantities available eight times per day. Information about the observations input to the GEOS-1 DAS is provided in terms of maps of spatial coverage, bar graphs of data counts, and tables of all time periods with significant data gaps. The purpose of this document is to serve as a users' guide to NASA's first multiyear assimilated data set and to provide an early look at the quality of the output. Documentation is provided on all the data archives, including sample read programs and methods of data access. Extensive comparisons are made with the corresponding operational European Center for Medium-Range Weather Forecasts analyses, as well as various in situ and satellite observations. This document is also intended to alert users of the data about potential limitations of assimilated data, in general, and the GEOS-1 data, in particular. Results are presented for the period March 1985-February 1990.

  12. The global frequency-wave number spectrum of oceanic variability estimated from TOPEX/POSEIDON altimetric measurements. Volume 100, No. C12; The Journal of Geophysical Research

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1995-01-01

    Two years of altimetric data from the TOPEX/POSEIDON spacecraft have been used to produce preliminary estimates of the space and time spectra of global variability for both sea surface height and slope. The results are expressed in terms of both degree variances from spherical harmonic expansions and in along-track wavenumbers. Simple analytic approximations both in terms of piece-wise power laws and Pade fractions are provided for comparison with independent measurements and for easy use of the results. A number of uses of such spectra exist, including the possibility of combining the altimetric data with other observations, predictions of spatial coherences, and the estimation of the accuracy of apparent secular trends in sea level.

  13. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  14. Compendium of NASA Data Base for the Global Tropospheric Experiment's Transport and Chemical Evolution Over the Pacific (TRACE-P). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.; Scott, A. Donald, Jr.

    2003-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Transport and Chemical Evolution over the Pacific (TRACE-P) Mission. The broad goal of TRACE-P was to characterize the transit and evolution of the Asian outflow over the western Pacific. Conducted from February 24 through April 10, 2001, TRACE-P integrated airborne, satellite- and ground-based observations, as well as forecasts from aerosol and chemistry models. The format of this compendium utilizes data plots (time series) of selected data acquired aboard the NASA/Dryden DC-8 (vol. 1) and NASA/Wallops P-3B (vol. 2) aircraft during TRACE-P. The purpose of this document is to provide a representation of aircraft data that are available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) and through the GTE Project Office archive. The data format is not intended to support original research/analyses, but to assist the reader in identifying data that are of interest.

  15. Technical Report Series on Global Modeling and Data Assimilation. Volume 32; Estimates of AOD Trends (2002 - 2012) Over the World's Major Cities Based on the MERRA Aerosol Reanalysis

    NASA Technical Reports Server (NTRS)

    Provencal, Simon; Kishcha, Pavel; Elhacham, Emily; daSilva, Arlindo M.; Alpert, Pinhas; Suarez, Max J.

    2014-01-01

    NASA's Global Modeling and Assimilation Office has extended the Modern-Era Retrospective Analysis for Research and Application (MERRA) tool with five atmospheric aerosol species (sulfates, organic carbon, black carbon, mineral dust and sea salt). This inclusion of aerosol reanalysis data is now known as MERRAero. This study analyses a ten-year period (July 2002 - June 2012) of the MERRAero aerosol reanalysis, applied to the study of aerosol optical depth (AOD) and its trends for the aforementioned aerosol species over the world's major cities (with a population of over 2 million inhabitants). We found that the proportion of various aerosol species in total AOD exhibited a geographical dependence. Cities in industrialized regions (North America, Europe, central and eastern Asia) are characterized by a strong proportion of sulfate aerosols. Organic carbon aerosols are dominant over cities which are located in regions where biomass burning frequently occurs (South America and southern Africa). Mineral dust dominates other aerosol species in cities located in proximity to the major deserts (northern Africa and western Asia). Sea salt aerosols are prominent in coastal cities but are the dominant aerosol species in very few of them. AOD trends are declining over cities in North America, Europe and Japan, as a result of effective air quality regulation. By contrast, the economic boom in China and India has led to increasing AOD trends over most cities in these two highly-populated countries. Increasing AOD trends over cities in the Middle East are caused by increasing desert dust.

  16. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the schemes compute very accurately the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  17. Advance of East Antarctic outlet glaciers during the Hypsithermal: Implications for the volume state of the Antarctic ice sheet under global warming

    SciTech Connect

    Domack, E.W.; Jull, A.J.T.; Nakao, Seizo

    1991-11-01

    The authors present the first circum-East Antarctic chronology for the Holocene, based on 17 radiocarbon dates generated by the accelerator method. Marine sediments from around East Antarctica contain a consistent, high-resolution record of terrigenous (ice-proximal) and biogenic (open-marine) sedimentation during Holocene time. This record demonstrates that biogenic sedimentation beneath the open-marine environment on the continental shelf has been restricted to approximately the past 4 ka, whereas a period of terrigenous sedimentation related to grounding line advance of ice tongues and ice shelves took place between 7 and 4 ka. An earlier period of open-marine (biogenic sedimentation) conditions following the late Pleistocene glacial maximum is recognized from the Prydz Bay (Ocean Drilling Program) record between 10.7 and 7.3 ka. Clearly, the response of outlet systems along the periphery of the East Antarctic ice sheet during the mid-Holocene was expansion. This may have been a direct consequence of climate warming during an Antarctic Hypsithermal. Temperature-accumulation relations for the Antarctic indicate that warming will cause a significant increase in accumulation rather than in ablation. Models that predict a positive mass balance (growth) of the Antarctic ice sheet under global warming are supported by the mid-Holocene data presented herein.

  18. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission - Tropics B (PEM-Tropics B). Volume 2; P-3B

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  19. Compendium of NASA Data Base for the Global Tropospheric Experiment's Pacific Exploratory Mission-Tropics B (PEM-Tropics B). Volume 1; DC-8

    NASA Technical Reports Server (NTRS)

    Scott, A. Donald, Jr.; Kleb, Mary M.; Raper, James L.

    2000-01-01

    This report provides a compendium of NASA aircraft data that are available from NASA's Global Tropospheric Experiment's (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) conducted in March and April 1999. PEM-Tropics B was conducted during the southern-tropical wet season when the influence from biomass burning observed in PEM-Tropics A was minimal. Major deployment sites were Hawaii, Kiritimati (Christmas Island), Tahiti, Fiji, and Easter Island. The broad goals of PEM-Tropics B were to improve understanding of the oxidizing power of the atmosphere and the processes controlling sulfur aerosol formation and to establish baseline values for chemical species that are directly coupled to the oxidizing power and aerosol loading of the troposphere. The purpose of this document is to provide a representation of aircraft data that will be available in archived format via NASA Langley's Distributed Active Archive Center (DAAC) or are available through the GTE Project Office archive. The data format is not intended to support original research/analysis, but to assist the reader in identifying data that are of interest.

  20. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  1. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  2. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factors as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta-rays and the entry of delta rays from the high-density wall into the low-density gas-volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
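
    For concreteness, the LET-based average quality factor that the TEPC measurements are compared against is a dose-weighted mean of a quality factor function Q(L). The sketch below evaluates it with the ICRP-60 Q(L) relationship for an invented two-component LET spectrum; the bins and weights are made up, and a real evaluation would use the LET distribution from the space radiation transport model behind shielding.

        import numpy as np

        def icrp60_quality_factor(let_kev_um):
            # ICRP Publication 60 quality factor vs unrestricted LET in water [keV/um].
            let_kev_um = np.asarray(let_kev_um, dtype=float)
            return np.where(let_kev_um < 10.0, 1.0,
                   np.where(let_kev_um <= 100.0, 0.32 * let_kev_um - 2.2,
                            300.0 / np.sqrt(let_kev_um)))

        def average_quality_factor(let_bins, dose_per_bin):
            # Dose-averaged quality factor: Q_ave(LET) = sum(Q(L) * D(L)) / sum(D(L)).
            q = icrp60_quality_factor(let_bins)
            return float(np.sum(q * dose_per_bin) / np.sum(dose_per_bin))

        let = np.array([0.5, 2.0, 10.0, 50.0, 150.0, 400.0])     # keV/um, illustrative bins
        dose = np.array([0.40, 0.30, 0.15, 0.10, 0.04, 0.01])    # relative absorbed dose
        print(f"Q_ave(LET) = {average_quality_factor(let, dose):.2f}")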

  3. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factors as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta-rays and the entry of delta rays from the high-density wall into the low-density gas-volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.

  4. Implementation of the NCAR Community Land Model (CLM) in the NASA/NCAR finite-volume Global Climate Model (fvGCM)

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon D.; Wang, Guiling; Chern, Jiundar; Bosilovich, Michael G.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2002-01-01

    In this study, the NCAR CLM version 2.0 land-surface model was integrated into the NASA/NCAR fvGCM. The CLM was developed collaboratively by an open interagency/university group of scientists and based on well-proven physical parameterizations and numerical schemes that combine the best features of BATS, NCAR-LSM, and IAP94. The CLM design is a one-dimensional point model with 1 vegetation layer, along with sub-grid scale tiles. The features of the CLM include 10 uneven soil layers with water, ice, and temperature states in each soil layer, and five snow layers, with water flow, refreezing, compaction, and aging allowed. In addition, the CLM utilizes two-stream canopy radiative transfer, the Bonan lake model and topographic enhanced streamflow based on TOPMODEL. The DAO fvGCM uses a genuinely conservative Flux-Form Semi-Lagrangian transport algorithm along with terrain-following Lagrangian control-volume vertical coordinates. The physical parameterizations are based on the NCAR Community Atmosphere Model (CAM-2). For our purposes, the fvGCM was run at 2 deg x 2.5 deg horizontal resolution with 55 vertical levels. The 10-year climate from the fvGCM with CLM2 was intercompared with the climate from fvGCM with LSM, ECMWF and NCEP. We concluded that the incorporation of CLM2 did not significantly impact the fvGCM climate from that of LSM. The most striking difference was the warm bias in the CLM2 surface skin temperature over desert regions. We determined that the warm bias can be partially attributed to the value of the drag coefficient for the soil under the canopy, which was too small, resulting in a decoupling between the ground surface and the canopy. We also discovered that the canopy interception was high compared to observations in the Amazon region. A number of experiments were then performed focused on implementing model improvements. In order to correct the warm bias, the drag coefficient for the soil under the canopy was considered a function of LAI (Leaf

  5. Implementation of the NCAR Community Land Model (CLM) in the NASA/NCAR finite-volume Global Climate Model (fvGCM)

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon D.; Wang, Guiling; Chern, Jiundar; Bosilovich, Michael G.; Lin, Shian-Jiann; Nebuda, Sharon; Shen, Bo-Wen

    2002-01-01

    In this study, the NCAR CLM version 2.0 land-surface model was integrated into the NASA/NCAR fvGCM. The CLM was developed collaboratively by an open interagency/university group of scientists and based on well-proven physical parameterizations and numerical schemes that combine the best features of BATS, NCAR-LSM, and IAP94. The CLM design is a one-dimensional point model with 1 vegetation layer, along with sub-grid scale tiles. The features of the CLM include 10 uneven soil layers with water, ice, and temperature states in each soil layer, and five snow layers, with water flow, refreezing, compaction, and aging allowed. In addition, the CLM utilizes two-stream canopy radiative transfer, the Bonan lake model and topographic enhanced streamflow based on TOPMODEL. The DAO fvGCM uses a genuinely conservative Flux-Form Semi-Lagrangian transport algorithm along with terrain-following Lagrangian control-volume vertical coordinates. The physical parameterizations are based on the NCAR Community Atmosphere Model (CAM-2). For our purposes, the fvGCM was run at 2 deg x 2.5 deg horizontal resolution with 55 vertical levels. The 10-year climate from the fvGCM with CLM2 was intercompared with the climate from fvGCM with LSM, ECMWF and NCEP. We concluded that the incorporation of CLM2 did not significantly impact the fvGCM climate from that of LSM. The most striking difference was the warm bias in the CLM2 surface skin temperature over desert regions. We determined that the warm bias can be partially attributed to the value of the drag coefficient for the soil under the canopy, which was too small, resulting in a decoupling between the ground surface and the canopy. We also discovered that the canopy interception was high compared to observations in the Amazon region. A number of experiments were then performed focused on implementing model improvements. In order to correct the warm bias, the drag coefficient for the soil under the canopy was considered a function of LAI (Leaf

  6. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if the...

  7. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if the...

  8. RHIC BPM system average orbit calculations

    SciTech Connect

    Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

    2009-05-04

    RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
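
    A hedged sketch of the averaging itself (the turn-by-turn data, revolution frequency, and modulation amplitude below are invented for illustration, not RHIC values): averaging over a window matched to an integer number of ~10 Hz modulation periods suppresses that perturbation, and a running boxcar gives a continuously updated average orbit.

        import numpy as np

        def average_orbit(x_turns, n_turns):
            # Average the turn-by-turn position over a programmable number of turns.
            return float(np.mean(x_turns[:n_turns]))

        def continuous_average_orbit(x_turns, n_turns):
            # Running (boxcar) average updated every turn.
            kernel = np.ones(n_turns) / n_turns
            return np.convolve(x_turns, kernel, mode="valid")

        f_rev, f_mod = 78.0e3, 10.0                  # revolution / modulation freq [Hz], illustrative
        turns = np.arange(50_000)
        x = (1.2 + 0.3 * np.sin(2.0 * np.pi * f_mod * turns / f_rev)
             + np.random.default_rng(4).normal(0.0, 0.05, turns.size))

        window = int(f_rev / f_mod)                  # one modulation period, in turns
        print(average_orbit(x, 10_000))              # fixed 10000-turn average
        print(continuous_average_orbit(x, window)[:3])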

  9. NASA Global Hawk Overview

    NASA Technical Reports Server (NTRS)

    Naftel, Chris

    2014-01-01

    The NASA Global Hawk Project is supporting Earth Science research customers. These customers include US Government agencies, civilian organizations, and universities. The combination of the Global Hawk's range, endurance, altitude, payload power, payload volume and payload weight capabilities separates the Global Hawk platform from all other platforms available to the science community. This presentation includes an overview of the concept of operations and an overview of the completed science campaigns. In addition, the future science plans, using the NASA Global Hawk System, will be presented.

  10. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have for a long time stimulated attempts to find the answer to the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can for three-dimensional and two-dimensional flow in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
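
    Schematically, and only schematically (the paper derives a specific nonlocal equation with a unique kernel from the Green's function of the stochastic problem), exactly averaged flow laws of this type relate the mean flux to a convolution of a kernel with the mean pressure gradient rather than to its local value:

        \langle q_i \rangle(\mathbf{x}) \;=\; -\int K_{ij}(\mathbf{x} - \mathbf{x}')\,
            \frac{\partial \langle p \rangle}{\partial x_j}(\mathbf{x}')\, d\mathbf{x}',
        \qquad
        \nabla \cdot \langle \mathbf{q} \rangle = f ,

    so the averaged equation is nonlocal in space, with the kernel simplifying when the conductivity field has the kind of global symmetry (isotropy, transversal isotropy, or orthotropy) mentioned in the abstract.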

  11. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  12. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... family's deficit by the due date for the final report required in § 1037.730. The emission credits used... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... Averaging. (a) Averaging is the exchange of emission credits among your vehicle families. You may average...

  13. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... family's deficit by the due date for the final report required in § 1037.730. The emission credits used... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... Averaging. (a) Averaging is the exchange of emission credits among your vehicle families. You may average...

  14. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  15. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  16. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  17. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  18. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  19. Designing Digital Control Systems With Averaged Measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1990-01-01

    Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.

  20. Spatial and frequency averaging techniques for a polarimetric scatterometer system

    SciTech Connect

    Monakov, A.A.; Stjernman, A.S.; Nystroem, A.K. ); Vivekanandan, J. )

    1994-01-01

    An accurate estimation of backscattering coefficients for various types of rough surfaces is the main theme of remote sensing. Radar scattering signals from distributed targets exhibit fading due to interference associated with coherent scattering from individual scatterers within the resolution volume. Uncertainty in radar measurements which arises as a result of fading is reduced by averaging independent samples. Independent samples are obtained by collecting the radar returns from nonoverlapping footprints (spatial averaging) and/or nonoverlapping frequencies (frequency agility techniques). An improved formulation of fading characteristics for the spatial averaging and frequency agility technique is derived by taking into account the rough surface scattering process. Kirchhoff's approximation is used to describe rough surface scattering. Expressions for fading decorrelation distance and decorrelation bandwidth are derived. Rough surface scattering measurements are performed between L and X bands. Measured frequency and spatial correlation coefficients show good agreement with theoretical results.

  1. Exact solution to the averaging problem in cosmology.

    PubMed

    Wiltshire, David L

    2007-12-21

    The exact solution of a two-scale Buchert average of the Einstein equations is derived for an inhomogeneous universe that represents a close approximation to the observed universe. The two scales represent voids, and the bubble walls surrounding them within which clusters of galaxies are located. As described elsewhere [New J. Phys. 9, 377 (2007)10.1088/1367-2630/9/10/377], apparent cosmic acceleration can be recognized as a consequence of quasilocal gravitational energy gradients between observers in bound systems and the volume-average position in freely expanding space. With this interpretation, the new solution presented here replaces the Friedmann solutions, in representing the average evolution of a matter-dominated universe without exotic dark energy, while being observationally viable.

  2. Cosmological measures without volume weighting

    SciTech Connect

    Page, Don N

    2008-10-15

    Many cosmologists (myself included) have advocated volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence may be a loss of the argument that eternal inflation gives a nonzero probability that our universe now has infinite volume.

  3. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
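
    A rough Python sketch of the generic idea of approximate model averaging of propensity scores, using BIC-based weights on synthetic data; this is not the paper's R/BMA or MCMC implementation, and all names, covariates, and values are illustrative:

      # Fit a propensity model on every covariate subset, weight each model by
      # exp(-BIC/2) (an approximation to its posterior probability), and form the
      # model-averaged propensity score as the weighted mean of the fitted scores.
      import itertools
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      X = rng.normal(size=(n, 3))                                       # candidate covariates
      treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

      def fit_model(cols):
          Z = X[:, cols]
          m = LogisticRegression(C=1e6, max_iter=1000).fit(Z, treat)    # near-unpenalized MLE
          p = np.clip(m.predict_proba(Z)[:, 1], 1e-12, 1 - 1e-12)
          loglik = np.sum(treat * np.log(p) + (1 - treat) * np.log(1 - p))
          k = Z.shape[1] + 1                                            # slopes + intercept
          return k * np.log(n) - 2.0 * loglik, p                        # (BIC, fitted scores)

      models = [list(c) for r in (1, 2, 3) for c in itertools.combinations(range(3), r)]
      bics, scores = zip(*(fit_model(c) for c in models))
      w = np.exp(-0.5 * (np.array(bics) - min(bics)))
      w /= w.sum()                                                      # approximate model weights
      averaged_propensity = np.average(np.vstack(scores), axis=0, weights=w)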

  4. A database of age-appropriate average MRI templates.

    PubMed

    Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze

    2016-01-01

    This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural development or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3 months intervals through 1 year; 6 months intervals through 19.5 years; 5 year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for segmenting priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/). Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  6. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
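
    In generic form (a standard identity, written here with illustrative symbols rather than the report's notation), the variance of a continuous moving average of length T of a stationary signal with autocovariance C(\tau) is

      \mathrm{Var}\!\left( \frac{1}{T}\int_0^T I(t)\,dt \right) = \frac{1}{T^2}\int_0^T\!\!\int_0^T C(t-t')\,dt\,dt' = \frac{2}{T^2}\int_0^T (T-\tau)\,C(\tau)\,d\tau,

    which is the type of relation used above to express the variance of the finite-time-averaged Strehl in terms of the measured correlation functions.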

  7. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
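
    For reference, the equivalent noise model typically fitted in such studies relates the observed discrimination threshold to internal noise and effective sample size (a standard form with illustrative symbols, not the paper's exact parameterization):

      \sigma_{\mathrm{obs}}^2 = \frac{\sigma_{\mathrm{int}}^2 + \sigma_{\mathrm{ext}}^2}{n_{\mathrm{samp}}},

    where \sigma_{\mathrm{ext}} is the externally added stimulus noise, \sigma_{\mathrm{int}} the observer's internal noise, and n_{\mathrm{samp}} the effective number of elements pooled.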

  8. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  9. Contribution of small glaciers to global sea level

    USGS Publications Warehouse

    Meier, M.F.

    1984-01-01

    Observed long-term changes in glacier volume and hydrometeorological mass balance models yield data on the transfer of water from glaciers, excluding those in Greenland and Antarctica, to the oceans. The average observed volume change for the period 1900 to 1961 is scaled to a global average by use of the seasonal amplitude of the mass balance. These data are used to calibrate the models to estimate the changing contribution of glaciers to sea level for the period 1884 to 1975. Although the error band is large, these glaciers appear to account for a third to half of the observed rise in sea level, approximately that fraction not explained by thermal expansion of the ocean.

  10. Is Global Warming Accelerating?

    NASA Astrophysics Data System (ADS)

    Shukla, J.; Delsole, T. M.; Tippett, M. K.

    2009-12-01

    A global pattern that fluctuates naturally on decadal time scales is identified in climate simulations and observations. This newly discovered component, called the Global Multidecadal Oscillation (GMO), is related to the Atlantic Meridional Oscillation and shown to account for a substantial fraction of decadal fluctuations in the observed global average sea surface temperature. IPCC-class climate models generally underestimate the variance of the GMO, and hence underestimate the decadal fluctuations due to this component of natural variability. Decomposing observed sea surface temperature into a component due to anthropogenic and natural radiative forcing plus the GMO, reveals that most multidecadal fluctuations in the observed global average sea surface temperature can be accounted for by these two components alone. The fact that the GMO varies naturally on multidecadal time scales implies that it can be predicted with some skill on decadal time scales, which provides a scientific rationale for decadal predictions. Furthermore, the GMO is shown to account for about half of the warming in the last 25 years and hence a substantial fraction of the recent acceleration in the rate of increase in global average sea surface temperature. Nevertheless, in terms of the global average “well-observed” sea surface temperature, the GMO can account for only about 0.1° C in transient, decadal-scale fluctuations, not the century-long 1° C warming that has been observed during the twentieth century.

  11. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  12. 40 CFR 1036.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...'s deficit by the due date for the final report required in § 1036.730. The emission credits used to... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... § 1036.710 Averaging. (a) Averaging is the exchange of emission credits among your engine families. You...

  13. 40 CFR 1036.710 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...'s deficit by the due date for the final report required in § 1036.730. The emission credits used to... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... § 1036.710 Averaging. (a) Averaging is the exchange of emission credits among your engine families. You...

  14. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  15. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM emissions...

  16. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM emissions...

  17. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM emissions...

  18. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams, and the perceived need for ever-higher grade point averages have raised the anxiety levels felt by many average students. Too much focus is placed on state test scores and college entrance standards, with not enough focus on the true level of the students. The author contends that…

  19. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  20. 40 CFR 90.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging. 90.204 Section 90.204... Trading Provisions § 90.204 Averaging. (a) Negative credits from engine families with FELs above the... manner is used to determine compliance under § 90.207(b). A manufacturer may have a negative balance of...

  1. 40 CFR 90.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 90.204 Section 90.204... Trading Provisions § 90.204 Averaging. (a) Negative credits from engine families with FELs above the... manner is used to determine compliance under § 90.207(b). A manufacturer may have a negative balance of...

  2. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  3. 40 CFR 90.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... credits as allowed under § 90.207(c)(2). (b) Cross-class averaging of credits is allowed across all...

  4. 40 CFR 90.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... credits as allowed under § 90.207(c)(2). (b) Cross-class averaging of credits is allowed across all...

  5. Analogue Divider by Averaging a Triangular Wave

    NASA Astrophysics Data System (ADS)

    Selvam, Krishnagiri Chinnathambi

    2017-03-01

    A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is described in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive supply voltage; its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative supply voltage; its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to an op-amp as the negative input. This op-amp is configured to operate in a closed loop with negative feedback. The op-amp output is the divider output.

  6. Analogue Divider by Averaging a Triangular Wave

    NASA Astrophysics Data System (ADS)

    Selvam, Krishnagiri Chinnathambi

    2017-08-01

    A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is described in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive supply voltage; its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative supply voltage; its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to an op-amp as the negative input. This op-amp is configured to operate in a closed loop with negative feedback. The op-amp output is the divider output.

  7. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  8. 18 CFR 301.4 - Exchange Period Average System Cost determination.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... Accounts in the Appendix 1. Bonneville will use Global Insight as the source of data for the...

  9. On the determination of local instantaneous averages in particulate flow measurements

    NASA Technical Reports Server (NTRS)

    Vandewall, R. E.; Soo, S. L.

    1993-01-01

    Determination of the instantaneous local average particle density of a gas-particle suspension requires satisfying both the time-scale relation and the volume-scale relation or its continuum counterpart of time averaging. This procedure was validated by comparing simultaneous velocity and mass flux measurements with laser phase Doppler measurements.

  10. Calculating areal average thickness of rigid gas-permeable contact lenses.

    PubMed

    Weissman, B A

    1986-11-01

    A method to calculate areal average thickness of rigid contact lenses is shown. The method involves division of lens volume, which is determined from lens design specifications or derived from measured lens weight, by the area of the lens back surface. Areal average thickness may then be used with known oxygen permeability to generate oxygen transmissibility values.
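
    A minimal sketch of the calculation described above (function names, units, and numerical values are illustrative):

      # Areal average thickness = lens volume / back-surface area; Dk/t = permeability / thickness.
      def areal_average_thickness(lens_volume, back_surface_area):
          """Average thickness over the lens area, in the units implied by the inputs."""
          return lens_volume / back_surface_area

      def oxygen_transmissibility(dk, t_avg):
          """Dk/t: oxygen permeability divided by the areal average thickness."""
          return dk / t_avg

      t_avg = areal_average_thickness(lens_volume=0.022, back_surface_area=1.40)  # cm^3 / cm^2 -> cm
      dk_t = oxygen_transmissibility(dk=50e-11, t_avg=t_avg)                      # illustrative Dk value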

  11. 2006 Global Demilitarization Symposium Volume 1 Presentations

    DTIC Science & Technology

    2006-05-04


  12. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
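
    The argument above can be summarized in one line (illustrative notation): gradient sensing rests on the difference of two spatially averaged concentration measurements, \bar{c}_1 - \bar{c}_2, whose variance is

      \mathrm{Var}(\bar{c}_1 - \bar{c}_2) = \mathrm{Var}(\bar{c}_1) + \mathrm{Var}(\bar{c}_2) - 2\,\mathrm{Cov}(\bar{c}_1, \bar{c}_2),

    so transverse averaging, which lowers the individual variances but also the covariance between the two measurements, can increase the net variance and thus reduce precision, as described above for local excitation-global inhibition.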

  13. Variations in Nimbus-7 cloud estimates. Part I: Zonal averages

    SciTech Connect

    Weare, B.C. )

    1992-12-01

    Zonal averages of low, middle, high, and total cloud amount estimates derived from measurements from Nimbus-7 have been analyzed for the six-year period April 1979 through March 1985. The globally and zonally averaged values of six-year annual means and standard deviations of total cloud amount and a proxy of cloudtop height are illustrated. Separate means for day and night and land and sea are also shown. The globally averaged value of intra-annual variability of total cloud amount is greater than 7%, and that for cloud height is greater than 0.3 km. Those of interannual variability are more than one-third of these values. Important latitudinal differences in variability are illustrated. The dominant empirical orthogonal analyses of the intra-annual variations of total cloud amount and heights show strong annual cycles, indicating that in the tropics increases in total cloud amount of up to about 30% are often accompanied by increases in cloud height of up to 1.2 km. This positive link is also evident in the dominant empirical orthogonal function of interannual variations of a total cloud/cloud height complex. This function shows a large coherent variation in total cloud cover of about 10% coupled with changes in cloud height of about 1.1 km associated with the 1982-83 El Niño-Southern Oscillation event. 14 refs. 12 figs., 2 tabs.

  14. Chesapeake Bay Hypoxic Volume Forecasts and Results

    USGS Publications Warehouse

    Evans, Mary Anne; Scavia, Donald

    2013-01-01

    Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  15. Chesapeake Bay hypoxic volume forecasts and results

    USGS Publications Warehouse

    Scavia, Donald; Evans, Mary Anne

    2013-01-01

    The 2013 Forecast - Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer’s hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.

  16. Properties and applications of the average interparticle distance.

    PubMed

    Hollett, J W; Poirier, R A

    2009-06-01

    The first and second moment operators are used to define the origin-invariant shape and size of a molecule or functional group, as well as expressions for the distance between two electrons and the distance between an electron and a nucleus. The measure of molecular size correlates quite well with an existing theoretical measure of molecular volume calculated from isodensity contours. Also, the measure of size is effective in predicting steric effects of substituents which have been measured experimentally. The electron-electron and electron-nuclear distances are related to components of the Hartree-Fock energy. The average distance between two electrons can model the Coulomb energy quite well, especially in the case of localized molecular orbitals. The average distance between an electron and a nucleus is closely related to the electron-nuclear attraction energy of a molecule.

  17. Averaged model for momentum and dispersion in hierarchical porous media.

    PubMed

    Chabanon, Morgan; David, Bertrand; Goyeau, Benoît

    2015-08-01

    Hierarchical porous media are multiscale systems, where different characteristic pore sizes and structures are encountered at each scale. Focusing the analysis on three pore scales, an upscaling procedure based on the volume-averaging method is applied twice, in order to obtain a macroscopic model for momentum and diffusion-dispersion. The effective transport properties at the macroscopic scale (permeability and dispersion tensors) are found to be explicitly dependent on the mesoscopic ones. Closure problems associated with these averaged properties are numerically solved at the different scales for two types of bidisperse porous media. Results show a strong influence of the lower-scale porous structures and flow intensity on the macroscopic effective transport properties.

  18. Multiple-level defect species evaluation from average carrier decay

    NASA Astrophysics Data System (ADS)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.

  19. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  20. Cosmic inhomogeneities and averaged cosmological dynamics.

    PubMed

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  1. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel in a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  2. GROUP ACTION INDUCED AVERAGING FOR HARDI PROCESSING

    PubMed Central

    Çetingül, H. Ertan; Afsari, Bijan; Wright, Margaret J.; Thompson, Paul M.; Vidal, Rene

    2012-01-01

    We consider the problem of processing high angular resolution diffusion images described by orientation distribution functions (ODFs). Prior work showed that several processing operations, e.g., averaging, interpolation and filtering, can be reduced to averaging in the space of ODFs. However, this approach leads to anatomically erroneous results when the ODFs to be processed have very different orientations. To address this issue, we propose a group action induced distance for averaging ODFs, which leads to a novel processing framework on the spaces of orientation (the space of 3D rotations) and shape (the space of ODFs with the same orientation). Experiments demonstrate that our framework produces anatomically meaningful results. PMID:22903055

  3. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  4. A thermochemically derived global reaction mechanism for detonation application

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Yang, J.; Sun, M.

    2012-07-01

    A 4-species 4-step global reaction mechanism for detonation calculations is derived from detailed chemistry through a thermochemical approach. Reaction species involved in the mechanism and their corresponding molecular weight and enthalpy data are derived from the real equilibrium properties. By substituting these global species into the results of constant-volume explosion and examining the evolution process of these global species under varied conditions, reaction paths and corresponding rates are summarized and formulated. The proposed mechanism is first validated against the original chemistry through calculations of the CJ detonation wave, adiabatic constant-volume explosion, and the steady reaction structure after a strong shock wave. Good agreement in both reaction scales and averaged thermodynamic properties has been achieved. Two sets of reaction rates based on different detailed chemistry are then examined and applied for numerical simulations of two-dimensional cellular detonations. Preliminary results and a brief comparison between the two mechanisms are presented. The proposed global mechanism is found to be economical in computation and also competent in describing the overall characteristics of the detonation wave. Though only a stoichiometric acetylene-oxygen mixture is investigated in this study, the method of deriving such a global reaction mechanism possesses a certain generality for premixed reactions of most lean hydrocarbon mixtures.

  5. Rotational averaging of multiphoton absorption cross sections

    NASA Astrophysics Data System (ADS)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  6. Rotational averaging of multiphoton absorption cross sections.

    PubMed

    Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  7. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  8. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  9. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  10. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  11. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  12. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  13. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  14. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  15. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  16. Spacetime Average Density (SAD) cosmological measures

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  17. Bimetal sensor averages temperature of nonuniform profile

    NASA Technical Reports Server (NTRS)

    Dittrich, R. T.

    1968-01-01

    Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.

  18. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  19. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  20. Radial averages of astigmatic TEM images.

    PubMed

    Fernando, K Vince

    2008-10-01

    The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200), but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth-order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the difference of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth-order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
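
    An illustrative Python sketch of the effect described above: an idealized (non-astigmatic) radial average modulated by a zeroth-order Bessel function whose argument grows with the defocus difference between the astigmatic axes. The simplified CTF (no spherical aberration or envelope terms) and all numerical values are assumptions for illustration only.

      import numpy as np
      from scipy.special import j0

      s = np.linspace(0.0, 0.05, 1000)        # spatial frequency (1/Angstrom), illustrative range
      wavelength = 0.025                      # electron wavelength (Angstrom), roughly 200 kV
      dz_mean = 20000.0                       # mean defocus (Angstrom)
      dz_diff = 2000.0                        # defocus difference between astigmatic axes (Angstrom)

      ideal_radial_avg = np.sin(np.pi * wavelength * dz_mean * s**2)     # CTF without astigmatism
      astig_radial_avg = ideal_radial_avg * j0(0.5 * np.pi * wavelength * dz_diff * s**2)
      # The J0 factor both attenuates the radial average and introduces additional zeros.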

  1. Symmetric Euler orientation representations for orientational averaging.

    PubMed

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighting factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference also remains if a two-axes-type Euler orientation representation is used, despite the use of a weighting factor. In contrast, this problem does not occur, as a matter of principle, if a symmetric Euler orientation representation is used, while the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.

  2. Heuristic approach to capillary pressures averaging

    SciTech Connect

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis of each of these methods is given.
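
    For reference, the J-curve referred to above is usually written in the Leverett form (standard notation, not quoted from the paper):

      J(S_w) = \frac{P_c(S_w)}{\sigma \cos\theta} \sqrt{\frac{k}{\phi}},

    where P_c is the capillary pressure at wetting-fluid saturation S_w, \sigma the interfacial tension, \theta the contact angle, k the permeability, and \phi the porosity; the factor \sqrt{k/\phi} plays the role of an average capillary radius, which is the relation mentioned above.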

  3. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a corresponding decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
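
    A minimal numerical sketch of the averaging operation performed by the instrument (synthetic data; names and values are illustrative):

      # Average, over 100 cycles, of a parameter sampled at 2048 points per cycle.
      import numpy as np

      def average_curve(cycles):
          """cycles: array of shape (n_cycles, 2048); returns the 2048-point average curve."""
          return np.asarray(cycles, dtype=float).mean(axis=0)

      rng = np.random.default_rng(0)
      clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 2048))      # stand-in for an engine parameter
      data = clean + 0.1 * rng.normal(size=(100, 2048))        # 100 noisy cycles
      avg = average_curve(data)                                # cycle-to-cycle variation averaged out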

  4. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    Managing land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
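
    A small sketch of the first guideline above, assuming the variogram ranges for soil properties on a given parent material are already known (values are illustrative):

      # Suggested sampling interval = half the average variogram range for the parent material.
      def suggested_sampling_interval(variogram_ranges):
          average_range = sum(variogram_ranges) / len(variogram_ranges)
          return 0.5 * average_range

      interval_m = suggested_sampling_interval([120.0, 150.0, 90.0, 140.0])   # ranges in metres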

  5. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at a macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which is usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
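
    For reference, the two averages mentioned above are commonly written as follows (standard definitions with illustrative symbols, not quoted from the paper):

      \langle f \rangle_k = \frac{1}{V_k}\int_{V_k} f\,dV, \qquad \tilde{f} = \frac{\langle \rho f \rangle_k}{\langle \rho \rangle_k},

    where V_k is the portion of the control volume occupied by phase k and \rho is the density; the phasic average \langle f \rangle_k and the mass-weighted average \tilde{f} coincide when the particle concentration does not fluctuate between realizations, as discussed above.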

  6. Scaling registration of multiview range scans via motion averaging

    NASA Astrophysics Data System (ADS)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of a scene or object requires registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, it presents a method to estimate overlap percentages of all scan pairs involved in multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for those scan pairs that contain high overlap percentages. Subsequently, the proposed motion averaging algorithm can transform these relative motions into global motions of multiview registration. In addition, it also introduces parallel computation to increase the efficiency of multiview registration. Furthermore, it presents an error criterion for accuracy evaluation of multiview registration results, which makes it easy to compare results of different multiview registration approaches. Experimental results carried out on publicly available datasets demonstrate its superiority over related approaches.
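
    To illustrate the motion-averaging idea in isolation, the toy sketch below restricts attention to the scale component of the pairwise similarity transforms: noisy relative scales between overlapping scan pairs are combined into consistent global scales by least squares in the log domain. The pair list and values are invented, and the full method of course also averages rotations and translations.

    import numpy as np

    # Relative scales s_ij ~ s_j / s_i measured for overlapping scan pairs (made up).
    pairs = [(0, 1, 1.52), (1, 2, 0.65), (0, 2, 1.01), (2, 3, 2.05)]  # (i, j, s_ij)
    num_scans = 4

    A = np.zeros((len(pairs) + 1, num_scans))
    b = np.zeros(len(pairs) + 1)
    for row, (i, j, s_ij) in enumerate(pairs):
        A[row, j] = 1.0        # +log s_j
        A[row, i] = -1.0       # -log s_i
        b[row] = np.log(s_ij)
    A[-1, 0] = 1.0             # gauge fixing: scan 0 is the reference, s_0 = 1
    b[-1] = 0.0

    log_s, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("global scales:", np.exp(log_s))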

  7. Projection-Based Volume Alignment

    PubMed Central

    Yu, Lingbo; Snapp, Robert R.; Ruiz, Teresa; Radermacher, Michael

    2013-01-01

    When heterogeneous samples of macromolecular assemblies are being examined by 3D electron microscopy (3DEM), often multiple reconstructions are obtained. For example, subtomograms of individual particles can be acquired from tomography, or volumes of multiple 2D classes can be obtained by random conical tilt reconstruction. Of these, similar volumes can be averaged to achieve higher resolution. Volume alignment is an essential step before 3D classification and averaging. Here we present a projection-based volume alignment (PBVA) algorithm. We select a set of projections to represent the reference volume and align them to a second volume. Projection alignment is achieved by maximizing the cross-correlation function with respect to rotation and translation parameters. If data are missing, the cross-correlation functions are normalized accordingly. Accurate alignments are obtained by averaging and quadratic interpolation of the cross-correlation maximum. Comparisons of the computation time between PBVA and traditional 3D cross-correlation methods demonstrate that PBVA outperforms the traditional methods. Performance tests were carried out with different signal-to-noise ratios using modeled noise and with different percentages of missing data using a cryo-EM dataset. All tests show that the algorithm is robust and highly accurate. PBVA was applied to align the reconstructions of a subcomplex of the NADH: ubiquinone oxidoreductase (Complex I) from the yeast Yarrowia lipolytica, followed by classification and averaging. PMID:23410725
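
    The quadratic interpolation of the cross-correlation maximum mentioned above is a standard sub-sample refinement; the generic sketch below (not the PBVA code itself) fits a parabola through the peak sample and its two neighbours.

    import numpy as np

    def parabolic_peak(c):
        # Sub-sample location and value of the maximum of a sampled correlation
        # curve, via a parabola through the peak sample and its two neighbours.
        k = int(np.argmax(c))
        if k == 0 or k == len(c) - 1:        # peak on the boundary: no refinement
            return float(k), float(c[k])
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # offset in (-0.5, 0.5)
        return k + delta, y1 - 0.25 * (y0 - y2) * delta

    # Synthetic correlation curve whose true maximum lies between samples 4 and 5.
    x = np.arange(10, dtype=float)
    c = np.exp(-0.5 * (x - 4.3) ** 2)
    print(parabolic_peak(c))   # sub-sample estimate close to the true peak at 4.3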

  8. The generic modeling fallacy: Average biomechanical models often produce non-average results!

    PubMed

    Cook, Douglas D; Robertson, Daniel J

    2016-11-07

    Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models' input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications for the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given. Copyright © 2016 Elsevier Ltd. All rights reserved.
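
    The effect is easy to reproduce with any nonlinear model. The toy Monte Carlo below (an arbitrary made-up response, not one of the biomechanical models analysed in the paper) shows that the model built from average inputs does not return the population-average output.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(stiffness, length):
        # arbitrary nonlinear response, for illustration only
        return stiffness * length ** 2 / (1.0 + stiffness)

    stiffness = rng.normal(2.0, 0.6, 100_000)   # population of input parameters
    length = rng.normal(1.0, 0.2, 100_000)

    output_of_average_model = model(stiffness.mean(), length.mean())
    average_output_of_population = model(stiffness, length).mean()

    print(output_of_average_model)        # behaviour of the "average" model
    print(average_output_of_population)   # average behaviour of the population
    # The two values differ because the response is nonlinear in its inputs.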

  9. Making the Grade? Globalisation and the Training Market in Australia. Volume 1 [and] Volume 2.

    ERIC Educational Resources Information Center

    Hall, Richard; Buchanan, John; Bretherton, Tanya; van Barneveld, Kristin; Pickersgill, Richard

    This two-volume document reports on a study of globalization and Australia's training market. Volume 1 begins by examining debate on globalization and industry training in Australia. Discussed next is the study methodology, which involved field studies of the metals and engineering industry in South West Sydney and the Hunter and the information…

  10. Schooling for a Global Age.

    ERIC Educational Resources Information Center

    Becker, James M., Ed.

    The book explores objectives, needs, and practices in the area of global education in elementary and secondary schools. Major purposes of the volume are to present a comprehensive, up-to-date examination of existing programs, characterize components of an ideal global education program, and provide advice to educators as they develop and implement…

  11. Averaged controllability of parameter dependent conservative semigroups

    NASA Astrophysics Data System (ADS)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter-dependent (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
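
    Schematically, and in generic notation that is not necessarily the paper's, the property in question can be written as follows: for a parameter-dependent system

        x'(t;\nu) = A(\nu)\,x(t;\nu) + B(\nu)\,u(t), \qquad x(0;\nu) = x_0,

    averaged controllability asks for a single control u, independent of \nu, such that the average of the states over the probability measure \mu on the parameter set reaches a prescribed target at time T:

        \int x(T;\nu)\,\mathrm{d}\mu(\nu) = x_T.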

  12. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
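
    The core computation is a trailing moving average and a goodness-of-fit scan over window lengths. The sketch below runs it on synthetic series (the actual literary and economic indices are not reproduced here); by construction the fit peaks at an 11-year window.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1930, 2001)
    economic_misery = rng.normal(10.0, 3.0, years.size)   # inflation + unemployment (made up)

    def trailing_mean(x, window):
        return np.array([x[max(0, i - window + 1): i + 1].mean() for i in range(x.size)])

    # Fake "literary misery" that, by construction, tracks the previous decade.
    literary_misery = trailing_mean(economic_misery, 11) + rng.normal(0.0, 0.3, years.size)

    for window in (5, 8, 11, 14):
        r = np.corrcoef(trailing_mean(economic_misery, window), literary_misery)[0, 1]
        print(window, round(r, 3))   # the correlation peaks near window = 11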

  13. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  14. Average length of stay in hospitals.

    PubMed

    Egawa, H

    1984-03-01

    The average length of stay is essentially an important and appropriate index for hospital bed administration. However, on the grounds that it is not necessarily an appropriate index in Japan, an analysis is made of the differences in the health care facility systems of the United States and Japan. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that for the average length of stay to become an appropriate index, there is a need to promote regional health, especially facility planning.

  15. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  16. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
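
    A minimal 'long only' sketch of the kind of rule described above is given below, on synthetic prices. The trailing-stop rule used here (exit when the price falls a fixed fraction below its running maximum since entry) is a simplified stand-in for the paper's dynamic threshold, not a reproduction of it.

    import numpy as np

    rng = np.random.default_rng(2)
    price = 100.0 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 1000)))  # synthetic prices
    window, stop_fraction = 50, 0.05

    ma = np.convolve(price, np.ones(window) / window, mode="valid")
    price = price[window - 1:]                 # align prices with the moving average

    in_market, peak = False, 0.0
    position = np.zeros(price.size)
    for t in range(1, price.size):
        if not in_market and price[t] > ma[t] and price[t - 1] <= ma[t - 1]:
            in_market, peak = True, price[t]   # 'buy' on an upward cross-over
        elif in_market:
            peak = max(peak, price[t])
            if price[t] < (1.0 - stop_fraction) * peak:
                in_market = False              # trailing stop exit
        position[t] = 1.0 if in_market else 0.0

    log_returns = np.diff(np.log(price))
    cumulative_return = np.exp((position[:-1] * log_returns).sum()) - 1.0
    print(round(cumulative_return, 3))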

  17. Average power meter for laser radiation

    NASA Astrophysics Data System (ADS)

    Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.

    2016-04-01

    Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In this paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can perform continuous monitoring during laser operation in pulsed mode or in continuous wave mode without interrupting that operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, which offers a fast response, long-term stability of sensitivity, and an almost uniform sensitivity over wavelength.

  18. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  19. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  20. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  1. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  2. Investigation of Arterial Waves using Phase Averaging.

    PubMed

    Johnston, Clifton; Martinuzzi, Robert; Schaefer, Matthew

    2005-01-01

    In this paper the development of objective criteria for data reduction, parameter estimation and phenomenological description of arterial pressure pulses is presented. The additional challenge of distinguishing between the cyclical and incoherent contributions to the wave form is also considered. By applying the technique of phase averaging to a series of heart beats, a characteristic pulse was determined. It was shown that the beats from a paced heart are very similar, while beats from an unpaced heart vary significantly in time and amplitude. The appropriate choice of a reference point is critical in generating phase averages that embody the characteristic behaviour.
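
    In outline, phase averaging cuts the signal at a reference point in each cycle, maps every beat onto a common phase axis and averages sample-by-sample. The sketch below does this for synthetic pressure beats; the sampling rate, beat shape and noise level are all assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    fs = 500.0                                    # sampling rate in Hz (assumed)

    def one_beat(duration_s):
        t = np.arange(0.0, duration_s, 1.0 / fs)
        clean = 80.0 + 40.0 * np.exp(-((t - 0.25 * duration_s) / 0.08) ** 2)
        return clean + rng.normal(0.0, 2.0, t.size)   # cyclical part + incoherent noise

    beats = [one_beat(rng.uniform(0.75, 0.95)) for _ in range(40)]   # varying beat lengths

    phase = np.linspace(0.0, 1.0, 200)            # common phase axis, one cycle = 0..1
    aligned = np.vstack([
        np.interp(phase, np.linspace(0.0, 1.0, len(b)), b) for b in beats
    ])
    characteristic_pulse = aligned.mean(axis=0)   # the phase-averaged beat
    print(characteristic_pulse.shape)             # (200,)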

  3. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
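
    For reference, the AIC weights discussed above are simple to compute; the sketch below uses made-up AIC values for a hypothetical four-model set (it illustrates the weights themselves, not the coefficient averaging the abstract cautions against).

    import numpy as np

    # w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),  delta_i = AIC_i - min(AIC)
    aic = np.array([102.3, 103.1, 105.8, 110.4])   # hypothetical AIC values
    delta = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()
    print(np.round(weights, 3))
    # Summing these weights over the models that contain a given predictor measures
    # the relative importance of models, not of the predictor itself, as noted above.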

  4. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  5. Average Annual Rainfall Over the Globe

    NASA Astrophysics Data System (ADS)

    Agrawal, D. C.

    2013-12-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74×10^17 J of solar radiation per second and it is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert the amount of solar power going into the evaporation and precipitation channel is 4.0×10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.
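
    A quick back-of-the-envelope version of that verification is sketched below. The 4.0×10^16 W figure is the one quoted above; the latent heat of vaporization, water density and the Earth's surface area are standard reference values supplied here.

    evaporation_power = 4.0e16      # W, solar power in the evaporation/precipitation channel
    latent_heat = 2.45e6            # J/kg, latent heat of vaporization near ambient temperature
    earth_surface_area = 5.1e14     # m^2
    water_density = 1.0e3           # kg/m^3
    seconds_per_year = 3.156e7

    mass_per_year = evaporation_power * seconds_per_year / latent_heat      # kg of water
    depth_per_year = mass_per_year / (water_density * earth_surface_area)   # metres
    print(round(depth_per_year, 2))  # ~1.0 m of rainfall per year, averaged over the globe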

  6. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. `Alice sighed wearily. ``I think you might do something better with the time'' she said, ``than waste it asking riddles with no answers'' (Alice in Wonderland, L. Carroll)

  7. Attendance and Grade Point Average: A Study.

    ERIC Educational Resources Information Center

    Strickland, Vinnie P.

    This study investigated the correlation between attendance and grade point average among high school juniors, hypothesizing that there would not be a significant correlation between the two. The sample consisted of 32 students randomly selected from among 172 high school students enrolled in a Chicago public school during school years 1995-1996…

  8. Average thermal characteristics of solar wind electrons

    NASA Technical Reports Server (NTRS)

    Montgomery, M. D.

    1972-01-01

    Average solar wind electron properties based on a 1 year Vela 4 data sample, from May 1967 to May 1968, are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.

  9. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  10. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  11. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  13. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  14. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  15. Conceptual Analysis of System Average Water Stability

    NASA Astrophysics Data System (ADS)

    Zhang, H.

    2016-12-01

    Averaging over time and area, the precipitation in an ecosystem (SAP - system average precipitation) depends on the average surface temperature and relative humidity (RH) in the system if uniform convection is assumed. RH depends on the evapotranspiration of the system (SAE - system average evapotranspiration). There is a non-linear relationship between SAP and SAE. Studying this relationship can lead to a mechanistic understanding of the ecosystem's health status and trend under different setups. If SAP is higher than SAE, the system will have a water runoff which flows out through rivers. If SAP is lower than SAE, irrigation is needed to maintain the vegetation status. This presentation will give a conceptual analysis of the stability of this relationship under different assumed areas, water or forest coverages, elevations and latitudes. The analysis shows that a desert is a stable system. Water circulation in basins is also stabilized at a specific SAP based on the basin profile. It further shows that deforestation will reduce SAP and can flip the system to an irrigation-required status. If no irrigation is provided, the system will automatically reduce to its stable point - desert, which is extremely difficult to turn around.

  17. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EMISSIONS FROM MARINE SPARK-IGNITION ENGINES In-Use Credit Program for New Marine Engines § 91.1304... credit balance for a model year. Positive credits to be used in averaging may be obtained from credits... positive credit balance must be used at a rate of 1.1 to 1. ...

  18. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  19. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  20. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  1. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  2. How Young Is Standard Average European?

    ERIC Educational Resources Information Center

    Haspelmath, Martin

    1998-01-01

    An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, anticausative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…

  3. Orbit Averaging in Perturbed Planetary Rings

    NASA Astrophysics Data System (ADS)

    Stewart, Glen R.

    2015-11-01

    The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.

  4. New applications for high average power beams

    SciTech Connect

    Neau, E.L.; Turman, B.N.; Patterson, E.L.

    1993-08-01

    The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

  5. Time-average dynamic speckle interferometry

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2014-05-01

    For the study of microscopic processes occurring at the structural level in solids and thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and a reflecting object is given. Results are given for an experiment on the high-cycle fatigue of steel and for an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.

  6. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  7. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to get an improved estimate of the ERP. A new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning the random alpha waves, by explaining amplitude differences as latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of the dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
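
    A generic sketch of the warp-averaging idea is given below: each trial is aligned to a reference with dynamic time warping and the aligned trials are averaged on the reference's time axis. This omits the paper's trilinear-model filtering and derivative-based alignment features, and the synthetic trials are made up.

    import numpy as np

    def dtw_path(x, y):
        # Dynamic-time-warping alignment path between 1-D signals x and y.
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = (x[i - 1] - y[j - 1]) ** 2
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    def warp_average(trials, reference):
        # Average the trials after warping each of them onto the reference time axis.
        sums = np.zeros(len(reference))
        counts = np.zeros(len(reference))
        for trial in trials:
            for i, j in dtw_path(reference, trial):
                sums[i] += trial[j]
                counts[i] += 1
        return sums / counts

    # Synthetic trials: one waveform with trial-to-trial latency jitter and noise.
    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 1.0, 150)
    trials = [np.exp(-((t - 0.5 - rng.normal(0.0, 0.03)) / 0.05) ** 2)
              + rng.normal(0.0, 0.05, t.size) for _ in range(10)]
    reference = np.mean(trials, axis=0)           # plain average as an initial reference
    print(warp_average(trials, reference).shape)  # (150,)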

  8. Models of space averaged energetics of plates

    NASA Technical Reports Server (NTRS)

    Bouthier, O. M.; Bernhard, R. J.

    1990-01-01

    The analysis of high frequency vibrations in plates is of particular interest in the study of structure-borne noise in aircraft. The current methods of analysis are either too expensive (finite element method) or may have a confidence band wider than desirable (Statistical Energy Analysis). An alternative technique to model the space and time averaged response of structural acoustics problems with enough detail to include all significant mechanisms of energy generation, transmission, and absorption is highly desirable. The focus of this paper is the development of a set of equations which govern the space and time averaged energy density in plates. To solve this equation, a new type of boundary value problem must be treated in terms of energy density variables using energy and intensity boundary conditions. A computer simulation verification study of the energy governing equation is performed. A finite element formulation of the new equations is also implemented and several test cases are analyzed and compared to analytical solutions.

  9. Area-average rainfall and lightning activity

    NASA Astrophysics Data System (ADS)

    Chèze, Jean-Luc; Sauvageot, Henri

    1997-01-01

    The aim of the paper is to discuss the statistical relations between the average rainfall parameters for an ensemble of convective cells and the mean lightning occurrence inside this area. The study is based on a data set gathered in France using C and S band radars and two different lightning detection networks. One of them is used to measure L, the cloud-to-ground flash number. The other one is used to measure Sa, the total cloud-to-ground and intracloud flash number. The rainfall parameters are the mean areal rain rate and F(τ), the fractional rainfall area above a threshold τ. The results demonstrate that for a single event (i.e., on the spatiotemporal scale of a rain system), significant correlations (1) between the average rainfall parameters, (2) between L and Sa, and (3) between the lightning and the average rainfall parameters are observed. As expected, the correlation (1) between the mean areal rain rate and F(τ) is very tight for the individual cases as well as for the cluster of the data. The correlations between L and Sa and between the lightning and the average rainfall parameters (i.e., (2) and (3)) can be tight in convectively and electrically very active systems (the correlation coefficient between L and F(τ) reaches 0.96). Yet, the parameters of the relations change from one case to another. So, when considered on a climatological scale (i.e., all the data for a season processed together), the correlations of types (2) and (3) diminish or even vanish. However, it is suggested that for homogeneous climatological conditions associated with very active convective clouds, such as, for example, the conditions observed in some tropical areas, and for a homogeneous regime of convection, a tight relation with stable parameters could be expected.

  10. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  11. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under an average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

  13. Average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.

  14. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.

  15. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
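
    The mean the abstract alludes to is easy to explore numerically. The Monte Carlo sketch below assigns each car a preferred speed (a made-up uniform distribution, chosen only for illustration) and enforces the no-overtaking rule, so each car's actual speed is the minimum of its own preferred speed and those of all cars ahead of it.

    import numpy as np

    rng = np.random.default_rng(5)
    num_cars, num_samples = 20, 10_000

    queue_averages = []
    for _ in range(num_samples):
        preferred = rng.uniform(60.0, 120.0, num_cars)   # km/h; index 0 is the front car
        actual = np.minimum.accumulate(preferred)        # blocked by slower cars ahead
        queue_averages.append(actual.mean())

    print(round(float(np.mean(queue_averages)), 1), "km/h: average speed in the queue")
    # compare with 90 km/h, the mean of the preferred-speed distribution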

  17. On the ensemble averaging of PIC simulations

    NASA Astrophysics Data System (ADS)

    Codur, R. J. B.; Tsung, F. S.; Mori, W. B.

    2016-10-01

    Particle-in-cell simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov-Fokker-Planck equations in multiple dimensions. However, the PIC method actually models the Klimontovich equation for finite-size particles. The Vlasov-Fokker-Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations where we use identical "drivers" but change the random number generator seeds. We show that, even for cases where a plasma wave is excited below the noise in a single simulation, the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. A comparison between the results from the ensemble average and from the subtraction technique is also presented. In the subtraction technique, two simulations, one with and the other without the "driver", are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
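
    The two noise-reduction strategies described above can be mimicked with a toy signal (this is not a PIC code): a weak driven "plasma wave" is buried in seed-dependent noise, and both the ensemble average over ~10 seeds and the same-seed subtraction recover it.

    import numpy as np

    t = np.linspace(0.0, 10.0, 1000)
    wave = 0.05 * np.sin(2.0 * np.pi * 1.5 * t)       # driven response, below the noise level

    def run(seed, driven):
        noise = np.random.default_rng(seed).normal(0.0, 0.5, t.size)
        return noise + (wave if driven else 0.0)

    single = run(0, driven=True)
    ensemble = np.mean([run(s, driven=True) for s in range(10)], axis=0)
    subtraction = run(0, driven=True) - run(0, driven=False)   # same seed, driver on/off

    def recovery(x):
        # crude figure of merit: correlation of the recovered signal with the true wave
        return round(float(np.corrcoef(x, wave)[0, 1]), 3)

    print(recovery(single))        # low: the wave is hidden in the noise
    print(recovery(ensemble))      # higher: incoherent noise averages down
    print(recovery(subtraction))   # ~1: identical noise cancels exactly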

  18. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.

  19. BAEPs averaging analysis using autoregressive modelling.

    PubMed

    Vannier, E; Naït-Ali, A

    2004-06-01

    The present paper introduces a new perspective on classical ensemble averaging which can be useful to analyse Brainstem Auditory Evoked Potentials (BAEPs). The analysis of the dynamics related to the BAEP is performed directly after its acquisition from the electroencephalogram (EEG). The method primarily consists of dynamically modelling the averaged potential obtained during the acquisition mode. Each averaging of the signal at a given instant is considered as an autoregressive (AR) process. It has been shown that the prediction error power of AR modelling can provide an efficient tool to analyse the BAEPs. It has also been shown that the method is capable of taking the non-stationarities of both the BAEP and the EEG into account. In order to validate our approach, the proposed technique has been implemented for both simulated and real signals. This approach can also be employed in the context of estimating other evoked potentials and shows rich promise for potential clinical applications in the future.

  20. A simple algorithm for averaging spike trains.

    PubMed

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
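
    The pipeline described above (map to functions, average, map back greedily) can be sketched generically as below; the Gaussian smoothing kernel, candidate grid and stopping rule are assumptions for illustration and may differ from the paper's exact choices.

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0.0, 1.0, 500)
    sigma = 0.02                                    # smoothing kernel width (assumed)

    def smooth(spike_times):
        # Map a spike train to a function by summing Gaussian bumps.
        if len(spike_times) == 0:
            return np.zeros_like(t)
        s = np.asarray(spike_times)
        return np.exp(-0.5 * ((t[:, None] - s[None, :]) / sigma) ** 2).sum(axis=1)

    # Synthetic trials: jittered, occasionally missing spikes around three event times.
    trials = [[e + rng.normal(0.0, 0.01) for e in (0.2, 0.5, 0.8) if rng.random() < 0.9]
              for _ in range(30)]
    target = np.mean([smooth(tr) for tr in trials], axis=0)   # the averaged function

    # Greedy mapping of the averaged function back to a spike train.
    central, candidates = [], list(t[::5])
    while True:
        errors = [np.sum((smooth(central + [c]) - target) ** 2) for c in candidates]
        best = int(np.argmin(errors))
        if errors[best] >= np.sum((smooth(central) - target) ** 2):
            break                                   # adding any further spike no longer helps
        central.append(candidates.pop(best))

    print(sorted(round(float(s), 3) for s in central))  # roughly three spikes near 0.2, 0.5, 0.8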