Science.gov

Sample records for estimated average requirement

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
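    As a rough illustration of the ensemble-averaging idea above, the sketch below (Python, all numbers hypothetical) contrasts an equally weighted ensemble mean with a reliability-weighted mean and evaluates the exceedance probability of a 400 mm irrigation threshold. The weighting rule is a simplified stand-in for the full REA procedure, which also weights members by bias relative to observations.

```python
import numpy as np

def exceedance_probability(samples, threshold):
    """Fraction of simulated irrigation water requirements above a threshold."""
    samples = np.asarray(samples)
    return np.mean(samples > threshold)

def rea_like_weights(member_means, reference):
    """Simplified reliability weights: members closer to the ensemble consensus
    receive larger weights (a stand-in for REA's convergence criterion; the full
    method also weights by bias relative to observations)."""
    dist = np.abs(np.asarray(member_means) - reference)
    w = 1.0 / np.maximum(dist, 1e-6)
    return w / w.sum()

# Hypothetical irrigation water requirement fields (mm) from 6 ET models,
# each evaluated at the same 1000 grid cells.
rng = np.random.default_rng(0)
members = rng.normal(loc=[380, 400, 420, 450, 370, 410], scale=30,
                     size=(1000, 6))

equal_avg = members.mean(axis=1)                       # equally weighted ensemble
w = rea_like_weights(members.mean(axis=0), members.mean())
rea_avg = members @ w                                  # reliability-weighted ensemble

for name, est in [("equal", equal_avg), ("REA-like", rea_avg)]:
    print(name, "P(IWR > 400 mm) =", exceedance_probability(est, 400.0))
```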

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  3. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  4. Dynamic consensus estimation of weighted average on directed graphs

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Guo, Yi

    2015-07-01

    Recent applications call for distributed weighted average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final consensus group decision. In this paper, we propose a new dynamic consensus filter design to estimate the weighted average of sensors' inputs in a distributed manner on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works for directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure of the system introduced by this removal, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
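    The record above concerns a proportional-integral consensus filter; the following sketch instead uses a simpler, well-known alternative, ratio (push-sum) consensus, to show how a weighted average of node inputs can be computed distributively on a strongly connected directed graph. The graph, inputs, and weights are illustrative, and the protocol is not the one proposed in the paper.

```python
import numpy as np

def push_sum_weighted_average(adj, inputs, weights, iters=200):
    """Ratio (push-sum) consensus on a directed graph.

    adj[i][j] = 1 if node i sends to node j.  Each node starts with
    numerator w_i * u_i and denominator w_i; at every step it splits both
    values equally among itself and its out-neighbours.  On a strongly
    connected digraph the ratio at every node converges to
    sum_i w_i u_i / sum_i w_i, i.e. the weighted average of the inputs.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    P = adj + np.eye(n)                    # every node also keeps a share
    P = P / P.sum(axis=1, keepdims=True)   # row i: how node i splits its mass
    num = np.asarray(weights, float) * np.asarray(inputs, float)
    den = np.asarray(weights, float).copy()
    for _ in range(iters):
        num = P.T @ num                    # node j collects the shares sent to it
        den = P.T @ den
    return num / den

# Hypothetical 4-node directed ring with one chord (strongly connected).
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 1, 0, 0]])
u = [10.0, 12.0, 9.0, 11.0]        # sensor inputs
w = [1.0, 2.0, 1.0, 4.0]           # accuracy weights
print(push_sum_weighted_average(adj, u, w))   # every entry -> ~10.875
```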

  5. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high-resolution diffusion tensor imaging (DTI) and may therefore help solve the problem of crossing fibers encountered in white matter tractography.
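    As a hedged illustration of why the Rician bias matters, the sketch below fits noisy magnitude measurements to the high-SNR approximation of the Rician mean, E[M] ≈ sqrt(A² + σ²) with A = S0·exp(-b·D), rather than to A directly. It corresponds loosely to the HS scheme mentioned above, not to the MD or MP estimators, and all measurement values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def rician_hs_model(b, s0, d, sigma):
    """High-SNR approximation to the mean of a Rician-distributed magnitude:
    E[M] ~ sqrt(A^2 + sigma^2) with true amplitude A = s0 * exp(-b * d)."""
    a = s0 * np.exp(-b * d)
    return np.sqrt(a**2 + sigma**2)

# Hypothetical diffusion-weighted magnitude measurements (arbitrary units).
b_values = np.array([0., 200., 400., 600., 800., 1000.])   # s/mm^2
signal   = np.array([100., 82., 67.5, 56., 46.5, 39.])
sigma    = 5.0                                              # noise level, assumed known

# Least-squares fit of the noise-corrected model; naive log-linear fitting of
# the raw magnitude would underestimate the diffusion coefficient because the
# Rician bias raises the signal floor at high b-values.
popt, _ = curve_fit(lambda b, s0, d: rician_hs_model(b, s0, d, sigma),
                    b_values, signal, p0=(100.0, 1e-3))
print("S0 = %.1f, ADC = %.2e mm^2/s" % (popt[0], popt[1]))
```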

  6. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  7. Estimating Health Services Requirements

    NASA Technical Reports Server (NTRS)

    Alexander, H. M.

    1985-01-01

    In the computer program NOROCA, population statistics from the National Center for Health Statistics are used with a computational procedure to estimate health service utilization rates, physician demands (by specialty), and hospital bed demands (by type of service). The computational procedure is applicable to a health service area of any size and can even be used to estimate statewide demands for health services.

  8. Doubly robust estimation of the local average treatment effect curve

    PubMed Central

    Ogburn, Elizabeth L.; Rotnitzky, Andrea; Robins, James M.

    2014-01-01

    We consider estimation of the causal effect of a binary treatment on an outcome, conditionally on covariates, from observational studies or natural experiments in which there is a binary instrument for treatment. We describe a doubly robust, locally efficient estimator of the parameters indexing a model for the local average treatment effect conditionally on covariates V when randomization of the instrument is only true conditionally on a high dimensional vector of covariates X, possibly bigger than V. We discuss the surprising result that inference is identical to inference for the parameters of a model for an additive treatment effect on the treated conditionally on V that assumes no treatment–instrument interaction. We illustrate our methods with the estimation of the local average effect of participating in 401(k) retirement programs on savings by using data from the US Census Bureau's 1991 Survey of Income and Program Participation. PMID:25663814

  9. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
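    For reference, the BMA prediction and its variance decomposition referred to above can be written in the standard form below; the BIC-based weights shown are the approximation to the posterior model probabilities discussed in the abstract.

```latex
% BMA ensemble estimate of hydraulic conductivity K given data D and
% parameterization methods M_1,...,M_m (weights approximated via BIC or KIC):
\begin{align*}
  E[K \mid D] &= \sum_{k=1}^{m} \Pr(M_k \mid D)\, E[K \mid D, M_k], \qquad
  \Pr(M_k \mid D) \approx
    \frac{\exp\!\left(-\tfrac{1}{2}\,\Delta \mathrm{BIC}_k\right)}
         {\sum_{l=1}^{m} \exp\!\left(-\tfrac{1}{2}\,\Delta \mathrm{BIC}_l\right)},\\
  \operatorname{Var}[K \mid D] &=
    \underbrace{\sum_{k} \Pr(M_k \mid D)\, \operatorname{Var}[K \mid D, M_k]}_{\text{within-method}}
    + \underbrace{\sum_{k} \Pr(M_k \mid D)\,\bigl(E[K \mid D, M_k] - E[K \mid D]\bigr)^{2}}_{\text{between-method}}.
\end{align*}
```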

  10. Geodesic estimation for large deformation anatomical shape averaging and interpolation.

    PubMed

    Avants, Brian; Gee, James C

    2004-01-01

    The goal of this research is to promote variational methods for anatomical averaging that operate within the space of the underlying image registration problem. This approach is effective when using the large deformation viscous framework, where linear averaging is not valid, or in the elastic case. The theory behind this novel atlas building algorithm is similar to the traditional pairwise registration problem, but with single image forces replaced by average forces. These group forces drive an average transport ordinary differential equation allowing one to estimate the geodesic that moves an image toward the mean shape configuration. This model gives large deformation atlases that are optimal with respect to the shape manifold as defined by the data and the image registration assumptions. We use the techniques in the large deformation context here, but they also pertain to small deformation atlas construction. Furthermore, a natural, inherently inverse consistent image registration is gained for free, as is a tool for constant arc length geodesic shape interpolation. The geodesic atlas creation algorithm is quantitatively compared to the Euclidean anatomical average to elucidate the need for optimized atlases. The procedures generate improved average representations of highly variable anatomy from distinct populations. PMID:15501083

  11. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
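    A minimal sketch of the minimum mean-square-error station weighting described above: given the covariance of station anomalies and their covariance with the true global average, the optimal weights solve a linear system. The EOF machinery and any weight constraints from the paper are omitted, and the covariances below are invented.

```python
import numpy as np

def optimal_station_weights(C_ss, c_sg):
    """Weights minimizing E[(sum_i w_i T_i - Tbar_global)^2].

    C_ss : (n, n) covariance of station temperature anomalies
    c_sg : (n,)   covariance of each station anomaly with the true global mean
    Setting the gradient of the mean-square error to zero gives C_ss w = c_sg.
    """
    return np.linalg.solve(C_ss, c_sg)

def sampling_mse(C_ss, c_sg, var_global, w):
    """Mean-square sampling error of the weighted network estimate."""
    return var_global - 2 * w @ c_sg + w @ C_ss @ w

# Hypothetical 3-station example with decaying inter-station correlation.
C_ss = np.array([[1.0, 0.4, 0.1],
                 [0.4, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])
c_sg = np.array([0.30, 0.35, 0.30])
var_global = 0.20

w_opt = optimal_station_weights(C_ss, c_sg)
w_uni = np.full(3, 1.0 / 3.0)
print("optimal weighting MSE:", sampling_mse(C_ss, c_sg, var_global, w_opt))
print("uniform weighting MSE:", sampling_mse(C_ss, c_sg, var_global, w_uni))
```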

  12. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 5 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  13. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from relative rotations between pairs of images for a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for a global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results has to be performed apart from the lossless projection to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
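    The core averaging step can be illustrated with a weighted geodesic mean of rotations computed in the Lie algebra so(3): residuals are mapped to rotation vectors (log map), averaged with weights, and the mean is updated with the exponential map. This is only a sketch of the averaging idea, not the authors' full graph-based outlier-detection and adjustment pipeline; the rotations and weights below are synthetic.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def weighted_rotation_average(rotations, weights, iters=20):
    """Weighted geodesic mean of rotations via averaging in the Lie algebra so(3).

    rotations : list of scipy Rotation objects
    weights   : nonnegative weights (e.g. confidence in each relative rotation)
    Each iteration maps the residual of every rotation w.r.t. the current mean
    to a rotation vector (log map), takes the weighted mean of these vectors,
    and updates the estimate with the exp map.
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    mean = rotations[0]
    for _ in range(iters):
        residuals = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
        delta = (w[:, None] * residuals).sum(axis=0)
        if np.linalg.norm(delta) < 1e-12:
            break
        mean = mean * R.from_rotvec(delta)
    return mean

# Hypothetical noisy estimates of the same absolute camera rotation.
rng = np.random.default_rng(1)
true = R.from_euler("zyx", [30, 10, -5], degrees=True)
noisy = [true * R.from_rotvec(rng.normal(0, 0.02, 3)) for _ in range(6)]
weights = [1, 1, 1, 1, 1, 0.2]     # e.g. down-weight a less reliable observation
avg = weighted_rotation_average(noisy, weights)
print(avg.as_euler("zyx", degrees=True))
```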

  14. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Estimating storm areal average rainfall intensity in field experiments

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, Christa D.; Wood, Eric F.

    1994-07-01

    Estimates of areal mean precipitation intensity derived from rain gages are commonly used to assess the performance of rainfall radars and satellite rainfall retrieval algorithms. Areal mean precipitation time series collected during short-duration climate field studies are also used as inputs to water and energy balance models which simulate land-atmosphere interactions during the experiments. In two recent field experiments (1987 First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) and the Multisensor Airborne Campaign for Hydrology 1990 (MAC-HYDRO '90)) designed to investigate the climatic signatures of land-surface forcings and to test airborne sensors, rain gages were placed over the watersheds of interest. These gages provide the sole means for estimating storm precipitation over these areas, and the gage densities present during these experiments indicate that there is a large uncertainty in estimating areal mean precipitation intensity for single storm events. Using a theoretical model of time- and area-averaged space-time rainfall and a model rainfall generator, the error structure of areal mean precipitation intensity is studied for storms statistically similar to those observed in the FIFE and MAC-HYDRO field experiments. Comparisons of the error versus gage density trade-off curves to those calculated using the storm observations show that the rainfall simulator can provide good estimates of the expected measurement error given only the expected intensity, coefficient of variation, and rain cell diameter or correlation length scale, and that these errors can quickly become very large (in excess of 20%) for certain storms measured with a network whose size is below a "critical" gage density. Because the mean storm rainfall error is particularly sensitive to the correlation length, it is important that future field experiments include radar and/or dense rain gage networks capable of accurately characterizing the

  16. Rainfall Estimation Over Tropical Oceans. 1; Area Average Rain Rate

    NASA Technical Reports Server (NTRS)

    Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.

    1997-01-01

    Multichannel dual polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve average rain rate over a mesoscale grid box of approx. 300 x 300 sq km area over the TOGA COARE region where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85 GHz brightness temperature, constitutes a key parameter in this method. Then the spectral and polarization information contained in all the channels of the SSM/I is utilized to deduce a second parameter. This is the ratio S/E of the scattering index S and emission index E calculated from the SSM/I data. The rain rate retrieved from this method over the mesoscale area can reproduce the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, monthly total rainfall estimated from this method over that area has an average error of about 15%.

  17. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate methods for the spatial and temporal assessment of noise are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(And), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time. PMID:26093410

  18. Fringe-Orientation Estimation by use of a Gaussian Gradient Filter and Neighboring-Direction Averaging

    NASA Astrophysics Data System (ADS)

    Zhou, Xiang; Baird, John P.; Arnold, John F.

    1999-02-01

    We analyze the effect of image noise on the estimation of fringe orientation in principle and interpret the application of a texture-analysis technique to the problem of estimating fringe orientation in interferograms. The gradient of a Gaussian filter and neighboring-direction averaging are shown to meet the requirements of fringe-orientation estimation by reduction of the effects of low-frequency background and contrast variances as well as high-frequency random image noise. The technique also improves inaccurate orientation estimation at low-modulation points, such as fringe centers and broken fringes. Experiments demonstrate that the scales of the Gaussian gradient filter and the direction averaging should be chosen according to the fringe spacings of the interferograms.

  19. Effect of wind averaging time on wind erosivity estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...

  20. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) in which the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data from IZZI-modified paleointensity experiments on lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier GUI paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and

  1. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  2. Experimental estimation of average fidelity of a Clifford gate on a 7-qubit quantum processor.

    PubMed

    Lu, Dawei; Li, Hang; Trottier, Denis-Alexandre; Li, Jun; Brodutch, Aharon; Krismanich, Anthony P; Ghavami, Ahmad; Dmitrienko, Gary I; Long, Guilu; Baugh, Jonathan; Laflamme, Raymond

    2015-04-10

    One of the major experimental achievements in the past decades is the ability to control quantum systems to high levels of precision. To quantify the level of control we need to characterize the dynamical evolution. Full characterization via quantum process tomography is impractical and often unnecessary. For most practical purposes, it is enough to estimate more general quantities such as the average fidelity. Here we use a unitary 2-design and twirling protocol for efficiently estimating the average fidelity of Clifford gates, to certify a 7-qubit entangling gate in a nuclear magnetic resonance quantum processor. Compared with more than 10^{8} experiments required by full process tomography, we conducted 1656 experiments to satisfy a statistical confidence level of 99%. The average fidelity of this Clifford gate in experiment is 55.1%, and rises to at least 87.5% if the signal's decay due to decoherence is taken into account. The entire protocol of certifying Clifford gates is efficient and scalable, and can easily be extended to any general quantum information processor with minor modifications. PMID:25910102

  3. Estimates of zonally averaged tropical diabatic heating in AMIP GCM simulations. PCMDI report No. 25

    SciTech Connect

    Boyle, J.S.

    1995-07-01

    An understanding of the processes that generate the atmospheric diabatic heating rates is basic to an understanding of the time-averaged general circulation of the atmosphere and also of circulation anomalies. Knowledge of the sources and sinks of atmospheric heating enables a fuller understanding of the nature of the atmospheric circulation. An actual assessment of the diabatic heating rates in the atmosphere is a difficult problem that has been approached in a number of ways. One way is to estimate the total diabatic heating by estimating the individual components associated with the radiative fluxes, the latent heat release, and the sensible heat fluxes. An example of this approach is provided by Newell. Another approach is to estimate the net heating rates from consideration of the balance required of the mass and wind variables as routinely observed and analyzed. This budget computation has been done using the thermodynamic equation and, more recently, by using the vorticity and thermodynamic equations. Schaak and Johnson compute the heating rates through the integration of the isentropic mass continuity equation. The heating estimates arrived at by all these methods are severely handicapped by uncertainties in the observational data and analyses. In addition, the estimates of the individual heating components suffer from an additional source of error introduced by the parameterizations used to approximate these quantities.

  4. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    SciTech Connect

    Verdin, Kristine L.

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.

  5. Double robust estimator of average causal treatment effect for censored medical cost data.

    PubMed

    Wang, Xuan; Beste, Lauren A; Maier, Marissa M; Zhou, Xiao-Hua

    2016-08-15

    In observational studies, estimation of average causal treatment effect on a patient's response should adjust for confounders that are associated with both treatment exposure and response. In addition, the response, such as medical cost, may have incomplete follow-up. In this article, a double robust estimator is proposed for average causal treatment effect for right censored medical cost data. The estimator is double robust in the sense that it remains consistent when either the model for the treatment assignment or the regression model for the response is correctly specified. Double robust estimators increase the likelihood the results will represent a valid inference. Asymptotic normality is obtained for the proposed estimator, and an estimator for the asymptotic variance is also derived. Simulation studies show good finite sample performance of the proposed estimator and a real data analysis using the proposed method is provided as illustration. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26818601
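    For orientation, the uncensored counterpart of such a doubly robust estimator is the familiar augmented inverse-probability-weighted (AIPW) form below, which remains consistent if either the propensity model e(X) or the outcome regressions m_a(X) is correctly specified; the estimator in the record extends this idea to responses with incomplete follow-up.

```latex
% AIPW / doubly robust estimator of the average treatment effect (uncensored case),
% with treatment A in {0,1}, outcome Y, covariates X, propensity e(X) = Pr(A=1|X),
% and outcome regressions m_a(X) = E[Y | A=a, X]:
\[
\widehat{\Delta}_{\mathrm{DR}} \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \left[\frac{A_i\,\bigl(Y_i - m_1(X_i)\bigr)}{e(X_i)} + m_1(X_i)\right]
  \;-\;
  \frac{1}{n}\sum_{i=1}^{n}
  \left[\frac{(1-A_i)\,\bigl(Y_i - m_0(X_i)\bigr)}{1-e(X_i)} + m_0(X_i)\right]
\]
```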

  6. A comparison of spatial averaging and Cadzow's method for array wavenumber estimation

    SciTech Connect

    Harris, D.B.; Clark, G.A.

    1989-10-31

    We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.

  7. Weighted interframe averaging-based channel estimation for orthogonal frequency division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan

    2015-10-01

    Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.

  8. Estimation of the exertion requirements of coal mining work

    SciTech Connect

    Harber, P.; Tamimie, J.; Emory, J.

    1984-02-01

    The work requirements of coal mining work were estimated by studying a group of 12 underground coal miners. A two level (rest, 300 kg X m/min) test was performed to estimate the linear relationship between each subject's heart rate and oxygen consumption. Then, heart rates were recorded during coal mining work with a Holter type recorder. From these data, the distributions of oxygen consumptions during work were estimated, allowing characterization of the range of exertion throughout the work day. The average median estimated oxygen consumption was 3.3 METS, the average 70th percentile was 4.3 METS, and the average 90th percentile was 6.3 METS. These results should be considered when assessing an individual's occupational fitness.

  9. Esophageal pressure as an estimate of average pleural pressure with lung or chest distortion in rats.

    PubMed

    Pecchiari, Matteo; Loring, Stephen H; D'Angelo, Edgardo

    2013-04-01

    Pressure-volume curves of the lungs and chest wall require knowledge of an effective 'average' pleural pressure (Pplav) and are usually estimated using esophageal pressure, as Ples-V and Pwes-V curves. Such estimates could be misleading when Ppl becomes spatially non-uniform with lung lavage or shape distortion of the chest. We therefore measured Ples-V and Pwes-V curves in conditions causing spatial non-uniformity of Ppl in rats. Ples-V curves of normal lungs were unchanged by chest removal. Lung lavage depressed Ples-V but not Pwes-V curves to lower volumes, and chest removal after lavage increased volumes at PL ≥ 15 cmH2O by relieving distortion of the mechanically heterogeneous lungs. Chest wall distortion by ribcage compression or abdominal distension depressed Pwes-V curves and Ples-V curves of normal lungs only at PL ≥ 3 cmH2O. In conclusion, Pes reflects Pplav with normal and mechanically heterogeneous lungs. With chest wall distortion and dependent deformation of the normal lung, changes of Ples-V curves are qualitatively consistent with greater work of inflation. PMID:23416404

  10. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, previous work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Other work, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
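    A commonly cited form of this result, for the scalar-weighted case, computes the average quaternion as the eigenvector associated with the largest eigenvalue of the weighted outer-product matrix of the quaternions; a minimal sketch with hypothetical star-tracker outputs is below.

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted average of unit quaternions (scalar-weighted case).

    The optimal average is the unit eigenvector belonging to the largest
    eigenvalue of M = sum_i w_i q_i q_i^T; this handles the sign ambiguity
    q ~ -q automatically because M is unchanged by flipping any q_i.
    """
    quats = np.asarray(quats, float)           # shape (n, 4), each row unit norm
    w = np.asarray(weights, float)
    M = (w[:, None, None] * quats[:, :, None] * quats[:, None, :]).sum(axis=0)
    eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    return eigvecs[:, -1]                      # eigenvector of the largest eigenvalue

# Hypothetical outputs of three star trackers (x, y, z, w convention), nearly aligned.
q = np.array([[0.0, 0.0,  0.2588,  0.9659],    # ~30 deg about z
              [0.0, 0.0,  0.2672,  0.9636],    # ~31 deg about z
              [0.0, 0.0, -0.2504, -0.9681]])   # ~29 deg about z, opposite sign
print(average_quaternion(q, weights=[1.0, 1.0, 2.0]))
```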

  11. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
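    The underlying arithmetic, average annual base flow divided by drainage area with a unit conversion to inches per year, is shown below with illustrative numbers not taken from the study.

```python
# Recharge approximated as average annual base flow divided by drainage area,
# converted from (ft^3/s per mi^2) to inches per year.  Values are illustrative.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SQFT_PER_SQMI = 5280.0 ** 2

def recharge_inches_per_year(base_flow_cfs, drainage_area_sqmi):
    depth_ft = base_flow_cfs * SECONDS_PER_YEAR / (drainage_area_sqmi * SQFT_PER_SQMI)
    return depth_ft * 12.0

# A 50 mi^2 basin with 30 ft^3/s average annual base flow -> about 8.1 in/yr.
print(recharge_inches_per_year(30.0, 50.0))
```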

  12. Estimation of average treatment effect with incompletely observed longitudinal data: Application to a smoking cessation study

    PubMed Central

    Chen, Hua Yun; Gao, Shasha

    2010-01-01

    We study the problem of estimation and inference on the average treatment effect in a smoking cessation trial where an outcome and some auxiliary information were measured longitudinally, and both were subject to missing values. Dynamic generalized linear mixed effects models linking the outcome, the auxiliary information, and the covariates are proposed. The maximum likelihood approach is applied to the estimation and inference on the model parameters. The average treatment effect is estimated by the G-computation approach, and the sensitivity of the treatment effect estimate to the nonignorable missing data mechanisms is investigated through the local sensitivity analysis approach. The proposed approach can handle missing data that form arbitrary missing patterns over time. We applied the proposed method to the analysis of the smoking cessation trial. PMID:19462416

  13. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  14. Estimating average cellular turnover from 5-bromo-2'-deoxyuridine (BrdU) measurements.

    PubMed Central

    De Boer, Rob J; Mohri, Hiroshi; Ho, David D; Perelson, Alan S

    2003-01-01

    Cellular turnover rates in the immune system can be determined by labelling dividing cells with 5-bromo-2'-deoxyuridine (BrdU) or deuterated glucose ((2)H-glucose). To estimate the turnover rate from such measurements one has to fit a particular mathematical model to the data. The biological assumptions underlying various models developed for this purpose are controversial. Here, we fit a series of different models to BrdU data on CD4(+) T cells from SIV(-) and SIV(+) rhesus macaques. We first show that the parameter estimates obtained using these models depend strongly on the details of the model. To resolve this lack of generality we introduce a new parameter for each model, the 'average turnover rate', defined as the cellular death rate averaged over all subpopulations in the model. We show that very different models yield similar estimates of the average turnover rate, i.e. ca. 1% day(-1) in uninfected monkeys and ca. 2% day(-1) in SIV-infected monkeys. Thus, we show that one can use BrdU data from a possibly heterogeneous population of cells to estimate the average turnover rate of that population in a robust manner. PMID:12737664

  15. Estimation of genetic parameters for average daily gain using models with competition effects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Components of variance for ADG with models including competition effects were estimated from data provided by Pig Improvement Company on 11,235 pigs from 4 selected lines of swine. Fifteen pigs with average age of 71 d were randomly assigned to a pen by line and sex and taken off test after approxi...

  16. How ants use quorum sensing to estimate the average quality of a fluctuating resource

    PubMed Central

    Franks, Nigel R.; Stuttard, Jonathan P.; Doran, Carolina; Esposito, Julian C.; Master, Maximillian C.; Sendova-Franks, Ana B.; Masuda, Naoki; Britton, Nicholas F.

    2015-01-01

    We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures. PMID:26153535

  17. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen and clusters are constructed using an average linkage technique. The average linkage technique requires a distance between clusters, calculated as the average distance between all pairs of points, one from each group. This average distance is not robust when there is an outlier. Therefore, the average distance in average linkage needs to be modified to overcome the problem of outliers. An outlier detection criterion based on MADn is used and the average distance is recalculated without the outliers; the distance in average linkage is then calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the branches constructed. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when an outlier is present; the two techniques perform similarly when there is no outlier.
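    A rough sketch of the modified-distance idea: pairwise inter-cluster distances are screened with an MADn-based outlier rule, and the surviving distances are averaged, i.e. a modified one-step M-estimator replaces the plain mean in average linkage. The cutoff constant and the exact MOM variant used in the cited study may differ from those assumed here.

```python
import numpy as np

def mom_average(values, k=2.24):
    """Modified one-step M-estimator: mean of the values remaining after
    discarding points flagged as outliers by the MADn criterion.
    The cutoff constant k = 2.24 follows a common choice; the value used
    in the cited study may differ."""
    values = np.asarray(values, float)
    med = np.median(values)
    madn = 1.4826 * np.median(np.abs(values - med))
    if madn == 0:
        return med
    keep = np.abs(values - med) <= k * madn
    return values[keep].mean()

def modified_average_linkage(cluster_a, cluster_b):
    """Average-linkage distance between two clusters, made robust by replacing
    the plain mean of all pairwise Euclidean distances with the MOM estimate."""
    a = np.asarray(cluster_a, float)
    b = np.asarray(cluster_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).ravel()
    return mom_average(d)

# Illustrative clusters in 2-D; the far point in cluster_b acts as an outlier.
cluster_a = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]]
cluster_b = [[2.0, 2.0], [2.1, 1.9], [12.0, 12.0]]
print(modified_average_linkage(cluster_a, cluster_b))
```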

  18. Optimal estimators and asymptotic variances for nonequilibrium path-ensemble averages

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Chodera, John D.

    2009-10-01

    Existing optimal estimators of nonequilibrium path-ensemble averages are shown to fall within the framework of extended bridge sampling. Using this framework, we derive a general minimal-variance estimator that can combine nonequilibrium trajectory data sampled from multiple path-ensembles to estimate arbitrary functions of nonequilibrium expectations. The framework is also applied to obtain asymptotic variance estimates, which are a useful measure of statistical uncertainty. In particular, we develop asymptotic variance estimates pertaining to Jarzynski's equality for free energies and the Hummer-Szabo expressions for the potential of mean force, calculated from uni- or bidirectional path samples. These estimators are demonstrated on a model single-molecule pulling experiment. In these simulations, the asymptotic variance expression is found to accurately characterize the confidence intervals around estimators when the bias is small. Hence, the confidence intervals are inaccurately described for unidirectional estimates with large bias, but for this model it largely reflects the true error in a bidirectional estimator derived by Minh and Adib.
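    For context, the simplest unidirectional estimator referred to above follows directly from Jarzynski's equality; its bias at small sample sizes is part of what motivates the bidirectional estimators and asymptotic-variance analysis in the record.

```latex
% Jarzynski's equality and its unidirectional estimator from N work samples W_i
% at inverse temperature beta:
\[
e^{-\beta \Delta F} = \left\langle e^{-\beta W} \right\rangle
\qquad\Longrightarrow\qquad
\widehat{\Delta F} = -\frac{1}{\beta}\,
\ln\!\left(\frac{1}{N}\sum_{i=1}^{N} e^{-\beta W_i}\right)
\]
```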

  19. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.

  20. Inverse groundwater modeling for hydraulic conductivity estimation using Bayesian model averaging and variance window

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Li, Xiaobao

    2008-09-01

    This study proposes a Bayesian model averaging (BMA) method to address parameter estimation uncertainty arising from nonuniqueness in parameterization methods. BMA is able to incorporate multiple parameterization methods for prediction through the law of total probability and to obtain an ensemble average of hydraulic conductivity estimates. Two major issues in applying BMA to hydraulic conductivity estimation are discussed. The first problem is using Occam's window in usual BMA applications to measure approximated posterior model probabilities. Occam's window only accepts models in a very narrow range, tending to single out the best method and discard other good methods. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the Kashyap information criterion (KIC) in the approximated posterior model probabilities, which tends to prefer highly uncertain parameterization methods by considering the Fisher information matrix. With sufficient amounts of observation data, the Bayesian information criterion (BIC) is a good approximation and is able to avoid controversial results from using KIC. This study adopts multiple generalized parameterization (GP) methods such as the BMA models to estimate spatially correlated hydraulic conductivity. Numerical examples illustrate the issues of using KIC and Occam's window and show the advantages of using BIC and the variance window in BMA application. Finally, we apply BMA to the hydraulic conductivity estimation of the "1500-foot" sand in East Baton Rouge Parish, Louisiana.

  1. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A38-25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  2. Estimation of annual average daily traffic for off-system roads in Florida. Final report

    SciTech Connect

    Shen, L.D.; Zhao, F.; Ospina, D.I.

    1999-07-28

    Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination of roadway geometry, congestion management, pavement design, safety considerations, etc. AADT is also used to estimate statewide vehicle miles traveled on all roads and is used by local governments and environmental protection agencies to determine compliance with the 1990 Clean Air Act Amendment. Additionally, AADT is reported annually by the Florida Department of Transportation (FDOT) to the Federal Highway Administration. In the past, considerable effort has been made in obtaining traffic counts to estimate AADT on state roads. However, traffic counts are often not available on off-system roads, and less attention has been paid to the estimation of AADT in the absence of counts. Current estimates rely on comparisons with roads that are subjectively considered to be similar. Such comparisons are inherently subject to large errors and also may not be repeated often enough to remain current. Therefore, a better method is needed for estimating AADT for off-system roads in Florida. This study investigates the possibility of establishing one or more models for estimating AADT for off-system roads in Florida.

  3. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question, temporal sampling, we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ± 20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.

  4. Model-averaged benchmark concentration estimates for continuous response data arising from epidemiological studies

    SciTech Connect

    Noble, R.B.; Bailer, A.J.; Park, R.

    2009-04-15

    Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints are all providing relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-average BMC captures this uncertainty, and provides a useful strategy for addressing model uncertainty.
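    The weighted-average step can be sketched as follows; the abstract does not state which weighting scheme was used, so Akaike weights and a delta-AIC filtering window stand in here as assumptions.

    ```python
    import numpy as np

    def model_averaged_bmc(bmcs, aics, aic_window=10.0):
        """Weighted-average BMC sketch. bmcs: model-specific benchmark
        concentrations; aics: fit criteria for the same models. Models far
        from the best fit (delta AIC > aic_window) are filtered out, mimicking
        the filtering step described above; Akaike weights are an assumed
        weighting scheme, not necessarily the authors'."""
        bmcs, aics = np.asarray(bmcs, float), np.asarray(aics, float)
        delta = aics - aics.min()
        keep = delta <= aic_window
        w = np.exp(-0.5 * delta[keep])
        w /= w.sum()
        return float(np.sum(w * bmcs[keep]))

    # Example: three surviving models whose BMCs differ by a factor of ~4
    print(model_averaged_bmc([1.2, 2.5, 4.8], [100.0, 101.5, 103.0]))
    ```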

  5. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528
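    The core averaging step, combining effect estimates across candidate PS models with data-driven weights, can be sketched as below. The posterior model probabilities are taken as given, and the variance combination (within-model plus between-model) is a standard model-averaging formula used here as an illustration, not the authors' exact estimator.

    ```python
    import numpy as np

    def bma_causal_effect(effects, variances, post_probs):
        """Model-averaged treatment effect sketch: 'effects' and 'variances'
        are estimates under each candidate propensity-score model; 'post_probs'
        are posterior model probabilities serving as the averaging weights."""
        e = np.asarray(effects, float)
        v = np.asarray(variances, float)
        w = np.asarray(post_probs, float)
        w = w / w.sum()
        avg = np.sum(w * e)
        # total variance = within-model + between-model components
        var = np.sum(w * (v + (e - avg) ** 2))
        return avg, var
    ```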

  6. Performance and production requirements for the optical components in a high-average-power laser system

    SciTech Connect

    Chow, R.; Doss, F.W.; Taylor, J.R.; Wong, J.N.

    1999-07-02

    Optical components needed for high-average-power lasers, such as those developed for Atomic Vapor Laser Isotope Separation (AVLIS), require high levels of performance and reliability. Over the past two decades, optical component requirements for this purpose have been optimized and performance and reliability have been demonstrated. Many of the optical components that are exposed to the high power laser light affect the quality of the beam as it is transported through the system. The specifications for these optics are described including a few parameters not previously reported and some component manufacturing and testing experience. Key words: High-average-power laser, coating efficiency, absorption, optical components

  7. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    SciTech Connect

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  8. An Estimate of the Average Number of Recessive Lethal Mutations Carried by Humans

    PubMed Central

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-01-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes. PMID:25697177

  9. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There are a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g. pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration), or are “too dangerous, dull or dirty” for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV systems' total ownership cost, including hardware components, software design, and operations. The challenge of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element is discussed. The autonomous operation of UAVs is especially challenging from a software perspective.
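    A cost estimating relationship of the kind mentioned above is typically a power law fitted in log space. The sketch below fits a hypothetical weight-driven CER; the driver, data values, and functional form are illustrative and are not taken from the paper.

    ```python
    import numpy as np

    def fit_power_law_cer(driver, cost):
        """Fit a hypothetical CER of the form cost = a * driver^b by ordinary
        least squares in log space. 'driver' could be, e.g., UAV empty weight;
        both the driver choice and the data below are illustrative."""
        x, y = np.log(np.asarray(driver, float)), np.log(np.asarray(cost, float))
        b, log_a = np.polyfit(x, y, 1)     # slope, intercept in log space
        return np.exp(log_a), b

    a, b = fit_power_law_cer([50, 150, 600, 1200], [0.8, 2.1, 6.5, 11.0])
    print(f"cost ~ {a:.3f} * weight^{b:.2f}")
    ```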

  10. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
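    The weight calculation shared by all four criteria can be written in a few lines; the sketch below shows why inflated criterion differences (from ignoring error correlation) push essentially all weight onto the best model, and how smaller deltas spread the weights. The numerical values are illustrative only.

    ```python
    import numpy as np

    def averaging_weights(ic_values):
        """Model averaging weights from an information criterion (AIC, AICc,
        BIC, or KIC): w_k proportional to exp(-0.5 * delta_k), where delta_k
        is the criterion value minus the minimum over the candidate models."""
        ic = np.asarray(ic_values, float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # With the raw measurement-error covariance the deltas tend to be huge,
    # so one model gets ~100% weight; accounting for correlated total errors
    # shrinks the deltas and spreads the weights (illustrative numbers).
    print(averaging_weights([250.0, 310.0, 400.0]))   # near-degenerate weights
    print(averaging_weights([250.0, 252.5, 254.0]))   # more balanced weights
    ```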

  11. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  12. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    PubMed

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  13. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
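    For reference, the underlying Hargreaves-Samani form that the new model builds on can be sketched as below; the coefficient values are the commonly quoted defaults, and the paper's actual modification is not reproduced here.

    ```python
    import numpy as np

    def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
        """Classic HS estimate of global solar radiation (same units as ra,
        e.g. MJ m^-2 day^-1): Rs = k_rs * sqrt(Tmax - Tmin) * Ra, where Ra is
        the extraterrestrial radiation and k_rs an empirical coefficient
        (~0.16 for interior sites, ~0.19 for coastal sites)."""
        return k_rs * np.sqrt(np.asarray(t_max) - np.asarray(t_min)) * np.asarray(ra)

    print(hargreaves_samani(30.0, 18.0, 40.0))   # ~22 MJ m^-2 day^-1 for a 12 degC diurnal range
    ```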

  14. [Estimation of average traffic emission factor based on synchronized incremental traffic flow and air pollutant concentration].

    PubMed

    Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng

    2014-04-01

    On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and concentration of air pollutants in the morning rush hour period, while meteorological conditions and background air pollution concentrations remain relatively stable, the relationship between the increase of traffic and the increase of air pollution concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data and carbon monoxide (CO) concentration were collected to estimate average vehicle emission factors of CO. The results were compared with simulated emission factors from the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and COPERT4 in August were 2.0 g/km and 1.2 g/km, respectively, and in December were 5.5 g/km and 5.2 g/km, respectively. The emission factors from the proposed approach and COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors. PMID:24946571
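    A minimal sketch of the inversion idea, assuming a ground-level infinite line source with total reflection at the surface, is given below; the exact transformed model and dispersion parameterization used in the paper are not specified in the abstract.

    ```python
    import numpy as np

    def average_emission_factor(delta_c, delta_flow_veh_per_h, u, sigma_z):
        """Invert a ground-level infinite line source Gaussian model for the
        fleet-average emission factor (g per vehicle per km).

        delta_c              : rush-hour increment of roadside CO (g m^-3)
        delta_flow_veh_per_h : matching increment of traffic flow (veh h^-1)
        u                    : wind speed component perpendicular to the road (m s^-1)
        sigma_z              : vertical dispersion parameter at the receptor (m)

        The factor of 2 accounts for ground reflection; treat this as an
        illustrative inversion rather than the paper's transformed model.
        """
        delta_q = delta_c * np.sqrt(2.0 * np.pi) * sigma_z * u / 2.0   # g m^-1 s^-1
        veh_per_s = delta_flow_veh_per_h / 3600.0
        return delta_q / veh_per_s * 1000.0                            # g veh^-1 km^-1
    ```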

  15. Estimating ensemble average power delivered by a piezoelectric patch actuator to a non-deterministic subsystem

    NASA Astrophysics Data System (ADS)

    Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.

    2014-02-01

    Engineering systems such as aircraft, ships, and automobiles are considered built-up structures. Dynamically, they are thought of as being fabricated from many components that are classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of the DS is deterministic in nature and analysed using deterministic modelling methods such as the finite element (FE) method. The response of the Non-DS is statistical in nature and estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power balance equation, in which any external input to the subsystem must be represented in terms of power. Often, the input force is taken as a point force, and the ensemble average power delivered by a point force is already well established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. In order to be able to apply the SEA method to input moments, a mathematical representation of the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply-supported plate with an attached PZT patch is taken as a benchmark model. An analytical solution to estimate the average power is derived using the mobility approach. The ensemble average of the power given by the PZT patch actuator to the benchmark model when subjected to structural uncertainties is also simulated using the Lagrangian method and FEA software. The analytical estimation is compared with the Lagrangian model and FE method for validation. The effects of size and location of the PZT actuators on the power delivered to the plate are later investigated.

  16. Estimates of average annual tributary inflow to the lower Colorado River, Hoover Dam to Mexico

    USGS Publications Warehouse

    Owen-Joyce, Sandra J.

    1987-01-01

    Estimates of tributary inflow by basin or area and by surface water or groundwater are presented in this report and itemized by subreaches in tabular form. Total estimated average annual tributary inflow to the Colorado River between Hoover Dam and Mexico, excluding the measured tributaries, is 96,000 acre-ft or about 1% of the 7.5 million acre-ft/yr of Colorado River water apportioned to the States in the lower Colorado River basin. About 62% of the tributary inflow originates in Arizona, 30% in California, and 8% in Nevada. Tributary inflow is a small component in the water budget for the river. Most of the quantities of unmeasured tributary inflow were estimated in previous studies and were based on mean annual precipitation for 1931-60. Because mean annual precipitation for 1951-80 did not differ significantly from that of 1931-60, these tributary inflow estimates are assumed to be valid for use in 1984. Measured average annual runoff per unit drainage area on the Bill Williams River has remained the same. Surface water inflow from unmeasured tributaries is infrequent and is not captured in surface reservoirs in any of the States; it flows to the Colorado River gaging stations. Average annual runoff can be used in a water budget, although in wet years runoff may be large enough to affect the calculation of consumptive use and may need to be estimated from hydrographs. Estimates of groundwater inflow to the Colorado River valley are based on groundwater recharge estimates in the bordering areas, which have not significantly changed through time. In most areas adjacent to the Colorado River valley, groundwater pumpage is small and pumping has not significantly affected the quantity of groundwater discharged to the Colorado River valley. In some areas where groundwater pumpage exceeds the quantity of groundwater discharge and water levels have declined, the quantity of discharge probably has decreased and groundwater inflow to the Colorado

  17. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on a horizontal surface (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four different locations in the United States (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA). The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the parameters to the predictions were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e. MABE and RMSE) were less than 20%. The approach we propose here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface for different locations, with the use of readily available data (i.e. latitude and longitude of the location) as inputs.

  18. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985

  19. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  20. Calculation of weighted averages approach for the estimation of Ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
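    The weighted-averages calculation itself is simple; the sketch below computes abundance-weighted optima along a single pollution gradient and rescales them to the 0-10 range. How the six chemical constituents are combined into that gradient is an assumption here, not taken from the paper.

    ```python
    import numpy as np

    def weighted_average_tolerance(abundance, site_scores):
        """Weighted-averages sketch: a taxon's tolerance value is its
        abundance-weighted mean position along a pollution gradient
        (site_scores could be a composite of conductivity, DO, BOD, ammonia,
        nitrate, and orthophosphate), rescaled to the 0-10 Ping scale."""
        abundance = np.asarray(abundance, float)       # taxa x sites
        site_scores = np.asarray(site_scores, float)   # one score per site
        optima = abundance @ site_scores / abundance.sum(axis=1)
        lo, hi = optima.min(), optima.max()
        return 10.0 * (optima - lo) / (hi - lo)
    ```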

  1. Planning and Estimation of Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D

  2. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
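    The inversion reduces to a minimum-length solution of the linear system A m = d; numpy's lstsq returns exactly that (the minimum-norm least-squares solution), as the toy example below illustrates with an assumed three-increment averaging kernel.

    ```python
    import numpy as np

    def invert_tooth_profile(A, d):
        """Minimum-length solution of A m = d: A encodes the temporal and
        spatial averaging of amelogenesis plus laboratory sampling, d is the
        measured (time-averaged) isotope profile, and m is the reconstructed
        input signal. np.linalg.lstsq returns the minimum-norm least-squares
        solution, i.e. the 'minimum length solution' named above."""
        m, *_ = np.linalg.lstsq(A, d, rcond=None)
        return m

    # Toy example: each measurement averages three growth increments,
    # blurring a step change in diet (assumed averaging kernel, for illustration).
    true_m = np.array([0., 0., 0., 1., 1., 1., 1.])
    A = np.zeros((5, 7))
    for i in range(5):
        A[i, i:i + 3] = 1 / 3
    d = A @ true_m
    print(invert_tooth_profile(A, d).round(2))
    ```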

  3. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of six types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  4. Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.

    2015-05-01

    A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over some period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors causing shifts between the images are: atmospheric turbulence, causing the images to move randomly and quickly, plus changes in the average refractive index gradient along the path which cause the images to move vertically, more slowly and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged Cn2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong turbulence paths where traditional irradiance based techniques suffer from saturation effects.

  5. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition.

    PubMed

    Taylor, Brian A; Hwang, Ken-Pin; Hazle, John D; Stafford, R Jason

    2009-03-01

    The authors investigated the performance of the iterative Steiglitz-McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer-Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥5 for echo train lengths (ETLs) ≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo

  6. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition

    PubMed Central

    Taylor, Brian A.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason

    2009-01-01

    The authors investigated the performance of the iterative Steiglitz–McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer–Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR)≥5 for echo train lengths (ETLs)≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and∕or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo

  7. A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra

    NASA Astrophysics Data System (ADS)

    Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.

    2015-04-01

    A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the “Windowed, MuLTiple-Peak, averaged-spectrum” or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method, which employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure, which is based upon 6366 modes that we computed using the WMLTP method on the 66 day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle 24 during mid-2010.

  8. Radiometric Approach for Estimating Relative Changes in Intra-Glacier Average Temperature

    NASA Astrophysics Data System (ADS)

    Jezek, K. C.; Johnson, J.; Aksoy, M.

    2012-12-01

    NASA's IceBridge Project uses a suite of airborne instruments to characterize most of the important variables necessary to understand current ice sheet behavior and to predict future changes in ice sheet volume. Derived geophysical quantities include: ice sheet surface elevation; ice sheet thickness; surface accumulation rate; internal layer stratigraphy; ocean bathymetry; basal geology. At present, internal ice sheet temperature is absent from the parameters list, yet temperature is a primary factor in determining the ease at which ice deforms internally and also the rate at which the ice flows across the base. In this paper, we present calculations to show that radiometry may provide clues to relative and perhaps absolute variations in ice sheet internal temperatures. We assume the Debye dielectric dispersion model driven by temperatures estimated using the Robin model to compute radio frequency loss through the ice. We discretely layer the ice sheet to compute local emission, estimate interference effects and also take into account reflectivity at the surface and the base of the ice sheet. At this stage, we ignore scattering in the firn and we also ignore higher frequency dielectric dispersions along with direct current resistivities. We find some sensitivity between the depth-integrated brightness temperature and average internal temperature depending on the ice thickness and surface accumulation rate. Further, we observe that changing from a frozen to a water based ice sheet alters the measured brightness temperature again to a degree depending on the modeled ice sheet configuration. We go on to present SMOS satellite data acquired over Lake Vostok, Antarctica. The SMOS data suggest a relationship between relatively cool brightness temperatures and the location of the lake. We conclude with comments concerning the practicality and advantage of adding radiometry to the IceBridge instrument suite.

  9. Estimate of effective recombination rate and average selection coefficient for HIV in chronic infection

    PubMed Central

    Batorsky, Rebecca; Kearney, Mary F.; Palmer, Sarah E.; Maldarelli, Frank; Rouzine, Igor M.; Coffin, John M.

    2011-01-01

    HIV adaptation to a host in chronic infection is simulated by means of a Monte-Carlo algorithm that includes the evolutionary factors of mutation, positive selection with varying strength among sites, random genetic drift, linkage, and recombination. By comparing two sensitive measures of linkage disequilibrium (LD) and the number of diverse sites measured in simulation to patient data from one-time samples of pol gene obtained by single-genome sequencing from representative untreated patients, we estimate the effective recombination rate and the average selection coefficient to be on the order of 1% per genome per generation (10−5 per base per generation) and 0.5%, respectively. The adaptation rate is twofold higher and fourfold lower than predicted in the absence of recombination and in the limit of very frequent recombination, respectively. The level of LD and the number of diverse sites observed in data also range between the values predicted in simulation for these two limiting cases. These results demonstrate the critical importance of finite population size, linkage, and recombination in HIV evolution. PMID:21436045

  10. 31 CFR 205.23 - What requirements apply to estimates?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What requirements apply to estimates... Treasury-State Agreement § 205.23 What requirements apply to estimates? The following requirements apply when we and a State negotiate a mutually agreed upon funds transfer procedure based on an estimate...

  11. Homology-based prediction of interactions between proteins using Averaged One-Dependence Estimators

    PubMed Central

    2014-01-01

    Background: Identification of protein-protein interactions (PPIs) is essential for a better understanding of biological processes, pathways and functions. However, experimental identification of the complete set of PPIs in a cell/organism (“an interactome”) is still a difficult task. To circumvent limitations of current high-throughput experimental techniques, it is necessary to develop high-performance computational methods for predicting PPIs. Results: In this article, we propose a new computational method to predict interaction between a given pair of protein sequences using features derived from known homologous PPIs. The proposed method is capable of predicting interaction between two proteins (of unknown structure) using Averaged One-Dependence Estimators (AODE) and three features calculated for the protein pair: (a) sequence similarities to a known interacting protein pair (FSeq), (b) statistical propensities of domain pairs observed in interacting proteins (FDom) and (c) a sum of edge weights along the shortest path between homologous proteins in a PPI network (FNet). Feature vectors were defined to lie in a half-space of the symmetrical high-dimensional feature space to make them independent of the protein order. The predictability of the method was assessed by a 10-fold cross validation on a recently created human PPI dataset with randomly sampled negative data, and the best model achieved an Area Under the Curve of 0.79 (pAUC0.5% = 0.16). In addition, the AODE trained on all three features (named PSOPIA) showed better prediction performance on a separate independent data set than a recently reported homology-based method. Conclusions: Our results suggest that FNet, a feature representing proximity in a known PPI network between two proteins that are homologous to a target protein pair, contributes to the prediction of whether the target proteins interact or not. PSOPIA will help identify novel PPIs and estimate complete PPI networks. The method

  12. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  13. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  14. Sharp spherically averaged Strichartz estimates for the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Guo, Zihua

    2016-05-01

    We prove generalized Strichartz estimates with weaker angular integrability for the Schrödinger equation. Our estimates are sharp except some endpoints. Then we apply these new estimates to prove scattering for the 3D Zakharov system with small data in the energy space with low angular regularity. Our results improve the results obtained recently in Guo Z et al (2014 Generalized Strichartz estimates and scattering for 3D Zakharov system Commun. Math. Phys. 331 239–59).

  15. The Average Distance between Item Values: A Novel Approach for Estimating Internal Consistency

    ERIC Educational Resources Information Center

    Sturman, Edward D.; Cribbie, Robert A.; Flett, Gordon L.

    2009-01-01

    This article presents a method for assessing the internal consistency of scales that works equally well with short and long scales, namely, the average proportional distance. The method provides information on the average distance between item scores for a particular scale. In this article, we sought to demonstrate how this relatively simple…
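    One plausible reading of the average-distance idea can be sketched as follows; it averages absolute differences between item scores within each respondent and expresses them as a proportion of the response range. This is an illustration of the concept, not the authors' exact formula.

    ```python
    import numpy as np

    def average_proportional_distance(responses, scale_min=1, scale_max=5):
        """Sketch: for each respondent, average the absolute difference between
        scores on every pair of items, express it as a proportion of the
        possible range, and average over respondents. Lower values indicate
        more internally consistent responding."""
        x = np.asarray(responses, float)            # respondents x items
        n_items = x.shape[1]
        i, j = np.triu_indices(n_items, k=1)
        pair_dist = np.abs(x[:, i] - x[:, j]).mean(axis=1)
        return (pair_dist / (scale_max - scale_min)).mean()

    # Example: 4 respondents answering 3 Likert items
    print(average_proportional_distance([[4, 5, 4], [2, 2, 3], [5, 5, 5], [1, 2, 1]]))
    ```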

  16. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2012) (a)...

  17. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2012) (a)...

  18. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (FEB 2012) (a)...

  19. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2006) (a)...

  20. Estimates of Adequate School Spending by State Based on National Average Service Levels.

    ERIC Educational Resources Information Center

    Miner, Jerry

    1983-01-01

    Proposes a method for estimating expenditures per student needed to provide educational adequacy in each state. Illustrates the method using U.S., Arkansas, New York, Texas, and Washington State data, covering instruction, special needs, operations and maintenance, administration, and other costs. Estimates ratios of "adequate" to actual spending…

  1. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
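    The sensitivity to wind averaging follows from the nonlinearity of the mass transfer velocity: for a quadratic k(U), the average of k over instantaneous winds exceeds k evaluated at the average wind. The sketch below demonstrates this with a Wanninkhof-type parameterization and synthetic winds; the Schmidt number and wind distribution are assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def transfer_velocity(u10):
        """Wanninkhof-type quadratic mass transfer velocity (cm h^-1):
        k = 0.31 * u10^2 * (Sc/660)^-0.5, with an assumed DMS Schmidt
        number of ~720."""
        return 0.31 * u10**2 * (720.0 / 660.0) ** -0.5

    # Synthetic 20-minute winds over a month vs. their monthly average
    u_inst = rng.weibull(2.0, size=30 * 72) * 8.0           # m s^-1
    k_from_instantaneous = transfer_velocity(u_inst).mean()
    k_from_monthly_mean = transfer_velocity(u_inst.mean())
    print(k_from_instantaneous / k_from_monthly_mean)       # > 1: averaged winds underestimate the flux
    ```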

  2. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGESBeta

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  3. Average fetal depth in utero: data for estimation of fetal absorbed radiation dose

    SciTech Connect

    Ragozzino, M.W.; Breckle, R.; Hill, L.M.; Gray, J.E.

    1986-02-01

    To estimate fetal absorbed dose from radiographic examinations, the depth from the anterior maternal surface to the midline of the fetal skull and abdomen was measured by ultrasound in 97 pregnant women. The relationships between fetal depth, fetal presentation, and maternal parameters of height, weight, anteroposterior (AP) thickness, gestational age, placental location, and bladder volume were analyzed. Maternal AP thickness (MAP) can be estimated from gestational age, maternal height, and maternal weight. Fetal midskull and abdominal depths were nearly equal. Fetal depth normalized to MAP was independent or nearly independent of maternal parameters and fetal presentation. These data enable a reasonable estimation of absorbed dose to fetal brain, abdomen, and whole body.

  4. Estimation of the average surface heat flux over an inhomogeneous terrain from the vertical velocity variance

    NASA Technical Reports Server (NTRS)

    Eilts, M. D.; Sundara-Rajan, A.; Evans, R. J.

    1987-01-01

    An indirect method of estimating the surface heat flux from observations of vertical velocity variance at the lower mid-levels of the convective atmospheric boundary layer is described. Comparison of surface heat flux estimates with those from boundary-layer heating rates is good, and this method seems to be especially suitable for inhomogeneous terrain for which the surface-layer profile method cannot be used.

  5. Another Failure to Replicate Lynn's Estimate of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    In his comment on our literature review of data on the performance of sub-Saharan Africans on Raven's Progressive Matrices, Lynn (this issue) criticized our selection of samples of primary and secondary school students. On the basis of the samples he deemed representative, Lynn concluded that the average IQ of sub-Saharan Africans stands at 67…

  6. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...

  7. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  8. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average method applied to the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively using a catheter. C-SBP was calculated with an N/6 moving average method from the scaled right brachial artery pressure waveforms acquired with the VaSera VS-1500. This estimated C-SBP was compared with the invasively measured C-SBP within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), invasively measured C-SBP was significantly lower than right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years old. However, the C-SBP estimated with the N/6 moving average method from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not differ significantly (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method, applied to the non-invasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP, was an accurate, convenient and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which is highly feasible in routine practice. PMID:25693855
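
    The core of the N-point moving average approach is a simple low-pass filter whose window length is a fixed fraction of the acquisition sampling frequency (here N/6, where N is the sampling frequency). A minimal sketch of that filtering step follows; the function name, the synthetic input, and the use of the smoothed peak as the C-SBP estimate are illustrative assumptions, not the device's exact processing.

        import numpy as np

        def estimate_central_sbp(brachial_waveform, sampling_hz, divisor=6):
            # Moving-average window of N/divisor points, where N is the sampling frequency.
            window = max(1, int(round(sampling_hz / divisor)))
            kernel = np.ones(window) / window
            smoothed = np.convolve(brachial_waveform, kernel, mode="same")
            # The peak of the smoothed, cuff-calibrated waveform is taken as the C-SBP estimate.
            return smoothed.max()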

  9. Does the orbit-averaged theory require a scale separation between periodic orbit size and perturbation correlation length?

    SciTech Connect

    Zhang, Wenlu; Lin, Zhihong

    2013-10-15

    Using the canonical perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions and verifies the widely accepted notion that orbit averaging effects greatly reduce the microturbulent transport of energetic particles in a tokamak. Therefore, a recent claim [Hauff and Jenko, Phys. Rev. Lett. 102, 075004 (2009); Jenko et al., ibid. 107, 239502 (2011)] stating that the orbit-averaged theory requires a scale separation between equilibrium orbit size and perturbation correlation length is erroneous.

  10. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  11. The effect of antagonistic pleiotropy on the estimation of the average coefficient of dominance of deleterious mutations.

    PubMed

    Fernández, B; García-Dorado, A; Caballero, A

    2005-12-01

    We investigate the impact of antagonistic pleiotropy on the most widely used methods of estimation of the average coefficient of dominance of deleterious mutations from segregating populations. A proportion of the deleterious mutations affecting a given studied fitness component are assumed to have an advantageous effect on another one, generating overdominance on global fitness. Using diffusion approximations and transition matrix methods, we obtain the distribution of gene frequencies for nonpleiotropic and pleiotropic mutations in populations at the mutation-selection-drift balance. From these distributions we build homozygous and heterozygous chromosomes and assess the behavior of the estimators of dominance. A very small number of deleterious mutations with antagonistic pleiotropy produces substantial increases on the estimate of the average degree of dominance of mutations affecting the fitness component under study. For example, estimates are increased three- to fivefold when 2% of segregating loci are over-dominant for fitness. In contrast, strengthening pleiotropy, where pleiotropic effects are assumed to be also deleterious, has little effect on the estimates of the average degree of dominance, supporting previous results. The antagonistic pleiotropy model considered, applied under mutational parameters described in the literature, produces patterns for the distribution of chromosomal viabilities, levels of genetic variance, and homozygous mutation load generally consistent with those observed empirically for viability in Drosophila melanogaster. PMID:16118193

  12. Areally averaged estimates of surface heat flux from ARM field studies

    SciTech Connect

    Coulter, R.L.; Martin, T.J.; Cook, D.R.

    1993-08-01

    The determination of areally averaged surface fluxes is a problem of fundamental interest to the Atmospheric Radiation Measurement (ARM) program. The Cloud And Radiation Testbed (CART) sites central to the ARM program will provide high-quality data for input to and verification of General Circulation Models (GCMs). The extension of several point measurements of surface fluxes within the heterogeneous CART sites to an accurate representation of the areally averaged surface fluxes is not straightforward. Two field studies designed to investigate these problems, implemented by ARM science team members, took place near Boardman, Oregon, during June of 1991 and 1992. The site was chosen to provide strong contrasts in surface moisture while minimizing the differences in topography. The region consists of a substantial dry steppe (desert) upwind of an extensive area of heavily irrigated farm land, 15 km in width and divided into 800-m-diameter circular fields in a close packed array, in which wheat, alfalfa, corn, or potatoes were grown. This region provides marked contrasts, not only on the scale of farm-desert (10--20 km) but also within the farm (0.1--1 km), because different crops transpire at different rates, and the pivoting irrigation arms provide an ever-changing pattern of heavy surface moisture throughout the farm area. This paper primarily discusses results from the 1992 field study.

  13. Estimation of heat load in waste tanks using average vapor space temperatures

    SciTech Connect

    Crowe, R.D.; Kummerer, M.; Postma, A.K.

    1993-12-01

    This report describes a method for estimating the total heat load in a high-level waste tank with passive ventilation. This method relates the total heat load in the tank to the vapor space temperature and the depth of waste in the tank: Q_total = C_f (T_vapor_space - T_air), where C_f = conversion factor = (R_o * k_soil * Area)/(z_tank - z_surface); R_o = ratio of total heat load to heat out the top of the tank (a function of waste height); Area = cross-sectional area of the tank; k_soil = thermal conductivity of the soil; (z_tank - z_surface) = effective depth of soil covering the top of the tank; and (T_vapor_space - T_air) = mean temperature difference between the vapor space and the ambient air at the surface. Three terms -- depth, area and ratio -- can be developed from geometrical considerations. The temperature difference is measured for each individual tank. The remaining term, the thermal conductivity, is estimated from the time-dependent component of the temperature signals coming from the periodic oscillations in the vapor space temperatures. Finally, using this equation, the total heat load for each of the ferrocyanide Watch List tanks is estimated. This provides a consistent way to rank ferrocyanide tanks according to heat load.
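
    Because every term except the soil thermal conductivity follows from tank geometry and routine temperature measurements, the relation reduces to a short calculation. The sketch below evaluates it directly; the numerical inputs are hypothetical placeholders, not values from the report.

        import math

        def tank_heat_load(t_vapor_space, t_air, r_o, k_soil, tank_diameter, soil_depth):
            # Q_total = C_f * (T_vapor_space - T_air), with
            # C_f = (R_o * k_soil * Area) / (z_tank - z_surface).
            area = math.pi * (tank_diameter / 2.0) ** 2   # cross-sectional area of the tank
            c_f = r_o * k_soil * area / soil_depth        # conversion factor
            return c_f * (t_vapor_space - t_air)

        # Hypothetical example values (for illustration only):
        q_total = tank_heat_load(t_vapor_space=35.0, t_air=15.0, r_o=2.0,
                                 k_soil=1.0, tank_diameter=23.0, soil_depth=2.5)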

  14. Estimation of the diffuse radiation fraction for hourly, daily and monthly-average global radiation

    NASA Astrophysics Data System (ADS)

    Erbs, D. G.; Klein, S. A.; Duffie, J. A.

    1982-01-01

    Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index. This relationship is compared to the relationship established by Orgill and Hollands (1977) and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.

  15. Generalized propensity score for estimating the average treatment effect of multiple treatments.

    PubMed

    Feng, Ping; Zhou, Xiao-Hua; Zou, Qing-Ming; Fan, Ming-Yu; Li, Xiao-Song

    2012-03-30

    The propensity score method is widely used in clinical studies to estimate the effect of a treatment with two levels on patient's outcomes. However, due to the complexity of many diseases, an effective treatment often involves multiple components. For example, in the practice of Traditional Chinese Medicine (TCM), an effective treatment may include multiple components, e.g. Chinese herbs, acupuncture, and massage therapy. In clinical trials involving TCM, patients could be randomly assigned to either the treatment or control group, but they or their doctors may make different choices about which treatment component to use. As a result, treatment components are not randomly assigned. Rosenbaum and Rubin proposed the propensity score method for binary treatments, and Imbens extended their work to multiple treatments. These authors defined the generalized propensity score as the conditional probability of receiving a particular level of the treatment given the pre-treatment variables. In the present work, we adopted this approach and developed a statistical methodology based on the generalized propensity score in order to estimate treatment effects in the case of multiple treatments. Two methods were discussed and compared: propensity score regression adjustment and propensity score weighting. We used these methods to assess the relative effectiveness of individual treatments in the multiple-treatment IMPACT clinical trial. The results reveal that both methods perform well when the sample size is moderate or large. PMID:21351291
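
    As a rough illustration of the weighting variant, the sketch below fits a multinomial model for the generalized propensity score and forms inverse-probability-weighted means of the outcome for each treatment level. It is a minimal sketch under standard assumptions (numpy-array inputs, no trimming or weight stabilization), not the estimator as implemented by the authors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_potential_outcome_means(X, treatment, outcome):
            # Generalized propensity score: P(T = t | X) from a multinomial logistic model.
            model = LogisticRegression(max_iter=1000).fit(X, treatment)
            gps = model.predict_proba(X)
            means = {}
            for j, level in enumerate(model.classes_):
                received = (treatment == level)
                weights = 1.0 / gps[received, j]          # inverse propensity weights
                means[level] = np.sum(weights * outcome[received]) / np.sum(weights)
            return means                                  # contrasts of these means estimate treatment effects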

  16. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
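
    The bootstrap logic can be reproduced in a few lines: repeatedly resample the reference distribution at a candidate sample size and keep the smallest size at which the statistic of interest lands within the tolerance often enough. The sketch below does this for mean length; the candidate sizes, replicate count, and acceptance rule are illustrative assumptions, not the exact procedure of the paper.

        import numpy as np

        def required_sample_size(reference_lengths, rel_error=0.10, confidence=0.80,
                                 n_boot=1000, candidates=range(25, 1001, 25), seed=1):
            # Smallest n such that at least `confidence` of bootstrap resamples of size n
            # have a mean length within `rel_error` of the reference mean.
            rng = np.random.default_rng(seed)
            target = np.mean(reference_lengths)
            for n in candidates:
                means = np.array([rng.choice(reference_lengths, size=n, replace=True).mean()
                                  for _ in range(n_boot)])
                within = np.mean(np.abs(means - target) <= rel_error * target)
                if within >= confidence:
                    return n
            return None   # none of the candidate sizes met the criterion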

  17. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  18. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155

  19. Estimation of Average Shear Strength Parameters along the Slip Surface Based on the Shear Strength Diagram of Landslide Soils

    NASA Astrophysics Data System (ADS)

    Kimura, Sho; Gibo, Seiichi; Nakamura, Shinya

    The average shear strength parameters along the slip surface (c´, φ´) of the four Shimajiri-mudstone landslides having different slide patterns have been obtained by two methods: an estimation method using the shear strength diagram of landslide soils, and an ordinary method using the results of laboratory shear tests of soil samples. The difference between the two average shear strengths was small in the case of the landslides where the residual and fractured-mudstone peak strengths had been mobilized, while the two methods produced close agreement in the case of the landslides where the residual and fully softened strengths had been mobilized. Although the determination of appropriate c´, φ´ should, as a rule, be based on the measured shear strength of slip-surface soil, c´, φ´ can be effectively estimated using the shear strength diagram when direct measurement is difficult due to certain restrictions.

  20. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  2. Application of Network-averaged Teleseismic P-wave Spectra to Seismic Yield Estimation of Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Murphy, J. R.; Barker, B. W.

    - A set of procedures is described for estimating network-averaged teleseismic P-wave spectra for underground nuclear explosions and for analytically inverting these spectra to obtain estimates of mb/yield relations and individual yields for explosions at previously uncalibrated test sites. These procedures are then applied to the analyses of explosions at the former Soviet test sites at Shagan River, Degelen Mountain, Novaya Zemlya and Azgir, as well as at the French Sahara, U.S. Amchitka and Chinese Lop Nor test sites. It is demonstrated that the resulting seismic estimates of explosion yield and mb/yield relations are remarkably consistent with a variety of other available information for a number of these test sites. These results lead us to conclude that the network-averaged teleseismic P-wave spectra provide considerably more diagnostic information regarding the explosion seismic source than do the corresponding narrowband magnitude measures such as mb, Ms and mb(Lg), and, therefore, that they are to be preferred for applications to seismic yield estimation for explosions at previously uncalibrated test sites.

  3. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    PubMed

    Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest

    2009-12-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions

  4. Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching.

    PubMed

    Balzer, Laura B; Petersen, Maya L; van der Laan, Mark J

    2016-09-20

    In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27087478

  5. Estimating average dissolved-solids yield from basins drained by ephemeral and intermittent streams, Green River basin, Wyoming

    USGS Publications Warehouse

    DeLong, L.L.; Wells, D.K.

    1988-01-01

    A method was developed to determine the average dissolved-solids yield contributed by small basins characterized by ephemeral and intermittent streams in the Green River basin in Wyoming. The method is different from that commonly used for perennial streams. Estimates of dissolved-solids discharge at eight water quality sampling stations operated by the U.S. Geological Survey in cooperation with the U.S. Bureau of Land Management range from less than 2 to 95 tons/day. The dissolved-solids yield upstream from the sampling stations ranges from 0.023 to 0.107 tons/day/sq mi. However, estimates of dissolved solids yield contributed by drainage areas between paired stations on Bitter, Salt Wells, Little Muddy, and Muddy creeks, based on dissolved-solids discharge versus drainage area, range only from 0.081 to 0.092 tons/day/sq mi. (USGS)

  6. Estimated average annual ground-water pumpage in the Portland Basin, Oregon and Washington 1987-88

    USGS Publications Warehouse

    Collins, C.A.; Broad, T.M.

    1993-01-01

    Data for ground-water pumpage were collected during an inventory of wells in 1987-88 in the Portland Basin located in northwestern Oregon and southwestern Washington. Estimates of annual ground-water pumpage were made for the three major categories of use: public supply, industry, and irrigation. A large rapidly expanding metropolitan area is situated within the Portland Basin, along with several large industries that use significant quantities of ground water. The estimated total average annual ground-water pumpage for 1987 was about 127,800 acre-feet. Of this quantity, about 50 percent was pumped for industrial use, about 40 percent for public supply and about 10 percent for irrigation. Domestic use from individual wells is a small part of the total and is not included.

  7. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    PubMed

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-01

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). The MADSR distribution in South Korea shows very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, which is a hybrid methodology combining CBR with artificial neural network, multiregression analysis, and genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of determining whether or not to introduce the PV system and where to install it. It would also benefit contractors in a competitive bidding process by allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life cycle perspective. PMID:23548030

  8. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. This method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validate this technique as being successful in humid and semi-arid catchments as the method has already been verified as effective in the former setting.

  9. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. PMID:27494960

  10. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  11. Is the Whole Really More than the Sum of Its Parts? Estimates of Average Size and Orientation Are Susceptible to Object Substitution Masking

    ERIC Educational Resources Information Center

    Jacoby, Oscar; Kamke, Marc R.; Mattingley, Jason B.

    2013-01-01

    We have a remarkable ability to accurately estimate average featural information across groups of objects, such as their average size or orientation. It has been suggested that, unlike individual object processing, this process of "feature averaging" occurs automatically and relatively early in the course of perceptual processing, without the need…

  12. Estimation of the average exchanges in momentum and latent heat between the atmosphere and the oceans with Seasat observations

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1983-01-01

    Ocean-surface momentum flux and latent heat flux are determined from Seasat-A data from 1978 and compared with ship observations. Momentum flux was measured using the Seasat-A scatterometer system (SASS), and latent heat flux with the scanning multichannel MW radiometer (SMMR). Ship measurements were quality selected and averaged to increase their reliability. The fluxes were computed using a bulk parameterization technique. It is found that although SASS effectively measures momentum flux, variations in atmospheric stability and sea-surface temperature cause deviations which are not accounted for by the present data-processing algorithm. The SMMR latent-heat-flux algorithm, while needing refinement, is shown to give estimates to within 35 W/sq m in its present form, which removes systematic error and uses an empirically determined transfer coefficient.

  13. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and its harmonics in the spectrum. We use experimental data
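
    A compact sketch of the four-stage idea (angle-domain resampling with synchronous averaging, wavelet de-noising, envelope extraction, order spectrum) is given below. It is a minimal illustration under simplifying assumptions (uniform angle grid, soft universal-threshold de-noising, FFT-based order spectrum), not the authors' processing chain; the block sizes and wavelet settings are hypothetical.

        import numpy as np
        import pywt
        from scipy.signal import hilbert

        def envelope_order_spectrum(vibration, shaft_angle, samples_per_rev=256,
                                    n_revs_avg=8, wavelet="db4", level=4):
            # 1) Resample the vibration signal onto a uniform shaft-angle grid.
            total_revs = int(shaft_angle[-1] // (2 * np.pi))
            grid = np.linspace(0, total_revs * 2 * np.pi,
                               total_revs * samples_per_rev, endpoint=False)
            angle_signal = np.interp(grid, shaft_angle, vibration)

            # 2) Short-duration angle synchronous averaging over blocks of n_revs_avg revolutions.
            usable = (total_revs // n_revs_avg) * n_revs_avg * samples_per_rev
            blocks = angle_signal[:usable].reshape(-1, n_revs_avg * samples_per_rev)
            averaged = blocks.mean(axis=0)

            # 3) Wavelet de-noising with soft thresholding of the detail coefficients.
            coeffs = pywt.wavedec(averaged, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(averaged)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            denoised = pywt.waverec(coeffs, wavelet)[:len(averaged)]

            # 4) Envelope via the Hilbert transform, then its order spectrum (cycles per revolution).
            envelope = np.abs(hilbert(denoised))
            spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
            orders = np.fft.rfftfreq(len(envelope), d=1.0 / samples_per_rev)
            return orders, spectrum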

  14. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonant frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide the conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by the body shape. PMID:23202273

  15. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  16. Estimated water requirements for gold heap-leach operations

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (requires water in potable form and for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as for agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.

  17. Calcium requirement: new estimations for men and women by cross-sectional statistical analyses of metabolic calcium balance data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To provide new estimates of the average Ca requirement for men and women, we determined the dietary Ca intake required to maintain neutral Ca balance. Ca balance data (Ca intake - [fecal Ca + urinary Ca]) were collected from 154 subjects (females: n=73, weight=77.1±18.5 kg, age=47.0±18.5 y [range: 2...

  18. Estimates of galactic cosmic ray shielding requirements during solar minimum

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.

    1990-01-01

    Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.

  19. Estimation of Rate of Strain Magnitude and Average Viscosity in Turbulent Flow of Shear Thinning and Yield Stress Fluids

    NASA Astrophysics Data System (ADS)

    Sawko, Robert; Thompson, Chris P.

    2010-09-01

    This paper presents a series of numerical simulations of non-Newtonian fluids in high Reynolds number flows in circular pipes. The fluids studied in the computations have shear-thinning and yield stress properties. Turbulence is described using the Reynolds-Averaged Navier-Stokes (RANS) equations with the Boussinesq eddy viscosity hypothesis. The evaluation of standard, two-equation models led to some observations regarding the order of magnitude as well as probabilistic information about the rate of strain. We argue that an accurate estimate of the rate of strain tensor is essential in capturing important flow features. It is first recognised that an apparent viscosity comprises two flow-dependent components: one originating from rheology and the other from the turbulence model. To establish the relative significance of the terms involved, an order of magnitude analysis has been performed. The main observation supporting further discussion is that in high Reynolds number regimes the magnitudes of fluctuating rates of strain and fluctuating vorticity dominate the magnitudes of their respective averages. Since these quantities are included in the rheological law, the values of viscosity obtained from the fluctuating and mean velocity fields are different. Validation against Direct Numerical Simulation data shows at least an order of magnitude discrepancy in some regions of the flow. Moreover, the predictions of the probabilistic analysis show a favourable agreement with statistics computed from DNS data. A variety of experimental, as well as computational data has been collected. Data come from the latest experiments by Escudier et al. [1], DNS from Rudman et al. [2] and zeroth-order turbulence models of Pinho [3]. The fluid rheologies are described by standard power-law and Herschel-Bulkley models which make them suitable for steady state calculations of shear flows. Suitable regularisations are utilised to secure numerical stability. Two new models have been

  20. Estimating resource costs of compliance with EU WFD ecological status requirements at the river basin scale

    NASA Astrophysics Data System (ADS)

    Riegels, Niels; Jensen, Roar; Bensasson, Lisa; Banou, Stella; Møller, Flemming; Bauer-Gottwein, Peter

    2011-01-01

    Resource costs of meeting EU WFD ecological status requirements at the river basin scale are estimated by comparing net benefits of water use given ecological status constraints to baseline water use values. Resource costs are interpreted as opportunity costs of water use arising from water scarcity. An optimization approach is used to identify economically efficient ways to meet WFD requirements. The approach is implemented using a river basin simulation model coupled to an economic post-processor; the simulation model and post-processor are run from a central controller that iterates until an allocation is found that maximizes net benefits given WFD requirements. Water use values are estimated for urban/domestic, agricultural, industrial, livestock, and tourism water users. Ecological status is estimated using metrics that relate average monthly river flow volumes to the natural hydrologic regime. Ecological status is only estimated with respect to hydrologic regime; other indicators are ignored in this analysis. The decision variable in the optimization is the price of water, which is used to vary demands using consumer and producer water demand functions. The price-based optimization approach minimizes the number of decision variables in the optimization problem and provides guidance for pricing policies that meet WFD objectives. Results from a real-world application in northern Greece show the suitability of the approach for use in complex, water-stressed basins. The impact of uncertain input values on model outcomes is estimated using the Info-Gap decision analysis framework.

  1. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is by the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates Remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation - RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha-1 y-1. Based on the result, soil erosion was classified into a soil erosion severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors have been broken into two categories, and soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP) have been computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
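
    The RUSLE computation itself is a cell-by-cell product of the factor layers, typically held as co-registered rasters in the GIS. The sketch below shows that product and an illustrative reclassification into five severity classes; the class break values are hypothetical, not the thresholds used in the study.

        import numpy as np

        def rusle_soil_loss(R, K, LS, C, P):
            # A = R * K * LS * C * P, evaluated element-wise on gridded factor layers
            # (numpy arrays exported from co-registered GIS rasters).
            return R * K * LS * C * P

        def severity_map(A, breaks=(5.0, 10.0, 20.0, 40.0)):
            # Reclassify annual soil loss (t ha-1 y-1) into five classes:
            # 0 = very low, 1 = low, 2 = moderate, 3 = high, 4 = critical.
            # The break values here are hypothetical placeholders.
            return np.digitize(A, bins=np.asarray(breaks))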

  2. Estimates of the best approximations of periodic functions by trigonometric polynomials in terms of averaged differences and the multidimensional Jackson's theorem

    NASA Astrophysics Data System (ADS)

    Pustovoitov, N. N.

    1997-10-01

    In the first section the best approximations of periodic functions of one real variable by trigonometric polynomials are studied. Estimates of these approximations in terms of averaged differences are obtained. A multidimensional generalization of these estimates is presented in the second section. As a consequence, the multidimensional Jackson's theorem is proved.

  3. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm is developed based on history-based prediction. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
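
    A minimal form of such history-based prediction is to group completed jobs by the features that drive resource usage and to take a robust summary of past runtime and memory as the requirement for new jobs. The sketch below assumes hypothetical column names and grouping keys; it stands in for, rather than reproduces, the supervised learning approach described in the paper.

        import pandas as pd

        def history_based_requirements(history: pd.DataFrame,
                                       keys=("application_version", "production_type"),
                                       quantile=0.95):
            # For each job configuration seen in the past, use an upper quantile of the
            # observed runtime and peak memory as the requested requirement for new jobs.
            grouped = history.groupby(list(keys))
            return grouped[["runtime_s", "max_memory_mb"]].quantile(quantile)

        # Usage: look up a new job's configuration in the returned table; fall back to a
        # global default (e.g. the overall quantile) when no history exists for it.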

  4. A Combined Approach for Estimating Health Staff Requirements

    PubMed Central

    FAKHRI, Ali; SEYEDIN, Hesam; DAVIAUD, Emmanuelle

    2014-01-01

    Background: Many studies have been carried out and many methods have been used for estimating health staff requirements in health facilities or systems, each with different advantages and disadvantages. Differences in the extent to which utilization matches needs in different conditions intensify the limitations of each approach when used in isolation. Is the utilization-based approach efficient in a situation of over servicing? Is it sufficient in a situation of under-utilization? These questions can be similarly asked about the needs-based approach. This study is looking for a flexible approach to estimate the health staff requirements efficiently in these different conditions. Method: This study was carried out in 2011 in several stages. It was conducted in order to identify the formulas used in the different approaches. The basic formulas used in the utilization-based approach and the needs-based approach were identified and then combined using simple mathematical principles to develop a new formula. Finally, the new formula was piloted by assessing family health staff requirements in the health posts in Kashan City, Iran. Results: Comparison of the two formulas showed that the basic formulas used in the two approaches can be combined by including the variable 'Coverage'. The pilot study confirmed the role of coverage in the suggested combined approach. Conclusions: The variables in the developed formula allow combining needs-based, target-based and utilization-based approaches. A limitation of this approach is applicability to a given service package. PMID:26060687

  5. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    SciTech Connect

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average doses obtained using the red and green pixel color calibration curves were each within 10% of the planar average dose estimated using the Dw method applied to film dose values at the bore locations. Additionally, the average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
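
    The Dw estimate referred to above is just the CTDIw-style weighting of the bore measurements. A minimal sketch follows; the example readings are hypothetical, not values from the study.

        def planar_average_dw(peripheral_doses, central_dose):
            # Dw = (2/3) * mean(peripheral readings) + (1/3) * central reading.
            mean_peripheral = sum(peripheral_doses) / len(peripheral_doses)
            return (2.0 / 3.0) * mean_peripheral + (1.0 / 3.0) * central_dose

        # Hypothetical ionization-chamber readings (mGy) at the four peripheral bores
        # and the central bore of a 16 cm CTDI phantom:
        dw = planar_average_dw([4.1, 3.8, 4.4, 3.9], central_dose=3.2)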

  6. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5 billion year old rocks and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the time intervals between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the time that occurred between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10^34 and 2 x 10^35 were able to vaporize the oceans, using the most probable impact flux, it is found that the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life may have originated more than once.

  7. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy. PMID:24420554
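
    As a rough illustration of the filtering step described above, the sketch below applies an N-point moving average with the reported N/6 window to a synthetic brachial waveform; the sampling rate, waveform, and variable names are hypothetical, and real use would require cuff-calibrated pressures.

```python
import numpy as np

def npma_filter(waveform, sampling_freq, denominator=6):
    """Smooth a pressure waveform with an N-point moving average, where the
    window length is N/denominator samples (N = acquisition sampling frequency)."""
    window = max(1, int(round(sampling_freq / denominator)))
    kernel = np.ones(window) / window
    # mode="same" keeps the filtered trace aligned with the original samples
    return np.convolve(waveform, kernel, mode="same")

# Hypothetical example: a 1-second brachial pulse sampled at 500 Hz
fs = 500
t = np.linspace(0.0, 1.0, fs, endpoint=False)
brachial = 80 + 40 * np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 2  # synthetic pulse, mmHg
smoothed = npma_filter(brachial, fs)   # window of fs/6, roughly 83 samples
central_sbp_estimate = smoothed.max()  # peak of the smoothed wave approximates central SBP
```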

  8. Maps to estimate average streamflow and headwater limits for streams in U.S. Army Corps of Engineers, Mobile District, Alabama and adjacent states

    USGS Publications Warehouse

    Nelson, George H., Jr.

    1984-01-01

    U.S. Army Corps of Engineers permits are required for discharges of dredged or fill material downstream from the 'headwaters' of specified streams. The term 'headwaters' is defined as the point of a freshwater (non-tidal) stream above which the average flow is less than 5 cu ft/s. Maps of the Mobile District area showing (1) lines of equal average streamflow, and (2) lines of equal drainage areas required to produce an average flow of 5 cu ft/s are contained in this report. These maps are for use by the Corps of Engineers in their permitting program. (USGS)

  9. Estimation of the path-averaged wind velocity by cross-correlation of the received power and the shift of laser beam centroid

    NASA Astrophysics Data System (ADS)

    Marakasov, Dmitri A.; Tsvyk, Ruvim S.

    2015-11-01

    We consider the problem of estimating the average wind speed on an atmospheric path from measurements of time series of the average power of the laser radiation detected through the receiving aperture and the position of the centroid of the image of the laser beam. It is shown that the cross-correlation function of these series has a maximum whose position characterizes the average speed of the cross wind on the path. The dependence of the coordinates and magnitude of the maximum of the correlation function on the size of the receiving aperture and the distribution of turbulence along the atmospheric path is also analyzed.

  10. Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA.

    PubMed

    Wagner, Todd H; Chen, Shuo; Barnett, Paul G

    2003-09-01

    The U.S. Department of Veterans Affairs (VA) maintains discharge abstracts, but these do not include cost information. This article describes the methods the authors used to estimate the costs of VA medical-surgical hospitalizations in fiscal years 1998 to 2000. They estimated a cost regression with 1996 Medicare data restricted to veterans receiving VA care in an earlier year. The regression accounted for approximately 74 percent of the variance in cost-adjusted charges, and it proved to be robust to outliers and the year of input data. The beta coefficients from the cost regression were used to impute costs of VA medical-surgical hospital discharges. The estimated aggregate costs were reconciled with VA budget allocations. In addition to the direct medical costs, their cost estimates include indirect costs and physician services; both of these were allocated in proportion to direct costs. They discuss the method's limitations and application in other health care systems. PMID:15095543

  11. Technical Methods Report: Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs. NCEE 2009-4040

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley

    2009-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…

  12. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Contractor's policies, procedures, and practices for budgeting and planning controls, and generating...) Flow of work, coordination, and communication; and (5) Budgeting, planning, estimating methods... personnel have sufficient training, experience, and guidance to perform estimating and budgeting tasks...

  13. Estimating reach-averaged discharge for the River Severn from measurements of river water surface elevation and slope

    NASA Astrophysics Data System (ADS)

    Durand, Michael; Neal, Jeffrey; Rodríguez, Ernesto; Andreadis, Konstantinos M.; Smith, Laurence C.; Yoon, Yeosang

    2014-04-01

    An algorithm is presented that calculates a best estimate of river bathymetry, roughness coefficient, and discharge based on input measurements of river water surface elevation (h) and slope (S) using the Metropolis algorithm in a Bayesian Markov Chain Monte Carlo scheme, providing an inverse solution to the diffusive approximation to the shallow water equations. This algorithm has potential application to river h and S measurements from the forthcoming Surface Water and Ocean Topography (SWOT) satellite mission. The algorithm was tested using in situ data as a proxy for satellite measurements along a 22.4 km reach of the River Severn, UK. First, the algorithm was run with gage measurements of h and S during a small, in-bank event in June 2007. Second, the algorithm was run with measurements of h and S estimated from four remote sensing images during a major out-of-bank flood event in July 2007. River width was assumed to be known for both events. Algorithm-derived estimates of river bathymetry were validated using in situ measurements, and estimates of roughness coefficient were compared to those used in an operational hydraulic model. Algorithm-derived estimates of river discharge were evaluated using gaged discharge. For the in-bank event, when lateral inflows from smaller tributaries were assumed to be known, the method provided an accurate discharge estimate (10% RMSE). When lateral inflows were assumed unknown, discharge RMSE increased to 36%. Finally, if just one of the three river reaches was assumed to have known bathymetry, solutions for bathymetry, roughness and discharge for all three reaches were accurately retrieved, with a corresponding discharge RMSE of 15.6%. For the out-of-bank flood event, the lateral inflows were unknown, and the final discharge RMSE was 19%. These results suggest that it should be possible to estimate river discharge via SWOT observations of river water surface elevation, slope and width.
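
    The inversion described above uses the Metropolis algorithm; the sketch below shows a generic random-walk Metropolis sampler applied to a toy three-parameter Gaussian posterior. It is only a schematic of that sampler class: the parameters (depth, roughness, discharge), their scales, and the posterior are placeholders, not the authors' diffusive-wave formulation.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=1.0, rng=None):
    """Generic random-walk Metropolis sampler.
    log_post: callable returning the log posterior density of a parameter vector.
    theta0:   starting parameter vector."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    step = np.asarray(step, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with probability min(1, ratio)
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Toy posterior: a Gaussian centred on hypothetical (depth [m], Manning n, discharge [m3/s])
target = np.array([1.0, 0.03, 100.0])
scales = np.array([0.5, 0.01, 30.0])
def log_post(theta):
    return -0.5 * np.sum(((theta - target) / scales) ** 2)

chain = metropolis(log_post, theta0=[2.0, 0.05, 50.0], step=[0.2, 0.005, 10.0])
posterior_mean = chain[1000:].mean(axis=0)  # discard burn-in before averaging
```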

  14. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    NASA Astrophysics Data System (ADS)

    Ishida, Hideshi

    2014-06-01

    In this study, a family of local quantities defined on each partition and its averaging on a macroscopic small region, site, are defined on a multibaker chain system. On its averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained, and the form enables us to have the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independency. These deliverables fully explain the numerical results obtained by Ishida, consistent with the irreversible thermodynamics.

  15. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    SciTech Connect

    Ishida, Hideshi

    2014-06-15

    In this study, a family of local quantities defined on each partition and its averaging on a macroscopic small region, site, are defined on a multibaker chain system. On its averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained, and the form enables us to have the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independency. These deliverables fully explain the numerical results obtained by Ishida, consistent with the irreversible thermodynamics.

  16. Towards the estimation of reach-averaged discharge from SWOT data using a Manning's equation derived algorithm. Application to the Garonne River between Tonneins-La Reole

    NASA Astrophysics Data System (ADS)

    Berthon, Lucie; Biancamaria, Sylvain; Goutal, Nicole; Ricci, Sophie; Durand, Michael

    2014-05-01

    The future NASA-CNES-CSA Surface Water and Ocean Topography (SWOT) satellite mission will be launched in 2020 and will deliver maps of water surface elevation, slope and extent with an unprecedented resolution of 100 m. A river discharge algorithm was proposed by Durand et al. (2013), based on Manning's equation, to estimate reach-averaged discharge from SWOT data. In the present study, this algorithm was applied to a 50 km reach of the Garonne River between Tonneins and La Reole, with an average slope of 2.8 m per 10,000 m and an average width of 180 m. The dynamics of this reach is satisfactorily represented by the 1D model MASCARET and validated against in situ water level observations at Marmande. Major assumptions of steady and uniform flow underlie the choice of Manning's equation. Here, we aim at highlighting the limits of validity of these assumptions for the Garonne River during a typical flood event in order to assess the applicability of the discharge algorithm over averaged reaches. Manning-estimated and MASCARET discharges are compared for non-permanent and permanent flow and for different reach-averaging lengths (100 m to 10 km). The Manning equation increasingly overestimates the MASCARET discharge as the reach-averaging length increases, and this overestimate is shown to be due to the effect of the sub-reach parameter covariances. In order to further explain these results, the comparison was also carried out for a simplified case study with a parametric bathymetry described either by a flat bottom, a constant slope, or local slope variations.

  17. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  18. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-01-01

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York and covered the period of 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries where previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.

  19. Estimating average base flow at low-flow partial-record stations on the south shore of Long Island, New York

    USGS Publications Warehouse

    Buxton, H.T.

    1985-01-01

    Base flows of the 29 major streams in southeast Nassau and southwest Suffolk Counties, New York, were statistically analyzed to discern the correlation among flows of adjacent streams. Concurrent base-flow data from a partial-record and a nearby continuous-record station were related; the data were from 1968-75, a period near hydrologic equilibrium on Long Island. The average base flow at each partial-record station was estimated from a regression equation and average measured base flow for the period at the continuous-record stations. Regression analyses are presented for the 20 streams with partial-record stations. Average base flow of the nine streams with a continuous record totaled 90 cu ft/sec; the predicted average base flow for the 20 streams with a partial record was 73 cu ft/sec (with a 95% confidence interval of 63 to 84 cu ft/sec.) Results indicate that this method provides reliable estimates of average low flow for streams such as those on Long Island, which consist mostly of base flow and are geomorphically similar. (USGS)

  20. How well can we estimate areal-averaged spectral surface albedo from ground-based transmission in the Atlantic coastal area?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina

    2015-10-01

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  1. How Well Can We Estimate Areal-Averaged Spectral Surface Albedo from Ground-Based Transmission in an Atlantic Coastal Area?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.

    2015-10-15

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  2. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation

    PubMed Central

    Lee, Eugenia E.; Stewart, Barclay; Zha, Yuanting A.; Groen, Thomas A.; Burkle, Frederick M.; Kushner, Adam L.

    2016-01-01

    Background: Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. Methods: The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Results: Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People's Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. Conclusion: As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical care needs are particularly important for countries least equipped to meet surgical care demands, given critical human and physical resource deficiencies. PMID:27617165

  3. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps. PMID:26243476

  4. FIRST ORDER ESTIMATES OF ENERGY REQUIREMENTS FOR POLLUTION CONTROL

    EPA Science Inventory

    This report presents estimates of the energy demand attributable to environmental control of pollution from 'stationary point sources.' This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes 'mobile s...

  5. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis

    PubMed Central

    Kamal, Izdihar; Chelliah, Kanaga K.; Mustafa, Nawal

    2015-01-01

    Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milliGray (mGy) for two dimension (2D) and 2.48 mGy for three dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower against a high kilovolt peak and the use of auto-filter mode was more practical for quick acquisition while limiting the probability of operator error. PMID:26052465

  6. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.

  7. Estimating the Average Diameter of a Population of Spheres from Observed Diameters of Random Two-Dimensional Sections

    NASA Technical Reports Server (NTRS)

    Kong, Maiying; Bhattacharya, Rabi N.; James, Christina; Basu, Abhijit

    2003-01-01

    Size distributions of chondrules, volcanic fire-fountain or impact glass spherules, or of immiscible globules in silicate melts (e.g., in basaltic mesostasis, agglutinitic glass, impact melt sheets) are imperfectly known because the spherical objects are usually so strongly embedded in the bulk samples that they are nearly impossible to separate. Hence, measurements are confined to two-dimensional sections, e.g. polished thin sections that are commonly examined under reflected light optical or backscattered electron microscopy. Three kinds of approaches exist in the geologic literature for estimating the mean real diameter of a population of 3D spheres from 2D observations: (1) a stereological approach with complicated calculations; (2) an empirical approach in which independent 3D size measurements of a population of spheres separated from their parent sample and their 2D cross sectional diameters in thin sections have produced an array of somewhat contested conversion equations; and (3) measuring pairs of 2D diameters of the upper and lower surfaces of cross sections of each sphere in thin sections using transmitted light microscopy. We describe an entirely probabilistic approach and propose a simple factor of 4/π (approximately equal to 1.27) to convert the 2D mean size to the 3D mean size.
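
    As a plausibility check on the quoted factor, for a single sphere of diameter D (or a monodisperse population) cut by a plane whose distance z from the centre is uniformly distributed, the expected section diameter is πD/4, which inverts to the 4/π ≈ 1.27 correction. This is a standard stereological identity restated here for convenience, not a derivation taken from the abstract:

```latex
\mathbb{E}[d_{2\mathrm{D}}]
  = \frac{1}{R}\int_{0}^{R} 2\sqrt{R^{2}-z^{2}}\;\mathrm{d}z
  = \frac{2}{R}\cdot\frac{\pi R^{2}}{4}
  = \frac{\pi R}{2}
  = \frac{\pi}{4}\,D
\qquad\Longrightarrow\qquad
D = \frac{4}{\pi}\,\mathbb{E}[d_{2\mathrm{D}}] \approx 1.27\,\mathbb{E}[d_{2\mathrm{D}}]
```

    Here R = D/2 and z is the distance of the cutting plane from the sphere centre.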

  8. Using cone-beam CT projection images to estimate the average and complete trajectory of a fiducial marker moving with respiration

    NASA Astrophysics Data System (ADS)

    Becker, N.; Smith, W. L.; Quirk, S.; Kay, I.

    2010-12-01

    Stereotactic body radiotherapy of lung cancer often makes use of a static cone-beam CT (CBCT) image to localize a tumor that moves during the respiratory cycle. In this work, we developed an algorithm to estimate the average and complete trajectory of an implanted fiducial marker from the raw CBCT projection data. After labeling the CBCT projection images based on the breathing phase of the fiducial marker, the average trajectory was determined by backprojecting the fiducial position from images of similar phase. To approximate the complete trajectory, a 3D fiducial position is estimated from its position in each CBCT projection image as the point on the source-image ray closest to the average position at the same phase. The algorithm was tested with computer simulations as well as phantom experiments using a gold seed implanted in a programmable phantom capable of variable motion. Simulation testing was done on 120 realistic breathing patterns, half of which contained hysteresis. The average trajectory was reconstructed with an average root mean square (rms) error of less than 0.1 mm in all three directions, and a maximum error of 0.5 mm. The complete trajectory reconstruction had a mean rms error of less than 0.2 mm, with a maximum error of 4.07 mm. The phantom study was conducted using five different respiratory patterns with amplitudes of 1.3 and 2.6 cm programmed into the motion phantom. These complete trajectories were reconstructed with an average rms error of 0.4 mm. There is motion information present in the raw CBCT dataset that can be exploited with the use of an implanted fiducial marker to sub-millimeter accuracy. This algorithm could ultimately supply the internal motion of a lung tumor at the treatment unit from the same dataset currently used for patient setup.
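
    The 'closest point on the source-image ray' step described above is a small geometric projection; one way to compute it is sketched below, with the source position, back-projected detector point, and average fiducial position all invented for illustration (the real geometry would come from the CBCT projection metadata).

```python
import numpy as np

def closest_point_on_ray(source, image_point, reference):
    """Return the point on the ray from `source` through `image_point` that is
    closest to `reference` (e.g. the phase-matched average fiducial position)."""
    source = np.asarray(source, dtype=float)
    direction = np.asarray(image_point, dtype=float) - source
    direction /= np.linalg.norm(direction)
    # Project the reference point onto the ray; clamp solutions behind the source
    t = max(0.0, float(np.dot(np.asarray(reference, dtype=float) - source, direction)))
    return source + t * direction

# Hypothetical geometry (mm, isocentre coordinates): x-ray source, back-projected
# fiducial location on the detector, and the average trajectory point at the same phase
source = [0.0, -1000.0, 0.0]
detector_point = [5.0, 500.0, 12.0]
average_position = [2.0, 0.0, 10.0]
estimate = closest_point_on_ray(source, detector_point, average_position)
```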

  9. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...

  10. FIELD INFORMATION-BASED SYSTEM FOR ESTIMATING FISH TEMPERATURE REQUIREMENTS

    EPA Science Inventory

    In 1979, Biesinger et al. described a technique for spatial and temporal matching of records of stream temperatures and fish sampling events to obtain estimates of yearly temperature regimes for freshwater fishes of the United States. This article describes the state of this Fish ...

  11. 48 CFR 2452.216-77 - Estimated quantities-requirements contract.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Estimated quantities... Provisions and Clauses 2452.216-77 Estimated quantities—requirements contract. As prescribed in 2416.506-70(c), insert the following provision: Estimated Quantities—Requirements Contract (FEB 2006) In accordance...

  12. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches are considered: a) rate based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) rate calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models have similar good performance; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.

  13. Comparison of pooled standard deviation and standardized-t bootstrap methods for estimating uncertainty about average methane emission from rice cultivation

    NASA Astrophysics Data System (ADS)

    Kang, Namgoo; Jung, Min-Ho; Jeong, Hyun-Cheol; Lee, Yung-Seop

    2015-06-01

    The general sample standard deviation and Monte-Carlo methods are frequently used to estimate confidence intervals for uncertainties in greenhouse gas emissions, based on the critical assumption that a given data set follows a normal (Gaussian) or otherwise statistically known probability distribution. However, uncertainties estimated using those methods are severely limited in practical applications where it is challenging to assume the probability distribution of a data set or where the real data distribution appears to deviate significantly from statistically known probability distribution models. In order to solve these issues, encountered especially in the reasonable estimation of uncertainty about the average of greenhouse gas emissions, we present two statistical methods, the pooled standard deviation method (PSDM) and the standardized-t bootstrap method (STBM), based upon statistical theory. We also report interesting results on the uncertainties about the average of a data set of methane (CH4) emissions from rice cultivation under four different irrigation conditions in Korea, measured by gas sampling and subsequent gas analysis. Results from the applications of the PSDM and the STBM to these rice cultivation methane emission data sets clearly demonstrate that the uncertainties estimated by the PSDM were significantly smaller than those by the STBM. We found that the PSDM should be adopted in the many cases where the data probability distribution appears to follow an assumed normal distribution once both spatial and temporal variations are taken into account. The STBM, however, is more appropriate and widely applicable in practical situations where it is realistically impossible to reasonably assume or determine a probability distribution model for the given data set, for example when the data show a markedly asymmetric distribution that deviates severely from known probability distribution models.
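
    To make the contrast between the two estimators concrete, a minimal sketch under invented data is given below: the pooled standard deviation combines within-group variances, while the standardized-t (studentized) bootstrap builds a confidence interval from resampled t-statistics. Neither block reproduces the authors' exact procedure; the group means, sample sizes, and resampling count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical CH4 emission measurements (arbitrary units) for four irrigation conditions
groups = [rng.normal(loc, 0.8, size=12) for loc in (3.1, 2.4, 4.0, 2.9)]

# Pooled standard deviation method (PSDM): within-group variances pooled across groups
dof = sum(len(g) - 1 for g in groups)
pooled_var = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / dof
pooled_sd = np.sqrt(pooled_var)

# Standardized-t (studentized) bootstrap method (STBM) for the overall mean
data = np.concatenate(groups)
mean = data.mean()
se = data.std(ddof=1) / np.sqrt(len(data))
t_stats = []
for _ in range(2000):
    resample = rng.choice(data, size=len(data), replace=True)
    se_star = resample.std(ddof=1) / np.sqrt(len(resample))
    t_stats.append((resample.mean() - mean) / se_star)
t_lo, t_hi = np.percentile(t_stats, [2.5, 97.5])
ci_95 = (mean - t_hi * se, mean - t_lo * se)  # studentized bootstrap 95% interval
```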

  14. Calcium requirement: new estimations for men and women by cross-sectional statistical analyses of calcium balance data from metabolic studies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: Low intakes of calcium (Ca) are associated with increased risk of both osteoporosis and cardiovascular disease. Objective: To provide new estimates of the average Ca requirement for men and women, we determined the dietary Ca intake required to maintain neutral Ca balance. Design: Ca bal...

  15. Quantitative Estimates of Temporal Mixing across a 4th-order Depositional Sequence: Variation in Time-averaging along the Holocene Marine Succession of the Po Plain, Italy

    NASA Astrophysics Data System (ADS)

    Scarponi, D.; Kaufman, D.; Bright, J.; Kowalewski, M.

    2009-04-01

    Single fossiliferous beds contain biotic remnants that commonly vary in age over a time span of hundreds to thousands of years. Multiple recent studies suggest that such temporal mixing is a widespread phenomenon in marine depositional systems. This research focuses on quantitative estimates of temporal mixing obtained by direct dating of individual corbulid bivalve shells (Lentidium mediterraneum and Corbula gibba) from Po plain marine units of the Holocene 4th-order depositional sequence, including the Transgressive Systems Tract [TST] and the Highstand Systems Tract [HST]. These units display a distinctive succession of facies consisting of brackish to marginal marine retrogradational deposits (early TST), overlain by fully marine fine to coarse gray sands (late TST), and capped with progradational deltaic clays and sands (HST). More than 300 corbulid specimens, representing 19 shell-rich horizons evenly distributed along the depositional sequence and sampled from 9 cores, have been dated by means of aspartic acid racemization calibrated using 23 AMS-radiocarbon dates (14 dates for Lentidium mediterraneum and 9 dates for Corbula gibba, respectively). The results indicate that the scale of time-averaging is comparable when similar depositional environments from the same systems tract are compared across cores. However, time-averaging is notably different when similar depositional environments from the TST and HST segments of the sequence are compared. Specifically, late HST horizons (n=8) display relatively low levels of time-averaging: the mean within-horizon range of shell ages is 537 years and the standard deviation averages 165 years. In contrast, late TST horizons (n=7) are dramatically more time-averaged, with a mean range of 5104 years and a mean standard deviation of 1420 years. Thus, late TST horizons experience an order of magnitude more time-averaging than environmentally comparable late HST horizons. In conclusion, the HST and TST systems tracts of the Po Plain display

  16. EURRECA-Estimating zinc requirements for deriving dietary reference values.

    PubMed

    Lowe, Nicola M; Dykes, Fiona C; Skinner, Anna-Louise; Patel, Sujata; Warthon-Medina, Marisol; Decsi, Tamás; Fekete, Katalin; Souverein, Olga W; Dullemeijer, Carla; Cavelaars, Adriënne E; Serra-Majem, Lluis; Nissensohn, Mariela; Bel, Silvia; Moreno, Luis A; Hermoso, Maria; Vollhardt, Christiane; Berti, Cristiana; Cetin, Irene; Gurinovic, Mirjana; Novakovic, Romana; Harvey, Linda J; Collings, Rachel; Hall-Moran, Victoria

    2013-01-01

    Zinc was selected as a priority micronutrient for EURRECA, because there is significant heterogeneity in the Dietary Reference Values (DRVs) across Europe. In addition, the prevalence of inadequate zinc intakes was thought to be high among all population groups worldwide, and the public health concern is considerable. In accordance with the EURRECA consortium principles and protocols, a series of literature reviews were undertaken in order to develop best practice guidelines for assessing dietary zinc intake and zinc status. These were incorporated into subsequent literature search strategies and protocols for studies investigating the relationships between zinc intake, status and health, as well as studies relating to the factorial approach (including bioavailability) for setting dietary recommendations. EMBASE (Ovid), Cochrane Library CENTRAL, and MEDLINE (Ovid) databases were searched for studies published up to February 2010 and collated into a series of Endnote databases that are available for the use of future DRV panels. Meta-analyses of data extracted from these publications were performed where possible in order to address specific questions relating to factors affecting dietary recommendations. This review has highlighted the need for more high quality studies to address gaps in current knowledge, in particular the continued search for a reliable biomarker of zinc status and the influence of genetic polymorphisms on individual dietary requirements. In addition, there is a need to further develop models of the effect of dietary inhibitors of zinc absorption and their impact on population dietary zinc requirements. PMID:23952091

  17. Estimated water requirements for the conventional flotation of copper ores

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water used by a conventional copper flotation plant. Water is required for many activities at a mine-mill site, including ore production and beneficiation, dust and fire suppression, drinking and sanitation, and minesite reclamation. The water required to operate a flotation plant may outweigh all of the other uses of water at a mine site, however, and the need to maintain a water balance is critical for the plant to operate efficiently. Process water may be irretrievably lost or not immediately available for reuse in the beneficiation plant because it has been used in the production of backfill slurry from tailings to provide underground mine support; because it has been entrapped in the tailings stored in the tailings storage facility (TSF), evaporated from the TSF, or leaked from pipes and (or) the TSF; and because it has been retained as moisture in the concentrate. Water retained in the interstices of the tailings and the evaporation of water from the surface of the TSF are the two most significant contributors to water loss at a conventional flotation circuit facility.

  18. Estimation of daily average net radiation from MODIS data and DEM over the Baiyangdian watershed in North China for clear sky days

    NASA Astrophysics Data System (ADS)

    Long, Di; Gao, Yanchun; Singh, Vijay P.

    2010-07-01

    Daily average net radiation (DANR) is a critical variable for estimation of daily evapotranspiration (ET) from remote sensing techniques at watershed or regional scales, and in turn for hydrological modeling and water resources management. This study attempts to comprehensively analyze physical mechanisms governing the variation of each component of DANR during a day, with the objective to improve parameterization schemes for daily average net shortwave radiation (DANSR) and daily average net longwave radiation (DANLR) using MODIS (MODerate Resolution Imaging Spectroradiometer) data products, DEM, and minimum meteorological data in order to map spatially consistent and reasonably distributed DANR at watershed scales for clear sky days. First, a geometric model for simulating daily average direct solar radiation by accounting for the effects of terrain factors (slope, azimuth and elevation) on the availability of direct solar radiation for sloping land surfaces is adopted. Specifically, the magnitudes of sunrise and sunset angles, the frequencies of a sloping surface being illuminated as well as the potential sunshine duration for a given sloping surface are computed on a daily basis. The geometric model is applied to the Baiyangdian watershed in North China and shows the capability to distinctly characterize the spatial pattern of daily average direct solar radiation for sloping land surfaces. DANSR can then be successfully derived from simulated daily average direct solar radiation by means of the geometric model and the characteristics of nearly invariant diffuse solar radiation during daytime in conjunction with MCD43A1 albedo products. Second, four observations of Terra-MODIS and Aqua-MODIS land surface temperature (LST) and surface emissivities in band 31 and band 32 from MOD11A1, MYD11A1 and MOD11_L2 data products for six clear sky days from April to September in the year 2007, are utilized to simulate daily average LST to improve the accuracy of

  19. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  20. Ground-water pumpage and artificial recharge estimates for calendar year 2000 and average annual natural recharge and interbasin flow by hydrographic area, Nevada

    USGS Publications Warehouse

    Lopes, Thomas J.; Evetts, David M.

    2004-01-01

    Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth

  1. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  2. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  3. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  4. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  5. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  6. Estimating pollutant removal requirements for landfills in the UK: II. Model development.

    PubMed

    Hall, D H; Drury, D; Gronow, J R; Rosevear, A; Pollard, S J T; Smith, R

    2006-12-01

    A modelling methodology using a leachate source term has been produced for estimating the timescales for achieving environmental equilibrium status for landfilled waste. Results are reported as the period of active management required for modelled scenarios of non-flushed and flushed sites for a range of pre-filling treatments. The base scenario against which results were evaluated was raw municipal solid waste (MSW) for which only cadmium failed to reach equilibrium. Flushed raw MSW met our criteria for stabilisation with active leachate management for 40 years, subject to each of the leachate species being present at or below their average UK concentrations. Stable non-reactive wastes, meeting EU waste acceptance criteria, fared badly in the non-flushed scenario, with only two species stabilising after a management period within 1000 years and the majority requiring > 2000 years of active leachate management. The flushing scenarios showed only a marginal improvement, with arsenic still persisting beyond 2000 years management even with an additional 500 mm y(-1) of infiltration. The stabilisation time for mechanically sorted organic residues (without flushing) was high, and even with flushing, arsenic and chromium appeared to remain a problem. Two mechanical biological treatment (MBT) scenarios were examined, with medium and high intensity composting. Both were subjected to the non-flushing and flushing scenarios. The non-flushing case of both options fell short of the basic requirements of achieving equilibrium within decades. The intense composting option with minimal flushing appeared to create a scenario where equilibrium could be achieved. For incinerator bottom ash (raw and subjected to various treatments), antimony, copper, chloride and sulphate were the main controls on achieving equilibrium, irrespective of treatment type. Flushing at higher flushing rates (500 mm y(-1)) failed to demonstrate a significant reduction in the management period required. PMID

  7. Feasibility of non-invasive temperature estimation by the assessment of the average gray-level content of B-mode images.

    PubMed

    Teixeira, C A; Alvarenga, A V; Cortela, G; von Krüger, M A; Pereira, W C A

    2014-08-01

    This paper assesses the potential of the average gray-level (AVGL) from ultrasonographic (B-mode) images to estimate temperature changes in time and space in a non-invasive way. Experiments were conducted involving a homogeneous bovine muscle sample, and temperature variations were induced by an automatic temperature regulated water bath, and by therapeutic ultrasound. B-mode images and temperatures were recorded simultaneously. After data collection, regions of interest (ROIs) were defined, and the average gray-level variation computed. For the selected ROIs, the AVGL-Temperature relation were determined and studied. Based on uniformly distributed image partitions, two-dimensional temperature maps were developed for homogeneous regions. The color-coded temperature estimates were first obtained from an AVGL-Temperature relation extracted from a specific partition (where temperature was independently measured by a thermocouple), and then extended to the other partitions. This procedure aimed to analyze the AVGL sensitivity to changes not only in time but also in space. Linear and quadratic relations were obtained depending on the heating modality. We found that the AVGL-Temperature relation is reproducible over successive heating and cooling cycles. One important result was that the AVGL-Temperature relations extracted from one region might be used to estimate temperature in other regions (errors inferior to 0.5 °C) when therapeutic ultrasound was applied as a heating source. Based on this result, two-dimensional temperature maps were developed when the samples were heated in the water bath and also by therapeutic ultrasound. The maps were obtained based on a linear relation for the water bath heating, and based on a quadratic model for the therapeutic ultrasound heating. The maps for the water bath experiment reproduce an acceptable heating/cooling pattern, and for the therapeutic ultrasound heating experiment, the maps seem to reproduce temperature profiles
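
    As a schematic of the AVGL computation described above, the snippet below averages gray levels over a rectangular ROI of each B-mode frame and fits a linear AVGL-temperature relation; the synthetic frames, ROI bounds, and temperature readings are invented for illustration and do not reproduce the authors' imaging setup.

```python
import numpy as np

def avgl(frame, roi):
    """Average gray level of a B-mode frame inside a rectangular ROI.
    roi = (row_start, row_stop, col_start, col_stop)."""
    r0, r1, c0, c1 = roi
    return float(frame[r0:r1, c0:c1].mean())

# Synthetic data: frames acquired while the sample warms from 37 to 45 degrees C,
# with an artificial gray-level drift of 3 levels per degree added for illustration
rng = np.random.default_rng(1)
temps = np.linspace(37.0, 45.0, 20)
frames = [rng.integers(0, 256, size=(128, 128)).astype(float) + 3.0 * (T - 37.0)
          for T in temps]

roi = (40, 80, 40, 80)
avgl_series = np.array([avgl(f, roi) for f in frames])

# Linear AVGL-temperature relation (the water-bath case in the abstract);
# the therapeutic-ultrasound case would use a quadratic fit (deg=2) instead
slope, intercept = np.polyfit(avgl_series, temps, deg=1)
estimated_temps = slope * avgl_series + intercept
```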

  8. A comparative study of two-dimensional multifractal detrended fluctuation analysis and two-dimensional multifractal detrended moving average algorithm to estimate the multifractal spectrum

    NASA Astrophysics Data System (ADS)

    Xi, Caiping; Zhang, Shunning; Xiong, Gang; Zhao, Huichang

    2016-07-01

    Multifractal detrended fluctuation analysis (MFDFA) and the multifractal detrended moving average (MFDMA) algorithm have been established as two important methods for estimating the multifractal spectrum of a one-dimensional random fractal signal, and both have been generalized to deal with two-dimensional and higher-dimensional fractal signals. This paper gives a brief introduction to the two-dimensional multifractal detrended fluctuation analysis (2D-MFDFA) and the two-dimensional multifractal detrended moving average (2D-MFDMA) algorithm, and a detailed description of how the two methods are applied to two-dimensional fractal signal processing. By applying 2D-MFDFA and 2D-MFDMA to series generated from a two-dimensional multiplicative cascading process, we systematically compare the two algorithms for the first time, assessing their advantages, disadvantages and applicability in six respects: the similarities and differences of the algorithm models, statistical accuracy, sensitivity to sample size, selection of the scaling range, choice of the q-orders, and computational cost. The results provide a valuable reference on how to choose between 2D-MFDFA and 2D-MFDMA, and how to set the parameters of the two algorithms when dealing with specific signals in practical applications.

  9. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the {sup 134}Cs/{sup 137}Cs ratio method

    SciTech Connect

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method for measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured ¹³⁴Cs/¹³⁷Cs ratio from the contaminated soil is 0.996±0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of damaged fuels is approximately 17.2±1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated values of the ¹³⁴Cs/¹³⁷Cs ratio when the same evaluated nuclear data library (ENDF/B-VII.0) is used. The void fraction effect in the depletion calculation has a major impact on the ¹³⁴Cs/¹³⁷Cs ratio compared with the differences between JENDL-4.0 and ENDF/B-VII.0. (authors)
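
    The ratio method relies on decay-correcting the measured activities back to a common reference date; a minimal sketch of that correction is shown below, using the commonly quoted half-lives of roughly 2.07 years for ¹³⁴Cs and 30.1 years for ¹³⁷Cs. The measured ratio and elapsed time are hypothetical, and the burnup calibration itself (ratio versus burnup from the depletion codes) is not reproduced here.

```python
import numpy as np

HALF_LIFE_CS134 = 2.065   # years, approximate literature value
HALF_LIFE_CS137 = 30.08   # years, approximate literature value

def decay_corrected_ratio(measured_ratio, elapsed_years):
    """Correct a measured 134Cs/137Cs activity ratio back to the reference date
    (e.g. March 11, 2011), assuming simple exponential decay of both nuclides."""
    lam_134 = np.log(2.0) / HALF_LIFE_CS134
    lam_137 = np.log(2.0) / HALF_LIFE_CS137
    # 134Cs decays faster, so the ratio at the reference date exceeds the measured one
    return measured_ratio * np.exp((lam_134 - lam_137) * elapsed_years)

# Hypothetical soil measurement taken half a year after the reference date
ratio_at_reference = decay_corrected_ratio(measured_ratio=0.85, elapsed_years=0.5)
```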

  10. Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users

    ERIC Educational Resources Information Center

    Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu

    2004-01-01

    Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…

  11. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the NH3 global emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission sources from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located near each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each), and a set of sensors placed at the centre of each field at several heights as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling, using the averaged concentrations and the same models run in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
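
    The inversion step described above amounts to solving a small linear system: the time-averaged concentration at each sampler is a weighted sum of the unknown field emissions plus background, with weights given by forward dispersion runs. The sketch below shows that step as a non-negative least-squares problem; the dispersion matrix, background level and sensor counts are placeholders, since the real weights would come from WindTrax or FIDES simulations.

      import numpy as np
      from scipy.optimize import nnls

      # D[i, j] = time-averaged concentration at sensor i per unit emission from
      # field j, as would be obtained from forward WindTrax / FIDES runs.
      n_sensors, n_sources = 13, 9
      rng = np.random.default_rng(0)
      D = rng.uniform(0.01, 0.2, size=(n_sensors, n_sources))   # placeholder weights
      q_true = rng.uniform(0.0, 5.0, size=n_sources)            # "true" source strengths
      background = 1.0
      c_obs = D @ q_true + background + rng.normal(0.0, 0.05, n_sensors)

      # Multi-source inversion: non-negative least squares on background-corrected data.
      q_est, _ = nnls(D, c_obs - background)
      print(np.round(q_true, 2))
      print(np.round(q_est, 2))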

  12. Estimation of the hydraulic conductivity of a two-dimensional fracture network using effective medium theory and power-law averaging

    NASA Astrophysics Data System (ADS)

    Zimmerman, R. W.; Leung, C. T.

    2009-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures will provide the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality, or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances, and using various upscaling methods. Kirkpatrick’s effective medium approximation, which works well for pore networks on a core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick’s approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (harmonic mean). The appropriate exponent can be correlated with statistical parameters that characterize the fracture density.
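
    The power-law (Desbarats-type) averaging mentioned above reduces to a one-line formula once the individual fracture or segment conductances are known; the exponent is then calibrated so that the averaged value matches the DFN ("true") conductivity. A minimal sketch, with purely illustrative conductance values:

      import numpy as np

      def power_law_average(k, omega):
          """Power-law average of conductances k with exponent omega; omega -> 0
          recovers the geometric mean."""
          k = np.asarray(k, dtype=float)
          if abs(omega) < 1e-9:
              return np.exp(np.mean(np.log(k)))
          return np.mean(k ** omega) ** (1.0 / omega)

      # Hypothetical segment conductances (e.g. cubic-law values, k ~ aperture**3):
      k_segments = np.array([1e-9, 4e-9, 2.5e-8, 6e-8, 1.2e-7])
      for omega in (0.0, 0.25, 0.5, 1.0):
          print(omega, power_law_average(k_segments, omega))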

  13. Quaternary estimates of average slip-rates for active faults in the Mongolian Altay Mountains: the advantages and assumptions of multiple dating techniques

    NASA Astrophysics Data System (ADS)

    Gregory, L. C.; Walker, R. T.; Thomas, A. L.; Amgaa, T.; Bayasgalan, G.; Amgalan, B.; West, A.

    2010-12-01

    Active faults in the Altay Mountains, western Mongolia, produce surface expressions that are generally well-preserved due to the arid central-Asian climate. Motion along the right-lateral strike-slip and oblique-reverse faults has displaced major river systems by kilometres over millions of years, and there are clear scarps and linear features in the landscape along the surface traces of active fault strands. With combined remote sensing and field work, we have identified sites with surface features that have been displaced by tens of metres as a result of cumulative motion along faults. In an effort to accurately quantify an average slip-rate for the faults, we used multiple dating techniques to provide age constraints for the displaced landscapes. At one site on the Olgiy fault, we applied 10Be terrestrial cosmogenic nuclide (TCN) and uranium-series geochronology on boulder tops and in-situ formed carbonate rinds, respectively. Based on a displacement of approximately 17 m, and geochronology results that range from 20-60 kyr, we resolve a slip-rate of less than 1 mm/yr. We have also applied optically stimulated luminescence (OSL), 10Be TCN, and U-series methods on the Ar Hotol fault. Each of these dating techniques provides unique constraints on the relationship between the ‘age’ of a displaced surface and the actual amount of displacement, and each has inherent assumptions. We will consider the advantages and assumptions made in utilising these techniques in western Mongolia, e.g. U-series dating of carbonate rinds can provide a minimum age for alluvial fan deposition, and inheritance must be considered when using TCN techniques on boulder tops. This will be put into the context of estimating accurate and geologically relevant slip-rates, and improving our understanding of the active deformation of the Mongolian Altay.

  14. ESTIMATED DAILY AVERAGE PER CAPITA WATER INGESTION BY CHILD AND ADULT AGE CATEGORIES BASED ON USDA'S 1994-96 AND 1998 CONTINUING SURVEY OF FOOD INTAKES BY INDIVIDUALS (JOURNAL ARTICLE)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...

  15. [Estimating the impacts of future climate change on water requirement and water deficit of winter wheat in Henan Province, China].

    PubMed

    Ji, Xing-jie; Cheng, Lin; Fang, Wen-song

    2015-09-01

    Based on the analysis of water requirement and water deficit during the development stages of winter wheat over the recent 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with the climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic factors affecting ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentage of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average value from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were spatially heterogeneous because of differences in geographical and climatic environments. A possible tendency of water resource deficiency may exist in Henan Province in the future. PMID:26785550
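
    A minimal sketch of the monthly water-balance terms used above, assuming reference ET from FAO Penman-Monteith is already available: the crop water requirement is ETc = Kc * ET0, effective precipitation follows the USDA Soil Conservation Service monthly formula as commonly given in FAO guidance, and the deficit is WD = ETc - Pe. The Kc value and the monthly inputs are illustrative, not the study's data.

      def effective_precipitation(p_month_mm):
          """USDA-SCS estimate of monthly effective precipitation (mm)."""
          if p_month_mm <= 250.0:
              return p_month_mm * (125.0 - 0.2 * p_month_mm) / 125.0
          return 125.0 + 0.1 * p_month_mm

      def water_deficit(et0_mm, kc, p_mm):
          etc = kc * et0_mm                    # crop water requirement (mm/month)
          pe = effective_precipitation(p_mm)   # effective precipitation (mm/month)
          return max(etc - pe, 0.0), etc, pe

      wd, etc, pe = water_deficit(et0_mm=95.0, kc=1.05, p_mm=40.0)
      print(f"ETc = {etc:.1f} mm, Pe = {pe:.1f} mm, deficit = {wd:.1f} mm")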

  16. Estimating N requirements for corn using indices developed from a canopy reflectance sensor

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the increasing cost of fertilizer N, there is a renewed emphasis on developing new technologies for quantifying in-season N requirements for corn. The objectives of this research are (i) to evaluate different vegetative indices derived from an active reflectance sensor in estimating in-season N...

  17. Brief to the Committee on University Affairs. Estimates of Operating Grant Requirements for 1970-71.

    ERIC Educational Resources Information Center

    Committee of Presidents of Universities of Ontario, Toronto.

    This brief contains a refinement and amplification of preliminary estimates of operating fund requirements of the provincially assisted universities of Ontario for 1970-71. Part B of the report contains quantitative descriptors of university operations including budgeted operating expenditures for 1969-70, faculty income unit ratios in 1969-70,…

  18. Use Of Crop Canopy Size To Estimate Water Requirements Of Vegetable Crops

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Planting time, plant density, variety, and cultural practices vary widely for horticultural crops. It is difficult to estimate crop water requirements for crops with these variations. Canopy size, or factional ground cover, as an indicator of intercepted sunlight, is related to crop water use. We...

  19. Evaluating Multiple Indices from a Canopy Reflectance Sensor to Estimate Corn N Requirements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the increasing cost of fertilizer N, there is a renewed emphasis on developing new technologies for quantifying in-season N requirements for corn. The objectives of this research are (i) to evaluate different vegetative indices derived from an active reflectance sensor in estimating in-season N...

  20. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  1. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Astrophysics Data System (ADS)

    Peffley, Al F.

    1991-04-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  2. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  3. On the Berdichevsky average

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi

    2016-04-01

    Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method, proposed by Berdichevsky and coworkers, is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of a regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance, and hence its Berdichevsky average, is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
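
    For concreteness, the sketch below computes the determinant and ssq invariants of 2x2 complex impedance tensors and their geometric averages over sites, which is the sense in which the Berdichevsky average is taken above. The impedance values are hypothetical, and the ssq normalisation used here (half the sum of the squared elements) is an assumption to be checked against the paper's definition.

      import numpy as np

      def det_invariant(Z):
          """Determinant invariant of a 2x2 complex MT impedance tensor."""
          return np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])

      def ssq_invariant(Z):
          """ssq invariant: sqrt of half the sum of squared elements (assumed form)."""
          return np.sqrt(np.sum(Z ** 2) / 2.0)

      def geometric_average(values):
          """Geometric mean over sites (complex values allowed)."""
          return np.exp(np.mean(np.log(np.asarray(values))))

      # Hypothetical impedances observed at three sites for one period:
      sites = [np.array([[0.1 + 0.2j, 1.5 + 1.1j], [-1.4 - 1.0j, -0.1 - 0.2j]]),
               np.array([[0.0 + 0.1j, 1.2 + 0.9j], [-1.1 - 0.8j, 0.0 - 0.1j]]),
               np.array([[0.2 + 0.1j, 1.8 + 1.3j], [-1.7 - 1.2j, -0.2 - 0.1j]])]

      z_det = geometric_average([det_invariant(Z) for Z in sites])   # Berdichevsky average
      z_ssq = geometric_average([ssq_invariant(Z) for Z in sites])   # less distortion-biased
      print(z_det, z_ssq)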

  4. Averaging the inhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2012-03-01

    A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.

  5. Spent fuel disassembly hardware and other non-fuel bearing components: characterization, disposal cost estimates, and proposed repository acceptance requirements

    SciTech Connect

    Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.

    1986-10-01

    There are two categories of waste considered in this report. The first is the spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly. This includes end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per average metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, with most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste. The relatively large contribution of waste hardware from BWR fuel consists mostly of non-fuel-bearing components, primarily the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.

  6. Data requirements for using combined conductivity mass balance and recursive digital filter method to estimate groundwater recharge in a small watershed, New Brunswick, Canada

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Xing, Zisheng; Danielescu, Serban; Li, Sheng; Jiang, Yefang; Meng, Fan-Rui

    2014-04-01

    Estimation of baseflow and groundwater recharge rates is important for hydrological analysis and modelling. A new approach that combines a recursive digital filter (RDF) model with the conductivity mass balance (CMB) method is considered reliable for baseflow separation, because the combined method takes advantage of the reduced data requirement of the RDF method and the reliability of the CMB method. However, it is not clear what the minimum data requirements are for producing acceptable estimates of the RDF model parameters. In this study, a 19-year record of stream discharge and water conductivity collected from the Black Brook Watershed (BBW), NB, Canada was used to test the combined baseflow separation method and to assess the seasonal variability of the model parameters. The data requirements and potential bias in the estimated baseflow index (BFI) were evaluated using conductivity data for different seasons and/or resampled data segments of various sampling durations. Results indicated that data collected during the ground-frozen season are more suitable for estimating baseflow conductivity (Cbf), and data from the snow-melting period are more suitable for estimating runoff conductivity (Cro). Relative errors of baseflow estimation were inversely proportional to the number of conductivity records. A minimum of six months of discharge and conductivity data is required to obtain reliable parameters for the current method with acceptable errors. We further found that the average annual recharge rate for the BBW was 322 mm over the past twenty years.
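
    A minimal sketch of the two ingredients being combined: the conductivity mass-balance separation Qbf = Q (C - Cro) / (Cbf - Cro), and a recursive digital filter. The abstract does not state which RDF formulation the authors used, so the Eckhardt two-parameter filter is shown here as one common choice; the discharge, conductivity, Cbf, Cro and filter parameter values are placeholders.

      import numpy as np

      def baseflow_cmb(q, c, c_bf, c_ro):
          """Conductivity mass-balance baseflow: Qbf = Q * (C - Cro) / (Cbf - Cro)."""
          return np.clip(q * (c - c_ro) / (c_bf - c_ro), 0.0, q)

      def baseflow_rdf(q, alpha=0.98, bfi_max=0.80):
          """Eckhardt-type recursive digital filter; alpha and BFImax are the
          parameters that the CMB results are used to constrain."""
          bf = np.zeros_like(q, dtype=float)
          bf[0] = bfi_max * q[0]
          for t in range(1, len(q)):
              bf[t] = ((1 - bfi_max) * alpha * bf[t - 1] + (1 - alpha) * bfi_max * q[t]) \
                      / (1 - alpha * bfi_max)
              bf[t] = min(bf[t], q[t])
          return bf

      # Hypothetical daily discharge (m3/s) and specific conductivity (uS/cm).
      q = np.array([1.2, 1.1, 3.5, 2.6, 1.9, 1.5, 1.3, 1.2])
      c = np.array([310, 315, 180, 220, 260, 285, 300, 308], dtype=float)
      print(np.round(baseflow_cmb(q, c, c_bf=320.0, c_ro=120.0), 2))
      print(np.round(baseflow_rdf(q), 2))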

  7. The effects of the variations in sea surface temperature and atmospheric stability in the estimation of average wind speed by SEASAT-SASS

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1984-01-01

    The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.

  8. Establishing a method for estimating crop water requirements using the SEBAL method in Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Toulios, L.; Hadjimitsis, D.; Kountios, G.

    2014-08-01

    Water allocation to crops has always been of great importance in the agricultural process. In this context, and under current conditions in which Cyprus has faced a severe drought over the last five years, the purpose of this study is to estimate crop water requirements to support irrigation management and to monitor irrigation on a systematic basis for Cyprus using remote sensing techniques. The use of satellite images supported by ground measurements has provided quite accurate results. The intended purpose of this paper is to estimate the evapotranspiration (ET) of specific crops, which is the basis for irrigation scheduling, and to establish a procedure for monitoring and managing irrigation water over Cyprus, using remotely sensed data from Landsat TM/ETM+ and a sound methodology used worldwide, the Surface Energy Balance Algorithm for Land (SEBAL). The methodology set out in this paper refers to COST action ES1106 (Agri-Wat) for determining crop water requirements as part of the water footprint and virtual water-trade.

  9. Bioenergetics model for estimating food requirements of female Pacific walruses (Odobenus rosmarus divergens)

    USGS Publications Warehouse

    Noren, S.R.; Udevitz, M.S.; Jay, C.V.

    2012-01-01

    Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–9% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.

  10. Space Station: Estimated total US funding requirements. Report to Congressional Requesters

    NASA Astrophysics Data System (ADS)

    1995-06-01

    This report reviews current estimated costs of the NASA space station, in particular the total U.S. funding requirements for the program and program uncertainties that may affect those requirements. U.S. funds required to design, launch, and operate the International Space Station will total about $94 billion through 2012 (about $77 billion in fiscal year 1995 constant dollars). This total may decrease to the extent NASA accomplishes its goal for achieving station operational efficiencies over the period 2003 to 2012, or efficiencies currently being studied in the space shuttle program materialize. Despite major progress, the program faces formidable challenges in completing all its tasks on schedule and within its budget. The program estimates through fiscal year 1997 show limited annual financial reserves - about 6 percent to 11 percent of estimated costs. Inadequate reserves would hinder program managers' ability to cope with unanticipated technical problems. In addition, the space station's current launch and assembly schedule is ambitious, and the shuttle program may have difficulty supporting it. Moreover, the prime contract target cost could increase if the contractor is unable to negotiate subcontractor agreements for the expected price.

  11. Model requirements for estimating and reporting soil C stock changes in national greenhouse gas inventories

    NASA Astrophysics Data System (ADS)

    Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent

    2016-04-01

    Globally, soils are the largest terrestrial store of carbon (C) and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at the large scale are important, as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC, and also under the Kyoto Protocol, C balances must be reported annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representatively at national scales. The use of models to obtain relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed, but few models are suitable for consistent application across larger scales. Consistency is often limited by the lack of input data for models, which can result in biased estimates; thus, the reporting criterion of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet the criteria established for GHG reporting under the UNFCCC, including accuracy, consistency, comparability, completeness, and transparency, we identified the suitability of commonly used simulation models for estimating annual C stock changes in mineral soil in European forests. Among the six simulation models discussed, we found a clear trend toward models that provide quantitatively precise site-specific estimates, which may lead to biased estimates across space. To meet reporting needs for national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests

  12. An examination of population exposure to traffic related air pollution: Comparing spatially and temporally resolved estimates against long-term average exposures at the home location.

    PubMed

    Shekarrizfard, Maryam; Faghih-Imani, Ahmadreza; Hatzopoulou, Marianne

    2016-05-01

    Air pollution in metropolitan areas is mainly caused by traffic emissions. This study presents the development of a model chain consisting of a transportation model, an emissions model, and an atmospheric dispersion model, applied to dynamically evaluate individuals' exposure to air pollution by intersecting daily trajectories of individuals and hourly spatial variations of air pollution across the study domain. This dynamic approach is implemented in Montreal, Canada to highlight the advantages of the method for exposure analysis. The results for nitrogen dioxide (NO2), a marker of traffic-related air pollution, reveal significant differences when relying on spatially and temporally resolved concentrations combined with individuals' daily trajectories compared to a long-term average NO2 concentration at the home location. We observe that NO2 exposures based on trips and activity locations visited throughout the day were often more elevated than daily NO2 concentrations at the home location. For 89.6% of individuals, the 24-hour daily average at home was lower than their 24-hour mobility-based exposure; 31% of individuals increased their exposure by more than 10% by leaving home. On average, individuals increased their exposure by 23-44% while commuting and conducting activities out of the home (compared to the daily concentration at home), regardless of air quality at their home location. We conclude that our proposed dynamic modelling approach significantly improves the results of traditional methods that rely on a long-term average concentration at the home location, and we shed light on the importance of using individual daily trajectories to understand exposure. PMID:26970897
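
    The core of the dynamic exposure estimate is a time-weighted average of concentrations over the locations an individual actually visits, which can then be compared with the long-term average at the home location. A minimal sketch with hypothetical concentrations and durations:

      import numpy as np

      def daily_exposure(concentrations_ppb, hours):
          """Time-weighted 24 h exposure across the locations visited in a day."""
          c = np.asarray(concentrations_ppb, dtype=float)
          h = np.asarray(hours, dtype=float)
          return np.sum(c * h) / np.sum(h)

      # Hypothetical day: home, commute in traffic, workplace, commute, home.
      concentrations = [18.0, 55.0, 30.0, 60.0, 18.0]   # NO2 (ppb) at each location
      hours = [9.0, 1.0, 8.0, 1.0, 5.0]

      mobility_based = daily_exposure(concentrations, hours)
      home_based = 18.0                                  # 24 h average at home only
      print(f"mobility-based {mobility_based:.1f} ppb vs home-based {home_based:.1f} ppb")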

  13. Irrigation Requirement Estimation using MODIS Vegetation Indices and Inverse Biophysical Modeling; A Case Study for Oran, Algeria

    NASA Technical Reports Server (NTRS)

    Bounoua, L.; Imhoff, M.L.; Franks, S.

    2008-01-01

    the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence with an average frequency of occurrence of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, the drip irrigation resulted in less frequent irrigation events with an average water requirement about 57% less than that simulated during the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of the canopy interception loss compared to the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country level estimates downward to 17% or less

  14. Capital requirements for the transportation of energy materials: 1979 arc estimates

    SciTech Connect

    Not Available

    1980-08-29

    Summaries of transportation investment requirements through 1990 are given for the low, medium and high scenarios. Total investment requirements for the three modes and the three energy commodities can accumulate to a $46.3 to $47.0 billion range, depending on the scenario. The high price of oil, following the evidence of the past year, is projected to hold oil demand below recent levels. Despite the overall decrease in traffic, some investment in crude oil and LPG pipelines is necessary to reach new sources of supply. Although natural gas production and consumption are projected to decline through 1990, new investments in carrying capacity are also required due to locational shifts in supply. The Alaska Natural Gas Transportation System is the dominant investment for energy transportation in the next ten years. This year's report focuses attention on waterborne coal transportation to the northeast states, in keeping with a return to significant coal consumption projected for this area. A resumption of such shipments will require a completely new fleet. The investment estimates given in this report identify the capital required to transport projected energy supplies to market. The requirement is strategic in the sense that other reasonable alternatives do not exist or that a shared load of new growth can be expected. Investments in transportation facilities made in response to local conditions are not analyzed or forecast. The total investment figures, therefore, represent the minimum capital improvement necessary to respond to changes in interregional supply conditions.

  15. Preliminary estimate of environmental flow requirements of the Rusape River, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Love, Faith; Madamombe, Elisha; Marshall, Brian; Kaseke, Evans

    Environmental flow requirements for the Rusape River, a tributary of the Save River in Zimbabwe, were estimated using a rapid-results approach. Thirty years of hydrological data with daily time steps from gauging stations upstream and downstream of the Rusape Dam were analysed using the DRIFT software. The dam appears to have increased intra-annual and inter-annual flood events downstream compared to upstream, including significant dry season releases, while inter-annual floods were larger. The water releases from the dam differ from the natural flow in both volume and frequency, especially in the dry season, and may have had a negative impact on the local ecosystem and subsistence farmers. The building block method (BBM) was applied, using the hydrological analyses performed, in order to estimate environmental flow requirements, which are presented as mean monthly flows. The flow regime recommended for the Rusape River should reduce or reverse these impacts, whilst ensuring sufficient water resources are released for economic needs. The proposed EFR can be achieved within the observed mean monthly flows. However, it should be stressed that the proposed EFR has been developed from a rapid method, and is only a first estimate of the EFR for the Rusape River. This study represents a step in developing a management plan for the Save Basin, shared between Zimbabwe and Mozambique.

  16. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  17. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  18. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  19. Determining the required accuracy of LST products for estimating surface energy fluxes

    NASA Astrophysics Data System (ADS)

    Pinheiro, A. C.; Reichle, R.; Sujay, K.; Arsenault, K.; Privette, J. L.; Yu, Y.

    2006-12-01

    Land Surface Temperature (LST) is an important parameter to assess the energy state of a surface. Synoptic satellite observations of LST must be used when attempting to estimate fluxes over large spatial scales. Due to the close coupling between LST, root level water availability, and mass and energy fluxes at the surface, LST is particularly useful over agricultural areas to help determine crop water demands and facilitate water management decisions (e.g., irrigation). Further, LST can be assimilated into land surface models to help improve estimates of latent and sensible heat fluxes. However, the accuracy of LST products and its impact on surface flux estimation is not well known. In this study, we quantify the uncertainty limits in LST products for accurately estimating latent heat fluxes over agricultural fields in the Rio Grande River basin of central New Mexico. We use the Community Land Model (CLM) within the Land Information Systems (LIS), and adopt an Ensemble Kalman Filter approach to assimilate the LST fields into the model. We evaluate the LST and assimilation performance against field measurements of evapotranspiration collected at two eddy-covariance towers in semi-arid cropland areas. Our results will help clarify sensor and LST product requirements for future remote sensing systems.

  20. Estimating sugarcane water requirements for biofuel feedstock production in Maui, Hawaii using satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Anderson, R. G.; Wang, D.

    2011-12-01

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop evapotranspiration (ETc). Generic Kc values are available for many crop types but not for sugarcane in Maui, Hawaii, which grows on a relatively unstudied biennial cycle. In this study, an algorithm is being developed to estimate sugarcane Kc using normalized difference vegetation index (NDVI) derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A series of ASTER NDVI maps were used to depict canopy development over time or fractional canopy cover (fc) which was measured with a handheld multispectral camera in the fields during satellite overpass days. Canopy cover was correlated with NDVI values. Then the NDVI based canopy cover was used to estimate Kc curves for sugarcane plants. The remotely estimated Kc and ETc values were compared and validated with ground-truth ETc measurements. The approach is a promising tool for large scale estimation of evapotranspiration of sugarcane or other biofuel crops.
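
    A minimal sketch of the chain described above: scale NDVI to fractional canopy cover, map cover to a crop coefficient, and multiply by reference ET to get crop evapotranspiration. The NDVI scaling bounds, the fc-to-Kc endpoints, and the input series are assumptions for illustration; the study derives its own calibration from field measurements.

      import numpy as np

      def fractional_cover(ndvi, ndvi_soil=0.15, ndvi_full=0.85):
          """Linear scaling of NDVI to fractional canopy cover (bounds assumed)."""
          return np.clip((ndvi - ndvi_soil) / (ndvi_full - ndvi_soil), 0.0, 1.0)

      def crop_coefficient(fc, kc_min=0.15, kc_full=1.20):
          """Illustrative linear fc-to-Kc relation."""
          return kc_min + (kc_full - kc_min) * fc

      ndvi_series = np.array([0.20, 0.45, 0.70, 0.82, 0.80])   # placeholder ASTER NDVI
      eto_series = np.array([5.5, 6.0, 6.3, 6.1, 5.8])         # placeholder ETo (mm/day)

      kc = crop_coefficient(fractional_cover(ndvi_series))
      print(np.round(kc, 2), np.round(kc * eto_series, 2))      # Kc and ETc (mm/day)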

  1. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
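
    The classic pitfall the article addresses is easy to reproduce: over two legs of equal distance, the true average speed is the harmonic mean of the leg speeds, not their arithmetic mean (the same holds for miles-per-gallon fuel economy figures). The speeds below are arbitrary examples.

      def harmonic_mean(values):
          return len(values) / sum(1.0 / v for v in values)

      v_out, v_back = 30.0, 60.0                # mph on two legs of equal length
      print((v_out + v_back) / 2)               # arithmetic mean of rates: 45.0 (misleading)
      print(harmonic_mean([v_out, v_back]))     # harmonic mean: 40.0 (actual average rate)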

  2. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology provided insights into arrhythmogenesis and paved the way toward tailored therapies in the last years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
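
    To make the coupling idea concrete, the sketch below alternates one particle-swarm update with a gradient-based trust-region-reflective refinement of the current best particle, on a toy saturating-exponential "current" model. This is only a schematic of the hybrid scheme: the real study fits Courtemanche-model ion current formulations, and the model, bounds and swarm hyperparameters here are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(params, t, i_meas):
          """Toy two-parameter model i(t) = g * (1 - exp(-t / tau)); not the
          Courtemanche equations."""
          g, tau = params
          return g * (1.0 - np.exp(-t / tau)) - i_meas

      def hybrid_fit(t, i_meas, bounds, n_particles=20, n_iter=15, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_cost = np.array([np.sum(residuals(p, t, i_meas) ** 2) for p in x])
          for _ in range(n_iter):
              gbest = pbest[np.argmin(pbest_cost)]
              # particle swarm step (inertia 0.7, cognitive/social weights 1.5)
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              cost = np.array([np.sum(residuals(p, t, i_meas) ** 2) for p in x])
              better = cost < pbest_cost
              pbest[better], pbest_cost[better] = x[better], cost[better]
              # trust-region-reflective refinement of the swarm's best particle
              sol = least_squares(residuals, pbest[np.argmin(pbest_cost)],
                                  args=(t, i_meas), bounds=(lo, hi), method="trf")
              k = np.argmin(pbest_cost)
              if np.sum(sol.fun ** 2) < pbest_cost[k]:
                  pbest[k], pbest_cost[k] = sol.x, np.sum(sol.fun ** 2)
          return pbest[np.argmin(pbest_cost)]

      # Synthetic "measured" trace with known ground truth (g = 2.0, tau = 50.0).
      t = np.linspace(0.0, 300.0, 200)
      i_meas = 2.0 * (1.0 - np.exp(-t / 50.0)) + np.random.default_rng(1).normal(0, 0.02, t.size)
      print(hybrid_fit(t, i_meas, bounds=([0.1, 1.0], [10.0, 500.0])))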

  3. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
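
    The Poisson step quoted above is a one-liner: the probability of at least one event in a decade is 1 - exp(-lambda), with lambda the expected number of events per decade. The VEI >= 4 rate below follows the abstract's estimate of roughly 7 events per decade; the VEI >= 5 and VEI >= 6 rates are back-calculated assumptions chosen to be consistent with the quoted probabilities, not values taken from the paper.

      import math

      def prob_at_least_one(lam):
          """P(N >= 1) for a Poisson count with mean lam events per decade."""
          return 1.0 - math.exp(-lam)

      for label, lam in [("VEI >= 4", 7.0), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
          print(f"{label}: P(at least one in 2000-2009) = {prob_at_least_one(lam):.2f}")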

  4. Toxic Release Inventory reporting requirement: Estimating volatile organic compound releases from industrial wastewater treatment facilities

    SciTech Connect

    Hall, F.E. Jr.

    1997-12-31

    In production/maintenance processes at the Oklahoma City Air Logistics Center, industrial wastewater streams are generated which contain organic compounds. These wastewaters are collected and treated in a variety of ways. Some of these collection and treatment steps result in the release of volatile organic compounds (VOC) from the wastewater to the ambient air. This paper discusses the potential VOC emission sources and presents emission estimates for an Industrial Wastewater Treatment Plant (IWTP). As regulatory reporting requirements become increasingly stringent, Air Force installations are being required to quantify and report VOC releases to the environment. The computer software described in this paper was used to identify and quantify VOC discharges to the environment. The magnitude of VOC emissions depends greatly on many factors, such as the physical properties of the pollutants, the temperature of the wastewater, and the design of the individual collection and treatment process units. IWTP VOC releases can be estimated using a computer model designed by the Environmental Protection Agency. The Surface Impoundment Model System (SIMS) utilizes equipment information to predict air emissions discharged from each individual process unit. SIMS uses mass transfer expressions and process unit information, in addition to chemical/physical property data for the chemicals of interest. By inputting process conditions and constraints, SIMS determines the effluent concentrations along with the air emissions discharged from each individual process unit. The software is user-friendly and capable of estimating effluent concentrations and ambient air releases. The SIMS software was used by Tinker AFB chemical engineers to predict VOC releases to satisfy the Toxic Release Inventory reporting requirements.

  5. Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.

  6. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  7. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  8. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  9. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions

    PubMed Central

    Ware, Colin; Trites, Andrew W.; Rosen, David A. S.; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion—an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric—Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal—and they should

  10. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions.

    PubMed

    Ware, Colin; Trites, Andrew W; Rosen, David A S; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion-an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric-Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal-and they should also
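
    As defined above, APBA is simply the mean speed gain per flipper stroke divided by the mean stroke-cycle duration. A minimal sketch with hypothetical per-stroke values, e.g. as segmented from gyroscope-identified stroke events:

      import numpy as np

      def apba(speed_gains_m_s, stroke_durations_s):
          """Averaged Propulsive Body Acceleration (m/s^2): mean speed gain per
          stroke divided by the mean stroke-cycle duration."""
          return np.mean(speed_gains_m_s) / np.mean(stroke_durations_s)

      gains = np.array([0.35, 0.40, 0.32, 0.38, 0.36])       # m/s gained per stroke
      durations = np.array([0.90, 1.00, 0.95, 1.05, 0.90])   # s per stroke cycle
      print(f"APBA = {apba(gains, durations):.2f} m/s^2")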

  11. Preliminary estimates of galactic cosmic ray shielding requirements for manned interplanetary missions

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Nealy, John E.

    1988-01-01

    Estimates of radiation risk to the blood forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shielding is required to reduce annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.

  12. Estimation of crop water requirements using remote sensing for operational water resources management

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Spiliotopoulos, Marios; Tzabiras, John; Loukas, Athanasios; Mylopoulos, Nikitas

    2015-06-01

    An integrated modeling system, developed in the framework of "Hydromentor" research project, is applied to evaluate crop water requirements for operational water resources management at Lake Karla watershed, Greece. The framework includes coupled components for operation of hydrotechnical projects (reservoir operation and irrigation works) and estimation of agricultural water demands at several spatial scales using remote sensing. The study area was sub-divided into irrigation zones based on land use maps derived from Landsat 5 TM images for the year 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used to derive actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat TM imagery. Agricultural water needs were estimated using the FAO method for each zone and each control node of the system for a number of water resources management strategies. Two operational strategies of hydro-technical project development (present situation without operation of the reservoir and future situation with the operation of the reservoir) are coupled with three water demand strategies. In total, eight (8) water management strategies are evaluated and compared. The results show that, under the existing operational water resources management strategies, the crop water requirements are quite large. However, the operation of the proposed hydro-technical projects in Lake Karla watershed coupled with water demand management measures, like improvement of existing water distribution systems, change of irrigation methods, and changes of crop cultivation could alleviate the problem and lead to sustainable and ecological use of water resources in the study area.
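
    For orientation, the FAO-style bookkeeping behind such estimates multiplies reference evapotranspiration by a crop coefficient and subtracts effective rainfall; the sketch below illustrates that arithmetic with invented monthly values, not data from the Lake Karla study.

      # Sketch of the FAO-style crop water requirement bookkeeping described above.
      # Monthly values are hypothetical; Kc here plays the role of the METRIC-derived ETrF.

      def net_irrigation_requirement(et0_mm, kc, effective_rain_mm):
          """Crop ET minus effective rainfall, floored at zero (mm per period)."""
          etc = kc * et0_mm
          return max(etc - effective_rain_mm, 0.0)

      months = [("May", 140.0, 0.45, 30.0), ("Jun", 180.0, 0.90, 15.0), ("Jul", 200.0, 1.10, 5.0)]
      total = 0.0
      for name, et0, kc, rain in months:
          need = net_irrigation_requirement(et0, kc, rain)
          total += need
          print(f"{name}: irrigation requirement ~ {need:.0f} mm")
      print(f"Seasonal total ~ {total:.0f} mm")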

  13. Updated estimates of long-term average dissolved-solids loading in streams and rivers of the Upper Colorado River Basin

    USGS Publications Warehouse

    Tillman, Fred D; Anning, David W.

    2014-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigating over 4.5 million acres of farmland, and annually generating about 12 billion kilowatt hours of hydroelectric power. The Upper Colorado River Basin, part of the Colorado River Basin, encompasses more than 110,000 mi2 and is the source of much of more than 9 million tons of dissolved solids that annually flows past the Hoover Dam. High dissolved-solids concentrations in the river are the cause of substantial economic damages to users, primarily in reduced agricultural crop yields and corrosion, with damages estimated to be greater than 300 million dollars annually. In 1974, the Colorado River Basin Salinity Control Act created the Colorado River Basin Salinity Control Program to investigate and implement a broad range of salinity control measures. A 2009 study by the U.S. Geological Survey, supported by the Salinity Control Program, used the Spatially Referenced Regressions on Watershed Attributes surface-water quality model to examine dissolved-solids supply and transport within the Upper Colorado River Basin. Dissolved-solids loads developed for 218 monitoring sites were used to calibrate the 2009 Upper Colorado River Basin Spatially Referenced Regressions on Watershed Attributes dissolved-solids model. This study updates and develops new dissolved-solids loading estimates for 323 Upper Colorado River Basin monitoring sites using streamflow and dissolved-solids concentration data through 2012, to support a planned Spatially Referenced Regressions on Watershed Attributes modeling effort that will investigate the contributions to dissolved-solids loads from irrigation and rangeland practices.
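
    At its core, a dissolved-solids load is concentration times streamflow integrated over time; the back-of-envelope sketch below shows that unit conversion for hypothetical values (operational USGS loading estimates are derived from fitted statistical models rather than this direct product).

      # Back-of-envelope dissolved-solids load: concentration (mg/L) times streamflow (cubic feet
      # per second) integrated over a day, converted to short tons per day. Values are hypothetical.

      CFS_TO_L_PER_DAY = 28.3168 * 86400      # cubic feet/s -> liters/day
      MG_TO_TONS = 1.0 / 907_184_740          # mg -> short tons (1 short ton = 907,184,740 mg)

      def daily_load_tons(concentration_mg_per_L, streamflow_cfs):
          mg_per_day = concentration_mg_per_L * streamflow_cfs * CFS_TO_L_PER_DAY
          return mg_per_day * MG_TO_TONS

      # e.g. 500 mg/L at 1000 cfs gives roughly 1,350 tons/day for these made-up values
      print(f"{daily_load_tons(500.0, 1000.0):.0f} tons/day")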

  14. A Method for Automated Classification of Parkinson's Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI.

    PubMed

    Banerjee, Monami; Okun, Michael S; Vaillancourt, David E; Vemuri, Baba C

    2016-01-01

    Parkinson's disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future. PMID

  15. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    PubMed Central

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future. PMID

  16. Assessing potential of vertical average soil moisture (0-40cm) estimation for drought monitoring using MODIS data: a case study

    NASA Astrophysics Data System (ADS)

    Ma, Jianwei; Huang, Shifeng; Li, Jiren; Li, Xiaotao; Song, Xiaoning; Leng, Pei; Sun, Yayong

    2015-12-01

    Soil moisture is an important parameter in research on hydrology, agriculture, and meteorology. The present study was designed to produce a near-real-time soil moisture estimation algorithm by linking optical/IR measurements to ground-measured soil moisture, which was then used to monitor regional drought. It has been found that the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST) are related to surface soil moisture. Therefore, a relationship between ground-measured soil moisture and NDVI and LST can be developed. Six days of NDVI and LST data calculated from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) over Shandong province from October 2009 to May 2010 were combined with ground-measured volumetric soil moisture at different depths (10 cm, 20 cm, 40 cm, and the vertical mean over 0-40 cm) and for different soil types to determine regression relationships at a 1 km scale. Based on these regression relationships, mean volumetric soil moisture in the vertical (0-40 cm) at 1 km resolution was calculated over Shandong province, and drought maps were then obtained. The results show that significant relationships exist between NDVI, LST and soil moisture at different soil depths, and that the regression relationships are soil-type dependent. Moreover, the drought monitoring results agree well with the actual situation.
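
    A minimal version of the regression step described above, soil moisture expressed as a linear function of NDVI and LST, can be written with ordinary least squares; the sample points below are invented, and the study fits separate relationships for each soil type and depth.

      import numpy as np

      # Fit soil moisture ~ a*NDVI + b*LST + c by ordinary least squares.
      # The sample points below are invented; the study fits such relations per soil type and depth.
      ndvi = np.array([0.25, 0.40, 0.55, 0.30, 0.60, 0.45])
      lst  = np.array([305.0, 298.0, 292.0, 303.0, 290.0, 296.0])   # K
      sm   = np.array([0.12, 0.18, 0.25, 0.14, 0.27, 0.20])         # volumetric soil moisture (m3/m3)

      X = np.column_stack([ndvi, lst, np.ones_like(ndvi)])
      coeffs, *_ = np.linalg.lstsq(X, sm, rcond=None)
      a, b, c = coeffs
      print(f"SM ~ {a:.3f}*NDVI + {b:.4f}*LST + {c:.2f}")

      # Apply the fitted relation to a new pixel
      print(float(np.dot([0.50, 294.0, 1.0], coeffs)))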

  17. Capital requirements for the transportation of energy materials: 1979 ARC estimates. Draft final report

    SciTech Connect

    Not Available

    1980-08-13

    This report contains TERA's estimates of capital requirements to transport natural gas, crude oil, petroleum products, and coal in the United States by 1990. The low, medium, and high world-oil-price scenarios from the EIA's Mid-range Energy Forecasting System (MEFS), as used in the 1979 Annual Report to Congress (ARC), were provided as a basis for the analysis and represent three alternative futures. TERA's approach varies by energy commodity to make best use of the information and analytical tools available. Summaries of transportation investment requirements through 1990 are given. Total investment requirements for the three modes (pipelines, rails, waterways) and the three energy commodities can accumulate to a $49.9 to $50.9 billion range, depending on the scenario. The scenarios are distinguished primarily by the world price of oil which, given deregulation of domestic oil prices, affects US oil prices even more profoundly than in the past. The high price of oil, following the evidence of the last year, is projected to hold demand for oil below the recent past.

  18. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way. Some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as a relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change in soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When irrigation water is used with low efficiency, the volume of water applied exceeds PET, and the excess of applied water over PET must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the 2012 vegetation period, from April till September. All the operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web

  19. EURRECA-Estimating vitamin D requirements for deriving dietary reference values.

    PubMed

    Cashman, Kevin D; Kiely, Mairead

    2013-01-01

    The time course of EURRECA, from 2008 to 2012, overlapped considerably with the timeframe of the process undertaken by the North American Institute of Medicine (IOM) to revise dietary reference intakes for vitamin D and calcium (published November 2010). Therefore, the aims of the vitamin D-related activities in EURRECA were formulated to address knowledge requirements that would complement the activities undertaken by the IOM and provide additional resources for risk assessors and risk management agencies charged with the task of setting dietary reference values for vitamin D. A total of three systematic reviews were carried out. The first, which pre-dated the IOM review process, identified and evaluated existing and novel biomarkers of vitamin D status and confirmed that circulating 25-hydroxyvitamin D (25(OH)D) concentration is a robust and reliable marker of vitamin D status. The second systematic review conducted a meta-analysis of the dose-response of serum 25(OH)D to vitamin D intake from randomized controlled trials (RCT) among adults to explore the most appropriate model of the vitamin D intake-serum 25(OH)D relationship to estimate requirements. The third review also carried out a meta-analysis to evaluate evidence of efficacy from RCT using foods fortified with vitamin D, and found they increased circulating 25(OH)D concentrations in a dose-dependent manner but identified a need for stronger data on the efficacy of vitamin D-fortified food on deficiency prevention and potential health outcomes, including adverse effects. Finally, narrative reviews provided estimates of the prevalence of inadequate intakes of vitamin D in adults and children from international dietary surveys, as well as a compilation of research requirements for vitamin D to inform current and future assessments of vitamin D requirements. [Supplementary materials are available for this article. Go to the publisher's online edition of Critical Reviews in Food Science and Nutrition for

  20. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S. Mexico border and Central Asia and compared to estimates of irrigation water used.

  1. Estimation of crop water requirements: extending the one-step approach to dual crop coefficients

    NASA Astrophysics Data System (ADS)

    Lhomme, J. P.; Boudhina, N.; Masmoudi, M. M.; Chehbouni, A.

    2015-07-01

    Crop water requirements are commonly estimated with the FAO-56 methodology based upon a two-step approach: first a reference evapotranspiration (ET0) is calculated from weather variables with the Penman-Monteith equation, then ET0 is multiplied by a tabulated crop-specific coefficient (Kc) to determine the water requirement (ETc) of a given crop under standard conditions. This method has been challenged to the benefit of a one-step approach, where crop evapotranspiration is directly calculated from a Penman-Monteith equation, its surface resistance replacing the crop coefficient. Whereas the transformation of the two-step approach into a one-step approach has been well documented when a single crop coefficient (Kc) is used, the case of dual crop coefficients (Kcb for the crop and Ke for the soil) has not been treated yet. The present paper examines this specific case. Using a full two-layer model as a reference, it is shown that the FAO-56 dual crop coefficient approach can be translated into a one-step approach based upon a modified combination equation. This equation has the basic form of the Penman-Monteith equation but its surface resistance is calculated as the parallel sum of a foliage resistance (replacing Kcb) and a soil surface resistance (replacing Ke). We also show that the foliage resistance, which depends on leaf stomatal resistance and leaf area, can be inferred from the basal crop coefficient (Kcb) in a way similar to the Matt-Shuttleworth method.
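
    The central substitution in this one-step formulation is that the surface resistance is the parallel combination of a foliage resistance and a soil surface resistance; a hedged numerical sketch (illustrative resistance values only):

      # Parallel combination of resistances, as in the one-step formulation described above:
      # the combined surface resistance replaces the dual crop coefficients Kcb and Ke.
      # Numerical values are hypothetical.

      def parallel_resistance(r_foliage_s_per_m, r_soil_s_per_m):
          """Surface resistance from foliage and soil resistances acting in parallel (s/m)."""
          return 1.0 / (1.0 / r_foliage_s_per_m + 1.0 / r_soil_s_per_m)

      print(parallel_resistance(70.0, 500.0))  # ~61 s/m for these illustrative values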

  2. A new remote sensing procedure for the estimation of crop water requirements

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, M.; Loukas, A.; Mylopoulos, N.

    2015-06-01

    The objective of this work is the development of a new approach for the estimation of water requirements for the most important crops located in the Karla Watershed, central Greece. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used as a basis for the derivation of actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat ETM+ imagery. MODIS imagery has also been used, and a spatial downscaling procedure is followed between the two sensors for the derivation of a new NDVI product with a spatial resolution of 30 m x 30 m. GER 1500 spectro-radiometric measurements are additionally conducted during the 2012 growing season. Cotton, alfalfa, corn and sugar beet fields are utilized, based on land use maps derived from previous Landsat 7 ETM+ images. A filtering process is then applied to derive NDVI values after acquiring Landsat ETM+ based reflectance values from the GER 1500 device. ETrF vs NDVI relationships are produced and then applied to the previous satellite-based downscaled product in order to finally derive a 30 m x 30 m daily ETrF map for the study area. The CropWat model (FAO) is then applied, taking as input the new crop coefficient values with a spatial resolution of 30 m x 30 m available for every crop. CropWat finally returns daily crop water requirements (mm) for every crop, and the results are analyzed and discussed.

  3. MANPOWER REQUIREMENTS AND DEMAND IN AGRICULTURE BY REGIONS AND NATIONALLY, WITH ESTIMATION OF VOCATIONAL TRAINING AND EDUCATIONAL NEEDS AND PRODUCTIVITY.

    ERIC Educational Resources Information Center

    ARCUS, PETER; HEADY, EARL O.

    THE PURPOSE OF THIS STUDY IS TO ESTIMATE THE MANPOWER REQUIREMENTS FOR THE NATION AND FOR 144 REGIONS, THE TYPES OF SKILLS AND WORK ABILITIES REQUIRED BY AGRICULTURE IN THE NEXT 15 YEARS, AND THE TYPES AND AMOUNTS OF EDUCATION NEEDED. THE QUANTITATIVE ANALYSIS IS BEING MADE BY METHODS APPROPRIATE TO THE PHASES OF THE STUDY--(1) INTERRELATIONS AMONG…

  4. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be

  5. Assessment of radar resolution requirements for soil moisture estimation from simulated satellite imagery. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.

    1982-01-01

    Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.

  6. Electrofishing effort required to estimate biotic condition in Southern Idaho Rivers

    USGS Publications Warehouse

    Maret, T.R.; Ott, D.S.; Herlihy, A.T.

    2007-01-01

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions. © Copyright by the American Fisheries Society 2007.

  7. Estimating the Reliability of Dynamic Variables Requiring Rater Judgment: A Generalizability Paradigm.

    ERIC Educational Resources Information Center

    Webber, Larry; And Others

    Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool. Systolic…

  8. Number of trials required to estimate a free-energy difference, using fluctuation relations.

    PubMed

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations. PMID:27300866

  9. Number of trials required to estimate a free-energy difference, using fluctuation relations

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference Δ F between free energies has applications in biology, chemistry, and pharmacology. The value of Δ F can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a Δ F estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of Δ F . Estimating Δ F from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations.
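
    For context, the forward (unidirectional) Jarzynski estimator whose trial requirements the two records above analyze can be written in a few lines; the work samples below are synthetic Gaussian values, and the Rényi-entropy bound itself is not reproduced here.

      import numpy as np

      # Jarzynski estimator of a free-energy difference from N work measurements:
      #   Delta F ~ -kT * ln( (1/N) * sum_i exp(-W_i / kT) ).
      # Synthetic Gaussian work values are used for illustration; for Gaussian work,
      # Delta F = <W> - var(W)/(2 kT), which the estimate should approach for large N.

      rng = np.random.default_rng(0)
      kT = 1.0                                            # energies in units of kT
      W = rng.normal(loc=5.0, scale=1.0, size=100_000)    # forward work samples

      dF_est = -kT * np.log(np.mean(np.exp(-W / kT)))
      dF_gauss = W.mean() - W.var() / (2.0 * kT)
      print(f"estimate: {dF_est:.3f} kT, Gaussian reference: {dF_gauss:.3f} kT")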

  10. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  11. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at a bias voltage > 350 kV.

  12. Physically-based Methods for the Estimation of Crop Water Requirements from E.O. Optical Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The estimation of evapotranspiration (ET) represents the basic information for the evaluation of crop water requirements. A widely used method to compute ET is based on the so-called "crop coefficient" (Kc), defined as the ratio of total evapotranspiration to reference evapotranspiration ET0. The val...

  13. Reported energy intake by weight status, day and estimated energy requirement among adults: NHANES 2003-2008

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Objective: To describe energy intake reporting by gender, weight status, and interview sequence and to compare reported intakes to the Estimated Energy Requirement at different levels of physical activity. Methods: Energy intake was self-reported by 24-hour recall on two occasions (day 1 and day 2)...

  14. Utility of multi temporal satellite images for crop water requirements estimation and irrigation management in the Jordan Valley

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Identifying the spatial and temporal distribution of crop water requirements is a key for successful management of water resources in the dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ETO) in the Jordan Valley according to the...

  15. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model

    PubMed Central

    Jiang, Shengyu; Wang, Chun; Weiss, David J

    2016-01-01

    Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root mean square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
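
    Parameter recovery in simulation studies of this kind is typically summarized by the correlation, bias, and root mean square error between generating and estimated parameters; a small sketch with synthetic numbers (not flexMIRT output):

      import numpy as np

      # Recovery summaries used in simulation studies: correlation, bias, and RMSE
      # between true (generating) and estimated item parameters. Values are synthetic.

      def recovery_summary(true_params, est_params):
          true_params, est_params = np.asarray(true_params), np.asarray(est_params)
          corr = np.corrcoef(true_params, est_params)[0, 1]
          bias = np.mean(est_params - true_params)
          rmse = np.sqrt(np.mean((est_params - true_params) ** 2))
          return corr, bias, rmse

      rng = np.random.default_rng(1)
      true_a = rng.uniform(0.8, 2.0, size=50)            # discrimination parameters
      est_a = true_a + rng.normal(0.0, 0.1, size=50)     # estimates with noise
      print("corr=%.3f bias=%.3f rmse=%.3f" % recovery_summary(true_a, est_a))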

  16. Estimated quantitative amino acid requirements for Florida pompano reared in low-salinity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As with most marine carnivores, Florida pompano require relatively high crude protein diets to obtain optimal growth. Precision formulations to match the dietary indispensable amino acid (IAA) pattern to a species’ requirements can be used to lower the overall dietary protein. However IAA requirem...

  17. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  18. Minimizing instrumentation requirement for estimating crop water stress index and transpiration of maize

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...

  19. EVALUATION OF SAMPLING FREQUENCIES REQUIRED TO ESTIMATE NUTRIENT AND SUSPENDED SEDIMENT LOADS IN LARGE RIVERS

    EPA Science Inventory

    Nutrients and suspended sediments in streams and large rivers are two major issues facing state and federal agencies. Accurate estimates of nutrient and sediment loads are needed to assess a variety of important water-quality issues including total maximum daily loads, aquatic ec...

  20. A Method to Estimate the Number of House Officers Required in Teaching Hospitals.

    ERIC Educational Resources Information Center

    Chan, Linda S.; Bernstein, Sol

    1980-01-01

    A method of estimating the number of house officers needed for direct patient care in teaching hospitals is discussed. An application of the proposed method is illustrated for 11 clinical services at the Los Angeles County-University of Southern California Medical Center. (Author/MLW)

  1. Development of Procedures for Generating Alternative Allied Health Manpower Requirements and Supply Estimates.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    This report presents results of a project to assess the adequacy of existing data sources on the supply of 21 allied health occupations in order to develop improved data collection strategies and improved procedures for estimation of manpower needs. Following an introduction, chapter 2 provides a discussion of the general phases of the project and…

  2. Sampling and Calibration Requirements for Soil Property Estimation Using Reflectance Spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Optical diffuse reflectance sensing is a potential approach for rapid and reliable on-site estimation of soil properties. One issue with this sensing approach is whether additional calibration is necessary when the sensor is applied under conditions (e.g., soil types or ambient conditions) different...

  3. SAMPLING AND CALIBRATION REQUIREMENTS FOR SOIL PROPERTY ESTIMATION USING NIR SPECTROSCOPY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil physical and chemical properties are important in crop production since they control the availability of plant water and nutrients. Optical diffuse reflectance sensing is a potential approach for rapid and reliable on-site estimation of soil properties. One issue with this sensing approach is w...

  4. Shadow Radiation Shield Required Thickness Estimation for Space Nuclear Power Units

    NASA Astrophysics Data System (ADS)

    Voevodina, E. V.; Martishin, V. M.; Ivanovsky, V. A.; Prasolova, N. O.

    The paper concerns the theoretical possibility of astronauts visiting orbital transport vehicles based on a nuclear power unit and an electric propulsion system in Earth orbit, in order to maintain work with the payload, from the perspective of radiation safety. Estimates are given of the possible time the crew could stay in the payload area of the orbital transport vehicles for different powers of the reactor, which is a constituent part of the nuclear power unit.

  5. Biology, population structure, and estimated forage requirements of lake trout in Lake Michigan

    USGS Publications Warehouse

    Eck, Gary W.; Wells, LaRue

    1983-01-01

    Data collected during successive years (1971-79) of sampling lake trout (Salvelinus namaycush) in Lake Michigan were used to develop statistics on lake trout growth, maturity, and mortality, and to quantify seasonal lake trout food and food availability. These statistics were then combined with data on lake trout year-class strengths and age-specific food conversion efficiencies to compute production and forage fish consumption by lake trout in Lake Michigan during the 1979 growing season (i.e., 15 May-1 December). An estimated standing stock of 1,486 metric tons (t) at the beginning of the growing season produced an estimated 1,129 t of fish flesh during the period. The lake trout consumed an estimated 3,037 t of forage fish, to which alewives (Alosa pseudoharengus) contributed about 71%, rainbow smelt (Osmerus mordax) 18%, and slimy sculpins (Cottus cognatus) 11%. Seasonal changes in bathymetric distributions of lake trout with respect to those of forage fish of a suitable size for prey were major determinants of the size and species compositions of fish in the seasonal diet of lake trout.

  6. Estimating resting energy expenditure in patients requiring nutritional support: a survey of dietetic practice.

    PubMed

    Green, A J; Smith, P; Whelan, K

    2008-01-01

    Estimation of resting energy expenditure (REE) involves predicting basal metabolic rate (BMR) plus adjustment for metabolic stress. The aim of this study was to investigate the methods used to estimate REE and to identify the impact of the patient's clinical condition and the dietitians' work profile on the stress factor assigned. A random sample of 115 dietitians from the United Kingdom with an interest in nutritional support completed a postal questionnaire regarding the estimation of REE for 37 clinical conditions. The Schofield equation was used by the majority (99%) of dietitians to calculate BMR; however, the stress factors assigned varied considerably with coefficients of variation ranging from 18.5 (cancer with cachexia) to 133.9 (HIV). Dietitians specializing in gastroenterology assigned a higher stress factor to decompensated liver disease than those not specializing in gastroenterology (19.3 vs 10.7, P=0.004). The results of this investigation strongly suggest that there is wide inconsistency in the assignment of stress factors within specific conditions and gives rise to concern over the potential consequences in terms of under- or overfeeding that may ensue. PMID:17311053
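
    The arithmetic being surveyed is a predicted basal metabolic rate multiplied by a clinician-assigned stress factor; the sketch below uses a linear weight-based BMR equation of the Schofield form, but with illustrative placeholder coefficients rather than the published age- and sex-specific values.

      # REE = BMR x stress factor, as surveyed above. The BMR equation below has the
      # linear weight-based form of the Schofield equations, but the coefficients are
      # illustrative placeholders, not the published age- and sex-specific values.

      def resting_energy_expenditure(weight_kg, stress_factor_percent,
                                     slope_mj_per_kg=0.063, intercept_mj=2.9):
          bmr_mj_per_day = slope_mj_per_kg * weight_kg + intercept_mj
          return bmr_mj_per_day * (1.0 + stress_factor_percent / 100.0)

      ree = resting_energy_expenditure(70.0, 20.0)   # 70 kg patient, 20% stress factor added
      print(f"REE ~ {ree:.1f} MJ/day")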

  7. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  8. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
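
    For reference, the Tikhonov-regularized minimum-norm inverse in which the regularization parameter lambda appears can be sketched as follows; the lead field and data are random placeholders, and the scaling convention for lambda differs between toolboxes.

      import numpy as np

      # Tikhonov-regularized minimum-norm inverse operator:
      #   W = G^T (G G^T + lambda * I)^{-1},  source estimate x = W y.
      # G (lead field) and y (sensor data) are random placeholders here; in practice they
      # come from the MEG forward model and recordings, and the scaling of lambda
      # differs between implementations.

      rng = np.random.default_rng(0)
      n_sensors, n_sources = 50, 500
      G = rng.normal(size=(n_sensors, n_sources))
      y = rng.normal(size=n_sensors)

      def mne_inverse(G, y, lam):
          gram = G @ G.T + lam * np.eye(G.shape[0])
          return G.T @ np.linalg.solve(gram, y)

      x_power = mne_inverse(G, y, lam=1.0)     # heavier regularization (power analysis)
      x_coh   = mne_inverse(G, y, lam=0.01)    # ~two orders of magnitude less (coupling analysis)
      print(np.linalg.norm(x_power), np.linalg.norm(x_coh))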

  9. A comparison of methods to estimate nutritional requirements from experimental data.

    PubMed

    Pesti, G M; Vedenov, D; Cason, J A; Billard, L

    2009-01-01

    1. Research papers use a variety of methods for evaluating experiments designed to determine nutritional requirements of poultry. Growth trials result in a set of ordered pairs of data. Often, point-by-point comparisons are made between treatments using analysis of variance. This approach ignores the fact that response variables (body weight, feed efficiency, bone ash, etc.) are continuous rather than discrete. Point-by-point analyses harvest much less than the total amount of information from the data. Regression models are more effective at gleaning information from data, but the concept of "requirements" is poorly defined by many regression models. 2. Response data from a study of the lysine requirements of young broilers were used to compare methods of determining requirements. In this study, multiple range tests were compared with quadratic polynomials (QP), broken line models with linear (BLL) or quadratic (BLQ) ascending portions, the saturation kinetics model (SK), a logistic model (LM) and a compartmental model (CM). 3. The sum of squared residuals was used to compare the models. The SK and LM were the best fitting models, followed by the CM, BLL, BLQ, and QP models. A plot of the residuals versus nutrient intake showed clearly that the BLQ and SK models fitted the data best in the important region where the ascending portion meets the plateau. 4. The BLQ model clearly defines the technical concept of nutritional requirements as typically defined by nutritionists. However, the SK, LM and CM models better depict the relationship typically defined by economists as the "law of diminishing marginal productivity". The SK model was used to demonstrate how the law of diminishing marginal productivity can be applied to poultry nutrition, and how the "most economical feeding level" may replace the concept of "requirements". PMID:19234926
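
    A hedged sketch of fitting the broken-line model with a linear ascending portion (BLL) to dose-response data is shown below; the data points are invented and this is not the authors' code.

      import numpy as np
      from scipy.optimize import curve_fit

      # Broken-line model with a linear ascending segment (BLL):
      #   y = plateau - slope * (breakpoint - x)   for x < breakpoint
      #   y = plateau                              for x >= breakpoint
      # The "requirement" is the fitted breakpoint. Data points below are invented.

      def broken_line(x, plateau, breakpoint, slope):
          return plateau - slope * np.maximum(breakpoint - x, 0.0)

      lysine = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4])        # % of diet
      gain   = np.array([410., 455., 500., 540., 560., 562., 561.]) # g/day

      popt, _ = curve_fit(broken_line, lysine, gain, p0=[560.0, 1.2, 300.0])
      plateau, breakpoint, slope = popt
      print(f"estimated requirement (breakpoint) ~ {breakpoint:.2f}% lysine")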

  10. Estimating Sugarcane Water Requirements for Biofuel Feedstock Production in Maui, Hawaii Using Satellite Imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop eva...

  11. The metabolic power requirements of flight and estimations of flight muscle efficiency in the cockatiel (Nymphicus hollandicus).

    PubMed

    Morris, Charlotte R; Nelson, Frank E; Askew, Graham N

    2010-08-15

    Little is known about how in vivo muscle efficiency, that is the ratio of mechanical and metabolic power, is affected by changes in locomotory tasks. One of the main problems with determining in vivo muscle efficiency is the large number of muscles generally used to produce mechanical power. Animal flight provides a unique model for determining muscle efficiency because only one muscle, the pectoralis muscle, produces nearly all of the mechanical power required for flight. In order to estimate in vivo flight muscle efficiency, we measured the metabolic cost of flight across a range of flight speeds (6-13 m s(-1)) using masked respirometry in the cockatiel (Nymphicus hollandicus) and compared it with measurements of mechanical power determined in the same wind tunnel. Similar to measurements of the mechanical power-speed relationship, the metabolic power-speed relationship had a U-shape, with a minimum at 10 m s(-1). Although the mechanical and metabolic power-speed relationships had similar minimum power speeds, the metabolic power requirements are not a simple multiple of the mechanical power requirements across a range of flight speeds. The pectoralis muscle efficiency (estimated from mechanical and metabolic power, basal metabolism and an assumed value for the 'postural costs' of flight) increased with flight speed and ranged from 6.9% to 11.2%. However, it is probable that previous estimates of the postural costs of flight have been too low and that the pectoralis muscle efficiency is higher. PMID:20675549
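
    The efficiency estimate described above reduces to dividing mechanical power by the metabolic power attributable to the pectoralis, i.e. metabolic power minus basal metabolism and an assumed postural cost; the numbers below are placeholders, not the measured cockatiel values.

      # Whole-animal flight muscle efficiency as described above:
      #   efficiency = mechanical power / (metabolic power - basal metabolism - postural costs)
      # All values below are placeholders, not the measured cockatiel data.

      def flight_muscle_efficiency(mech_power_w, metab_power_w, basal_w, postural_w):
          return mech_power_w / (metab_power_w - basal_w - postural_w)

      eff = flight_muscle_efficiency(mech_power_w=1.5, metab_power_w=18.0,
                                     basal_w=1.0, postural_w=2.0)
      print(f"efficiency ~ {eff * 100:.1f}%")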

  12. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    PubMed Central

    Valleriani, Angelo

    2016-01-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only one single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, one single trajectory is more representative of a process conditioned both in the initial and in the final condition than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct. PMID:27527811

  13. Estimating minimum environmental flow requirements for well-mixed estuaries in Spain

    NASA Astrophysics Data System (ADS)

    Peñas, Francisco J.; Juanes, José A.; Galván, Cristina; Medina, Raúl; Castanedo, Sonia; Álvarez, César; Bárcena, Javier F.

    2013-12-01

    Following the principles of the European Water Framework Directive, the current Spanish water management legislation requires the definition of environmental flow regimes for all water bodies, including estuaries. The scientific community has tried to answer the question of how much freshwater an estuary needs since the mid-1970s, resulting in the development of several methodologies and approaches in different parts of the world. However, reproducing most of these approaches is difficult due to the scarcity of the required data and also to the differences between the studied estuaries. In this paper, we present a methodology to calculate environmental flow regimes in well-mixed estuaries based on numerical modelling of salinity and which takes into account the seasonal climatic and hydrologic pattern of the catchment. The approach follows three sequential steps: 1) definition of reference conditions based on the unaltered salinity patterns and zoning of the estuary, 2) definition of salinity thresholds and 3) calculation of the minimum flows required to satisfy these thresholds. The application of the methodology to five estuaries on the northern coast of Spain has highlighted the importance of considering the hydrological variability and the division of the estuary into homogeneous zones. Moreover, the studies carried out demonstrate the ineffectiveness of river-specific methodologies when used to define environmental flow regimes in several estuaries and periods, and the need to apply specific methodologies. The methodology is based on the principles defined by other already tested approaches, but its greatest advantage lies in the ability to be applied at large scales, when physical and biological data are scarce.

  14. Estimation of the lead thickness required to shield scattered radiation from synchrotron radiation experiments

    NASA Astrophysics Data System (ADS)

    Wroblewski, Thomas

    2015-03-01

    In the enclosure of synchrotron radiation experiments using a monochromatic beam, secondary radiation arises from two effects, namely fluorescence and scattering. While fluorescence can be regarded as isotropic, the angular dependence of Compton scattering has to be taken into account if the shielding is not to become unreasonably thick. The scope of this paper is to clarify how the different factors, from the spectral properties of the source and the attenuation coefficient of the shielding, through the spectral and angular distribution of the scattered radiation, to the geometry of the experiment, influence the thickness of lead required to keep the dose rate outside the enclosure below the desired threshold.
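
    In the simplest narrow-beam picture the required lead thickness follows from exponential attenuation, t = ln(D0/D_target)/mu; the sketch below uses placeholder numbers and neglects buildup and the angular and spectral dependence of the scattered radiation, which are exactly the factors the paper accounts for.

      import math

      # Narrow-beam exponential attenuation estimate of the required lead thickness:
      #   D(t) = D0 * exp(-mu * t)  =>  t = ln(D0 / D_target) / mu
      # mu and the dose rates below are placeholders; buildup factors and the angular/spectral
      # distribution of the scattered radiation (the subject of the paper) are neglected.

      def required_thickness_cm(dose_unshielded, dose_target, mu_per_cm):
          return math.log(dose_unshielded / dose_target) / mu_per_cm

      t_cm = required_thickness_cm(dose_unshielded=100.0, dose_target=0.5, mu_per_cm=2.0)
      print(f"required lead thickness ~ {t_cm:.2f} cm")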

  15. Estimation of distance error by fuzzy set theory required for strength determination of HDR 192Ir brachytherapy sources

    PubMed Central

    Kumar, Sudhir; Datta, D.; Sharma, S. D.; Chourasiya, G.; Babu, D. A. R.; Sharma, D. N.

    2014-01-01

    Verification of the strength of high dose rate (HDR) 192Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm3 is one of the recommended methods for measuring the RAKR of HDR 192Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying this fuzzy set theory has been proposed for the quantification of uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distances l_i estimated by the analytical method and the fuzzy set theoretic approach are consistent with each other. The crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that l_i values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget, while estimating the expanded uncertainty in HDR 192Ir source strength measurement. PMID:24872605

  16. Using a generalized version of the Titius-Bode relation to extrapolate the patterns seen in Kepler multi-exoplanet systems, and estimate the average number of planets in circumstellar habitable zones

    NASA Astrophysics Data System (ADS)

    Lineweaver, Charles H.

    2015-08-01

    The Titius-Bode (TB) relation’s successful prediction of the period of Uranus was the main motivation that led to the search for another planet between Mars and Jupiter. This search led to the discovery of the asteroid Ceres and the rest of the asteroid belt. The TB relation can also provide useful hints about the periods of as-yet-undetected planets around other stars. In Bovaird & Lineweaver (2013) [1], we used a generalized TB relation to analyze 68 multi-planet systems with four or more detected exoplanets. We found that the majority of exoplanet systems in our sample adhered to the TB relation to a greater extent than the Solar System does. Thus, the TB relation can make useful predictions about the existence of as-yet-undetected planets in Kepler multi-planet systems. These predictions are one way to correct for the main obstacle preventing us from estimating the number of Earth-like planets in the universe. That obstacle is the incomplete sampling of planets of Earth-mass and smaller [2-5]. In [6], we use a generalized Titius-Bode relation to predict the periods of 228 additional planets in 151 of these Kepler multiples. These Titius-Bode-based predictions suggest that there are, on average, 2±1 planets in the habitable zone of each star. We also estimate the inclination of the invariable plane for each system and prioritize our planet predictions by their geometric probability to transit. We highlight a short list of 77 predicted planets in 40 systems with a high geometric probability to transit, resulting in an expected detection rate of ~15 per cent, ~3 times higher than the detection rate of our previous Titius-Bode-based predictions.References: [1] Bovaird, T. & Lineweaver, C.H (2013) MNRAS, 435, 1126-1138. [2] Dong S. & Zhu Z. (2013) ApJ, 778, 53 [3] Fressin F. et al. (2013) ApJ, 766, 81 [4] Petigura E. A. et al. (2013) PNAS, 110, 19273 [5] Silburt A. et al. (2014), ApJ (arXiv:1406.6048v2) [6] Bovaird, T., Lineweaver, C.H. & Jacobsen, S.K. (2015, in
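
    The generalized Titius-Bode relation used in this work treats planet periods as an approximate geometric progression, P_n = P_0 * C^n; a hedged sketch of fitting that progression to known periods and extrapolating the next one (the periods below are invented, and the published method also allows for inserting missing planets, which this sketch does not).

      import numpy as np

      # Generalized Titius-Bode relation: periods form an approximate geometric progression,
      #   P_n = P_0 * C**n,  i.e. log(P_n) is linear in n.
      # Fit in log space and extrapolate the next period. Periods below are invented, and this
      # ignores the insertion of hypothetical missing planets that the published method allows.

      periods_days = np.array([3.1, 5.9, 11.2, 21.5])   # known planet periods, innermost outward
      n = np.arange(len(periods_days))

      slope, intercept = np.polyfit(n, np.log(periods_days), 1)
      C, P0 = np.exp(slope), np.exp(intercept)
      next_period = P0 * C ** len(periods_days)
      print(f"C ~ {C:.2f}, predicted next period ~ {next_period:.1f} days")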

  17. Estimation of the standardized ileal digestible lysine requirement for primiparous pregnant sows.

    PubMed

    Shi, M; Shi, C X; Li, Y K; Li, D F; Wang, F L

    2016-04-01

    This experiment was conducted to determine the optimal standardized ileal digestible lysine (SID Lys) level in diets fed to primiparous sows during gestation. A total of 150 (Landrace × Large White) crossbred gilts (weighing 149.9 ± 3.1 kg) were fed gestation diets (12.55 MJ of ME/kg) containing SID Lys levels of 0.43, 0.52, 0.60, 0.70 or 0.80%. Gilts were fed 2.0 kg/day from day 1 to 80 and 3.0 kg/day from day 80 to 110 of gestation. Gilts were allocated to treatments based on their body weight on the day of breeding. Weight gain from day 80 to 110 increased with increasing dietary SID Lys levels (p = 0.044). Fitted broken-line (p = 0.031) and quadratic plot (p = 0.047) analyses of body weight gain indicated that the optimal SID Lys level for primiparous sows was 0.70 and 0.69% respectively. During gestation, neither backfat thickness nor loin eye area was affected by dietary SID Lys level. Increasing dietary Lys had no effect on the litter size at birth or pigs born alive per litter. Litter weight at birth was not affected by dietary SID Lys level. The litter weight variation at birth decreased quadratically with increasing dietary SID Lys (p = 0.021) and was minimized at 0.70% dietary SID Lys. Gilts fed the 0.70% SID Lys diet had the highest dry matter (p = 0.031) and protein (p = 0.044) content in colostrum. On day 110 of gestation, gilts fed the 0.70% SID Lys diet tended to have the highest serum prolactin (p = 0.085) and serum insulin (p = 0.074) levels. The data demonstrate that the optimal dietary SID Lys level was 0.70% for pregnant gilts, which is similar to the recommendation of 0.69% estimated by the NRC (2012). PMID:26174182
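
    The requirement estimate quoted above comes from breakpoint analysis of a dose-response curve. A minimal Python sketch of a broken-line (linear-plateau) fit of that kind follows; the weight-gain values are invented for illustration, and only the five dietary SID Lys levels are taken from the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch: estimate a nutrient requirement as the breakpoint of a
    # broken-line (linear-plateau) dose-response model. Gain data invented.

    def broken_line(x, plateau, slope, breakpoint):
        # Response rises linearly up to the breakpoint, then stays flat.
        return np.where(x < breakpoint, plateau + slope * (x - breakpoint), plateau)

    sid_lys = np.array([0.43, 0.52, 0.60, 0.70, 0.80])   # % of diet (from abstract)
    gain_kg = np.array([14.1, 16.0, 17.8, 19.2, 19.3])   # hypothetical BW gain

    popt, _ = curve_fit(broken_line, sid_lys, gain_kg, p0=[19.0, 20.0, 0.65])
    plateau, slope, breakpoint = popt
    print(f"Estimated requirement (breakpoint): {breakpoint:.2f}% SID Lys")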

  18. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Technical Reports Server (NTRS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.

  19. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.

  20. Estimates of power requirements for a manned Mars rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kWe net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine (FPSE) yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle (CBC) using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kWe to 190 kg/kWe, and the total mass of the Rover vehicle varied depending upon the cruising speed.
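
    The specific-mass range quoted in these abstracts translates directly into a power-system mass at the stated 30 kWe net user power; the short Python check below makes that arithmetic explicit (housekeeping power and margins are not broken out here).

    # Sketch: power-system mass implied by the quoted specific-mass range
    # (150-190 kg/kWe, shield included) at the stated 30 kWe net user power.

    net_power_kwe = 30.0
    for specific_mass_kg_per_kwe in (150.0, 190.0):
        mass_kg = specific_mass_kg_per_kwe * net_power_kwe
        print(f"{specific_mass_kg_per_kwe:.0f} kg/kWe -> {mass_kg:.0f} kg power system")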

  1. Estimation of irrigation requirement for wheat in southern Spain using a remote sensing-driven soil water balance

    NASA Astrophysics Data System (ADS)

    González, Laura; Bodas, Vicente; Espósito, Gabriel; Campos, Isidro; Aliaga, Jerónimo; Calera, Alfonso

    2013-04-01

    This paper aims to evaluate the use of a remote sensing-driven soil water balance to estimate the irrigation water requirements of wheat. The applied methodology is based on the dual crop coefficient approach proposed in the FAO-56 manual (Allen et al., 1998), where the basal crop coefficient is derived from a time series of multispectral remote sensing imagery which describes the growing cycle of wheat. This approach allows the estimation of evapotranspiration (ET) and irrigation water requirements by means of a soil water balance in the root layer. The assimilation of satellite data into the FAO-56 soil water balance is based on the relationship between spectral vegetation indices (VI) and the transpiration coefficient (Campos et al., 2010; Sánchez et al., 2010). Two approaches to plant transpiration estimation were analyzed: the basal crop coefficient methodology and the transpiration coefficient approach, described in the FAO-56 (Allen et al., 1998) and FAO-66 (Steduto et al., 2012) manuals respectively. The model is computed at a daily time step, and the results analyzed in this work are the net irrigation water requirements and water stress estimates. The results were analyzed by comparison with irrigation data (irrigation dates and volumes applied) provided by farmers for 28 wheat plots over the period 2004-2012 in the Spanish region of La Mancha, southern Spain, under different meteorological conditions. The total irrigation dose during the growing season varies from 200 mm to 700 mm. In some of the plots, soil moisture sensor data are available, which allowed comparison with the modeled soil moisture. The net irrigation water requirements estimated by the proposed model show good agreement with the data, taking into account the efficiency of the different irrigation systems. Although the irrigation doses are generally greater than the irrigation water requirements, the crops could suffer water stress periods during the campaign, because real irrigation timing and
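
    A minimal Python sketch of the kind of remote-sensing-driven, FAO-56-style daily root-zone water balance described above is given below. The linear VI-to-Kcb scaling, the soil parameters, the irrigation trigger and the input series are illustrative assumptions, not the values or exact formulation used in the study.

    # Sketch: daily root-zone water balance with a basal crop coefficient (Kcb)
    # derived from a vegetation index (VI). All parameters and inputs invented.

    def kcb_from_vi(vi, vi_min=0.15, vi_max=0.85, kcb_max=1.10):
        """Simple linear scaling of Kcb with the vegetation index."""
        frac = min(max((vi - vi_min) / (vi_max - vi_min), 0.0), 1.0)
        return kcb_max * frac

    taw = 120.0                  # total available water in the root zone (mm), assumed
    depletion = 40.0             # initial root-zone depletion (mm), assumed
    net_irrigation = 0.0

    daily = [                    # (reference ET0 mm, rainfall mm, NDVI), invented
        (5.0, 0.0, 0.55), (5.5, 2.0, 0.60), (6.0, 0.0, 0.65), (6.2, 0.0, 0.70),
    ]

    for et0, rain, ndvi in daily:
        etc = kcb_from_vi(ndvi) * et0          # crop transpiration (soil evaporation ignored)
        depletion = max(depletion + etc - rain, 0.0)
        if depletion > 0.5 * taw:              # irrigate when half of TAW is depleted
            net_irrigation += depletion
            depletion = 0.0

    print(f"Net irrigation requirement over the period: {net_irrigation:.1f} mm")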

  2. The balanced survivor average causal effect.

    PubMed

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
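
    The core idea of the balanced-SACE estimator, comparing the longitudinal outcome between the same fraction of longest-surviving patients in each arm, can be sketched in a few lines of Python; the simulated data and the chosen fraction below are purely illustrative.

    import numpy as np

    # Sketch: compare mean outcomes among the top fraction of longest survivors
    # in each arm, the basic construction behind the balanced-SACE estimator.
    # Simulated data stand in for real trial measurements.

    rng = np.random.default_rng(0)

    def longest_survivor_mean(outcome, survival_time, fraction):
        """Mean outcome among the top `fraction` of patients by survival time."""
        k = int(np.ceil(fraction * len(outcome)))
        idx = np.argsort(survival_time)[-k:]
        return outcome[idx].mean()

    n = 200
    treat_surv, ctrl_surv = rng.exponential(5.0, n), rng.exponential(4.0, n)
    treat_outcome, ctrl_outcome = rng.normal(1.0, 1.0, n), rng.normal(0.5, 1.0, n)

    fraction = 0.5   # fraction of longest survivors compared in each arm
    effect = (longest_survivor_mean(treat_outcome, treat_surv, fraction)
              - longest_survivor_mean(ctrl_outcome, ctrl_surv, fraction))
    print(f"Balanced-SACE-style estimate: {effect:.2f}")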

  3. An applied simulation model for estimating the supply of and requirements for registered nurses based on population health needs.

    PubMed

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Robert; Birch, Stephen; Kephart, George; O'Brien-Pallas, Linda

    2009-11-01

    Aging populations, limited budgets, changing public expectations, new technologies, and the emergence of new diseases create challenges for health care systems as ways to meet needs and protect, promote, and restore health are considered. Traditional planning methods for the professionals required to provide these services have given little consideration to changes in the needs of the populations they serve or to changes in the amount/types of services offered and the way they are delivered. In the absence of dynamic planning models that simulate alternative policies and test policy mixes for their relative effectiveness, planners have tended to rely on projecting prevailing or arbitrarily determined target provider-population ratios. A simulation model has been developed that addresses each of these shortcomings by simultaneously estimating the supply of and requirements for registered nurses based on the identification and interaction of the determinants. The model's use is illustrated using data for Nova Scotia, Canada. PMID:20164064

  4. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
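
    The structure of such an approximation-averaging estimator can be written compactly: average a cheap, biased approximation over many samples and correct it with the exact-minus-approximate difference measured on a few. The Python sketch below uses toy scalar measurements in place of lattice correlation functions and is only meant to show the unbiased combination, not an actual AMA implementation.

    import numpy as np

    # Sketch: O_AMA = <O_approx>_many + <O_exact - O_approx>_few.
    # The approximation bias cancels on average; toy numbers are used here.

    rng = np.random.default_rng(1)
    true_value = 1.0

    exact = true_value + rng.normal(0.0, 0.10, size=8)            # few expensive samples
    approx_at_exact = exact + 0.03 + rng.normal(0.0, 0.01, size=8)  # same samples, cheap solver (biased)
    approx_many = true_value + 0.03 + rng.normal(0.0, 0.10, 512)    # many cheap samples

    ama_estimate = approx_many.mean() + (exact - approx_at_exact).mean()
    print(f"AMA estimate: {ama_estimate:.3f} (plain exact mean: {exact.mean():.3f})")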

  5. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  6. A note on generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag

    2007-11-01

    We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.

  7. Comment on "Technical Note: On the Matt-Shuttleworth approach to estimate crop water requirements" by Lhomme et al. (2014)

    NASA Astrophysics Data System (ADS)

    Shuttleworth, W. J.

    2014-05-01

    It is clear from Lhomme et al. (2014) that aspects of the explanation of the Matt-Shuttleworth approach can generate confusion. Presumably this is because the description in Shuttleworth (2006) was not sufficiently explicit and simple. This paper explains the logic behind the Matt-Shuttleworth approach clearly, simply and concisely. It shows how the Matt-Shuttleworth can be implemented using a few simple equations and provides access to ancillary calculation resources that can be used for such implementation. If the crop water requirement community decided that it is preferable to use the Penman-Monteith equation to estimate crop water requirements directly for all crops, the United Nations Food and Agriculture Organization could now update Irrigation and Drainage Paper 56 using the Matt-Shuttleworth approach by deriving tabulated values of surface resistance from Table 12 of Allen et al. (1998), with the estimation of crop evaporation then being directly made in a one-step calculation using an equation similar to that already recommended by the United Nations Food and Agriculture Organization for calculating reference crop evaporation.

  8. Comment on "Technical Note: On the Matt-Shuttleworth approach to estimate crop water requirements" by Lhomme et al. (2014)

    NASA Astrophysics Data System (ADS)

    Shuttleworth, W. J.

    2014-11-01

    It is clear from Lhomme et al. (2014) that aspects of the explanation of the Matt-Shuttleworth approach can generate confusion. Presumably this is because the description in Shuttleworth (2006) was not sufficiently explicit and simple. This paper explains the logic behind the Matt-Shuttleworth approach clearly, simply and concisely. It shows how the Matt-Shuttleworth can be implemented using a few simple equations and provides access to ancillary calculation resources that can be used for such implementation. If the crop water requirement community decided that it is preferable to use the Penman-Monteith equation to estimate crop water requirements directly for all crops, the United Nations Food and Agriculture Organization could now update Irrigation and Drainage Paper 56 using the Matt-Shuttleworth approach by deriving tabulated values of surface resistance from Table 12 of Allen et al. (1998), with the estimation of crop evaporation then being directly made in a one-step calculation using an equation similar to that already recommended by the United Nations Food and Agriculture Organization for calculating reference crop evaporation.

  9. A RAPID NON-DESTRUCTIVE METHOD FOR ESTIMATING ABOVEGROUND BIOMASS OF SALT MARSH GRASSES

    EPA Science Inventory

    Understanding the primary productivity of salt marshes requires accurate estimates of biomass. Unfortunately, these estimates vary enough within and among salt marshes to require large numbers of replicates if the averages are to be statistically meaningful. Large numbers of repl...

  10. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  11. Number of Days Required to Estimate Habitual Activity Using Wrist-Worn GENEActiv Accelerometer: A Cross-Sectional Study

    PubMed Central

    Dillon, Christina B.; Fitzgerald, Anthony P.; Kearney, Patricia M.; Perry, Ivan J.; Rennie, Kirsten L.; Kozarski, Robert; Phillips, Catherine M.

    2016-01-01

    Introduction Objective methods like accelerometers are feasible for large studies and may quantify variability in day-to-day physical activity better than self-report. The variability between days suggests that day of the week cannot be ignored in the design and analysis of physical activity studies. The purpose of this paper is to investigate the optimal number of days needed to obtain reliable estimates of weekly habitual physical activity using the wrist-worn GENEActiv accelerometer. Methods Data are from a subsample of the Mitchelstown cohort: 475 middle-aged Irish adults (44.6% males; mean age 59.6±5.5 years). Participants wore the wrist GENEActiv accelerometer for 7 consecutive days. Data were collected at 100 Hz and summarised into a signal magnitude vector using 60 s epochs. Each time interval was categorised according to intensity based on validated cut-offs. Spearman pairwise correlations determined the association between days of the week. Repeated measures ANOVA examined differences in average minutes across days. Intraclass correlations examined the proportion of variability between days, and the Spearman-Brown formula estimated the intra-class reliability coefficient associated with combinations of 1–7 days. Results Three hundred and ninety-seven adults (59.7±5.5 yrs) had valid accelerometer data. Overall, men were most sedentary on weekends while women spent more time in sedentary behaviour on Sunday through Tuesday. Post hoc analysis found sedentary behaviour and light activity levels on Sunday to differ from all other days in the week. Analysis revealed that more than 1 day of monitoring is necessary to achieve acceptable reliability. The monitoring duration needed for reliable estimates varied across intensity categories (sedentary: 3 days, light: 2 days, moderate: 2 days, vigorous activity: 6 days, MVPA: 2 days). Conclusion These findings provide insight into the behavioural variability in weekly activity patterns of middle-aged adults. Since Sunday
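
    The number-of-days question above is the classic Spearman-Brown prophecy calculation: given the single-day intraclass correlation, how many days are needed to reach a target reliability. A small Python sketch is given below; the ICC values and the 0.80 reliability target are illustrative assumptions, not the study's estimates.

    import math

    # Sketch: Spearman-Brown prophecy formula, reliability of k days
    # = k*r / (1 + (k-1)*r), solved for the number of days k that reaches
    # a target reliability. ICC values are illustrative.

    def days_needed(single_day_icc, target_reliability=0.80):
        r = single_day_icc
        k = target_reliability * (1 - r) / (r * (1 - target_reliability))
        return math.ceil(k)

    for icc in (0.4, 0.6, 0.8):
        print(f"single-day ICC {icc:.1f} -> {days_needed(icc)} day(s) for 0.80 reliability")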

  12. Requirements for zero energy balance of nonlactating, pregnant dairy cows fed fresh autumn pasture are greater than currently estimated.

    PubMed

    Mandok, K S; Kay, J K; Greenwood, S L; Edwards, G R; Roche, J R

    2013-06-01

    Fifty-three nonlactating, pregnant Holstein-Friesian and Holstein-Friesian × Jersey cross dairy cows were grouped into 4 cohorts (n=15, 12, 13, and 13) and offered 1 of 3 allowances of fresh, cut pasture indoors for 38 ± 2 d (mean ± SD). Cows were released onto a bare paddock after their meal until the following morning. Animals were blocked by age (6 ± 2 yr), day of gestation (208 ± 17 d), and body weight (BW; 526 ± 55 kg). The 3 pasture allowances [low: 7.5 kg of dry matter (DM), medium: 10.1 kg of DM, or high: 12.4 kg of DM/cow per day] were offered in individual stalls to determine the estimated DM and metabolizable energy (ME) intake required for zero energy balance. Individual cow DM intake was determined daily and body condition score was assessed once per week. Cow BW was recorded once per week in cohorts 1 and 2, and 3 times per week in cohorts 3 and 4. Low, medium, and high allowance treatments consumed 7.5, 9.4, and 10.6 kg of DM/cow per day [standard error of the difference (SED)=0.26 kg of DM], and BW gain, including the conceptus, was 0.2, 0.6, and 0.9 kg/cow per day (SED=0.12 kg), respectively. The ME content of the pasture was estimated from in vitro true digestibility and by near infrared spectroscopy. Total ME requirements for maintenance, pregnancy, and limited activity were 1.07 MJ of ME/kg of measured metabolic BW per day. This is more than 45% greater than current recommendations. Differences may be due to an underestimation of ME requirements for maintenance or pregnancy, an overestimation of diet metabolizability, or a combination of these. Further research is necessary to determine the reasons for the greater ME requirements measured in the present study, but the results are important for on-farm decisions regarding feed allocation for nonlactating, pregnant dairy cows. PMID:23522671

  13. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
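
    The equivalent noise procedure referred to above fits a simple two-parameter model, squared threshold equal to the sum of internal and external noise variances divided by the effective number of samples, to thresholds measured at several external noise levels. The Python sketch below fits that standard model to invented threshold data; it is not the authors' analysis code.

    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch: equivalent noise model
    #   threshold = sqrt((internal_sd**2 + external_sd**2) / n_samples)
    # fitted to thresholds at several external noise levels. Data invented.

    def eq_noise_threshold(external_sd, internal_sd, n_samples):
        return np.sqrt((internal_sd**2 + external_sd**2) / n_samples)

    external_sd = np.array([0.0, 2.0, 4.0, 8.0, 16.0])   # e.g. deg of rotation
    thresholds = np.array([1.1, 1.4, 2.0, 3.5, 6.6])     # hypothetical observer

    popt, _ = curve_fit(eq_noise_threshold, external_sd, thresholds, p0=[2.0, 4.0])
    internal_sd, n_samples = popt
    print(f"internal noise ~ {internal_sd:.2f} deg, effective samples ~ {n_samples:.1f}")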

  14. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  15. Phytoplankton Productivity in an Arctic Fjord (West Greenland): Estimating Electron Requirements for Carbon Fixation and Oxygen Production

    PubMed Central

    Hancke, Kasper; Dalsgaard, Tage; Sejr, Mikael Kristian; Markager, Stiig; Glud, Ronnie Nøhr

    2015-01-01

    Accurate quantification of pelagic primary production is essential for quantifying the marine carbon turnover and the energy supply to the food web. Knowing the electron requirement (Κ) for carbon (C) fixation (ΚC) and oxygen (O2) production (ΚO2), variable fluorescence has the potential to quantify primary production in microalgae, thereby increasing the spatial and temporal resolution of measurements compared to traditional methods. Here we quantify ΚC and ΚO2 through measures of Pulse Amplitude Modulated (PAM) fluorometry, C fixation and O2 production in an Arctic fjord (Godthåbsfjorden, W Greenland). Through short- (2 h) and long-term (24 h) experiments, rates of electron transfer (ETRPSII), C fixation and/or O2 production were quantified and compared. Absolute rates of ETR were derived by accounting for Photosystem II light absorption and spectral light composition. Two-hour incubations revealed a linear relationship between ETRPSII and gross 14C fixation (R2 = 0.81) during light-limited photosynthesis, giving a ΚC of 7.6 ± 0.6 (mean ± S.E.) mol e− (mol C)−1. Diel net rates also demonstrated a linear relationship between ETRPSII and C fixation, giving a ΚC of 11.2 ± 1.3 mol e− (mol C)−1 (R2 = 0.86). For net O2 production the electron requirement was lower than for net C fixation, at 6.5 ± 0.9 mol e− (mol O2)−1 (R2 = 0.94). This, however, is still an electron requirement 1.6 times higher than the theoretical minimum for O2 production [i.e. 4 mol e− (mol O2)−1]. The discrepancy is explained by respiratory activity and non-photochemical electron requirements, and the variability is discussed. In conclusion, the bio-optical method and derived electron requirements support conversion of ETR to units of C or O2, paving the road for improved spatial and temporal resolution of primary production estimates. PMID:26218096
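
    Operationally, the electron requirement reported above is the slope of a regression of electron transport rate on the corresponding carbon-fixation (or oxygen-production) rate. The short Python sketch below shows that calculation on invented paired rates.

    import numpy as np

    # Sketch: electron requirement as the slope of ETR regressed on C fixation.
    # The paired rates below are invented for illustration.

    c_fixation = np.array([0.5, 1.0, 1.5, 2.0, 2.5])     # mol C m-3 h-1 (hypothetical)
    etr        = np.array([4.1, 7.8, 11.6, 15.3, 18.8])  # mol electrons m-3 h-1

    slope, intercept = np.polyfit(c_fixation, etr, 1)
    print(f"electron requirement ~ {slope:.1f} mol e- per mol C (intercept {intercept:.2f})")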

  16. Estimation of protein requirement for maintenance in adult parrots (Amazona spp.) by determining inevitable N losses in excreta.

    PubMed

    Westfahl, C; Wolf, P; Kamphues, J

    2008-06-01

    Especially in older pet birds, an unnecessary overconsumption of protein, which presumably occurs in human custody, should be avoided in view of a potential decrease in the efficiency of the excretory organs (liver, kidney). Inevitable nitrogen (N) losses enable the estimation of the protein requirement for maintenance, because these losses have at least to be replaced to maintain N equilibrium. To determine the inevitable N losses in excreta, adult amazons (Amazona spp.; n = 8), a frugivorous-granivorous avian species from South America, were fed a synthetic, nearly N-free diet (in dry matter, DM: 37.8% starch, 26.6% sugar, 11.0% fat) for 9 days. Throughout the trial, feed and water intake were recorded, and the amounts of excreta were measured and analysed for DM and ash content, N (Dumas analysis) and uric acid (enzymatic-photometric analysis) content. Effects of the N-free diet on body weight (BW) and protein-related blood parameters were quantified and compared with data collected during a previous 4-day period in which a commercial seed mixture was offered to the birds. After feeding an almost N-free diet for 9 days, under conditions of DM intake (20.1 g DM/bird/day) and organic matter digestibility (82% and 76% respectively) comparable to those when seeds were fed, it was possible to quantify the inevitable N losses via excrements as 87.2 mg/bird/day or 172.5 mg/kg BW(0.75)/day. Assuming a utilization coefficient of 0.57, this leads to an estimated protein requirement of approximately 1.9 g/kg BW(0.75)/day (this value does not consider further N losses via feathers and desquamated cells, and presupposes a balanced amino acid pattern). PMID:18477321

  17. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  18. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  19. Estimating Resource Requirements to Staff a Response to a Medium to Large Outbreak of Foot and Mouth Disease in Australia.

    PubMed

    Garner, M G; Bombarderi, N; Cozens, M; Conway, M L; Wright, T; Paskin, R; East, I J

    2016-02-01

    A recent report to the Australian Government identified concerns relating to Australia's capacity to respond to a medium to large outbreak of FMD. To assess the resources required, the AusSpread disease simulation model was used to develop a plausible outbreak scenario that included 62 infected premises in five different states at the time of detection, 28 days after the disease entered the first property in Victoria. Movements of infected animals and/or contaminated product/equipment led to smaller outbreaks in NSW, Queensland, South Australia and Tasmania. With unlimited staff resources, the outbreak was eradicated in 63 days with 54 infected premises and a 98% chance of eradication within 3 months. This unconstrained response was estimated to involve 2724 personnel. Unlimited personnel was considered unrealistic, and therefore, the course of the outbreak was modelled using three levels of staffing and the probability of achieving eradication within 3 or 6 months of introduction determined. Under the baseline staffing level, there was only a 16% probability that the outbreak would be eradicated within 3 months, and a 60% probability of eradication in 6 months. Deployment of an additional 60 personnel in the first 3 weeks of the response increased the likelihood of eradication in 3 months to 68%, and 100% in 6 months. Deployment of further personnel incrementally increased the likelihood of timely eradication and decreased the duration and size of the outbreak. Targeted use of vaccination in high-risk areas coupled with the baseline personnel resources increased the probability of eradication in 3 months to 74% and to 100% in 6 months. This required 25 vaccination teams commencing 12 days into the control program increasing to 50 vaccination teams 3 weeks later. Deploying an equal number of additional personnel to surveillance and infected premises operations was equally effective in reducing the outbreak size and duration. PMID:24894407

  20. A Site-sPecific Agricultural water Requirement and footprint Estimator (SPARE:WATER 1.0)

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.

    2013-07-01

    The agricultural water footprint addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). By considering site-specific properties when calculating the crop water footprint, this methodology can be used to support decision making in the agricultural sector on local to regional scale. We therefore developed the spatial decision support system SPARE:WATER that allows us to quantify green, blue and grey water footprints on regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation methods rather than optimal conditions to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water is defined as the water needed to leach out salt from the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept where we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km3 yr-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional estimation of crop water footprints.
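
    The per-crop accounting behind a green/blue/grey water footprint of this kind can be sketched in a few lines of Python: green water is crop evapotranspiration covered by effective rainfall, blue water is the remaining irrigation demand scaled by irrigation efficiency, and grey water is the leaching requirement. The crop numbers and efficiencies below are illustrative, not SPARE:WATER inputs.

    # Sketch: splitting crop water use into green, blue and grey components.
    # crop: (crop ET mm, effective rainfall mm, leaching requirement mm, irrigation efficiency)
    crops = {
        "wheat": (450.0, 60.0, 40.0, 0.75),
        "dates": (1400.0, 30.0, 120.0, 0.60),
    }

    for crop, (et, eff_rain, leaching, efficiency) in crops.items():
        green = min(et, eff_rain)
        blue = max(et - green, 0.0) / efficiency   # extra water covers irrigation losses
        grey = leaching
        print(f"{crop}: green {green:.0f} mm, blue {blue:.0f} mm, grey {grey:.0f} mm")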

  1. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  2. Orbit-averaged implicit particle codes

    NASA Astrophysics Data System (ADS)

    Cohen, B. I.; Freis, R. P.; Thomas, V.

    1982-03-01

    The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time step, require fewer particles, and, hence, reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested by which orbit averaging should achieve more significant improvements in code efficiency.

  3. Capital Requirements Estimating Model (CREMOD) for electric utilities. Volume I. Methodology description, model, description, and guide to model applications. [For each year up to 1990

    SciTech Connect

    Collins, D E; Gammon, J; Shaw, M L

    1980-01-01

    The Capital Requirements Estimating Model for the Electric Utilities (CREMOD) is a system of programs and data files used to estimate the capital requirements of the electric utility industry for each year between the current one and 1990. CREMOD disaggregates new electric plant capacity levels from the Mid-term Energy Forecasting System (MEFS) Integrating Model solution over time using actual projected commissioning dates. It computes the effect of the dispersal of new plant and capital expenditures over relatively long construction lead times on aggregate capital requirements for each year. Finally, it incorporates the effects of real escalation in the electric utility construction industry on these requirements and computes the necessary transmission and distribution expenditures. This model was used in estimating the capital requirements of the electric utility sector. These results were used in the compilation of the aggregate capital requirements for the financing of energy development as published in the 1978 Annual Report to Congress. This volume, Vol. I, explains CREMOD's methodology, functions, and applications.

  4. Geologic analysis of averaged magnetic satellite anomalies

    NASA Technical Reports Server (NTRS)

    Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.

    1985-01-01

    To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.

  5. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    SciTech Connect

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
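
    As a simplified stand-in for the constrained scheme described above, the usual normal-approximation sample-size formula for estimating a mean to a given relative precision and confidence is sketched below in Python; the coefficients of variation are assumptions, not values from the paper.

    import math

    # Sketch: n such that the half-width of the confidence interval is a given
    # fraction of the mean, assuming approximately normal measurement error.
    # This is a textbook simplification, not the paper's Lagrange-multiplier scheme.

    def sample_size(cv, rel_precision=0.05, z=1.96):
        return math.ceil((z * cv / rel_precision) ** 2)

    for cv in (0.05, 0.10, 0.15):   # assumed relative SD of repeated ED measurements
        print(f"CV = {cv:.0%}: need n = {sample_size(cv)} scans for 5% precision, 95% confidence")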

  6. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same

  7. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
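
    A minimal Python sketch of the averaging step described above, integrating a power-law index gradient along the normalized lens axis, is given below. The center and edge indices are those quoted in the abstract; the gradient exponent is an assumed illustrative value, not the fitted age-dependent profile.

    import numpy as np

    # Sketch: average axial index for a power-law gradient
    #   n(x) = n_edge + (n_center - n_edge) * (1 - x**p),
    # with x the normalized distance from lens center (0) to surface (1).
    # The exponent p is an assumed value for illustration.

    n_center, n_edge = 1.415, 1.37
    p = 4.0

    x = np.linspace(0.0, 1.0, 10001)
    n_profile = n_edge + (n_center - n_edge) * (1.0 - x**p)
    average_index = np.trapz(n_profile, x)    # mean index over the half-axis
    print(f"average axial index ~ {average_index:.4f}")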

  8. Age-dependence of the average and equivalent refractive indices of the crystalline lens.

    PubMed

    Charman, W Neil; Atchison, David A

    2013-12-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474

  9. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... property? If the amount of the estimated residual value you rely upon to satisfy the full payout lease requirement of § 714.4(b) exceeds 25% of the original cost of the leased property, a financially capable...

  10. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... property? If the amount of the estimated residual value you rely upon to satisfy the full payout lease requirement of § 714.4(b) exceeds 25% of the original cost of the leased property, a financially capable...

  11. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... property? If the amount of the estimated residual value you rely upon to satisfy the full payout lease requirement of § 714.4(b) exceeds 25% of the original cost of the leased property, a financially capable...

  12. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... property? If the amount of the estimated residual value you rely upon to satisfy the full payout lease requirement of § 714.4(b) exceeds 25% of the original cost of the leased property, a financially capable...

  13. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... property? If the amount of the estimated residual value you rely upon to satisfy the full payout lease requirement of § 714.4(b) exceeds 25% of the original cost of the leased property, a financially capable...

  14. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking, nonframe-averaged OCT device and an active eye-tracking, frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and added amplitude deviation using 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became non-significant after processing. Conclusion The virtual averaging method successfully improved the quality of non-tracking, nonframe-averaged OCT images and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180

  15. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
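
    As background to the bound discussed above, the elementary conversion from a reported average gate fidelity to the corresponding process (entanglement) infidelity is sketched below in Python, using the standard relation F_avg = (d * F_pro + 1) / (d + 1). This is only the first, uncontroversial step, not the tighter diamond-norm error-rate bound derived in the paper.

    # Sketch: process infidelity implied by an average gate fidelity via the
    # standard relation F_avg = (d * F_pro + 1) / (d + 1), d = 2**num_qubits.
    # Example fidelities are illustrative.

    def process_infidelity(avg_fidelity, num_qubits):
        d = 2 ** num_qubits
        return (d + 1) / d * (1.0 - avg_fidelity)

    for f_avg, n_qubits in ((0.999, 1), (0.99, 2)):
        print(f"F_avg = {f_avg}, {n_qubits} qubit(s): "
              f"process infidelity ~ {process_infidelity(f_avg, n_qubits):.2e}")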

  16. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  17. First Order Estimates of Energy Requirements for Pollution Control. Interagency Energy-Environment Research and Development Program Report.

    ERIC Educational Resources Information Center

    Barker, James L.; And Others

    This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…

  18. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  19. Physics of the spatially averaged snowmelt process

    NASA Astrophysics Data System (ADS)

    Horne, Federico E.; Kavvas, M. Levent

    1997-04-01

    It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.

  20. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the

  1. New Estimates of Working Time for Elementary School Teachers.

    ERIC Educational Resources Information Center

    Drago, Robert; Caplan, Robert; Costanza, David; Brubaker, Tanya; Cloud, Darnell; Harris, Naomi; Kashlan, Russell; Riggs, T. Lynn

    1999-01-01

    Data from a time-diary survey suggest that the average elementary school teacher works almost two hours per day more than the time required by contract. However, findings show that choice of measurement substantially affects time estimates. (JOW)

  2. Estimating Temperature Retrieval Accuracy Associated With Thermal Band Spatial Resolution Requirements for Center Pivot Irrigation Monitoring and Management

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Irons, James; Spruce, Joseph P.; Underwood, Lauren W.; Pagnutti, Mary

    2006-01-01

    This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for remotely sensed thermal data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, direct measures of spatial resolution that are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.
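
    A minimal synthetic-scene sketch of the kind of parametric study described above: a 300 K circular field in a 325 K background is blurred to mimic a sensor of a given GSD and edge response, resampled, and the fraction of in-field pixels still within a retrieval tolerance is counted. The Gaussian blur model and all numerical values are illustrative assumptions, not the study's sensor model.

```python
# Synthetic center-pivot scene: count the fraction of in-field pixels that still meet
# a temperature tolerance after blurring and resampling. Blur model and numbers are
# illustrative assumptions only.
import numpy as np
from scipy.ndimage import gaussian_filter

def usable_pixel_fraction(diameter_m=600, gsd_m=60, blur_sigma_m=40, tol_K=1.0):
    fine = 10                                              # fine-grid resolution (m)
    n = int(2 * diameter_m / fine)
    y, x = np.indices((n, n))
    r = np.hypot(x - n / 2, y - n / 2) * fine
    scene = np.where(r <= diameter_m / 2, 300.0, 325.0)    # well-watered field vs. dry surround (K)
    blurred = gaussian_filter(scene, sigma=blur_sigma_m / fine)
    step = int(gsd_m / fine)
    pixels = blurred[::step, ::step]                       # resample at the sensor GSD
    in_field = r[::step, ::step] <= diameter_m / 2
    ok = np.abs(pixels[in_field] - 300.0) <= tol_K         # pixels meeting the accuracy goal
    return ok.mean() if in_field.any() else 0.0

for gsd in (30, 60, 120, 240):
    print(gsd, "m GSD ->", round(usable_pixel_fraction(gsd_m=gsd), 2))
```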

  3. The Averaging Problem in Cosmology

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2009-06-01

    This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy, which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed here to be applicable to the problem at hand. We show that the effect of these terms, when calculated carefully, remains negligible and cannot explain the late time acceleration.

  4. Mean Element Propagations Using Numerical Averaging

    NASA Technical Reports Server (NTRS)

    Ely, Todd A.

    2009-01-01

    The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two body variational equations of motion. The averaging process eliminates short period terms, leaving only secular and long period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrates the resulting EOMs. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.
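
    A generic sketch of the numerical-averaging idea described above: the osculating rate of a slow element depends on a fast angle, its mean rate is obtained by numerical quadrature over one cycle of that angle, and the averaged equation of motion is then integrated. The rate function below is a toy placeholder, not a real force model or the paper's formulation.

```python
# Numerical averaging sketch: quadrature over the fast angle M gives the mean rate of a
# slow element, and the averaged equation of motion is integrated. Toy rate function only.
import numpy as np
from scipy.integrate import quad, solve_ivp

def osculating_rate(element, M):
    """Hypothetical short-period-dependent rate d(element)/dt at fast angle M."""
    return -1e-3 * element * (1.0 + 0.3 * np.cos(M)) ** 2

def averaged_rate(t, y):
    elem = y[0]
    mean, _ = quad(lambda M: osculating_rate(elem, M), 0.0, 2 * np.pi)
    return [mean / (2 * np.pi)]            # average over one cycle of the fast angle

sol = solve_ivp(averaged_rate, (0.0, 5000.0), [1.0], method="RK45", rtol=1e-8)
print("mean element after 5000 time units:", sol.y[0, -1])
```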

  5. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  6. Estimates of Minimum Energy Requirements for Range-Controlled Return of a Nonlifting Satellite from a Circular Orbit

    NASA Technical Reports Server (NTRS)

    Jackson, Charlie M., Jr.

    1961-01-01

    Existing expressions are used to obtain the minimum propellant fraction required for return from a circular orbit as a function of vacuum trajectory range. The solutions for the parameters of the vacuum trajectory are matched to those of the atmospheric trajectory to obtain a complete return from orbit to earth. The results are restricted by the assumptions of (1) impulsive velocity change, (2) nearly circular transfer trajectory, (3) spherical earth, atmosphere, and gravitational field, (4) exponential atmospheric density variation with altitude, and (5) a nonrotating atmosphere. Calculations are made to determine the effects of longitudinal and lateral range on required propellant fraction and reentry loading for a nonrotating earth and for several orbital altitudes. A comparison of the single- and two-impulse methods of return is made, and the results indicate a "trade off" between propellant fraction required and landing-position accuracy. An example of a return mission from a polar orbit is discussed in which the initial deorbit point is the intersection of the North Pole horizon with the satellite orbit. Some effects of a rotating earth are also considered. It is found that, for each target-orbital-plane longitudinal difference, there exists a target latitude for which the required propellant fraction is a minimum.

  7. Estimates of electricity requirements for the recovery of mineral commodities, with examples applied to sub-Saharan Africa

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2011-01-01

    To produce materials from mine to market it is necessary to overcome obstacles that include the force of gravity, the strength of molecular bonds, and technological inefficiencies. These challenges are met by the application of energy to accomplish the work that includes the direct use of electricity, fossil fuel, and manual labor. The tables and analyses presented in this study contain estimates of electricity consumption for the mining and processing of ores, concentrates, intermediate products, and industrial and refined metallic commodities on a kilowatt-hour per unit basis, primarily the metric ton or troy ounce. Data contained in tables pertaining to specific currently operating facilities are static, as the amount of electricity consumed to process or produce a unit of material changes over time for a great number of reasons. Estimates were developed from diverse sources that included feasibility studies, company-produced annual and sustainability reports, conference proceedings, discussions with government and industry experts, journal articles, reference texts, and studies by nongovernmental organizations.

  8. Estimating pollutant removal requirements for landfills in the UK: I. Benchmark study and characteristics of waste treatment technologies.

    PubMed

    Hall, D H; Drury, D; Gronow, J R; Rosevear, A; Pollard, S J T; Smith, R

    2006-12-01

    Introduction of the EU Landfill Directive is having a significant impact on waste management in the UK and in other member states that have relied on landfilling. This paper considers the length of the aftercare period required by the municipal solid waste streams that the UK will most probably generate following implementation of the Landfill Directive. Data were derived from the literature to identify properties of residues from the most likely treatment processes, and the probable management times these residues will require within the landfill environment were then modelled. Results suggest that for chloride the relevant water quality standard (250 mg l^-1) will be achieved with a management period of 40 years, and for lead (0.1 mg l^-1), 240 years. This has considerable implications for the sustainability of landfill and suggests that current timescales for aftercare of landfills may be inadequate. PMID:17285936

  9. In-season time series analysis of Resourcesat-1 AWiFS data for estimating irrigation water requirement

    NASA Astrophysics Data System (ADS)

    Raju, P. V.; Sesha Sai, M. V. R.; Roy, P. S.

    2008-06-01

    The AWiFS sensor on board IRS-P6 (Resourcesat-1), with its wide swath and 5-day revisit capability, provides excellent opportunities to carry out in-season analysis of irrigated agriculture. The study, carried out in the Hirakud command area, Orissa State, indicated that the progression of rice crop acreage could be mapped through analysis of a time series of AWiFS data. The spectral emergence pattern of the rice crop was found useful for identifying the period of rice transplantation and its variability across the command area. This information, integrated with agro-meteorological data, was used to quantify the 10-daily canal-wise irrigation water requirement. A comparison with field-measured actual irrigation supplies indicated an overall supply adequacy of 88% but wide variability at the lateral canal level, ranging between 18% and 109%. The supply pattern also did not correspond with the chronological variations in crop water requirement: supplies were 15% in excess during the initial part of the season (December and January) and 20.1% in deficit during the later part (February to April). Rescheduling the excess supplies of the initial period could have reduced the deficit to 15% during the peak season. The study has demonstrated the usefulness of AWiFS data for generating the irrigation water requirement by mid-season, at which point 38% of supplies were yet to be allocated. This would help irrigation managers reschedule irrigation water supplies to achieve better synchronization between requirement and supply, leading to improved water use efficiency.
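
    A minimal sketch of the per-period irrigation bookkeeping implied above, using the single crop coefficient approach: net irrigation requirement equals Kc times reference evapotranspiration minus effective rainfall for each 10-day period. The Kc, ET0 and rainfall values are invented placeholders, not the study's agro-meteorological data.

```python
# Single-crop-coefficient bookkeeping per 10-day period: requirement = Kc*ET0 - effective rain.
# All numbers are invented placeholders for illustration.
def irrigation_requirement(kc, et0_mm, eff_rain_mm):
    """Return the per-period net irrigation requirement (mm), never negative."""
    return [max(k * e - r, 0.0) for k, e, r in zip(kc, et0_mm, eff_rain_mm)]

kc       = [0.6, 0.9, 1.1, 1.2, 1.2, 0.9]   # crop coefficient per 10-day period
et0      = [35., 38., 42., 45., 48., 50.]   # reference evapotranspiration (mm/period)
eff_rain = [20., 10.,  5.,  0.,  0.,  5.]   # effective rainfall (mm/period)

req = irrigation_requirement(kc, et0, eff_rain)
print(req, "total:", sum(req), "mm")
```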

  10. Reproductive performance and estimates of labor requirements associated with combinations of artificial insemination and natural service in swine.

    PubMed

    Flowers, W L; Alhusen, H D

    1992-03-01

    A study was conducted to examine effects of mating systems composed of natural service (NS) and AI in swine on farrowing rate, litter size, and labor requirements. Sows and gilts were bred once per day via one of the following treatments (d 1/d 2): NS/NS, NS/AI, AI/AI, and NS/none. Gilts bred with NS/AI, AI/AI, and NS/NS had higher (P less than .05) farrowing rates than gilts bred with NS/none matings. Similarly, farrowing rates were higher (P less than .05) in NS/AI than in NS/NS gilts. Numbers of pigs born alive were greater (P less than .05) in NS/NS, NS/AI, and AI/AI than in NS/none gilts. In sows, a treatment x time interaction (P less than .01) was present for farrowing rate. In the AI/AI treatment, farrowing rate increased (P less than .01) from 70.0% (wk 1 through 3) to 88.5% (wk 4 through 10). Farrowing rates were 87.3, 93.2, and 76.0% in the NS/NS, NS/AI, and NS/none groups, respectively, and did not change (P = .72) over time. Sows bred via NS/NS and NS/AI had larger litters (P less than .05) than NS/none sows. In the present study, if four or more sows and gilts were bred, then AI required less (P less than .05) time per animal than NS. Furthermore, gilts required more (P less than .05) time for breeding than sows. Results from this study demonstrate that gilts and sows responded differently to combinations of NS and AI in terms of reproductive performance. In addition, differences in labor requirements per sow or gilt between NS and AI matings were dependent on parity and daily breeding demands. PMID:1563988

  11. High Average Power Yb:YAG Laser

    SciTech Connect

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  12. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  13. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  14. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.

  15. Estimation of the maintenance energy requirements, methane emissions and nitrogen utilization efficiency of two suckler cow genotypes.

    PubMed

    Zou, C X; Lively, F O; Wylie, A R G; Yan, T

    2016-04-01

    Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization from grass silage diets. Cows were housed in cubicle accommodation for 17 days, and then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaption followed by immediate transfer to an indirect, open-circuit respiration calorimeter for 3 days with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P>0.05) between the genotypes for energy intakes, energy outputs or energy use efficiency, or for methane emission rates (methane emissions per unit of dry matter intake or energy intake), or for N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r^2 = 0.52; P<0.001) indicated values for net energy requirements for maintenance of 0.386, 0.392 and 0.375 MJ/kg^0.75 for LF+ST, LF and ST respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r^2 = 0.59; P<0.001). There were positive linear relationships between N intake and N outputs in manure, and manure N accounted for 0.923 of the N intake. The present results provide approaches to predict maintenance energy requirement, methane emission and manure N output for suckler cows, and further information is required to evaluate their application in a wide range of suckler production systems. PMID:26593693

  16. On generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag M.

    2007-09-01

    We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1-x)^α(1+x)^β (α, β > -1), we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.

  17. Bayesian Geostatistical Model-Based Estimates of Soil-Transmitted Helminth Infection in Nigeria, Including Annual Deworming Requirements

    PubMed Central

    Oluwole, Akinola S.; Ekpo, Uwem F.; Karagiannis-Voules, Dimitrios-Alexios; Abe, Eniola M.; Olamiju, Francisca O.; Isiyaku, Sunday; Okoronkwo, Chukwu; Saka, Yisa; Nebe, Obiageli J.; Braide, Eka I.; Mafiana, Chiedu F.; Utzinger, Jürg; Vounatsou, Penelope

    2015-01-01

    Background The acceleration of the control of soil-transmitted helminth (STH) infections in Nigeria, emphasizing preventive chemotherapy, has become imperative in light of the global fight against neglected tropical diseases. Predictive risk maps are an important tool to guide and support control activities. Methodology STH infection prevalence data were obtained from surveys carried out in 2011 using standard protocols. Data were geo-referenced and collated in a nationwide, geographic information system database. Bayesian geostatistical models with remotely sensed environmental covariates and variable selection procedures were utilized to predict the spatial distribution of STH infections in Nigeria. Principal Findings We found that hookworm, Ascaris lumbricoides, and Trichuris trichiura infections are endemic in 482 (86.8%), 305 (55.0%), and 55 (9.9%) locations, respectively. Hookworm and A. lumbricoides infection co-exist in 16 states, while the three species are co-endemic in 12 states. Overall, STHs are endemic in 20 of the 36 states of Nigeria, including the Federal Capital Territory of Abuja. The observed prevalence at endemic locations ranged from 1.7% to 51.7% for hookworm, from 1.6% to 77.8% for A. lumbricoides, and from 1.0% to 25.5% for T. trichiura. Model-based predictions ranged from 0.7% to 51.0% for hookworm, from 0.1% to 82.6% for A. lumbricoides, and from 0.0% to 18.5% for T. trichiura. Our models suggest that day land surface temperature and dense vegetation are important predictors of the spatial distribution of STH infection in Nigeria. In 2011, a total of 5.7 million (13.8%) school-aged children were predicted to be infected with STHs in Nigeria. Requirements for annual or bi-annual mass treatment of the school-aged population at the local government area level in Nigeria in 2011, based on World Health Organization prevalence thresholds, were estimated at 10.2 million tablets. Conclusions/Significance The predictive risk maps and estimated

  18. Development of an estimation model for the evaluation of the energy requirement of dilute acid pretreatments of biomass

    PubMed Central

    Mafe, Oluwakemi A.T.; Davies, Scott M.; Hancock, John; Du, Chenyu

    2015-01-01

    This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second generation ethanol. A dilute acid pretreatment process reported by National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change of internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model on other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly. PMID:26109752
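
    A toy energy balance illustrating why solids loading dominates the energy demand: at low loadings the slurry is mostly water, all of which must be heated to the pretreatment temperature, so the sensible-heat term per kilogram of dry biomass grows as loading falls. Heat capacities, temperatures and the function name are rough illustrative assumptions, not figures from the NREL benchmark or the paper's model.

```python
# Toy sensible-heat balance for dilute acid pretreatment: heat 1 kg dry solids plus its
# process water from t_in to t_react. Illustrative values only, not the paper's model.
def pretreatment_heat_kj_per_kg_solids(solids_loading, t_in=25.0, t_react=160.0,
                                       cp_water=4.18, cp_solids=1.5):
    """Sensible heat (kJ) to bring 1 kg dry solids and its accompanying water to t_react."""
    water_per_kg_solids = (1.0 - solids_loading) / solids_loading   # kg water per kg solids
    dT = t_react - t_in
    return (cp_solids + water_per_kg_solids * cp_water) * dT

for loading in (0.10, 0.20, 0.30):
    print(f"{loading:.0%} solids -> {pretreatment_heat_kj_per_kg_solids(loading):,.0f} kJ/kg solids")
```

    Even this crude sketch reproduces the sensitivity reported above: halving the solids loading roughly doubles the water that must be heated per unit of biomass.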

  19. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S. )

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  20. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  1. Averaged Electroencephalic Audiometry in Infants

    ERIC Educational Resources Information Center

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  2. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  3. Averaging inhomogeneous cosmologies - a dialogue

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  4. Averaging facial expression over time

    PubMed Central

    Haberman, Jason; Harp, Tom; Whitney, David

    2010-01-01

    The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064

  5. Average Cost of Common Schools.

    ERIC Educational Resources Information Center

    White, Fred; Tweeten, Luther

    The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve which indicates the minimum per student cost for educating various numbers of students and the application of the cost curves determining the optimum school district size. In a stratified sample, the school…

  6. Viewpoint: observations on scaled average bioequivalence.

    PubMed

    Patterson, Scott D; Jones, Byron

    2012-01-01

    The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. PMID:22162308

  7. Exact averaging of laminar dispersion

    NASA Astrophysics Data System (ADS)

    Ratnakar, Ram R.; Balakotaiah, Vemuri

    2011-02-01

    We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.

  8. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
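
    A simplified sketch of combining measurements of differing accuracy into a mean areal value with an uncertainty estimate. It uses plain inverse-variance weighting as a stand-in for the correlation-area method described above, whose weights also depend on the basin area each measurement represents; the example values are invented.

```python
# Inverse-variance weighting of measurements with different accuracies: a simplified
# stand-in for the correlation-area method, for illustration only.
def combine(measurements):
    """measurements: list of (value, error_std). Returns (weighted mean, its std)."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    std = (1.0 / sum(weights)) ** 0.5
    return mean, std

# hypothetical snow water equivalent estimates (cm): gauge point, flight line, satellite areal
obs = [(12.0, 3.0), (10.5, 1.5), (9.8, 1.0)]
print(combine(obs))
```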

  9. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
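
    A minimal EM sketch for estimating BMA weights and a shared variance for an ensemble of forecasts, in the spirit of the Raftery et al. formulation cited above (simplified here to one common variance and no bias correction). It is not the DREAM/MCMC alternative discussed in the abstract, and the synthetic data are invented.

```python
# EM estimation of BMA weights and a common variance for a Gaussian mixture of forecasts.
# Simplified illustration of the Raftery et al. style training; synthetic data only.
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """forecasts: (n_times, n_models); obs: (n_times,). Returns (weights, sigma)."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    sigma = np.std(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        dens = w * norm.pdf(obs[:, None], loc=forecasts, scale=sigma)     # (n, k)
        z = dens / dens.sum(axis=1, keepdims=True)                        # E-step: responsibilities
        w = z.mean(axis=0)                                                # M-step: weights
        sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / n)  # M-step: common variance
    return w, sigma

rng = np.random.default_rng(0)
truth = rng.normal(15, 3, size=500)
ens = np.column_stack([truth + rng.normal(b, s, 500) for b, s in [(0.2, 1.0), (-0.5, 2.0), (1.0, 3.0)]])
print(bma_em(ens, truth))
```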

  10. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

    allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

  11. Using Multiple Representations To Improve Conceptions of Average Speed.

    ERIC Educational Resources Information Center

    Reed, Stephen K.; Jazo, Linda

    2002-01-01

    Discusses improving mathematical reasoning through the design of computer microworlds and evaluates a computer-based learning environment that uses multiple representations to improve undergraduate students' conception of average speed. Describes improvement of students' estimates of average speed by using visual feedback from a simulation.…

  12. A site-specific agricultural water requirement and footprint estimator (SPARE:WATER 1.0) for irrigation agriculture

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.

    2013-01-01

    The water footprint accounting method addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). Most current water footprint assessments focus on the global to continental scale. We therefore developed the spatial decision support system SPARE:WATER, which allows green, blue and grey water footprints to be quantified at the regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation methods rather than optimal conditions to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water can be defined as the water to leach out salt from the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept where we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km^3 yr^-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional water footprint assessments.
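
    A minimal per-field sketch of the green/blue/grey accounting described above: green water is the crop evapotranspiration met by effective rainfall, blue water is the remaining evapotranspiration divided by an irrigation efficiency below 100%, and grey water is the volume needed to dilute a pollutant load to a quality standard. All coefficients and the function name are invented placeholders; this is not the SPARE:WATER implementation.

```python
# Green/blue/grey water footprint accounting for a single field. All numbers are
# invented placeholders; not the SPARE:WATER code.
def water_footprint_m3(et_crop_mm, eff_rain_mm, area_ha, irrig_eff=0.7,
                       n_load_kg=50.0, c_max_mg_l=10.0, c_nat_mg_l=0.0):
    mm_to_m3 = 10.0 * area_ha                                  # 1 mm over 1 ha = 10 m3
    green = min(et_crop_mm, eff_rain_mm) * mm_to_m3            # ET met by effective rainfall
    blue = max(et_crop_mm - eff_rain_mm, 0.0) / irrig_eff * mm_to_m3   # irrigation withdrawal
    grey = n_load_kg / ((c_max_mg_l - c_nat_mg_l) / 1000.0)    # kg / (kg per m3) = m3 of dilution
    return {"green": green, "blue": blue, "grey": grey}

print(water_footprint_m3(et_crop_mm=650, eff_rain_mm=120, area_ha=50))
```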

  13. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  14. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  15. E.O.-based estimation of transpiration and crop water requirements for vineyards: a case study in southern Italy

    NASA Astrophysics Data System (ADS)

    D'Urso, Guido; Maltese, Antonino; Palladino, Mario

    2014-10-01

    Efficient use of water for irrigation is a challenging task. From an agronomic point of view, it requires establishing the optimal amount of water to be supplied, at the correct time, based on phenological phase and the spatial distribution of water stress. Indeed, knowledge of the actual water stress is essential for agronomic decisions: vineyards need to be managed to maintain a moderate water stress, which allows berry quality and quantity to be optimized. Methods for quickly quantifying where, when and to what extent vines begin to experience water stress are therefore beneficial. Traditional point-based methodologies, such as those based on the Scholander pressure chamber, although well established, are time-consuming and do not give a comprehensive picture of the vineyard water deficit. Earth Observation (E.O.) based methodologies promise to achieve a synoptic overview of the water stress. Some E.O. data sense the territory in the thermal part of the spectrum and, as is well recognized, leaf radiometric temperature is related to plant water status. However, current satellite sensors do not have detailed enough spatial resolution to detect pure canopy pixels; thus, the pixel radiometric temperature characterizes the whole soil-vegetation system, in variable proportions. On the other hand, due to limits of the actual crop dusters, there is no need to characterize the water stress distribution at plant scale, and a coarser spatial characterization would be sufficient. The research aims to assess to what extent: 1) E.O. based canopy radiometric temperature can be used, straightforwardly, to detect plant water status; 2) E.O. based canopy transpiration would be more suitable (or not) to describe the spatial variability in plant water stress. To these aims: 1) radiometric canopy temperature measured in situ, and derived from a two-source energy balance model applied to airborne data, was compared with in situ leaf water potential from freshly cut leaves; 2) two

  16. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  17. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.

  18. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    We examined the allometric relationship between resting metabolic rate (RMR; kJ day^-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (a·Mass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations. © The Cooper Ornithological Society 2006.
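
    A short worked use of the fitted waterfowl relation reported above, RMR = 422 × Mass^0.74 (kJ/day, mass in kg), scaled by a free-living multiple to approximate daily energy expenditure. The 3× multiple and the example masses are illustrative assumptions, not values taken from the paper.

```python
# Worked use of the reported allometric relation RMR = 422 * Mass^0.74 (kJ/day, kg).
# The 3x free-living multiple and the example masses are illustrative assumptions.
def waterfowl_rmr_kj_per_day(mass_kg, a=422.0, b=0.74):
    return a * mass_kg ** b

for mass in (0.4, 1.1, 3.5, 9.0):          # roughly teal-, mallard-, goose-, swan-sized birds
    rmr = waterfowl_rmr_kj_per_day(mass)
    print(f"{mass:4.1f} kg: RMR = {rmr:6.0f} kJ/day, DEE (3x RMR) = {3 * rmr:7.0f} kJ/day")
```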

  19. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119

  20. Probabilistic climate change predictions applying Bayesian model averaging.

    PubMed

    Min, Seung-Ki; Simonis, Daniel; Hense, Andreas

    2007-08-15

    This study explores the sensitivity of probabilistic predictions of the twenty-first century surface air temperature (SAT) changes to different multi-model averaging methods using available simulations from the Intergovernmental Panel on Climate Change fourth assessment report. An observationally constrained prediction is obtained by training the multi-model simulations for the second half of the twentieth century with respect to long-term components. The Bayesian model averaging (BMA) produces weighted probability density functions (PDFs) and we compare two methods of estimating the weighting factors: the Bayes factor and the expectation-maximization algorithm. It is shown that Bayesian-weighted PDFs for the global mean SAT changes are characterized by multi-modal structures from the middle of the twenty-first century onward, which are not clearly seen in the arithmetic ensemble mean (AEM). This occurs because BMA tends to select a few high-skilled models and down-weight the others. Additionally, Bayesian results exhibit larger means and broader PDFs in the global mean predictions than the unweighted AEM. Multi-modality is more pronounced in the continental analysis using 30-year mean (2070-2099) SATs, while there is only a little effect of Bayesian weighting on the 5-95% range. These results indicate that this approach to observationally constrained probabilistic predictions can be highly sensitive to the method of training, particularly for the later half of the twenty-first century, and that a more comprehensive approach combining different regions and/or variables is required. PMID:17569647

  1. Average g-Factors of Anisotropic Polycrystalline Samples

    SciTech Connect

    Fishman, Randy Scott; Miller, Joel S.

    2010-01-01

    Due to the lack of suitable single crystals, the average g-factor of anisotropic polycrystalline samples is commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
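
    A short sketch of why the two estimates obey this inequality, under the simple assumption that the Curie constant samples the orientational average of g² while the saturation magnetization samples the average of g (a standard powder-averaging argument offered here for illustration, not taken from the paper itself):

```latex
g_{C} = \sqrt{\langle g^{2}\rangle}, \qquad
g_{\mathrm{sat}} = \langle g\rangle, \qquad
\langle g^{2}\rangle \ge \langle g\rangle^{2}
\;\Longrightarrow\; g_{C} \ge g_{\mathrm{sat}},
```

    with equality only when g is the same in every direction, i.e. for a single crystal or an isotropic polycrystal, consistent with the statement above. The middle step is the Cauchy-Schwarz (Jensen) inequality applied to the orientational average.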

  2. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    The scanning projection system plays a leading part in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction. This averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.

  3. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
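
    A sketch of the conventional time domain (synchronous) averaging that FTDA improves upon: the record is cut into segments of the nominal period length and averaged, so the periodic component survives while noise averages toward zero. Rounding the period to an integer number of samples is precisely the period cutting error (PCE) the paper addresses; the signal parameters below are invented for illustration.

```python
# Conventional time domain (synchronous) averaging: segment the record at the nominal
# period and average the segments. The integer rounding of the period is the PCE source.
import numpy as np

def time_domain_average(signal, period_samples):
    n_seg = len(signal) // period_samples
    segments = np.reshape(signal[:n_seg * period_samples], (n_seg, period_samples))
    return segments.mean(axis=0)             # periodic component; noise averages toward zero

fs, f0 = 2000.0, 33.3                         # sample rate (Hz) and shaft frequency (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.8 * np.random.default_rng(1).normal(size=t.size)
avg = time_domain_average(x, period_samples=round(fs / f0))   # note the rounding -> PCE
print(avg.shape, float(avg.std()))
```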

  4. Protein Requirements during Aging.

    PubMed

    Courtney-Martin, Glenda; Ball, Ronald O; Pencharz, Paul B; Elango, Rajavel

    2016-01-01

    Protein recommendations for the elderly, both men and women, are based on nitrogen balance studies. They are set at 0.66 and 0.8 g/kg/day as the estimated average requirement (EAR) and recommended dietary allowance (RDA), respectively, similar to young adults. This recommendation is based on single linear regression of available nitrogen balance data obtained at test protein intakes close to or below zero balance. Using the indicator amino acid oxidation (IAAO) method, we estimated the protein requirement in young adults and in both elderly men and women to be 0.9 and 1.2 g/kg/day as the EAR and RDA, respectively. This suggests that there is no difference in requirement on a gender basis or on a per kg body weight basis between younger and older adults. The requirement estimates, however, are ~40% higher than the current protein recommendations on a body weight basis. They are also 40% higher than our estimates in young men when calculated on the basis of fat free mass. Thus, current recommendations may need to be re-assessed. Potential rationale for this difference includes a decreased sensitivity to dietary amino acids and increased insulin resistance in the elderly compared with younger individuals. PMID:27529275

  5. Protein Requirements during Aging

    PubMed Central

    Courtney-Martin, Glenda; Ball, Ronald O.; Pencharz, Paul B.; Elango, Rajavel

    2016-01-01

    Protein recommendations for the elderly, both men and women, are based on nitrogen balance studies. They are set at 0.66 and 0.8 g/kg/day as the estimated average requirement (EAR) and recommended dietary allowance (RDA), respectively, similar to young adults. This recommendation is based on single linear regression of available nitrogen balance data obtained at test protein intakes close to or below zero balance. Using the indicator amino acid oxidation (IAAO) method, we estimated the protein requirement in young adults and in both elderly men and women to be 0.9 and 1.2 g/kg/day as the EAR and RDA, respectively. This suggests that there is no difference in requirement on a gender basis or on a per kg body weight basis between younger and older adults. The requirement estimates, however, are ~40% higher than the current protein recommendations on a body weight basis. They are also 40% higher than our estimates in young men when calculated on the basis of fat free mass. Thus, current recommendations may need to be re-assessed. Potential rationale for this difference includes a decreased sensitivity to dietary amino acids and increased insulin resistance in the elderly compared with younger individuals. PMID:27529275

  6. Modeling daily average stream temperature from air temperature and watershed area

    NASA Astrophysics Data System (ADS)

    Butler, N. L.; Hunt, J. R.

    2012-12-01

    Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species, especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling is required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures, with smaller upstream watersheds being more dependent on the minimum air temperature than the maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
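
    Since the model form described above is a weighted combination of the preceding minimum and maximum air temperatures, a minimal sketch of the fitting step is given below. The data are synthetic and the function name is illustrative; the authors' actual calibration used the gauge records described above.

    ```python
    import numpy as np

    def fit_min_weight(t_min, t_max, t_water):
        """Least-squares fit of w in  T_water ~ w * T_min + (1 - w) * T_max.

        Rearranged as  T_water - T_max = w * (T_min - T_max), which is a
        one-parameter regression through the origin."""
        x = np.asarray(t_min) - np.asarray(t_max)
        y = np.asarray(t_water) - np.asarray(t_max)
        return float(np.sum(x * y) / np.sum(x * x))

    # Synthetic check: a small upstream area should recover a weight near 0.75.
    rng = np.random.default_rng(1)
    t_max = 25 + 5 * rng.standard_normal(365)
    t_min = t_max - 10 + rng.standard_normal(365)
    t_water = 0.75 * t_min + 0.25 * t_max + 0.5 * rng.standard_normal(365)
    print(round(fit_min_weight(t_min, t_max, t_water), 2))   # ~0.75
    ```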

  7. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    SciTech Connect

    Kimes, D.S.; Kerber, A.G.; Sellers, P.J. )

    1993-06-01

    The problems in moving from a radiance measurement made for a particular sun-target-sensor geometry to an accurate estimate of the hemispherical reflectance are considerable. A knowledge-based system called VEG was used in this study to infer hemispherical reflectance. Given directional reflectance(s) and the sun angle, VEG selects the most suitable inference technique(s) and estimates the surface hemispherical reflectance with an estimate of the error. Ideally, VEG is applied to homogeneous vegetation. However, what is typically done in GCM (global circulation model) models and related studies is to obtain an average hemispherical reflectance on a square grid cell on the order of 200 km x 200 km. All available directional data for a given cell are averaged (for each view direction), and then a particular technique for inferring hemispherical reflectance is applied to this averaged data. Any given grid cell can contain several surface types that directionally scatter radiation very differently. When averaging over a set of view angles, the resulting mean values may be atypical of the actual surface types that occur on the ground, and the resulting inferred hemispherical reflectance can be in error. These errors were explored by creating a simulated scene and applying VEG to estimate the area-averaged hemispherical reflectance using various sampling procedures. The reduction in the hemispherical reflectance errors provided by using VEG ranged from a factor of 2-4, depending on conditions. This improvement represents a shift from the calculation of a hemispherical reflectance product of relative value (errors of 20% or more), to a product that could be used quantitatively in global modeling applications, where the requirement is for errors to be limited to around 5-10 %.

  8. Accelerometer data requirements for reliable estimation of habitual physical activity and sedentary time of children during the early years - a worked example following a stepped approach.

    PubMed

    Bingham, Daniel D; Costa, Silvia; Clemes, Stacy A; Routen, Ash C; Moore, Helen J; Barber, Sally E

    2016-10-01

    This study presents a worked example of a stepped process to reliably estimate the habitual physical activity and sedentary time of a sample of young children. A total of 299 children (2.9 ± 0.6 years) were recruited. Outcome variables were daily minutes of total physical activity, sedentary time, moderate to vigorous physical activity and proportional values of each variable. In total, 282 (94%) provided 3 h of accelerometer data on ≥1 day and were included in a 6-step process: Step-1: determine minimum wear-time; Step-2: process 7-day-data; Step-3: determine the inclusion of a weekend day; Step-4: examine day-to-day variability; Step-5: calculate single day intraclass correlation (ICC) (2,1); Step-6: calculate number of days required to reach reliability. Following the process, the results were as follows. Step-1: 6 h was estimated as the minimum wear-time of a standard day. Step-2: 98 (32%) children had ≥6 h wear on 7 days. Step-3: no differences were found between weekdays and weekend days (P ≥ 0.05). Step-4: no differences were found in day-to-day variability (P ≥ 0.05). Step-5: single day ICCs (2,1) ranged from 0.48 (total physical activity and sedentary time) to 0.53 (proportion of moderate to vigorous physical activity). Step-6: to reach reliability (ICC = 0.7), 3 days were required for all outcomes. In conclusion, following a 7-day wear protocol, ≥6 h of wear on any 3 days was found to have acceptable reliability. The stepped process offers researchers a method to derive a sample-specific wear-time criterion. PMID:26920123
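
    Step-6 above is typically carried out with the Spearman-Brown prophecy formula, which projects how many days of monitoring must be averaged before the composite reaches a target reliability. Whether the authors used exactly this formula is an assumption, so the snippet below is only an illustration of the calculation.

    ```python
    import math

    def days_for_reliability(icc_single_day, icc_target=0.7):
        """Spearman-Brown prophecy: number of days of monitoring that must be
        averaged for the composite measure to reach the target reliability."""
        n = (icc_target * (1 - icc_single_day)) / (icc_single_day * (1 - icc_target))
        return math.ceil(n)

    print(days_for_reliability(0.48))   # single-day ICC 0.48 -> 3 days
    print(days_for_reliability(0.53))   # single-day ICC 0.53 -> 3 days
    ```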

  9. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  10. Model Averaging for Improving Inference from Causal Diagrams

    PubMed Central

    Hamra, Ghassan B.; Kaufman, Jay S.; Vahratian, Anjel

    2015-01-01

    Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as “wish bias”. Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives. PMID:26270672

  11. Estimation of adequate setup margins and threshold for position errors requiring immediate attention in head and neck cancer radiotherapy based on 2D image guidance

    PubMed Central

    2013-01-01

    Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As another goal we estimated a threshold for the displacements of the most important bony landmarks related to the target volumes requiring immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) to the vertebrae in the middle of planning target volume (PTV) (MID_PTV) and ii) minimizing maximal position error for the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on the van Herk’s formula and clinical data by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate the displacements of the subregions were approximately two times larger than were needed to compensate setup errors for rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, applied image guidance protocol and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5mm were sufficient with the MID_PTV match only through application of daily 2D imaging and the threshold of 4 mm to correct systematic displacement of a subregion. Conclusions Adequate setup margins depend remarkably on the subregions related to the target volume. When the systematic 3D

  12. A methodological approach to estimate the lactation curve and net energy and protein requirements of beef cows using nonlinear mixed-effects modeling.

    PubMed

    Albertini, T Z; Medeiros, S R; Torres, R A A; Zocchi, S S; Oltjen, J W; Strathe, A B; Lanna, D P D

    2012-11-01

    The objective of this study was to evaluate methods to predict the secretion of milk and net energy and protein requirements of beef cows (Bos indicus and B. taurus) after approximately 1 mo postpartum using nonlinear mixed-effect modeling (NLME). Twenty Caracu × Nellore (CN) and 10 Nellore (NL) cows were inseminated to Red Angus bulls, and 10 Angus × Nellore (AN) were bred to Canchim bulls. Cows were evaluated from just after calving (25 ± 11 d) to weaning (220 d). Milk yield was estimated by weighing calves before and after suckling (WSW) and by machine milking (MM) methods at 25, 52, 80, 109, 136, 164, 193, and 220 ± 11 d of lactation. Brody and simple linear equations were consecutively fitted to the data and compared using information criteria. For the Brody equation, a NLME model was used to estimate all lactation profiles incorporating different sources of variation (calf sex and breed of cow, cow as a nested random effect, and within-cow auto-correlation). The CV for the MM method (29%) was less than WSW (45%). Consequently, the WSW method was responsible for reducing the variance about 1.5 times among individuals, which minimized the ability to detect differences among cows. As a result, only milk yield MM data were used in the NLME models. The Brody equation provided the best fit to this dataset, and inclusion of a continuous autoregressive process improved fit (P < 0.01). Milk, energy and protein yield at the beginning of lactation were affected by cow genotype and calf sex (P < 0.001). The exponential decay of the lactation curves was affected only by genotype (P < 0.001). Angus × Nellore cows produced 15 and 48% more milk than CN and NL during the trial, respectively (P < 0.05). Caracu × Nellore cows produced 29% more milk than NL (P < 0.05). The net energy and net protein requirements for milk yield followed a similar ranking. Male calves stimulated their dams to produce 11.7, 11.4, and 11.9% more milk, energy and protein, respectively (P < 0

  13. Method for detection and correction of errors in speech pitch period estimates

    NASA Technical Reports Server (NTRS)

    Bhaskar, Udaya (Inventor)

    1989-01-01

    A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of received pitch period estimate since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct, while if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average calculated.
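
    The rule described above is simple enough to sketch directly. The snippet below keeps a running average of accepted nonzero pitch estimates, flags any estimate outside 0.75 to 1.25 times that average, and resets the average after a run of corrections (as would happen when the speaker changes). The reset threshold and function name are illustrative assumptions, not the patented implementation.

    ```python
    def screen_pitch_estimates(estimates, max_corrections=3):
        """Flag pitch period estimates that fall outside 0.75-1.25 times the
        running average of accepted nonzero estimates since the last reset."""
        accepted, bad_run, flags = [], 0, []
        for p in estimates:
            if p == 0:
                flags.append("zero")
                continue
            avg = sum(accepted) / len(accepted) if accepted else p
            if 0.75 * avg <= p <= 1.25 * avg:
                accepted.append(p)
                bad_run = 0
                flags.append("ok")
            else:
                bad_run += 1
                flags.append("correct")
                if bad_run > max_corrections:   # likely a new speaker: start over
                    accepted, bad_run = [p], 0
        return flags

    print(screen_pitch_estimates([80, 82, 81, 160, 83, 0, 80]))
    ```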

  14. A Simplified Sensorless Vector Control Based on Average DC Bus Current for Fan Motor

    NASA Astrophysics Data System (ADS)

    Sumita, Satoshi; Tobari, Kazuaki; Aoyagi, Shigehisa; Maeda, Daisuke

    This paper describes a simplified sensorless vector control based on the average DC bus current for a PMSM. This method can be used to design a drive control system at a relatively low cost because the microcontroller does not require a precise timer and the computational load is light. In the proposed method, one of the two possible current estimation processes is chosen according to the operation mode. First, the controller estimates the d-axis current and identifies the back-EMF parameter in the synchronous operation mode at low speeds. The error in the back-EMF identification affects the efficiency of the proposed system, so it must be driven to zero. Second, the controller estimates the q-axis current in vector control mode. The identified parameter and the q-axis current define the voltage reference to realize a high-efficiency drive. The obtained experimental results confirm the effectiveness of the proposed method.

  15. Inferring average generation via division-linked labeling.

    PubMed

    Weber, Tom S; Perié, Leïla; Duffy, Ken R

    2016-08-01

    For proliferating cells subject to both division and death, how can one estimate the average generation number of the living population without continuous observation or a division-diluting dye? In this paper we provide a method for cell systems such that at each division there is an unlikely, heritable one-way label change that has no impact other than to serve as a distinguishing marker. If the probability of label change per cell generation can be determined and the proportion of labeled cells at a given time point can be measured, we establish that the average generation number of living cells can be estimated. Crucially, the estimator does not depend on knowledge of the statistics of cell cycle, death rates or total cell numbers. We explore the estimator's features through comparison with physiologically parameterized stochastic simulations and extrapolations from published data, using it to suggest new experimental designs. PMID:26733310
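
    To make the idea concrete, the sketch below uses only a first-order approximation: if the heritable label flips with a small probability p at each division, a cell at generation G is labeled with probability 1 - (1 - p)^G, which is approximately p*G, so the labeled fraction of the population is roughly p times the average generation. This is an assumption-laden simplification for illustration, not the estimator derived in the paper, and the numbers are made up.

    ```python
    def average_generation(labeled_fraction, p_label_per_division):
        """First-order estimate of the average generation of living cells,
        valid when p_label_per_division is small and few lineages have flipped:
            labeled_fraction ~= p * average_generation."""
        return labeled_fraction / p_label_per_division

    # e.g. 2% labeled cells with a 0.1% per-division label-change probability
    print(average_generation(0.02, 0.001))   # ~20 generations on average
    ```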

  16. Average power laser experiment (APLE) design

    NASA Astrophysics Data System (ADS)

    Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.

    1992-07-01

    We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.

  17. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...

  18. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...

  19. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...

  20. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...

  1. Depth perception in disparity-defined objects: finding the balance between averaging and segregation.

    PubMed

    Cammack, P; Harris, J M

    2016-06-19

    Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes' views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators.This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269601

  2. Depth perception in disparity-defined objects: finding the balance between averaging and segregation

    PubMed Central

    Cammack, P.

    2016-01-01

    Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes’ views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269601

  3. Two levels of Bayesian model averaging for optimal control of stochastic systems

    NASA Astrophysics Data System (ADS)

    Darwen, Paul J.

    2013-02-01

    Bayesian model averaging provides the best possible estimate of a model, given the data. This article uses that approach twice: once to get a distribution of plausible models of the world, and again to find a distribution of plausible control functions. The resulting ensemble gives control instructions different from simply taking the single best-fitting model and using it to find a single lowest-error control function for that single model. The only drawback is, of course, the need for more computer time: this article demonstrates that the required computer time is feasible. The test problem here is from flood control and risk management.

  4. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  5. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  6. Weighted Average Consensus-Based Unscented Kalman Filtering.

    PubMed

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we are devoted to investigate the consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. Moreover, a weighted average consensus-based UKF algorithm is developed for the purpose of estimating the true state of interest, and its estimation error is bounded in mean square which has been proven in the following section. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
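
    A minimal sketch of the average-consensus building block that such distributed filters rest on is shown below; the UKF machinery, the weighting, and the sensor model are omitted, and the graph, step size, and function names are illustrative assumptions.

    ```python
    import numpy as np

    def average_consensus(values, adjacency, epsilon=0.3, n_iters=200):
        """Iterate x <- x - epsilon * L x on a connected undirected graph
        (L is the graph Laplacian); every node converges to the global average."""
        x = np.asarray(values, dtype=float)
        a = np.asarray(adjacency, dtype=float)
        laplacian = np.diag(a.sum(axis=1)) - a
        for _ in range(n_iters):
            x = x - epsilon * laplacian @ x
        return x

    # Four sensors on a line graph all converge to the mean of their readings.
    adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
    print(average_consensus([1.0, 3.0, 5.0, 7.0], adj))   # all ~4.0
    ```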

  7. Rare events and the convergence of exponentially averaged work values

    NASA Astrophysics Data System (ADS)

    Jarzynski, Christopher

    2006-04-01

    Equilibrium free energy differences are given by exponential averages of nonequilibrium work values; such averages, however, often converge poorly, as they are dominated by rare realizations. I show that there is a simple and intuitively appealing description of these rare but dominant realizations. This description is expressed as a duality between “forward” and “reverse” processes, and provides both heuristic insights and quantitative estimates regarding the number of realizations needed for convergence of the exponential average. Analogous results apply to the equilibrium perturbation method of estimating free energy differences. The pedagogical example of a piston and gas [R.C. Lua and A.Y. Grosberg, J. Phys. Chem. B 109, 6805 (2005)] is used to illustrate the general discussion.
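
    The exponential work average referred to here is the Jarzynski estimator, ΔF = -kT ln⟨exp(-W/kT)⟩. The snippet below applies it to synthetic Gaussian work values, for which the exact answer is ⟨W⟩ - var(W)/(2kT); the sample size and distribution are illustrative, and the convergence issue discussed above is exactly the slow convergence of this average for broad work distributions.

    ```python
    import numpy as np

    def free_energy_from_work(work_values, kT=1.0):
        """Jarzynski estimator: Delta F = -kT * ln < exp(-W / kT) >."""
        w = np.asarray(work_values, dtype=float)
        return -kT * np.log(np.mean(np.exp(-w / kT)))

    # Gaussian work values: exact result is <W> - var(W) / (2 kT) = 5 - 2 = 3.
    rng = np.random.default_rng(2)
    work = rng.normal(loc=5.0, scale=2.0, size=100_000)
    print(free_energy_from_work(work))
    ```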

  8. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  9. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  10. RHIC BPM system average orbit calculations

    SciTech Connect

    Michnoff,R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

    2009-05-04

    RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
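
    One simple way to see why the choice of averaging window matters is sketched below: averaging over a whole number of periods of the dominant ~10 Hz perturbation cancels the oscillation, whereas an arbitrary window does not. This is only an illustration of the idea, not the RHIC front-end code, and the revolution frequency and names are placeholders.

    ```python
    import numpy as np

    def average_orbit(positions, rev_freq_hz=78_000.0, perturbation_hz=10.0):
        """Average turn-by-turn BPM positions over a whole number of periods of
        the dominant orbit perturbation so that the oscillation averages out."""
        turns_per_period = int(round(rev_freq_hz / perturbation_hz))
        n_periods = max(1, len(positions) // turns_per_period)
        used = np.asarray(positions[:n_periods * turns_per_period], dtype=float)
        return used.mean()
    ```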

  11. This Kinetic, Bioavailability, and Metabolism Study of RRR-α-Tocopherol in Healthy Adults Suggests Lower Intake Requirements than Previous Estimates

    PubMed Central

    Novotny, Janet A.; Fadel, James G.; Holstege, Dirk M.; Furr, Harold C.; Clifford, Andrew J.

    2012-01-01

    Kinetic models enable nutrient needs and kinetic behaviors to be quantified and provide mechanistic insights into metabolism. Therefore, we modeled and quantified the kinetics, bioavailability, and metabolism of RRR-α-tocopherol in 12 healthy adults. Six men and 6 women, aged 27 ± 6 y, each ingested 1.81 nmol of [5−14CH3]-(2R, 4′R, 8′R)-α-tocopherol; each dose had 3.70 kBq of 14C. Complete collections of urine and feces were made over the first 21 d from dosing. Serial blood samples were drawn over the first 70 d from dosing. All specimens were analyzed for RRR-α-tocopherol. Specimens were also analyzed for 14C using accelerator MS. From these data, we modeled and quantified the kinetics of RRR-α-tocopherol in vivo in humans. The model had 11 compartments, 3 delay compartments, and reservoirs for urine and feces. Bioavailability of RRR-α-tocopherol was 81 ± 1%. The model estimated residence time and half-life of the slowest turning-over compartment of α-tocopherol (adipose tissue) at 499 ± 702 d and 184 ± 48 d, respectively. The total body store of RRR-α-tocopherol was 25,900 ± 6,220 μmol (11 ± 3 g) and we calculated the adipose tissue level to be 1.53 μmol/g (657 μg/g). We found that a daily intake of 9.2 μmol (4 mg) of RRR-α-tocopherol maintained plasma RRR-α-tocopherol concentrations at 23 μmol/L. These findings suggest that the dietary requirement for vitamin E may be less than that currently recommended and these results will be important for future updates of intake recommendations. PMID:23077194

  12. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

    Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
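
    A minimal sketch of the procedure is given below, using ridge regression purely as a stand-in for "model estimation with a tuning parameter": each fold picks its own best penalty, and the K per-fold coefficient vectors are averaged into the final model. The data, penalties, and names are illustrative and this is not the authors' implementation.

    ```python
    import numpy as np

    def kfold_averaged_ridge(X, y, alphas, k=5, seed=0):
        """On each of K folds, choose the ridge penalty with the lowest hold-out
        error, keep that fold's fitted coefficients, and average the K vectors."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(y)), k)
        coefs = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            best_err, best_coef = np.inf, None
            for alpha in alphas:
                coef = np.linalg.solve(
                    X[train].T @ X[train] + alpha * np.eye(X.shape[1]),
                    X[train].T @ y[train])
                err = np.mean((X[test] @ coef - y[test]) ** 2)
                if err < best_err:
                    best_err, best_coef = err, coef
            coefs.append(best_coef)
        return np.mean(coefs, axis=0)

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
    print(kfold_averaged_ridge(X, y, alphas=[0.01, 0.1, 1.0, 10.0]))
    ```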

  13. The use of difference spectra with a filtered rolling average background in mobile gamma spectrometry measurements

    NASA Astrophysics Data System (ADS)

    Cresswell, A. J.; Sanderson, D. C. W.

    2009-08-01

    The use of difference spectra, with a filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve signal to background, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure utilising more intelligent filters and spatial averaging of the background are identified.
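
    The core of the method, subtracting a rolling-average background spectrum from each newly acquired spectrum, is sketched below. Filtering anomalous spectra out of the background, and any spatial averaging, are omitted, and the window length and names are illustrative assumptions.

    ```python
    import numpy as np

    def difference_spectra(spectra, window=20):
        """Subtract from each spectrum the average of the preceding `window`
        spectra, so anomalies stand out against the slowly varying background."""
        spectra = np.asarray(spectra, dtype=float)   # shape: (n_records, n_channels)
        diffs = np.full_like(spectra, np.nan)
        for i in range(window, len(spectra)):
            background = spectra[i - window:i].mean(axis=0)
            diffs[i] = spectra[i] - background
        return diffs
    ```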

  14. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  15. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model given a specific case. Therefore, multiple plausible ANN models generally result for a given study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchy framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on the public health. Management of this anomaly requires estimation of fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in

  16. Average lifespan of radioelectronic equipment with allowance for resource limitations

    NASA Astrophysics Data System (ADS)

    Davydov, A. N.

    2011-12-01

    One of the reliability parameters of radioelectronic equipment is its average life span. The number of incidents during the operation of different items that make up the component base of radioelectronic equipment follows an exponential distribution. In general, the average life span for an exponential distribution is T mean = 1/λ, where λ is the rate of base incidents in a component per hour. This estimate is valid when considering the life span of radioelectronic equipment from zero to infinity. In reality, component base items and, correspondingly, radioelectronic equipment have resource limitations caused by the properties of their composing materials and manufacturing technique. The average life span of radioelectronic equipment will be different from the ideal life span of the equipment. This paper is aimed at calculating the average life span of radioelectronic equipment with allowance for resource limitations of constituent electronic component base items.
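
    One simple way to fold a hard resource limit into the exponential model is to take the expected value of the lifetime truncated at the limit L, E[min(T, L)] = (1 - exp(-λL)) / λ, which reduces to the ideal 1/λ as L grows. This is only an illustrative reading of the problem statement, not necessarily the formula derived in the paper.

    ```python
    import math

    def mean_lifespan_with_limit(failure_rate_per_hour, resource_limit_hours):
        """Mean lifespan for an exponential failure model when the equipment
        cannot outlive its resource limit L:  E[min(T, L)] = (1 - e^(-lam*L)) / lam."""
        lam = failure_rate_per_hour
        return (1.0 - math.exp(-lam * resource_limit_hours)) / lam

    # Ideal mean life 1/lam = 100,000 h; a 50,000 h resource limit lowers it.
    print(mean_lifespan_with_limit(1e-5, 50_000))   # ~39,350 h
    ```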

  17. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  18. Averaging procedures for flow within vegetation canopies

    NASA Astrophysics Data System (ADS)

    Raupach, M. R.; Shaw, R. H.

    1982-01-01

    Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.

  19. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  20. Optimizing Average Precision Using Weakly Supervised Data.

    PubMed

    Behl, Aseem; Mohapatra, Pritish; Jawahar, C V; Kumar, M Pawan

    2015-12-01

    Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (ap). Yet it is common practice to employ the support vector machine (svm) classifier, which optimizes a surrogate 0-1 loss. The popularity of svm can be attributed to its empirical performance. Specifically, in fully supervised settings, svm tends to provide similar accuracy to ap-svm, which directly optimizes an ap-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent ap-svm that minimizes a carefully designed upper bound on the ap-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection. PMID:26539857

  1. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
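
    Whatever route is used to obtain the average force along the coordinate, the final step is the same: the free energy profile follows by integrating the negative of the mean force. The sketch below shows only that integration step on a grid of coordinate values; the sign convention F_xi = -dA/dxi and the names are assumptions, and the paper's instantaneous-force formula itself is not reproduced.

    ```python
    import numpy as np

    def free_energy_profile(xi_grid, mean_force):
        """Trapezoidal integration of A(xi) = -integral of <F_xi> d xi, with A(xi_0) = 0."""
        xi = np.asarray(xi_grid, dtype=float)
        f = np.asarray(mean_force, dtype=float)
        increments = -0.5 * (f[1:] + f[:-1]) * np.diff(xi)
        return np.concatenate(([0.0], np.cumsum(increments)))

    # Harmonic example: <F> = -k*xi recovers A(xi) - A(-1) = 0.5*k*(xi**2 - 1).
    xi = np.linspace(-1.0, 1.0, 201)
    print(free_energy_profile(xi, -5.0 * xi).min())   # ~ -2.5 at xi = 0
    ```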

  2. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
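
    The elemental-ratio calculation of ZC mentioned above can be written in a few lines, assuming the usual fixed oxidation numbers for the other elements (H = +1, N = -3, O = -2, S = -2) so that all oxidation states sum to the net charge; the function name and example are illustrative.

    ```python
    def carbon_oxidation_state(c, h, n, o, s, charge=0):
        """Average oxidation state of carbon, ZC, for a formula C_c H_h N_n O_o S_s
        with net charge z, from  c*ZC + h*(+1) + n*(-3) + o*(-2) + s*(-2) = z."""
        return (charge - h + 3 * n + 2 * o + 2 * s) / c

    print(carbon_oxidation_state(3, 7, 1, 2, 0))   # alanine, C3H7NO2 -> ZC = 0.0
    ```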

  3. Estimation of Standardized Hospital Costs from Medicare Claims That Reflect Resource Requirements for Care: Impact for Cohort Studies Linked to Medicare Claims

    PubMed Central

    Schousboe, John T; Paudel, Misti L; Taylor, Brent C; Mau, Lih-Wen; Virnig, Beth A; Ensrud, Kristine E; Dowd, Bryan E

    2014-01-01

    Objective To compare cost estimates for hospital stays calculated using diagnosis-related group (DRG) weights to actual Medicare payments. Data Sources/Study Setting Medicare MedPAR files and DRG tables linked to participant data from the Study of Osteoporotic Fractures (SOF) from 1992 through 2010. Participants were women age 65 and older recruited in three metropolitan and one rural area of the United States. Study Design Costs were estimated using DRG payment weights for 1,397 hospital stays for 795 SOF participants for 1 year following a hip fracture. Medicare cost estimates included Medicare and secondary insurer payments, and copay and deductible amounts. Principal Findings The mean (SD) of inpatient DRG-based cost estimates per person-year were $16,268 ($10,058) compared with $19,937 ($15,531) for MedPAR payments. The correlation between DRG-based estimates and MedPAR payments was 0.71, and 51 percent of hospital stays were in different quintiles when costs were calculated based on DRG weights compared with MedPAR payments. Conclusions DRG-based cost estimates of hospital stays differ significantly from Medicare payments, which are adjusted by Medicare for facility and local geographic characteristics. DRG-based cost estimates may be preferable for analyses when facility and local geographic variation could bias assessment of associations between patient characteristics and costs. PMID:24461126

  4. Fatigue estimation using voice analysis.

    PubMed

    Greeley, Harold P; Berg, Joel; Friets, Eric; Wilson, John; Greenough, Glen; Picone, Joseph; Whitmore, Jeffrey; Nesthus, Thomas

    2007-08-01

    In the present article, we present a means to remotely and transparently estimate an individual's level of fatigue by quantifying changes in his or her voice characteristics. Using voice analysis to estimate fatigue differs from established cognitive measures in a number of ways: (1) speaking is a natural activity requiring no initial training or learning curve, (2) voice recording is an unobtrusive operation allowing the speakers to go about their normal work activities, (3) using telecommunication infrastructure (radio, telephone, etc.), a diffuse set of remote populations can be monitored at a central location, and (4) often, previously recorded voice data are available for post hoc analysis. By quantifying changes in the mathematical coefficients that describe the human speech production process, we were able to demonstrate that for speech sounds requiring a large average air flow, a speaker's voice changes in synchrony with both direct measures of fatigue and with changes predicted by the length of time awake. PMID:17958175

  5. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.

  6. Optimum orientation versus orientation averaging description of cluster radioactivity

    NASA Astrophysics Data System (ADS)

    Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.

    2016-07-01

    While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of 232,233,234U and 236,238Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the cluster preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases with the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the preformation probability of α (S_α^ave) obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on S_α^opt that was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.

  7. Comparison of total energy expenditure between the farming season and off farming season and accuracy assessment of estimated energy requirement prediction equation of Korean farmers

    PubMed Central

    Yeon, Seo-Eun; Lee, Sun-Hee; Choe, Jeong-Sook

    2015-01-01

    BACKGROUND/OBJECTIVES The purposes of this study were to compare total energy expenditure (including PAL and RMR) of Korean farmers between the farming season and off farming season and to assess the accuracy of the estimated energy requirement (EER) prediction equation reported in the KDRIs. SUBJECTS/METHODS Subjects were 72 Korean farmers (males 23, females 49) aged 30-64 years. Total energy expenditure was calculated by multiplying measured RMR by PAL. EER was calculated by using the prediction equation suggested in the 2010 KDRIs. RESULTS The physical activity level (PAL) was significantly higher (P < 0.05) in the farming season (male 1.77 ± 0.22, female 1.69 ± 0.24) than the off farming season (male 1.53 ± 0.32, female 1.52 ± 0.19). In contrast, resting metabolic rate was significantly higher (P < 0.05) in the off farming season (male 1,890 ± 233 kcal/day, female 1,446 ± 140 kcal/day) compared to the farming season (male 1,727 ± 163 kcal/day, female 1,356 ± 164 kcal/day). TEE (2,304 ± 497 kcal/day) of females was significantly higher in the farming season than that (2,183 ± 389 kcal/day) of the off farming season, but in males, there was no significant difference between the two seasons in TEE. On the other hand, EER of males and females (2,825 ± 354 kcal/day and 2,115 ± 293 kcal/day) in the farming season was significantly higher (P < 0.05) than those (2,562 ± 339 kcal/day and 1,994 ± 224 kcal/day) of the off farming season. CONCLUSIONS This study indicates that there is a significant difference in PAL and TEE of farmers between the farming and off farming seasons. In addition, the EER prediction equation proposed by the 2010 KDRIs underestimated TEE; thus, the EER prediction equation for farmers should be reviewed. PMID:25671071

  8. Estimation of the standardized ileal digestible valine to lysine ratio required for 25- to 120-kilogram pigs fed low crude protein diets supplemented with crystalline amino acids.

    PubMed

    Liu, X T; Ma, W F; Zeng, X F; Xie, C Y; Thacker, P A; Htoo, J K; Qiao, S Y

    2015-10-01

    .68 using a linear broken-line model and 0.72 using a quadratic model. Carcass traits and muscle quality were not influenced by SID Val:Lys ratio. In conclusion, the dietary SID Val:Lys ratios required for 26- to 46-, 49- to 70-, 71- to 92-, and 94- to 119-kg pigs were estimated to be 0.62, 0.66, 0.67, and 0.68, respectively, using a linear broken-line model and 0.71, 0.72, 0.73, and 0.72, respectively, using a quadratic model. PMID:26523569

  9. The ground-state average structure of methyl isocyanide

    NASA Astrophysics Data System (ADS)

    Mackenzie, M. W.; Duncan, J. L.

    The use of recently determined highly precise inertial data for various isotopic modifications of methyl isocyanide has enabled the ground-state average, or rz, structure to be determined to within very narrow limits. Harmonic corrections to ground-state rotational constants have been calculated using a high-quality, experimentally determined harmonic force field. The derived zero-point inertial constants are sufficiently accurate to enable changes in the CH bond length and NCH bond angle on deuteration to be determined. The present rz structure determination is believed to be a physically realistic estimate of the ground-state average geometry of methyl isocyanide.

  10. The ground-state average structure of methyl isocyanide

    NASA Astrophysics Data System (ADS)

    Mackenzie, M. W.; Duncan, J. L.

    1982-11-01

    The use of recently determined highly precise inertial data for various isotopic modifications of methyl isocyanide has enabled the ground-state average, or rz, structure to be determined to within very narrow limits. Harmonic corrections to ground-state rotational constants have been calculated using a high-quality, experimentally determined harmonic force field. The derived zero-point inertial constants are sufficiently accurate to enable changes in the CH bond length and NCH bond angle on deuteration to be determined. The present rz structure determination is believed to be a physically realistic estimate of the ground-state average geometry of methyl isocyanide.

  11. Characterizing average permeability in oil and gas formations

    SciTech Connect

    Rollins, J.B. ); Holditch, S.A.; Lee, W.J. )

    1992-03-01

    This paper reports that permeability in a formation frequently follows a unimodal probability distribution. In many formations, particularly sedimentary ones, the permeability distribution is similar to the log-normal distribution. Theoretical considerations, field cases, and a reservoir simulation example show that the median, rather than the arithmetic mean, is the appropriate measure of central tendency or average value of the permeability distribution in a formation. Use of the correct estimate of average permeability is of particular importance in the classification of tight gas formations under statutes in the 1978 Natural Gas Policy Act (NGPA).
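
    For a log-normal permeability field the median coincides with the geometric mean, so the recommended "average permeability" can be computed as the exponential of the mean log-permeability. The snippet below contrasts it with the arithmetic mean on synthetic data; all values and names are illustrative.

    ```python
    import numpy as np

    def average_permeability(perms):
        """Return (geometric mean ~ median, arithmetic mean) of permeabilities."""
        perms = np.asarray(perms, dtype=float)
        return float(np.exp(np.mean(np.log(perms)))), float(perms.mean())

    rng = np.random.default_rng(3)
    k = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)   # synthetic permeabilities
    print(average_permeability(k))   # geometric mean ~1, arithmetic mean ~3
    ```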

  12. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  13. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging.

  14. Neutron resonance averaging with filtered beams

    SciTech Connect

    Chrien, R.E.

    1985-01-01

    Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.

  15. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  16. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.

  17. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... such services in compliance with its geographic rate averaging and rate integration obligations... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED)...

  18. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancements in the molecular Rayleigh-scattering-based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
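
    The two averages can be reproduced directly from simultaneous density and velocity samples: the Reynolds average is the plain mean of the velocity, while the Favre average is the density-weighted mean, <rho*u>/<rho>. A minimal sketch with synthetic, weakly correlated fluctuations (all numbers are assumptions, not measurements from the jet experiments):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        u = 400.0 + 40.0 * rng.standard_normal(n)                    # axial velocity, m/s
        rho = 1.2 * (1.0 + 0.05 * (u - u.mean()) / u.std()           # density weakly
                     + 0.02 * rng.standard_normal(n))                # correlated with u

        u_reynolds = u.mean()                        # Reynolds (unweighted) average
        u_favre = (rho * u).mean() / rho.mean()      # Favre (density-weighted) average
        print(f"Reynolds average: {u_reynolds:.2f} m/s")
        print(f"Favre average:    {u_favre:.2f} m/s")
        print(f"difference:       {u_favre - u_reynolds:.3f} m/s")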

  19. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
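
    A toy sketch of the two averaging operations, reduced to a single pair of fields on a one-dimensional grid. This only illustrates the time-then-volume averaging and the resulting coupling correlation; it is not the DMA solver itself, and the random fields below merely stand in for the DNS variables:

        import numpy as np

        rng = np.random.default_rng(2)
        nt, nx, block = 1000, 256, 4                  # time steps, fine cells, coarsening factor
        u = rng.standard_normal((nt, nx))             # stand-ins for fine-scale fields
        v = rng.standard_normal((nt, nx))

        # Step 1: running time averages accumulated during the fine-scale computation.
        u_t, v_t, uv_t = u.mean(axis=0), v.mean(axis=0), (u * v).mean(axis=0)

        # Step 2: volume (block) averaging onto the next coarser grid.
        def volume_average(f, block):
            return f.reshape(-1, block).mean(axis=1)

        u_c = volume_average(u_t, block)
        v_c = volume_average(v_t, block)
        uv_c = volume_average(uv_t, block)

        # The averaging generates a correlation that is passed to the coarse
        # computation as a source term (the scale-coupling term).
        coupling = uv_c - u_c * v_c
        print(coupling.shape)                         # (64,)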

  20. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that will reduce the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

  1. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  2. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959...

  3. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
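
    A rough sketch of the two threshold estimates described above, on synthetic data (the quantile levels, the weekly seasonality, and the SARIMA orders are assumptions made for illustration; the report's actual model specification may differ):

        import numpy as np
        from scipy.stats import genextreme
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(3)
        days = 365
        daily_min = 50 + rng.gumbel(loc=0.0, scale=5.0, size=days)                 # ms, daily minima
        daily_avg = 120 + 10 * np.sin(2 * np.pi * np.arange(days) / 7) \
                    + rng.normal(0.0, 5.0, days)                                   # ms, daily averages

        # Lower threshold: a low quantile of the GEV fitted to the daily minima.
        c, loc, scale = genextreme.fit(daily_min)
        lower = genextreme.ppf(0.01, c, loc=loc, scale=scale)

        # Upper threshold: upper bound of the one-step forecast interval from a
        # weekly-seasonal ARIMA of the daily average response time.
        res = SARIMAX(daily_avg, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
        upper = res.get_forecast(steps=1).conf_int(alpha=0.05)[0, 1]

        print(f"lower threshold ~ {lower:.1f} ms, upper threshold ~ {upper:.1f} ms")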

  4. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  5. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  6. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  7. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...

  8. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  9. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  10. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  11. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  12. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
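
    A hedged numerical illustration of the distinction (the lossless-slab transfer-matrix convention and all parameter values below are assumptions made for this sketch, not taken from the article): the average of the transmission probability itself is noticeably larger than the "typical" value obtained by exponentiating the average of its logarithm.

        import numpy as np

        def stack_transmission(n_slabs, r, rng):
            """Transmission probability of n_slabs identical lossless slabs
            separated by gaps of random width (modelled as random phases)."""
            t = np.sqrt(1.0 - r**2)
            m_slab = np.array([[1/t, r/t], [r/t, 1/t]], dtype=complex)   # one slab
            m = np.eye(2, dtype=complex)
            for _ in range(n_slabs):
                phase = rng.uniform(0.0, 2.0 * np.pi)
                m_gap = np.diag([np.exp(1j * phase), np.exp(-1j * phase)])
                m = m_slab @ m_gap @ m
            return 1.0 / np.abs(m[0, 0])**2

        rng = np.random.default_rng(4)
        T = np.array([stack_transmission(20, r=0.3, rng=rng) for _ in range(5000)])
        print("average of T        :", T.mean())
        print("exp(average of ln T):", np.exp(np.log(T).mean()))   # noticeably smaller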

  13. SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...

  14. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements, is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft-mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
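
    The core of the angle-domain averaging step can be sketched in a few lines (a simplified illustration: the shaft angle here is given analytically, whereas in the paper it is estimated from the stator current and voltage measurements, and no defect model is included):

        import numpy as np

        def synchronous_average(signal, shaft_angle, samples_per_rev=256):
            """Resample a signal onto a uniform shaft-angle grid and average it
            revolution by revolution (classic synchronous signal average)."""
            revs = shaft_angle / (2.0 * np.pi)            # cumulative revolutions
            n_revs = int(np.floor(revs[-1]))
            grid = np.arange(n_revs * samples_per_rev) / samples_per_rev
            resampled = np.interp(grid, revs, signal)     # time domain -> angle domain
            return resampled.reshape(n_revs, samples_per_rev).mean(axis=0)

        # Synthetic example: an 18-per-revolution gear-mesh component buried in noise.
        fs, f_shaft, n = 10_000, 25.0, 200_000
        t = np.arange(n) / fs
        angle = 2.0 * np.pi * f_shaft * t                 # stand-in for the estimated angle
        rng = np.random.default_rng(5)
        x = np.sin(18 * angle) + 0.5 * rng.standard_normal(n)
        avg = synchronous_average(x, angle)
        print(avg.shape)                                  # one averaged revolution: (256,)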

  15. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at that zero is zero, the classical averaging theory does not provide information about the periodic solution associated with a non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.

  16. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H₀, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  17. Spatial and frequency averaging techniques for a polarimetric scatterometer system

    SciTech Connect

    Monakov, A.A.; Stjernman, A.S.; Nystroem, A.K.; Vivekanandan, J.

    1994-01-01

    An accurate estimation of backscattering coefficients for various types of rough surfaces is the main theme of remote sensing. Radar scattering signals from distributed targets exhibit fading due to interference associated with coherent scattering from individual scatterers within the resolution volume. Uncertainty in radar measurements which arises as a result of fading is reduced by averaging independent samples. Independent samples are obtained by collecting the radar returns from nonoverlapping footprints (spatial averaging) and/or nonoverlapping frequencies (frequency agility techniques). An improved formulation of fading characteristics for the spatial averaging and frequency agility technique is derived by taking into account the rough surface scattering process. Kirchhoff's approximation is used to describe rough surface scattering. Expressions for fading decorrelation distance and decorrelation bandwidth are derived. Rough surface scattering measurements are performed between L and X bands. Measured frequency and spatial correlation coefficients show good agreement with theoretical results.

  18. The average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available analytical data from twelve locations on the moon are used to estimate the average amounts of the principal chemical elements (O, Na, Mg, Al, Si, Ca, Ti, and Fe) in the mare, the terra, and the average lunar surface regolith. These chemical elements comprise about 99% of the atoms on the lunar surface. The relatively small variability in the amounts of these elements at different mare (or terra) sites, and the evidence from the orbital measurements of Apollo 15 and 16, suggest that the lunar surface is much more homogeneous than the surface of the earth. The average chemical composition of the lunar surface may now be known as well as, if not better than, that of the solid part of the earth's surface.

  19. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    ERIC Educational Resources Information Center

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  20. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178

  1. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  2. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging

    PubMed Central

    Brezis, Noam; Bronfman, Zohar Z.; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  3. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  4. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  5. Retrieval of cloud fraction and height anomalies and their trend from temporally and spatially averaged infrared spectra observed from space

    NASA Astrophysics Data System (ADS)

    Kato, S.; Rose, F. G.; Liu, X.; Wielicki, B. A.; Mlynczak, M. G.

    2013-12-01

    Understanding how clouds and atmospheric properties change with time under radiative forcing is necessary to understand feedback. Generally, global clouds and atmospheric properties are retrieved from satellite-based instruments. Subsequently, retrieved values from an instrument's field-of-view are averaged and the time rate of change of cloud or atmospheric properties can be inferred from averaged properties. This is simple in concept but identifying artifacts of the retrieval is difficult in practice. An alternative way to derive a trend of cloud and atmospheric properties is tying their property change directly to the observed radiance change. This average-then-retrieve approach directly utilizes instrument stability but requires separating cloud and atmospheric property changes contributing to the highly spatially and temporally averaged observed radiance change. In this presentation, we demonstrate the average-then-retrieve approach by simulating the retrieval of cloud fraction and height anomalies from highly averaged longwave spectra. We use 28 years of reanalysis (Modern-Era Retrospective Analysis for Research and Applications, MERRA) for the simulation and retrieve annual 10° zonal cloud fraction and height anomalies, as well as temperature and water vapor amount anomalies. The error in retrieved anomalies is estimated based on the method discussed in Kato et al. (2011). The uncertainty in the trend estimated from retrieved anomalies is also discussed. Reference: Kato, S., B. A. Wielicki, F. G. Rose, X. Liu, P. C. Taylor, D. P. Kratz, M. G. Mlynczak, D. F. Young, N. Phojanamongkolkij, S. Sun-Mack, W. F. Miller, Y. Chen, 2011b, Detection of atmospheric changes in spatially and temporally averaged infrared spectra observed from space, J Climate, 24, 6392-6407, doi: 10.1175/JCLI-D-10-05005.1.

  6. Cosmic Inhomogeneities and Averaged Cosmological Dynamics

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-10-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.

  7. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  8. Average Shape of Transport-Limited Aggregates

    NASA Astrophysics Data System (ADS)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  9. The weight of nations: an estimation of adult human biomass

    PubMed Central

    2012-01-01

    Background: The energy requirement of species at each trophic level in an ecological pyramid is a function of the number of organisms and their average mass. Regarding human populations, although considerable attention is given to estimating the number of people, much less is given to estimating average mass, despite evidence that average body mass is increasing. We estimate global human biomass, its distribution by region and the proportion of biomass due to overweight and obesity. Methods: For each country we used data on body mass index (BMI) and height distribution to estimate average adult body mass. We calculated total biomass as the product of population size and average body mass. We estimated the percentage of the population that is overweight (BMI > 25) and obese (BMI > 30) and the biomass due to overweight and obesity. Results: In 2005, global adult human biomass was approximately 287 million tonnes, of which 15 million tonnes were due to overweight (BMI > 25), a mass equivalent to that of 242 million people of average body mass (5% of global human biomass). Biomass due to obesity was 3.5 million tonnes, the mass equivalent of 56 million people of average body mass (1.2% of human biomass). North America has 6% of the world population but 34% of biomass due to obesity. Asia has 61% of the world population but 13% of biomass due to obesity. One tonne of human biomass corresponds to approximately 12 adults in North America and 17 adults in Asia. If all countries had the BMI distribution of the USA, the increase in human biomass of 58 million tonnes would be equivalent in mass to an extra 935 million people of average body mass, and have energy requirements equivalent to that of 473 million adults. Conclusions: Increasing population fatness could have the same implications for world food energy demands as an extra half a billion people living on the earth. PMID:22709383
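
    The central arithmetic is simply biomass = population x average body mass, with the average mass approximated from mean BMI and mean height. A minimal sketch with hypothetical regional numbers (and ignoring the fact that the mean of BMI x height^2 is not exactly the product of the means):

        def average_body_mass(mean_bmi, mean_height_m):
            """Average adult mass implied by mean BMI and mean height (BMI = mass / height**2)."""
            return mean_bmi * mean_height_m ** 2          # kg

        adult_population = 34_000_000                     # hypothetical region, not from the paper
        mass_kg = average_body_mass(mean_bmi=27.0, mean_height_m=1.70)
        biomass_tonnes = adult_population * mass_kg / 1000.0

        print(f"average adult body mass ~ {mass_kg:.1f} kg")
        print(f"adult biomass ~ {biomass_tonnes / 1e6:.2f} million tonnes")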

  10. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  11. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  12. 40 CFR 91.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...

  13. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  14. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  15. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  16. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  17. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  18. Total-pressure averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  19. Stochastic Averaging of Duhem Hysteretic Systems

    NASA Astrophysics Data System (ADS)

    YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.

    2002-06-01

    The response of a Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Ito's stochastic differential equation for the total energy is derived and the Fokker-Planck-Kolmogorov equation associated with the averaged Ito's equation is solved to yield the stationary probability density of the total energy, from which the statistics of the system response can be evaluated. It is observed that the numerical results obtained using the stochastic averaging method are in good agreement with those from digital simulation.

  20. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  1. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  2. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2000 Final and 2001 Preliminary Figures; Book Sales Statistics, 2001: AAP Preliminary Estimates; U.S. Book Exports and Imports: 2001; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    ERIC Educational Resources Information Center

    Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew

    2002-01-01

    Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)

  3. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  4. Heuristic approach to capillary pressures averaging

    SciTech Connect

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations for the wetting-fluid saturation in terms of porosity and permeability (with capillary pressure held constant). While the regression equations appear purely empirical, the J-curve method seems theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
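
    For reference, the J-curve approach rescales each sample's capillary pressure by the square root of its permeability-porosity ratio. A minimal sketch using the common oilfield-unit form of the Leverett J-function (the 0.21645 unit-conversion constant and all sample values are assumptions for illustration):

        import numpy as np

        def leverett_j(pc_psi, k_md, porosity, sigma_cos=26.0):
            """Leverett J-function in common oilfield units:
            Pc in psi, k in md, sigma*cos(theta) in dyne/cm."""
            return 0.21645 * pc_psi * np.sqrt(k_md / porosity) / sigma_cos

        # Two hypothetical core samples measured at the same wetting-fluid saturation:
        print(leverett_j(pc_psi=12.0, k_md=50.0, porosity=0.18))
        print(leverett_j(pc_psi=5.0, k_md=300.0, porosity=0.22))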

  5. Average Soil Water Retention Curves Measured by Neutron Radiography

    SciTech Connect

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
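
    The final fitting step can be sketched as follows (the drying-branch data points and starting values below are synthetic assumptions for illustration; the constrained form m = 1 - 1/n of the van Genuchten equation is used):

        import numpy as np
        from scipy.optimize import curve_fit

        def van_genuchten(h, alpha, n, s_r):
            """Relative saturation vs. suction head h (cm), with m = 1 - 1/n."""
            m = 1.0 - 1.0 / n
            return s_r + (1.0 - s_r) * (1.0 + (alpha * h) ** n) ** (-m)

        # Hypothetical averaged drying-branch data (suction head, relative saturation).
        h = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 80.0])
        s = np.array([0.99, 0.97, 0.90, 0.70, 0.45, 0.25, 0.15, 0.08])

        (alpha, n, s_r), _ = curve_fit(van_genuchten, h, s, p0=[0.03, 4.0, 0.05],
                                       bounds=([1e-4, 1.1, 0.0], [1.0, 10.0, 0.5]))
        print(f"alpha = {alpha:.4f} 1/cm, n = {n:.2f}, residual saturation = {s_r:.3f}")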

  6. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter expressed as a ±10 volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.

  7. The solar UV exposure time required for vitamin D3 synthesis in the human body estimated by numerical simulation and observation in Japan

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideaki; Miyauchi, Masaatsu; Hirai, Chizuko

    2013-04-01

    After the discovery of the Antarctic ozone hole, the negative effects of exposing the human body to harmful solar ultraviolet (UV) radiation became widely known. However, exposure to UV radiation also has a positive effect, i.e., vitamin D synthesis. Although the importance of solar UV radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been well determined. This study attempted to identify the time of solar exposure required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha, in Japan) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 2.3 min of solar exposure are required to produce 5.5 μg vitamin D3 per 600 cm2 of skin. This quantity of vitamin D represents the recommended intake for an adult set by the Ministry of Health, Labour and Welfare and the 2010 Japanese Dietary Reference Intakes (DRIs). In contrast, it took 49.5 min to produce the same amount of vitamin D3 at Sapporo, in the northern part of Japan, in December at noon under a cloudless sky. The necessary exposure time varied considerably with the time of day. For Tsukuba at noon in December, 14.5 min were required, but 68.7 min were required at 09:00 and 175.8 min at 15:00 under the same meteorological conditions. Naha receives high levels of UV radiation allowing vitamin D3 synthesis almost throughout the year. Based on these results, we are further developing an index to quantify the UV exposure time necessary to produce the required amount of vitamin D3 from UV radiation data.

  8. Greenhouse Gas Emissions and the Australian Diet—Comparing Dietary Recommendations with Average Intakes

    PubMed Central

    Hendrie, Gilly A.; Ridoutt, Brad G.; Wiedmann, Thomas O.; Noakes, Manny

    2014-01-01

    Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient poor “non-core” foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe. PMID:24406846

  9. The CAIRN method: automated, reproducible calculation of catchment-averaged denudation rates from cosmogenic nuclide concentrations

    NASA Astrophysics Data System (ADS)

    Marius Mudd, Simon; Harel, Marie-Alice; Hurst, Martin D.; Grieve, Stuart W. D.; Marrero, Shasta M.

    2016-08-01

    We report a new program for calculating catchment-averaged denudation rates from cosmogenic nuclide concentrations. The method (Catchment-Averaged denudatIon Rates from cosmogenic Nuclides: CAIRN) bundles previously reported production scaling and topographic shielding algorithms. In addition, it calculates production and shielding on a pixel-by-pixel basis. We explore the effect of sampling frequency across both azimuth (Δθ) and altitude (Δϕ) angles for topographic shielding and show that in high relief terrain a relatively high sampling frequency is required, with a good balance achieved between accuracy and computational expense at Δθ = 8° and Δϕ = 5°. CAIRN includes both internal and external uncertainty analysis, and is packaged in freely available software in order to facilitate easily reproducible denudation rate estimates. CAIRN calculates denudation rates but also automates catchment averaging of shielding and production, and thus can be used to provide reproducible input parameters for the CRONUS family of online calculators.

  10. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it has been shown to have better detrending capability than the original DMA, removing higher-order polynomial trends. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
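
    The prefix-sum idea behind a linear-cost moving average can be sketched for the simplest (order-0, simple moving average) case; higher-order DMA uses analogous recurrences, which are not reproduced here. All numbers are illustrative:

        import numpy as np

        def dma_fluctuation(x, window):
            """RMS deviation of the integrated profile from its centered moving
            average (order-0 DMA). The moving average is obtained from prefix
            sums so the cost stays linear in the data length. window must be odd."""
            y = np.cumsum(x - np.mean(x))                 # integrated profile
            c = np.cumsum(np.insert(y, 0, 0.0))           # prefix sums of the profile
            half = window // 2
            idx = np.arange(half, len(y) - half)
            trend = (c[idx + half + 1] - c[idx - half]) / window
            return np.sqrt(np.mean((y[idx] - trend) ** 2))

        x = np.random.default_rng(6).standard_normal(2 ** 14)   # white-noise test signal
        for w in (9, 33, 129, 513):
            print(w, dma_fluctuation(x, w))   # grows roughly as w**0.5 for white noise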

  11. Response of rats to 50% of the estimated dietary magnesium requirement changes with length of deprivation and different dietary fat sources

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Magnesium deprivation increased the inflammatory neuropeptide substance P and the inflammatory cytokines TNF-α and IL-1β in the bone of rats; the effects of deprivation were more marked at 6 months than at 3 months in rats fed 50% of the magnesium requirement (Rude et al., Osteoporos Int. 17:1022, 2006). D...

  12. Explicit cosmological coarse graining via spatial averaging

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

    The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in Cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations, in the standard manner, for instance by computing the luminosity distance versus red-shift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale-factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.

  13. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  14. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  15. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  16. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  17. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  18. Air/superfund national technical guidance study series. Air emissions from area sources: Estimating soil and soil-gas sample number requirements. Final report

    SciTech Connect

    Westbrook, W.

    1993-03-01

    The document provides guidance regarding the necessary number of soil gas or soil samples needed to estimate air emissions from area sources. The manual relies heavily on statistical methods discussed in Appendix C of Volume II of the Air/Superfund National Technical Guidance Study Series (EPA 1990) and Chapter 9 of SW-846 (EPA 1986). The techniques in the manual are based on recognizing the inhomogeneity of an area, by observation or screening samples, before samples are taken. Each of the identified zones is then sampled, using random sampling techniques, and statistics are calculated separately for each zone before the statistics are combined to provide an estimate for the entire area. The statistical techniques presented may also be used to analyze other types of data and provide measures such as the mean, variance, and standard deviation. The methods presented in the manual are based on small-sample methods. Application of the methods to data that are appropriately analyzed by large-sample methods, or to data that are not normally distributed, will give erroneous results.
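
    A generic small-sample iteration of this kind (a textbook rule, not the manual's exact procedure) re-evaluates the Student t quantile as the sample-size estimate changes, for example to keep the confidence-interval half-width on the zone mean within a target tolerance:

        import math
        from scipy import stats

        def required_samples(pilot_stdev, tolerance, alpha=0.20, n_start=4, max_iter=50):
            """Iterate n = (t * s / d)**2, re-evaluating the t quantile at the
            current n, until the estimate stabilises."""
            n = n_start
            for _ in range(max_iter):
                t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
                n_new = max(2, math.ceil((t * pilot_stdev / tolerance) ** 2))
                if n_new == n:
                    break
                n = n_new
            return n

        # Pilot standard deviation 2.5, target half-width 1.0 (illustrative values):
        print(required_samples(pilot_stdev=2.5, tolerance=1.0))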

  19. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  20. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
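
    To make the decade-scale moving-average relationship in these two records concrete, the following sketch forms a misery index as inflation plus unemployment, takes a trailing 11-year moving average, and correlates it with a "literary misery" series. All series here are synthetic placeholders, not the study's data.

        # Illustrative sketch on synthetic data: misery index = inflation + unemployment,
        # smoothed by a trailing 11-year moving average, then correlated with a
        # placeholder "literary misery" series.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1930, 2001)
        inflation = 3 + 2 * rng.standard_normal(years.size)
        unemployment = 6 + 2 * rng.standard_normal(years.size)
        misery = inflation + unemployment

        window = 11                                   # decade-scale trailing window
        kernel = np.ones(window) / window
        misery_ma = np.convolve(misery, kernel, mode="valid")   # moving average

        # Synthetic "literary misery" that tracks smoothed economic misery (placeholder)
        literary = misery_ma + 0.5 * rng.standard_normal(misery_ma.size)

        r = np.corrcoef(misery_ma, literary)[0, 1]
        print(f"correlation over {misery_ma.size} aligned years: r = {r:.2f}")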

  1. Evaluating Methods for Constructing Average High-Density Electrode Positions

    PubMed Central

    Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.

    2014-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128 channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants) PMID:25234713

  2. Evaluating methods for constructing average high-density electrode positions.

    PubMed

    Richards, John E; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M C

    2015-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel "Geodesic Sensor Net" (GSN; EGI, Inc.), 38 participants with the 128 channel "Hydrocel Geodesic Sensor Net" (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants' original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713
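
    A minimal sketch of one way to average electrode positions across participants: rigidly register each participant's electrode set to a reference with the Kabsch (SVD) solution and then average corresponding points. This is an illustration under simplifying assumptions (rigid alignment, channels matched across participants); the study's point-set registration to an average MRI template may differ, and the variable names below are hypothetical.

        # Hedged sketch: rigid (Kabsch) alignment of each participant's electrode
        # coordinates to a reference set, followed by per-channel averaging.
        import numpy as np

        def kabsch_align(src, ref):
            """Rotate/translate src (N x 3) onto ref (N x 3), matched row-for-row."""
            src_c, ref_c = src - src.mean(0), ref - ref.mean(0)
            u, _, vt = np.linalg.svd(src_c.T @ ref_c)
            d = np.sign(np.linalg.det(vt.T @ u.T))          # avoid reflections
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            return (r @ src_c.T).T + ref.mean(0)

        def average_positions(subjects, ref):
            """Average aligned electrode locations over participants."""
            aligned = np.stack([kabsch_align(s, ref) for s in subjects])
            return aligned.mean(axis=0), aligned.std(axis=0) / np.sqrt(len(subjects))

        # Hypothetical usage: subjects is a list of (n_channels x 3) arrays in head space
        # mean_pos, stderr = average_positions(subjects, ref=subjects[0])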

  3. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  4. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
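
    The idea in this record (cross-over 'buy' signals combined with a dynamic threshold acting as a trailing stop) can be sketched roughly as follows. The stop rule here is a generic trailing-stop fraction chosen purely for illustration; the paper's specific dynamic threshold is not given in the abstract.

        # Rough long-only sketch: enter on a price/moving-average cross-over,
        # exit when price falls below a trailing threshold (a generic stand-in
        # for the paper's dynamic threshold).
        import numpy as np

        def ma_crossover_with_trailing_stop(prices, window=50, stop_frac=0.05):
            ma = np.convolve(prices, np.ones(window) / window, mode="valid")
            prices = prices[window - 1:]                     # align prices with the MA
            in_pos, was_above, threshold, signals = False, False, 0.0, []
            for p, m in zip(prices, ma):
                above = p > m
                if not in_pos and above and not was_above:   # price crosses above MA
                    in_pos, threshold = True, p * (1 - stop_frac)
                elif in_pos:
                    threshold = max(threshold, p * (1 - stop_frac))  # stop ratchets up
                    if p < threshold:                        # dynamic threshold exit
                        in_pos = False
                was_above = above
                signals.append(in_pos)
            return np.array(signals)                         # True while a position is held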

  5. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  6. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  7. Discrete Models of Fluids: Spatial Averaging, Closure, and Model Reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre; Cooper, Kevin

    2014-03-06

    The main question addressed in the paper is how to obtain closed form continuum equations governing spatially averaged dynamics of semi-discrete ODE models of fluid flow. In the presence of multiple small scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy balance equations of mass, momentum and energy. These equations are exact, but they do not form a continuum model in the true sense of the word because calculation of stress and heat flux requires solving the underlying ODE system. To produce continuum equations that can be simulated without resolving micro-scale dynamics, we developed a closure method based on the use of regularized deconvolutions. We mostly deal with non-linear averaging suitable for Lagrangian particle solvers, but consider Eulerian linear averaging where appropriate. The results of numerical experiments show good agreement between our closed form flux approximations and their exact counterparts.
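
    The spatial averaging step described here (turning discrete particle data into smooth continuum-like fields) can be illustrated with a simple kernel-weighted average. The deconvolution-based closure is beyond this sketch, and the kernel width and particle data below are hypothetical.

        # Illustrative kernel-weighted spatial averaging of Lagrangian particle data
        # into continuum-like density and momentum fields (1D, Gaussian kernel).
        # Only the averaging step is shown, not the paper's deconvolution closure.
        import numpy as np

        def spatial_average(x_particles, v_particles, m, x_grid, width):
            fields = []
            for xg in x_grid:
                w = np.exp(-0.5 * ((x_particles - xg) / width) ** 2)
                w /= width * np.sqrt(2 * np.pi)          # normalized Gaussian kernel
                rho = np.sum(m * w)                      # averaged mass density
                mom = np.sum(m * v_particles * w)        # averaged momentum density
                fields.append((rho, mom))
            return np.array(fields)

        # Hypothetical usage with random particles on [0, 1]
        rng = np.random.default_rng(1)
        x, v = rng.random(500), rng.standard_normal(500)
        grid = np.linspace(0, 1, 21)
        rho_mom = spatial_average(x, v, m=1.0 / 500, x_grid=grid, width=0.05)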

  8. Ensemble crowd perception: A viewpoint-invariant mechanism to represent average crowd identity

    PubMed Central

    Yamanashi Leib, Allison; Fischer, Jason; Liu, Yang; Qiu, Sang; Robertson, Lynn; Whitney, David

    2014-01-01

    Individuals can rapidly and precisely judge the average of a set of similar items, including both low-level (Ariely, 2001) and high-level objects (Haberman & Whitney, 2007). However, to date, it is unclear whether ensemble perception is based on viewpoint-invariant object representations. Here, we tested this question by presenting participants with crowds of sequentially presented faces. The number of faces in each crowd and the viewpoint of each face varied from trial to trial. This design required participants to integrate information from multiple viewpoints into one ensemble percept. Participants reported the mean identity of crowds (e.g., family resemblance) using an adjustable, forward-oriented test face. Our results showed that participants accurately perceived the mean crowd identity even when required to incorporate information across multiple face orientations. Control experiments showed that the precision of ensemble coding was not solely dependent on the length of time participants viewed the crowd. Moreover, control analyses demonstrated that observers did not simply sample a subset of faces in the crowd but rather integrated many faces into their estimates of average crowd identity. These results demonstrate that ensemble perception can operate at the highest levels of object recognition after 3-D viewpoint-invariant faces are represented. PMID:25074904

  9. Ensemble crowd perception: a viewpoint-invariant mechanism to represent average crowd identity.

    PubMed

    Yamanashi Leib, Allison; Fischer, Jason; Liu, Yang; Qiu, Sang; Robertson, Lynn; Whitney, David

    2014-01-01

    Individuals can rapidly and precisely judge the average of a set of similar items, including both low-level (Ariely, 2001) and high-level objects (Haberman & Whitney, 2007). However, to date, it is unclear whether ensemble perception is based on viewpoint-invariant object representations. Here, we tested this question by presenting participants with crowds of sequentially presented faces. The number of faces in each crowd and the viewpoint of each face varied from trial to trial. This design required participants to integrate information from multiple viewpoints into one ensemble percept. Participants reported the mean identity of crowds (e.g., family resemblance) using an adjustable, forward-oriented test face. Our results showed that participants accurately perceived the mean crowd identity even when required to incorporate information across multiple face orientations. Control experiments showed that the precision of ensemble coding was not solely dependent on the length of time participants viewed the crowd. Moreover, control analyses demonstrated that observers did not simply sample a subset of faces in the crowd but rather integrated many faces into their estimates of average crowd identity. These results demonstrate that ensemble perception can operate at the highest levels of object recognition after 3-D viewpoint-invariant faces are represented. PMID:25074904

  10. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    PubMed

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
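
    For readers unfamiliar with the classical estimator that 2SBMA extends, here is a bare-bones two-stage least squares (2SLS) sketch: regress the endogenous regressor on the instruments, then regress the outcome on the fitted values. The Bayesian model averaging over instrument and covariate sets is not shown, and the data are synthetic.

        # Minimal 2SLS sketch (the classical estimator that 2SBMA extends).
        # Stage 1: project the endogenous regressor onto the instruments.
        # Stage 2: regress the outcome on the fitted (instrumented) regressor.
        import numpy as np

        def two_stage_least_squares(y, x_endog, z):
            Z1 = np.column_stack([np.ones(len(y)), z])             # instruments + const
            beta1, *_ = np.linalg.lstsq(Z1, x_endog, rcond=None)   # first stage
            x_hat = Z1 @ beta1
            X2 = np.column_stack([np.ones(len(y)), x_hat])
            beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)         # second stage
            return beta2                                           # [intercept, effect]

        # Synthetic example with one instrument and an endogenous regressor
        rng = np.random.default_rng(2)
        z = rng.standard_normal(1000)
        u = rng.standard_normal(1000)                 # unobserved confounder
        x = 0.8 * z + u + 0.3 * rng.standard_normal(1000)
        y = 2.0 * x + u + rng.standard_normal(1000)   # true effect of x is 2.0
        print(two_stage_least_squares(y, x, z))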

  11. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    Breeding densities and migration periods of Common Snipe in Colorado were investigated in 1974-75. Sites studied were near Fort Collins and in North Park, both in north central Colorado; in the Yampa Valley in northwestern Colorado; and in the San Luis Valley in south central Colorado. Estimated densities of breeding snipe based on censuses conducted during May 1974 and 1975 were, by region: 1.3-1.7 snipe/ha near Fort Collins; 0.6 snipe/ha in North Park; 0.5-0.7 snipe/ha in the Yampa Valley; and 0.5 snipe/ha in the San Luis Valley. Overall mean densities were 0.6 and 0.7 snipe/ha in 1974 and 1975, respectively. On individual study sites, densities of snipe ranged from 0.2 to 2.1 snipe/ha. Areas with shallow, stable, discontinuous water levels, sparse, short vegetation, and soft organic soils had the highest densities. Twenty-eight nests were located, having a mean clutch size of 3.9 eggs. Estimated onset of incubation ranged from 2 May through 4 July. Most nests were initiated in May. Spring migration extended from late March through early May. Highest densities of snipe were recorded in all regions during l&23 April. Fall migration was underway by early September and was completed by mid-October, with highest densities occurring about the third week in September. High numbers of snipe noted in early August may have been early migrants or locally produced juveniles concentrating on favorable feeding areas.

  12. Averaging cross section data so we can fit it

    SciTech Connect

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE is a Hauser-Feshbach theory based nuclear reaction code and requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
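
    The smoothing step mentioned here (averaging a fluctuating cross section with a Lorentzian profile so it can be compared with smooth model calculations) amounts to a simple convolution. The energy grid, width, and data below are placeholders, not the 56Fe evaluation itself.

        # Hedged sketch: smooth a fluctuating cross section with a normalized
        # Lorentzian kernel so it can be compared with smooth model calculations.
        import numpy as np

        def lorentzian_smooth(energy, xs, gamma):
            """Convolve xs(E) with a Lorentzian of half-width gamma on the same grid."""
            smoothed = np.empty_like(xs)
            for i, e0 in enumerate(energy):
                kernel = (gamma / np.pi) / ((energy - e0) ** 2 + gamma ** 2)
                kernel /= np.trapz(kernel, energy)       # renormalize on a finite grid
                smoothed[i] = np.trapz(kernel * xs, energy)
            return smoothed

        # Placeholder fluctuating cross section above 500 keV
        E = np.linspace(0.5, 2.0, 2000)                  # MeV
        rng = np.random.default_rng(3)
        sigma = 1.0 + 0.3 * np.sin(40 * E) + 0.1 * rng.standard_normal(E.size)
        sigma_avg = lorentzian_smooth(E, sigma, gamma=0.05)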

  13. Space debris collision and production analytic estimates

    SciTech Connect

    Canavan, G.H.

    1996-08-01

    Analytic estimates of collision rates, fragment production rates, and average collision masses and numbers are in good agreement with numerical estimates for the principal quantities of interest.

  14. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  15. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  16. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
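
    As a toy illustration of the averaging step at the heart of subtomogram averaging: translationally align extracted subvolumes to a reference by FFT cross-correlation and average them voxel-wise, which raises the signal-to-noise ratio roughly with the square root of the number of particles. Real pipelines also handle rotational search, CTF correction, and the missing wedge, all of which this sketch ignores.

        # Toy sketch of the core of subtomogram averaging: translational alignment of
        # subvolumes to a reference by FFT cross-correlation, then voxel-wise averaging.
        # Rotational alignment, CTF correction, and the missing wedge are omitted.
        import numpy as np

        def align_and_average(subvolumes, reference):
            f_ref = np.fft.fftn(reference)
            aligned = []
            for vol in subvolumes:
                cc = np.fft.ifftn(f_ref * np.conj(np.fft.fftn(vol))).real
                shift = np.unravel_index(np.argmax(cc), cc.shape)   # best circular shift
                aligned.append(np.roll(vol, shift, axis=(0, 1, 2)))
            return np.mean(aligned, axis=0)     # noise drops roughly as 1/sqrt(N)

        # Hypothetical usage: subvolumes is a list of (n, n, n) arrays cut from tomograms
        # average = align_and_average(subvolumes, reference=subvolumes[0])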

  17. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  18. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. `Alice sighed wearily. ``I think you might do something better with the time'' she said, ``than waste it asking riddles with no answers'' (Alice in Wonderland, L. Carroll)

  19. Averaging models for linear piezostructural systems

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.

    2009-03-01

    In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.

  20. A Measure of the Average Intercorrelation

    ERIC Educational Resources Information Center

    Meyer, Edward P.

    1975-01-01

    Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation, and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of the degree of relationship among a group of variables. (Author)
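
    To make the distinction concrete, here is a small sketch contrasting the simple mean of the off-diagonal correlations with their root-mean-square, computed from the same correlation matrix. This is a generic illustration on synthetic data, not Kaiser's coefficient itself.

        # Illustrative contrast between the mean off-diagonal correlation and the
        # root-mean-square intercorrelation for the same correlation matrix.
        # (A generic illustration, not Kaiser's coefficient itself.)
        import numpy as np

        def off_diagonal(corr):
            mask = ~np.eye(corr.shape[0], dtype=bool)
            return corr[mask]

        def mean_and_rms_intercorrelation(data):
            """data: (n_observations x n_variables) array."""
            r = off_diagonal(np.corrcoef(data, rowvar=False))
            return r.mean(), np.sqrt(np.mean(r ** 2))

        rng = np.random.default_rng(4)
        common = rng.standard_normal((200, 1))
        data = common @ np.array([[1.0, -1.0, 0.5]]) + rng.standard_normal((200, 3))
        # With mixed-sign correlations the mean and the RMS can differ markedly.
        print(mean_and_rms_intercorrelation(data))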

  1. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  2. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  3. Reformulation of Ensemble Averages via Coordinate Mapping.

    PubMed

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263

  4. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  5. Average configuration of the induced venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  6. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  7. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  8. Orbit Averaging in Perturbed Planetary Rings

    NASA Astrophysics Data System (ADS)

    Stewart, Glen R.

    2015-11-01

    The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.

  9. The role of the harmonic vector average in motion integration

    PubMed Central

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show perceived direction and speed falls close to the IOC direction for Gabor arrays having a wide range of orientations but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716

  10. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show perceived direction and speed falls close to the IOC direction for Gabor arrays having a wide range of orientations but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716
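
    The harmonic vector average described in these two records is simple to state: invert each local velocity vector (v -> v/|v|^2), take the ordinary mean, and invert the result. The sketch below checks, on synthetic local normal velocities generated from an assumed global translation, that the HVA recovers the global velocity while the plain vector average underestimates the speed; the orientations and speed used are arbitrary illustrations.

        # Sketch of the harmonic vector average (HVA): invert each local velocity
        # (v -> v / |v|^2), average, and invert back.  For a symmetric (unbiased)
        # sample of contour normal velocities it recovers the global velocity,
        # whereas the plain vector average underestimates the speed.
        import numpy as np

        def harmonic_vector_average(v):
            inv = v / np.sum(v ** 2, axis=1, keepdims=True)   # invert each vector
            m = inv.mean(axis=0)
            return m / np.sum(m ** 2)                         # invert the mean

        V = np.array([3.0, 0.0])                              # assumed global velocity
        theta = np.deg2rad(np.linspace(-60, 60, 25))          # symmetric orientations
        normals = np.column_stack([np.cos(theta), np.sin(theta)])
        local = (normals @ V)[:, None] * normals              # normal components only

        print("vector average:", local.mean(axis=0))          # shorter than V
        print("harmonic vector average:", harmonic_vector_average(local))  # ~V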

  11. Individualization of transfer function in estimation of central aortic pressure from the peripheral pulse is not required in patients at rest.

    PubMed

    Westerhof, Berend E; Guelen, Ilja; Stok, Wim J; Lasance, Han A J; Ascoop, Carl A P L; Wesseling, Karel H; Westerhof, Nico; Bos, Willem Jan W; Stergiopulos, Nikos; Spaan, Jos A E

    2008-12-01

    Central aortic pressure gives better insight into ventriculo-arterial coupling and better prognosis of cardiovascular complications than peripheral pressures. Therefore transfer functions (TF), reconstructing aortic pressure from peripheral pressures, are of great interest. Generalized TFs (GTF) give useful results, especially in larger study populations, but detailed information on aortic pressure might be improved by individualization of the TF. We found earlier that the time delay, representing the travel time of the pressure wave between measurement site and aorta is the main determinant of the TF. Therefore, we hypothesized that the TF might be individualized (ITF) using this time delay. In a group of 50 patients at rest, aged 28-66 yr (43 men), undergoing diagnostic angiography, ascending aortic pressure was 119 +/- 20/70 +/- 9 mmHg (systolic/diastolic). Brachial pressure, almost simultaneously measured using catheter pullback, was 131 +/- 18/67 +/- 9 mmHg. We obtained brachial-to-aorta ITFs using time delays optimized for the individual and a GTF using averaged delay. With the use of ITFs, reconstructed aortic pressure was 121 +/- 19/69 +/- 9 mmHg and the root mean square error (RMSE), as measure of difference in wave shape, was 4.1 +/- 2.0 mmHg. With the use of the GTF, reconstructed pressure was 122 +/- 19/69 +/- 9 mmHg and RMSE 4.4 +/- 2.0 mmHg. The augmentation index (AI) of the measured aortic pressure was 26 +/- 13%, and with ITF and GTF the AIs were 28 +/- 12% and 30 +/- 11%, respectively. Details of the wave shape were reproduced slightly better with ITF but not significantly, thus individualization of pressure transfer is not effective in resting patients. PMID:18845775

  12. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
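
    As a purely hypothetical illustration of the kind of calculation such a price-estimation program performs (the actual IPEG4 cost model is not reproduced here), one might annualize capital-type costs, add recurring annual costs, and divide by annual production volume. All parameter names and numbers below are placeholders.

        # Purely hypothetical sketch of a unit-price estimate of the kind IPEG4
        # produces: annualize capital-type costs, add recurring costs, and divide
        # by annual production volume.  Not the real IPEG4 cost model.
        def unit_price_estimate(equipment_cost, floor_space_cost, labor_cost_per_year,
                                materials_cost_per_year, utilities_cost_per_year,
                                annual_volume, capital_recovery_factor=0.15):
            annualized_capital = capital_recovery_factor * (equipment_cost + floor_space_cost)
            annual_cost = (annualized_capital + labor_cost_per_year +
                           materials_cost_per_year + utilities_cost_per_year)
            return annual_cost / annual_volume

        # Placeholder inputs
        print(unit_price_estimate(equipment_cost=2.0e6, floor_space_cost=5.0e5,
                                  labor_cost_per_year=8.0e5, materials_cost_per_year=1.2e6,
                                  utilities_cost_per_year=3.0e5,
                                  annual_volume=1.0e5))       # price per unit produced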

  13. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory`s water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW`s 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL`s first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  14. Simulation Framework to Estimate the Performance of CO2 and O2 Sensing from Space and Airborne Platforms for the ASCENDS Mission Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Plitau, Denis; Prasad, Narasimha S.

    2012-01-01

    The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned to be included in the framework. The description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.

  15. Lidar uncertainty and beam averaging correction

    NASA Astrophysics Data System (ADS)

    Giyanani, A.; Bierbooms, W.; van Bussel, G.

    2015-05-01

    Remote sensing of the atmospheric variables with the use of Lidar is a relatively new technology field for wind resource assessment in wind energy. A review of the draft version of an international guideline (CD IEC 61400-12-1 Ed.2) used for wind energy purposes is performed and some extra atmospheric variables are taken into account for proper representation of the site. A measurement campaign with two Leosphere vertical scanning WindCube Lidars and metmast measurements is used for comparison of the uncertainty in wind speed measurements using the CD IEC 61400-12-1 Ed.2. The comparison revealed higher but realistic uncertainties. A simple model for Lidar beam averaging correction is demonstrated for understanding deviation in the measurements. It can be further applied for beam averaging uncertainty calculations in flat and complex terrain.

  16. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

  17. Estimation of Radar Cross Section of a Target under Track

    NASA Astrophysics Data System (ADS)

    Jung, Young-Hun; Hong, Sun-Mog; Choi, Seung Ho

    2010-12-01

    In allocating a radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required in order to adjust the transmitted power so that a desired SNR can be realized. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution to the approach is proposed based on a generalized expectation maximization (GEM) algorithm. The estimation accuracy of the approach is compared to that of a previously reported procedure.
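
    A toy illustration of the estimation problem: if the target RCS fluctuates from pulse to pulse and a Swerling-I-like exponential model is assumed for simplicity, the ML estimate of the average RCS from calibrated per-pulse returns is just the sample mean. The paper's GEM-based estimator addresses a more involved measurement model than this, and the numbers below are hypothetical.

        # Toy sketch: ML estimation of average RCS under a simple Swerling-I-like
        # assumption (per-pulse RCS ~ exponential with mean equal to the average RCS).
        # The paper's GEM algorithm handles a more involved measurement model.
        import numpy as np

        rng = np.random.default_rng(5)
        true_avg_rcs = 2.5                       # m^2, hypothetical
        n_pulses = 200
        rcs_samples = rng.exponential(true_avg_rcs, n_pulses)   # calibrated per-pulse RCS

        # For an exponential model the ML estimate of the mean is the sample mean.
        ml_estimate = rcs_samples.mean()
        print(f"ML estimate of average RCS: {ml_estimate:.2f} m^2")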

  18. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

  19. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under an average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.

  20. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…