Science.gov

Sample records for estimated average requirement

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET estimates is far more important than the model parametric uncertainty introduced by crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently by the REA ensemble average (45%) than by the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
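
    As a rough illustration of the REA idea, the following sketch (assuming the Giorgi-Mearns form of REA; the irrigation requirements, biases, and variability scale are made-up placeholders, not the study's six-model results) weights each model by its performance and by its convergence toward the weighted consensus:

      import numpy as np

      def rea_average(predictions, bias, eps, n_iter=50):
          """predictions: per-model irrigation water requirement estimates [mm]
          bias: per-model absolute bias against a reference [mm]
          eps: natural variability scale [mm]"""
          # performance factor: penalize models with large bias
          r_b = np.minimum(1.0, eps / np.maximum(np.abs(bias), 1e-9))
          weights = r_b.copy()
          for _ in range(n_iter):  # iterate the convergence factor to a fixed point
              mean = np.sum(weights * predictions) / np.sum(weights)
              r_d = np.minimum(1.0, eps / np.maximum(np.abs(predictions - mean), 1e-9))
              weights = r_b * r_d
          return np.sum(weights * predictions) / np.sum(weights)

      iwr = np.array([420.0, 380.0, 455.0, 400.0, 510.0, 390.0])  # six hypothetical ET models
      bias = np.array([15.0, 5.0, 40.0, 10.0, 70.0, 8.0])
      print(rea_average(iwr, bias, eps=30.0))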

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently by the REA ensemble average (45%) than by the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  3. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  4. Dynamic consensus estimation of weighted average on directed graphs

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Guo, Yi

    2015-07-01

    Recent applications call for distributed weighted-average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final group consensus decision. In this paper, we propose a new dynamic consensus filter design to estimate, in a distributed manner, the weighted average of sensors' inputs on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works on directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure of the system introduced by this removal, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
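
    The paper's proportional-integral filter protocol is not reproduced here; the sketch below instead uses push-sum (ratio) consensus, a standard alternative that also recovers a weighted average of constant inputs on a directed graph. The mixing matrix, inputs, and weights are invented for illustration:

      import numpy as np

      # column-stochastic mixing matrix for a 4-node directed ring:
      # each node splits its mass evenly between itself and its out-neighbour
      P = np.array([[0.5, 0.0, 0.0, 0.5],
                    [0.5, 0.5, 0.0, 0.0],
                    [0.0, 0.5, 0.5, 0.0],
                    [0.0, 0.0, 0.5, 0.5]])

      u = np.array([1.0, 2.0, 3.0, 4.0])   # sensor inputs
      w = np.array([4.0, 3.0, 2.0, 1.0])   # sensor weights (e.g. accuracies)

      num = w * u        # numerator state, initialised to w_i * u_i
      den = w.copy()     # denominator state, initialised to w_i
      for _ in range(200):
          num = P @ num
          den = P @ den
      print(num / den)   # every node converges to 2.0 = sum(w*u) / sum(w)
      np.testing.assert_allclose(num / den, np.dot(w, u) / w.sum())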

  5. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the probability density function (PDF). Fitting to the mean (MN) or to a high signal-to-noise ratio approximation of the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images, whose PDF cannot be expressed in closed form and is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations, focusing on typical clinical situations where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data, MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high-resolution diffusion tensor imaging (DTI) and may therefore help solve the problem of crossing fibers encountered in white matter tractography.
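
    A minimal sketch of the underlying bias-correction idea: fit the diffusion model through the Rician mean rather than the noise-free signal. This corresponds to the MN-style fit only; the paper's MD/MP estimators differ in detail, and the noise level sigma is assumed known here:

      import numpy as np
      from scipy.special import hyp1f1
      from scipy.optimize import least_squares

      def rice_mean(nu, sigma):
          # E[|S|] under Rician noise: sigma*sqrt(pi/2)*1F1(-1/2; 1; -nu^2/(2 sigma^2))
          return sigma * np.sqrt(np.pi / 2) * hyp1f1(-0.5, 1.0, -nu**2 / (2 * sigma**2))

      rng = np.random.default_rng(0)
      bvals = np.array([0., 200., 400., 600., 800., 1000.])   # s/mm^2
      S0_true, D_true, sigma = 100.0, 1.5e-3, 10.0
      clean = S0_true * np.exp(-bvals * D_true)
      # magnitude MRI data: modulus of complex Gaussian noise around the clean signal
      meas = np.abs(clean + rng.normal(0, sigma, 6) + 1j * rng.normal(0, sigma, 6))

      def resid(p):
          S0, D = p
          return rice_mean(S0 * np.exp(-bvals * D), sigma) - meas

      fit = least_squares(resid, x0=[100.0, 1.0e-3])
      print("bias-aware estimate of D:", fit.x[1])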

  6. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  7. Estimating Health Services Requirements

    NASA Technical Reports Server (NTRS)

    Alexander, H. M.

    1985-01-01

    In the computer program NOROCA, population statistics from the National Center for Health Statistics are used with a computational procedure to estimate health service utilization rates, physician demands (by specialty), and hospital bed demands (by type of service). The computational procedure is applicable to a health service area of any size and can even be used to estimate statewide demands for health services.
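
    The core of such a procedure is simple rate-times-population arithmetic. The sketch below is illustrative only; the rates, categories, and population are invented and do not reproduce NOROCA's actual tables:

      # expected annual demand = (rate per 1,000 population) * population / 1,000
      rates_per_1000 = {"physician_visits": 2900.0, "hospital_bed_days": 1200.0}
      population = 250_000
      for service, rate in rates_per_1000.items():
          print(service, rate * population / 1000)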

  8. Doubly robust estimation of the local average treatment effect curve

    PubMed Central

    Ogburn, Elizabeth L.; Rotnitzky, Andrea; Robins, James M.

    2014-01-01

    We consider estimation of the causal effect of a binary treatment on an outcome, conditionally on covariates, from observational studies or natural experiments in which there is a binary instrument for treatment. We describe a doubly robust, locally efficient estimator of the parameters indexing a model for the local average treatment effect conditionally on covariates V when randomization of the instrument is only true conditionally on a high-dimensional vector of covariates X, possibly larger than V. We discuss the surprising result that inference is identical to inference for the parameters of a model for an additive treatment effect on the treated conditionally on V that assumes no treatment-instrument interaction. We illustrate our methods with the estimation of the local average effect of participating in 401(k) retirement programs on savings by using data from the US Census Bureau's 1991 Survey of Income and Program Participation. PMID:25663814
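
    For orientation, here is the textbook doubly robust (AIPW) estimator of an unconditional average treatment effect on simulated data; the paper's locally efficient estimator of the local average treatment effect curve with an instrument is substantially more involved, so this sketch only conveys the double robustness idea (consistency if either the propensity model or the outcome model is correct):

      import numpy as np
      from sklearn.linear_model import LogisticRegression, LinearRegression

      rng = np.random.default_rng(1)
      n = 5000
      X = rng.normal(size=(n, 2))
      ps = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))   # true propensity score
      A = rng.binomial(1, ps)                             # treatment
      Y = 2.0 * A + X @ np.array([1.0, -1.0]) + rng.normal(size=n)

      e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]   # propensity model
      m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)  # outcome model, treated
      m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)  # outcome model, control

      aipw = np.mean(A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e) + m1 - m0)
      print("doubly robust ATE estimate:", aipw)   # close to the true effect 2.0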

  9. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of hydraulic conductivity distributions is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
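
    A minimal sketch of the BIC-weighted BMA combination the abstract recommends. All numbers are placeholders, not results from the study; the point is the weight formula and the within/between variance decomposition:

      import numpy as np

      bic = np.array([102.3, 104.1, 110.7])       # one BIC per parameterization method
      K_hat = np.array([12.0, 15.0, 9.0])         # each method's conductivity estimate (m/d)
      var_within = np.array([4.0, 6.0, 3.0])      # within-method variances

      w = np.exp(-0.5 * (bic - bic.min()))
      w /= w.sum()                                 # approximate posterior model probabilities

      K_bma = np.sum(w * K_hat)                    # ensemble-averaged estimate
      var_between = np.sum(w * (K_hat - K_bma)**2) # uncertainty from non-uniqueness
      var_total = np.sum(w * var_within) + var_between
      print(K_bma, var_total)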

  10. Geodesic estimation for large deformation anatomical shape averaging and interpolation.

    PubMed

    Avants, Brian; Gee, James C

    2004-01-01

    The goal of this research is to promote variational methods for anatomical averaging that operate within the space of the underlying image registration problem. This approach is effective when using the large deformation viscous framework, where linear averaging is not valid, or in the elastic case. The theory behind this novel atlas building algorithm is similar to the traditional pairwise registration problem, but with single image forces replaced by average forces. These group forces drive an average transport ordinary differential equation allowing one to estimate the geodesic that moves an image toward the mean shape configuration. This model gives large deformation atlases that are optimal with respect to the shape manifold as defined by the data and the image registration assumptions. We use the techniques in the large deformation context here, but they also pertain to small deformation atlas construction. Furthermore, a natural, inherently inverse consistent image registration is gained for free, as is a tool for constant arc length geodesic shape interpolation. The geodesic atlas creation algorithm is quantitatively compared to the Euclidean anatomical average to elucidate the need for optimized atlases. The procedures generate improved average representations of highly variable anatomy from distinct populations. PMID:15501083

  11. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
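
    The optimal weights in this setting solve a standard minimum mean-square-error problem: with station covariance C and cross-covariance c between stations and the global mean, the weights are w = C^(-1) c. The sketch below illustrates this with a synthetic correlated field standing in for the EOF-derived statistics used in the paper:

      import numpy as np

      rng = np.random.default_rng(2)
      n_grid, n_stn, n_years = 200, 10, 500
      # synthetic spatially correlated anomaly field (years x grid points)
      field = rng.normal(size=(n_years, n_grid)) @ rng.normal(size=(n_grid, n_grid)) * 0.1
      global_mean = field.mean(axis=1)
      stations = field[:, rng.choice(n_grid, n_stn, replace=False)]

      C = np.cov(stations, rowvar=False)   # station-station covariance
      c = np.array([np.cov(stations[:, i], global_mean)[0, 1] for i in range(n_stn)])

      w_opt = np.linalg.solve(C, c)        # w = C^{-1} c
      est_opt = stations @ w_opt
      est_uni = stations.mean(axis=1)      # uniform weighting for comparison

      print("optimal-weight error variance:", np.var(global_mean - est_opt))
      print("uniform-weight error variance:", np.var(global_mean - est_uni))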

  12. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 X 4, 6 X 4, 9 X 7, and 20 X 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  13. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from the relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for a global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we average in the Lie algebra of SO(3), no subsequent adaptation of the results is required apart from the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations, even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
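
    As a pointer to the core operation, the sketch below computes a weighted geodesic mean of rotations by iterating between the Lie algebra (logarithm map) and SO(3) (exponential map). The paper's full pipeline additionally performs graph-based outlier removal and a joint least-squares adjustment over all images, which is not shown here:

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def weighted_rotation_mean(rotations, weights, n_iter=20):
          mean = rotations[0]
          for _ in range(n_iter):
              # map residual rotations into the tangent space at the current mean
              tangent = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
              step = np.average(tangent, axis=0, weights=weights)
              mean = mean * R.from_rotvec(step)   # lossless projection back to SO(3)
              if np.linalg.norm(step) < 1e-12:
                  break
          return mean

      rots = [R.from_euler("z", a, degrees=True) for a in (9.0, 11.0, 10.0, 40.0)]
      w = [1.0, 1.0, 1.0, 0.05]   # down-weight the suspected outlier
      print(weighted_rotation_mean(rots, w).as_euler("zyx", degrees=True))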

  14. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Estimating storm areal average rainfall intensity in field experiments

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, Christa D.; Wood, Eric F.

    1994-07-01

    Estimates of areal mean precipitation intensity derived from rain gages are commonly used to assess the performance of rainfall radars and satellite rainfall retrieval algorithms. Areal mean precipitation time series collected during short-duration climate field studies are also used as inputs to water and energy balance models which simulate land-atmosphere interactions during the experiments. In two recent field experiments (the 1987 First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) and the Multisensor Airborne Campaign for Hydrology 1990 (MAC-HYDRO '90)) designed to investigate the climatic signatures of land-surface forcings and to test airborne sensors, rain gages were placed over the watersheds of interest. These gages provide the sole means for estimating storm precipitation over these areas, and the gage densities present during these experiments indicate that there is a large uncertainty in estimating areal mean precipitation intensity for single storm events. Using a theoretical model of time- and area-averaged space-time rainfall and a model rainfall generator, the error structure of areal mean precipitation intensity is studied for storms statistically similar to those observed in the FIFE and MAC-HYDRO field experiments. Comparisons of the error versus gage density trade-off curves to those calculated using the storm observations show that the rainfall simulator can provide good estimates of the expected measurement error given only the expected intensity, coefficient of variation, and rain cell diameter or correlation length scale, and that these errors can quickly become very large (in excess of 20%) for certain storms measured with a network whose size is below a "critical" gage density. Because the mean storm rainfall error is particularly sensitive to the correlation length, it is important that future field experiments include radar and/or dense rain gage networks capable of accurately characterizing the

  16. Rainfall Estimation Over Tropical Oceans. 1: Area Average Rain Rate

    NASA Technical Reports Server (NTRS)

    Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.

    1997-01-01

    Multichannel dual-polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve the average rain rate over a mesoscale grid box of approx. 300 x 300 km in the TOGA COARE region, where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85 GHz brightness temperature, constitutes a key parameter in this method. Then the spectral and polarization information contained in all the channels of the SSM/I is utilized to deduce a second parameter: the ratio S/E of the scattering index S and the emission index E calculated from the SSM/I data. The rain rate retrieved with this method over the mesoscale area can reproduce the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, the monthly total rainfall estimated with this method over that area has an average error of about 15%.

  17. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability, with measurements taken over a week. In this work, continuous measurements taken over one year at 21 different locations in Madrid (Spain), a city of more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(Adn), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time. PMID:26093410

  18. Fringe-Orientation Estimation by use of a Gaussian Gradient Filter and Neighboring-Direction Averaging

    NASA Astrophysics Data System (ADS)

    Zhou, Xiang; Baird, John P.; Arnold, John F.

    1999-02-01

    We analyze the effect of image noise on the estimation of fringe orientation in principle and interpret the application of a texture-analysis technique to the problem of estimating fringe orientation in interferograms. The gradient of a Gaussian filter and neighboring-direction averaging are shown to meet the requirements of fringe-orientation estimation by reduction of the effects of low-frequency background and contrast variances as well as high-frequency random image noise. The technique also improves inaccurate orientation estimation at low-modulation points, such as fringe centers and broken fringes. Experiments demonstrate that the scales of the Gaussian gradient filter and the direction averaging should be chosen according to the fringe spacings of the interferograms.
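
    A compact sketch of the two ingredients named above: a Gaussian-derivative gradient followed by neighbourhood averaging of the doubled angle (doubling removes the 180-degree ambiguity of orientation). The filter scales and test pattern are illustrative, not the paper's parameters:

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def fringe_orientation(img, grad_sigma=2.0, avg_size=15):
          gx = gaussian_filter(img, grad_sigma, order=(0, 1))   # d/dx of Gaussian
          gy = gaussian_filter(img, grad_sigma, order=(1, 0))   # d/dy of Gaussian
          mag2 = gx**2 + gy**2
          ang = np.arctan2(gy, gx)
          # average the doubled angle, weighted by squared gradient magnitude
          c2 = uniform_filter(mag2 * np.cos(2 * ang), avg_size)
          s2 = uniform_filter(mag2 * np.sin(2 * ang), avg_size)
          return 0.5 * np.arctan2(s2, c2)   # orientation of the fringe normal

      y, x = np.mgrid[0:128, 0:128]
      rng = np.random.default_rng(3)
      fringes = np.cos(0.3 * x + 0.1 * y) + 0.2 * rng.normal(size=(128, 128))
      theta = fringe_orientation(fringes)
      print(np.degrees(np.median(theta)))   # close to atan2(0.1, 0.3) ~ 18.4 deg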

  19. Effect of wind averaging time on wind erosivity estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion, is ...

  20. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) in which the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data from IZZI-modified paleointensity experiments on lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier GUI paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows, where the expected magnetic field strength was accurately recovered when certain selection parameters were followed. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and

  1. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, i.e. optimal estimates computed with approximate signal and measurement error statistics, are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
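
    A toy version of the optimal linear estimate, assuming a Gaussian signal correlation function and known signal and noise variances; the paper's formalism is more general, but the weight equation has this standard Gauss-Markov form:

      import numpy as np

      def optimal_time_average(t_obs, y, t_grid, sig2, noise2, tau):
          """t_obs: irregular observation times; t_grid: dense times spanning the
          averaging window; tau: assumed signal correlation time scale."""
          corr = lambda dt: sig2 * np.exp(-(dt / tau) ** 2)
          Cyy = corr(t_obs[:, None] - t_obs[None, :]) + noise2 * np.eye(len(t_obs))
          # covariance between each observation and the window-averaged signal
          Cya = corr(t_obs[:, None] - t_grid[None, :]).mean(axis=1)
          w = np.linalg.solve(Cyy, Cya)   # minimum mean-square-error weights
          return w @ y, w

      rng = np.random.default_rng(4)
      t_obs = np.sort(rng.uniform(0, 30, 8))   # 8 irregular samples in a 30-day window
      y = np.sin(2 * np.pi * t_obs / 40) + rng.normal(0, 0.2, t_obs.size)
      est, w = optimal_time_average(t_obs, y, np.linspace(0, 30, 301), 1.0, 0.04, 10.0)
      print("optimal estimate:", est, " composite average:", y.mean())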

  2. Experimental estimation of average fidelity of a Clifford gate on a 7-qubit quantum processor.

    PubMed

    Lu, Dawei; Li, Hang; Trottier, Denis-Alexandre; Li, Jun; Brodutch, Aharon; Krismanich, Anthony P; Ghavami, Ahmad; Dmitrienko, Gary I; Long, Guilu; Baugh, Jonathan; Laflamme, Raymond

    2015-04-10

    One of the major experimental achievements in the past decades is the ability to control quantum systems to high levels of precision. To quantify the level of control we need to characterize the dynamical evolution. Full characterization via quantum process tomography is impractical and often unnecessary. For most practical purposes, it is enough to estimate more general quantities such as the average fidelity. Here we use a unitary 2-design and twirling protocol for efficiently estimating the average fidelity of Clifford gates, to certify a 7-qubit entangling gate in a nuclear magnetic resonance quantum processor. Compared with more than 10^8 experiments required by full process tomography, we conducted 1656 experiments to satisfy a statistical confidence level of 99%. The average fidelity of this Clifford gate in experiment is 55.1%, and rises to at least 87.5% if the signal's decay due to decoherence is taken into account. The entire protocol of certifying Clifford gates is efficient and scalable, and can easily be extended to any general quantum information processor with minor modifications. PMID:25910102

  3. Estimates of zonally averaged tropical diabatic heating in AMIP GCM simulations. PCMDI report No. 25

    SciTech Connect

    Boyle, J.S.

    1995-07-01

    An understanding of the processes that generate the atmospheric diabatic heating rates is basic to an understanding of the time-averaged general circulation of the atmosphere as well as circulation anomalies. Knowledge of the sources and sinks of atmospheric heating enables a fuller understanding of the nature of the atmospheric circulation. An actual assessment of the diabatic heating rates in the atmosphere is a difficult problem that has been approached in a number of ways. One way is to estimate the total diabatic heating by estimating the individual components associated with the radiative fluxes, the latent heat release, and the sensible heat fluxes. An example of this approach is provided by Newell. Another approach is to estimate the net heating rates from consideration of the balance required of the mass and wind variables as routinely observed and analyzed. This budget computation has been done using the thermodynamic equation and, more recently, using the vorticity and thermodynamic equations together. Schaak and Johnson compute the heating rates through the integration of the isentropic mass continuity equation. The estimates of heating arrived at by all these methods are severely handicapped by uncertainties in the observational data and analyses. In addition, the estimates of the individual heating components suffer from an additional source of error due to the parameterizations used to approximate these quantities.

  4. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    SciTech Connect

    Verdin, Kristine L.

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.

  5. Double robust estimator of average causal treatment effect for censored medical cost data.

    PubMed

    Wang, Xuan; Beste, Lauren A; Maier, Marissa M; Zhou, Xiao-Hua

    2016-08-15

    In observational studies, estimation of the average causal treatment effect on a patient's response should adjust for confounders that are associated with both treatment exposure and response. In addition, the response, such as medical cost, may have incomplete follow-up. In this article, a double robust estimator is proposed for the average causal treatment effect with right-censored medical cost data. The estimator is double robust in the sense that it remains consistent when either the model for the treatment assignment or the regression model for the response is correctly specified. Double robust estimators increase the likelihood that the results represent a valid inference. Asymptotic normality is obtained for the proposed estimator, and an estimator for the asymptotic variance is also derived. Simulation studies show good finite-sample performance of the proposed estimator, and a real data analysis using the proposed method is provided as illustration. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26818601

  6. A comparison of spatial averaging and Cadzow's method for array wavenumber estimation

    SciTech Connect

    Harris, D.B.; Clark, G.A.

    1989-10-31

    We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.

  7. Weighted interframe averaging-based channel estimation for orthogonal frequency division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan

    2015-10-01

    Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.

  8. Estimation of the exertion requirements of coal mining work

    SciTech Connect

    Harber, P.; Tamimie, J.; Emory, J.

    1984-02-01

    The work requirements of coal mining were estimated by studying a group of 12 underground coal miners. A two-level (rest, 300 kg·m/min) test was performed to estimate the linear relationship between each subject's heart rate and oxygen consumption. Heart rates were then recorded during coal mining work with a Holter-type recorder. From these data, the distributions of oxygen consumption during work were estimated, allowing characterization of the range of exertion throughout the work day. The average median estimated oxygen consumption was 3.3 METS, the average 70th percentile was 4.3 METS, and the average 90th percentile was 6.3 METS. These results should be considered when assessing an individual's occupational fitness.
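
    The procedure reduces to a per-subject linear calibration followed by percentiles of the mapped heart rates. The sketch below uses invented calibration numbers, not the study's data; 1 MET is taken as 3.5 ml O2/kg/min:

      import numpy as np

      hr_cal = np.array([70.0, 110.0])    # heart rate at rest and at 300 kg·m/min
      vo2_cal = np.array([3.5, 15.0])     # assumed oxygen consumption, ml/kg/min
      slope, intercept = np.polyfit(hr_cal, vo2_cal, 1)   # per-subject linear fit

      # heart rates sampled from a (hypothetical) Holter recording during a shift
      hr_shift = np.array([75, 88, 95, 102, 110, 121, 99, 85])
      mets = (slope * hr_shift + intercept) / 3.5
      print(np.percentile(mets, [50, 70, 90]))   # median, 70th, 90th percentile workload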

  9. Esophageal pressure as an estimate of average pleural pressure with lung or chest distortion in rats.

    PubMed

    Pecchiari, Matteo; Loring, Stephen H; D'Angelo, Edgardo

    2013-04-01

    Pressure-volume curves of the lungs and chest wall require knowledge of an effective 'average' pleural pressure (Pplav), and are usually estimated using esophageal pressure (Pes) as Ples-V and Pwes-V curves. Such estimates could be misleading when Ppl becomes spatially non-uniform with lung lavage or shape distortion of the chest. We therefore measured Ples-V and Pwes-V curves in conditions causing spatial non-uniformity of Ppl in rats. Ples-V curves of normal lungs were unchanged by chest removal. Lung lavage depressed Ples-V but not Pwes-V curves to lower volumes, and chest removal after lavage increased volumes at PL ≥ 15 cmH2O by relieving distortion of the mechanically heterogeneous lungs. Chest wall distortion by ribcage compression or abdominal distension depressed Pwes-V curves and the Ples-V curves of normal lungs only at PL ≥ 3 cmH2O. In conclusion, Pes reflects Pplav with normal and mechanically heterogeneous lungs. With chest wall distortion and dependent deformation of the normal lung, changes of Ples-V curves are qualitatively consistent with greater work of inflation. PMID:23416404

  10. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
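
    A common concrete realization of this kind of quaternion averaging takes the average as the dominant eigenvector of the weighted sum of quaternion outer products, which maximizes the quadratic cost and is insensitive to the q/-q sign ambiguity. The sketch below shows that approach; the Note's scalar- and matrix-weighted variants are richer than this scalar-weight version:

      import numpy as np

      def average_quaternion(quats, weights):
          """quats: (n, 4) unit quaternions (sign-ambiguous); weights: (n,)."""
          M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
          vals, vecs = np.linalg.eigh(M)
          return vecs[:, -1]   # eigenvector of the largest eigenvalue

      q1 = np.array([1.0, 0.0, 0.0, 0.0])
      q2 = np.array([np.cos(0.05), np.sin(0.05), 0.0, 0.0])  # small x-rotation
      q3 = -q2                                               # same attitude, flipped sign
      print(average_quaternion(np.array([q1, q2, q3]), [1.0, 1.0, 1.0]))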

  11. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.

  12. Estimation of average treatment effect with incompletely observed longitudinal data: Application to a smoking cessation study

    PubMed Central

    Chen, Hua Yun; Gao, Shasha

    2010-01-01

    We study the problem of estimation and inference on the average treatment effect in a smoking cessation trial where an outcome and some auxiliary information were measured longitudinally, and both were subject to missing values. Dynamic generalized linear mixed effects models linking the outcome, the auxiliary information, and the covariates are proposed. The maximum likelihood approach is applied to the estimation and inference on the model parameters. The average treatment effect is estimated by the G-computation approach, and the sensitivity of the treatment effect estimate to the nonignorable missing data mechanisms is investigated through the local sensitivity analysis approach. The proposed approach can handle missing data that form arbitrary missing patterns over time. We applied the proposed method to the analysis of the smoking cessation trial. PMID:19462416

  13. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  14. Estimating average cellular turnover from 5-bromo-2'-deoxyuridine (BrdU) measurements.

    PubMed Central

    De Boer, Rob J; Mohri, Hiroshi; Ho, David D; Perelson, Alan S

    2003-01-01

    Cellular turnover rates in the immune system can be determined by labelling dividing cells with 5-bromo-2'-deoxyuridine (BrdU) or deuterated glucose (2H-glucose). To estimate the turnover rate from such measurements one has to fit a particular mathematical model to the data. The biological assumptions underlying various models developed for this purpose are controversial. Here, we fit a series of different models to BrdU data on CD4+ T cells from SIV- and SIV+ rhesus macaques. We first show that the parameter estimates obtained using these models depend strongly on the details of the model. To resolve this lack of generality we introduce a new parameter for each model, the 'average turnover rate', defined as the cellular death rate averaged over all subpopulations in the model. We show that very different models yield similar estimates of the average turnover rate, i.e. ca. 1% per day in uninfected monkeys and ca. 2% per day in SIV-infected monkeys. Thus, we show that one can use BrdU data from a possibly heterogeneous population of cells to estimate the average turnover rate of that population in a robust manner. PMID:12737664

  15. Estimation of genetic parameters for average daily gain using models with competition effects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Components of variance for ADG with models including competition effects were estimated from data provided by Pig Improvement Company on 11,235 pigs from 4 selected lines of swine. Fifteen pigs with average age of 71 d were randomly assigned to a pen by line and sex and taken off test after approxi...

  16. How ants use quorum sensing to estimate the average quality of a fluctuating resource

    PubMed Central

    Franks, Nigel R.; Stuttard, Jonathan P.; Doran, Carolina; Esposito, Julian C.; Master, Maximillian C.; Sendova-Franks, Ana B.; Masuda, Naoki; Britton, Nicholas F.

    2015-01-01

    We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures. PMID:26153535

  17. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen and clusters are constructed using an average linkage technique. The average linkage technique requires a distance between clusters, calculated as the average distance between all pairs of points, one from each group. This average distance is not robust when there is an outlier, so the distance used in average linkage needs to be modified to overcome the problem of outliers. To this end, an outlier detection criterion based on MADn is used and the average distance is recalculated without the outliers. Next, the distance in average linkage is calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap values (BP) are assessed for each branch that forms a group, to ensure the reliability of the constructed branches. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when there is an outlier; the two techniques perform similarly when there is no outlier.

  18. Optimal estimators and asymptotic variances for nonequilibrium path-ensemble averages

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Chodera, John D.

    2009-10-01

    Existing optimal estimators of nonequilibrium path-ensemble averages are shown to fall within the framework of extended bridge sampling. Using this framework, we derive a general minimal-variance estimator that can combine nonequilibrium trajectory data sampled from multiple path-ensembles to estimate arbitrary functions of nonequilibrium expectations. The framework is also applied to obtain asymptotic variance estimates, which are a useful measure of statistical uncertainty. In particular, we develop asymptotic variance estimates pertaining to Jarzynski's equality for free energies and to the Hummer-Szabo expressions for the potential of mean force, calculated from uni- or bidirectional path samples. These estimators are demonstrated on a model single-molecule pulling experiment. In these simulations, the asymptotic variance expression is found to accurately characterize the confidence intervals around estimators when the bias is small. Consequently, the confidence intervals are described inaccurately for unidirectional estimates with large bias, but for this model the expression largely reflects the true error in a bidirectional estimator derived by Minh and Adib.
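
    As a small concrete instance, the sketch below applies the plain unidirectional Jarzynski estimator to synthetic Gaussian work values and bootstraps its uncertainty; the paper's extended-bridge-sampling estimators and analytic asymptotic variances go beyond this toy version:

      import numpy as np

      rng = np.random.default_rng(5)
      beta = 1.0                    # 1/kT in reduced units
      dF_true, var_w = 5.0, 4.0
      # Gaussian work distribution with mean dF + beta*var/2, consistent with
      # Jarzynski's equality recovering dF exactly in the large-sample limit
      work = rng.normal(dF_true + beta * var_w / 2, np.sqrt(var_w), size=500)

      def jarzynski(w):
          # dF = -(1/beta) * ln < exp(-beta W) >
          return -np.log(np.mean(np.exp(-beta * w))) / beta

      boot = [jarzynski(rng.choice(work, work.size)) for _ in range(1000)]
      print("dF estimate:", jarzynski(work), "+/-", np.std(boot))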

  19. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.

  20. Inverse groundwater modeling for hydraulic conductivity estimation using Bayesian model averaging and variance window

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Li, Xiaobao

    2008-09-01

    This study proposes a Bayesian model averaging (BMA) method to address parameter estimation uncertainty arising from nonuniqueness in parameterization methods. BMA is able to incorporate multiple parameterization methods for prediction through the law of total probability and to obtain an ensemble average of hydraulic conductivity estimates. Two major issues in applying BMA to hydraulic conductivity estimation are discussed. The first is the use of Occam's window in usual BMA applications to measure approximated posterior model probabilities. Occam's window only accepts models in a very narrow range, tending to single out the best method and discard other good methods. We propose a variance window to replace Occam's window to cope with this problem. The second is the use of the Kashyap information criterion (KIC) in the approximated posterior model probabilities, which tends to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. With sufficient amounts of observation data, the Bayesian information criterion (BIC) is a good approximation and is able to avoid the controversial results arising from using KIC. This study adopts multiple generalized parameterization (GP) methods as the BMA models to estimate spatially correlated hydraulic conductivity. Numerical examples illustrate the issues of using KIC and Occam's window and show the advantages of using BIC and the variance window in BMA applications. Finally, we apply BMA to the hydraulic conductivity estimation of the "1500-foot" sand in East Baton Rouge Parish, Louisiana.

  1. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A(38-25). In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  2. Estimation of annual average daily traffic for off-system roads in Florida. Final report

    SciTech Connect

    Shen, L.D.; Zhao, F.; Ospina, D.I.

    1999-07-28

    Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for the state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination of roadway geometry, congestion management, pavement design, safety considerations, etc. AADT is also used to estimate statewide vehicle miles traveled on all roads, and is used by local governments and the environmental protection agencies to determine compliance with the 1990 Clean Air Act Amendments. Additionally, AADT is reported annually by the Florida Department of Transportation (FDOT) to the Federal Highway Administration. In the past, considerable effort has been made in obtaining traffic counts to estimate AADT on state roads. However, traffic counts are often not available on off-system roads, and less attention has been paid to the estimation of AADT in the absence of counts. Current estimates rely on comparisons with roads that are subjectively considered to be similar. Such comparisons are inherently subject to large errors, and also may not be repeated often enough to remain current. Therefore, a better method is needed for estimating AADT for off-system roads in Florida. This study investigates the possibility of establishing one or more models for estimating AADT for off-system roads in Florida.

  3. Model-averaged benchmark concentration estimates for continuous response data arising from epidemiological studies

    SciTech Connect

    Noble, R.B.; Bailer, A.J.; Park, R.

    2009-04-15

    Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints all provide relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration (BMC) from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that only a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty and provides a useful strategy for addressing model uncertainty.

  4. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first, temporal sampling, question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe and Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of the TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, doi:10.1029/2005GC001181. Tauxe, L., and T. Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
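
    One way to implement the kind of age-distribution resampling described is to reweight samples inversely to the occupancy of their age bin; the synthetic ages, bin width, and uniform target below are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flow ages (Ma) clustered near 0-1 Ma, mimicking uneven
# temporal sampling of a lava-flow data set.
ages = np.concatenate([rng.uniform(0, 1, 400), rng.uniform(1, 5, 100)])

# Weight each sample inversely to its age-bin occupancy so that
# resampling approximates a uniform age distribution over 0-5 Ma.
bins = np.linspace(0, 5, 11)
counts, _ = np.histogram(ages, bins)
idx = np.digitize(ages, bins) - 1
weights = 1.0 / counts[idx]
weights /= weights.sum()

resampled = rng.choice(ages, size=200, replace=True, p=weights)
print(np.histogram(resampled, bins)[0])  # roughly even occupancy per bin
```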

  5. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528
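
    The weighted-average structure of the proposed estimators can be sketched as follows; the effect estimates, posterior model probabilities, and variances are placeholders, not the paper's fitted values:

```python
import numpy as np

# Hypothetical treatment-effect estimates, one per candidate PS model,
# and posterior model probabilities from Bayesian variable selection.
effects = np.array([-0.8, -1.1, -0.95, -0.6])
post_prob = np.array([0.40, 0.30, 0.20, 0.10])  # must sum to 1

# Model-averaged causal effect: a weighted average over PS models.
avg_effect = np.sum(post_prob * effects)

# A crude variance that adds between-model spread to within-model
# variance (within-model variances are also placeholders).
var_within = np.array([0.04, 0.05, 0.04, 0.06])
var_avg = np.sum(post_prob * (var_within + (effects - avg_effect) ** 2))
print(f"model-averaged effect: {avg_effect:.3f} (sd {np.sqrt(var_avg):.3f})")
```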

  6. Performance and production requirements for the optical components in a high-average-power laser system

    SciTech Connect

    Chow, R.; Doss, F.W.; Taylor, J.R.; Wong, J.N.

    1999-07-02

    Optical components needed for high-average-power lasers, such as those developed for Atomic Vapor Laser Isotope Separation (AVLIS), require high levels of performance and reliability. Over the past two decades, optical component requirements for this purpose have been optimized and performance and reliability have been demonstrated. Many of the optical components that are exposed to the high power laser light affect the quality of the beam as it is transported through the system. The specifications for these optics are described including a few parameters not previously reported and some component manufacturing and testing experience. Key words: High-average-power laser, coating efficiency, absorption, optical components

  7. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    SciTech Connect

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  8. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There are a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g., pipelines, power lines, and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration) or are “too dangerous, dull or dirty” for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV systems total ownership cost, including hardware components, software design, and operations. The challenge of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element is discussed. The autonomous operation of UAVs is especially challenging from a software perspective.
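
    CERs of the kind described are commonly power-law fits of cost against a technical driver. The sketch below fits such a curve to invented UAV airframe data; the weights, costs, and resulting coefficients are illustrative assumptions only:

```python
import numpy as np

# Hypothetical (weight in lb, first-unit cost in $M) pairs for airframes.
weight = np.array([150, 450, 1200, 3300, 10500], dtype=float)
cost = np.array([0.9, 2.1, 4.8, 10.5, 26.0], dtype=float)

# Fit cost = a * weight^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

# Apply the CER to a new 2000 lb air vehicle.
print(f"CER: cost = {a:.3f} * W^{b:.2f}; est. cost: {a * 2000**b:.1f} $M")
```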

  9. An Estimate of the Average Number of Recessive Lethal Mutations Carried by Humans

    PubMed Central

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-01-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes. PMID:25697177

  10. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
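
    For concreteness, information-criterion-based averaging weights can be computed as below; the criterion values are placeholders. The paper's point is that the likelihood inside the criteria should be evaluated with the total-error covariance Cek rather than the measurement-error covariance CE alone:

```python
import numpy as np

def averaging_weights(ic):
    """Convert information-criterion values (AIC/AICc/BIC/KIC) to weights."""
    delta = np.asarray(ic) - np.min(ic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Placeholder criterion values for four alternative conceptual models.
# Modest absolute differences already push nearly all the weight onto
# one model, the unrealistic behavior the two-stage method corrects.
print(averaging_weights([250.0, 262.0, 266.0, 280.0]))
```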

  11. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  12. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    PubMed

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  13. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
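
    For reference, the original HS form that the new model builds on estimates global radiation from the diurnal temperature range and extraterrestrial radiation; a minimal sketch follows, with 0.16 as a commonly quoted interior-site coefficient rather than the paper's calibrated value:

```python
import math

def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
    """Monthly average daily global radiation (same units as ra).

    t_max, t_min: monthly mean daily max/min air temperature (deg C)
    ra: extraterrestrial radiation (e.g., MJ m^-2 day^-1)
    k_rs: empirical coefficient (~0.16 interior, ~0.19 coastal)
    """
    return k_rs * math.sqrt(t_max - t_min) * ra

# Example: a month with Tmax = 28 C, Tmin = 16 C, Ra = 38 MJ m^-2 day^-1.
print(f"{hargreaves_samani(28.0, 16.0, 38.0):.1f} MJ m^-2 day^-1")
```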

  14. [Estimation of average traffic emission factor based on synchronized incremental traffic flow and air pollutant concentration].

    PubMed

    Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng

    2014-04-01

    On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized in China. Based on the synchronized increments of traffic flow and air pollutant concentration during the morning rush hour, while meteorological conditions and background air pollution concentrations remain relatively stable, the relationship between the increase in traffic and the increase in air pollutant concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data, and carbon monoxide (CO) concentrations were collected to estimate average vehicle emission factors for CO. The results were compared with simulated emission factors from the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and by COPERT4 in August were 2.0 g x km(-1) and 1.2 g x km(-1), respectively, and in December were 5.5 g x km(-1) and 5.2 g x km(-1), respectively. The emission factors from the proposed approach and COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors. PMID:24946571
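
    The inversion idea can be sketched with a crosswind-integrated, ground-level infinite-line-source relation; this single Gaussian vertical term is a simplification for illustration, not the transformed model used in the study, and the numbers are invented:

```python
import math

def co_emission_factor(delta_c, delta_n, u, sigma_z):
    """Invert a ground-level infinite line source for the fleet-average
    emission factor (g km^-1 veh^-1).

    delta_c: increment in roadside CO concentration (g m^-3)
    delta_n: increment in traffic flow (vehicles h^-1)
    u: wind speed component perpendicular to the road (m s^-1)
    sigma_z: vertical dispersion parameter at the monitor (m)
    """
    # Line-source strength implied by the concentration increment:
    # C = sqrt(2/pi) * q / (sigma_z * u)  =>  q in g m^-1 s^-1.
    q = delta_c * sigma_z * u * math.sqrt(math.pi / 2.0)
    # Convert g m^-1 s^-1 to g km^-1 veh^-1 using the flow increment.
    return q * 3.6e6 / delta_n

# Illustrative numbers: a 0.1 mg/m^3 CO rise with 1000 extra vehicles/h.
print(f"{co_emission_factor(1e-4, 1000.0, 2.0, 3.0):.1f} g/km per vehicle")
```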

  15. Estimating ensemble average power delivered by a piezoelectric patch actuator to a non-deterministic subsystem

    NASA Astrophysics Data System (ADS)

    Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.

    2014-02-01

    Engineering systems such as aircraft, ships, and automobiles are considered built-up structures. Dynamically, they are thought of as being fabricated from many components that are classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of a DS is deterministic in nature and is analysed using deterministic modelling methods such as the finite element (FE) method. The response of a Non-DS is statistical in nature and is estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power balance equation, in which any external input to the subsystem must be represented in terms of power. Often, the input force is taken as a point force, and the ensemble average power delivered by a point force is already well established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. In order to apply the SEA method for input moments, a mathematical representation of the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply-supported plate with an attached PZT patch is taken as a benchmark model. An analytical solution to estimate average power is derived using a mobility approach. The ensemble average of the power given by the PZT patch actuator to the benchmark model when subjected to structural uncertainties is also simulated using a Lagrangian method and FEA software. The analytical estimation is compared with the Lagrangian model and the FE method for validation. The effects of the size and location of the PZT actuators on the power delivered to the plate are later investigated.

  16. Estimates of average annual tributary inflow to the lower Colorado River, Hoover Dam to Mexico

    USGS Publications Warehouse

    Owen-Joyce, Sandra J.

    1987-01-01

    Estimates of tributary inflow by basin or area and by surface water or groundwater are presented in this report and itemized by subreaches in tabular form. Total estimated average annual tributary inflow to the Colorado River between Hoover Dam and Mexico, excluding the measured tributaries, is 96,000 acre-ft or about 1% of the 7.5 million acre-ft/yr of Colorado River water apportioned to the States in the lower Colorado River basin. About 62% of the tributary inflow originates in Arizona, 30% in California, and 8% in Nevada. Tributary inflow is a small component in the water budget for the river. Most of the quantities of unmeasured tributary inflow were estimated in previous studies and were based on mean annual precipitation for 1931-60. Because mean annual precipitation for 1951-80 did not differ significantly from that of 1931-60, these tributary inflow estimates are assumed to be valid for use in 1984. Measured average annual runoff per unit drainage area on the Bill Williams River has remained the same. Surface water inflow from unmeasured tributaries is infrequent and is not captured in surface reservoirs in any of the States; it flows to the Colorado River between gaging stations. Average annual runoff can be used in a water budget, although in wet years runoff may be large enough to affect the calculation of consumptive use and to be estimated from hydrographs. Estimates of groundwater inflow to the Colorado River valley are based on groundwater recharge estimates in the bordering areas, which have not significantly changed through time. In most areas adjacent to the Colorado River valley, groundwater pumpage is small and pumping has not significantly affected the quantity of groundwater discharged to the Colorado River valley. In some areas where groundwater pumpage exceeds the quantity of groundwater discharge and water levels have declined, the quantity of discharge probably has decreased and groundwater inflow to the Colorado

  17. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were then applied in the model for predicting average hourly global solar radiation at four different locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e., MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, with the use of readily available data (i.e., latitude and longitude of the location) as inputs.

  18. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985

  19. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  20. Calculation of weighted averages approach for the estimation of ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
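
    The weighted-averages calculation itself is simple: each taxon's tolerance value is the abundance-weighted mean of a site pollution score. A sketch with invented abundances and a single composite score follows (the study used six chemical constituents):

```python
import numpy as np

# Rows: sites, columns: macroinvertebrate taxa (hypothetical abundances).
abundance = np.array([[30, 0, 5],
                      [10, 2, 8],
                      [0, 15, 6]], dtype=float)

# A pollution score per site on a 0-10 scale (e.g., scaled from BOD,
# conductivity, nutrients); invented values for illustration.
site_score = np.array([1.0, 4.0, 9.0])

# Weighted-average tolerance value per taxon.
tv = abundance.T @ site_score / abundance.sum(axis=0)
print(np.round(tv, 1))  # low = sensitive, high = tolerant
```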

  1. Planning and Estimation of Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not yet launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D

  2. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
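
    The minimum-length solution of Am = d can be computed with the Moore-Penrose pseudoinverse; the toy averaging matrix below is a three-point smoother standing in for the tooth-specific amelogenesis and sampling kernel:

```python
import numpy as np

# Toy averaging matrix A: each measured value is a moving average of
# three adjacent input values (a stand-in for the enamel maturation and
# sampling kernel, which in practice is tooth-specific).
n = 8
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = 1.0 / 3.0

# Synthetic input signal and its time-averaged, "measured" profile d.
m_true = np.array([0.0, 0.0, 2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
d = A @ m_true

# Minimum-length (minimum-norm) solution m = A^+ d of the
# underdetermined system Am = d.
m_est = np.linalg.pinv(A) @ d
print(np.round(m_est, 2))
```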

  3. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of several types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  4. Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.

    2015-05-01

    A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over some period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which causes the images to move randomly and quickly, and changes in the average refractive index gradient along the path, which cause the images to move vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged Cn^2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong turbulence paths where traditional irradiance-based techniques suffer from saturation effects.

  5. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition.

    PubMed

    Taylor, Brian A; Hwang, Ken-Pin; Hazle, John D; Stafford, R Jason

    2009-03-01

    The authors investigated the performance of the iterative Steiglitz-McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer-Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥5 for echo train lengths (ETLs) ≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo

  6. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition

    PubMed Central

    Taylor, Brian A.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason

    2009-01-01

    The authors investigated the performance of the iterative Steiglitz–McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer–Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR)≥5 for echo train lengths (ETLs)≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo
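
    The core idea of recovering frequencies (chemical shifts) and decay rates (T2*) from a short echo train by fitting poles to the signal can be sketched with ordinary linear prediction; this is a simple relative of, not the authors' implementation of, the Steiglitz–McBride algorithm, and the signal parameters are invented:

```python
import numpy as np

dt = 3.3e-3                 # echo spacing (s), as in the abstract
f_true, t2 = 150.0, 0.030   # one peak: 150 Hz shift, T2* = 30 ms
n = np.arange(16)
s = np.exp((2j * np.pi * f_true - 1.0 / t2) * n * dt)  # noiseless echoes

# One-pole linear prediction: s[k] ~= c * s[k-1]; least-squares c.
c = np.vdot(s[:-1], s[1:]) / np.vdot(s[:-1], s[:-1])

# The pole's angle gives the chemical shift, its magnitude the decay.
f_est = np.angle(c) / (2 * np.pi * dt)
t2_est = -dt / np.log(np.abs(c))
print(f"f = {f_est:.1f} Hz, T2* = {t2_est * 1e3:.1f} ms")
```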

  7. A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra

    NASA Astrophysics Data System (ADS)

    Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.

    2015-04-01

    A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the “Windowed, MuLTiple-Peak, averaged-spectrum” or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method, which employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure, which is based upon 6366 modes that we computed using the WMLTP method on the 66 day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle 24 during mid-2010.

  8. Radiometric Approach for Estimating Relative Changes in Intra-Glacier Average Temperature

    NASA Astrophysics Data System (ADS)

    Jezek, K. C.; Johnson, J.; Aksoy, M.

    2012-12-01

    NASA's IceBridge Project uses a suite of airborne instruments to characterize most of the important variables necessary to understand current ice sheet behavior and to predict future changes in ice sheet volume. Derived geophysical quantities include: ice sheet surface elevation; ice sheet thickness; surface accumulation rate; internal layer stratigraphy; ocean bathymetry; basal geology. At present, internal ice sheet temperature is absent from the parameters list, yet temperature is a primary factor in determining the ease at which ice deforms internally and also the rate at which the ice flows across the base. In this paper, we present calculations to show that radiometry may provide clues to relative and perhaps absolute variations in ice sheet internal temperatures. We assume the Debye dielectric dispersion model driven by temperatures estimated using the Robin model to compute radio frequency loss through the ice. We discretely layer the ice sheet to compute local emission, estimate interference effects and also take into account reflectivity at the surface and the base of the ice sheet. At this stage, we ignore scattering in the firn and we also ignore higher frequency dielectric dispersions along with direct current resistivities. We find some sensitivity between the depth-integrated brightness temperature and average internal temperature depending on the ice thickness and surface accumulation rate. Further, we observe that changing from a frozen to a water based ice sheet alters the measured brightness temperature again to a degree depending on the modeled ice sheet configuration. We go on to present SMOS satellite data acquired over Lake Vostok, Antarctica. The SMOS data suggest a relationship between relatively cool brightness temperatures and the location of the lake. We conclude with comments concerning the practicality and advantage of adding radiometry to the IceBridge instrument suite.

  9. Estimate of effective recombination rate and average selection coefficient for HIV in chronic infection

    PubMed Central

    Batorsky, Rebecca; Kearney, Mary F.; Palmer, Sarah E.; Maldarelli, Frank; Rouzine, Igor M.; Coffin, John M.

    2011-01-01

    HIV adaptation to a host in chronic infection is simulated by means of a Monte-Carlo algorithm that includes the evolutionary factors of mutation, positive selection with varying strength among sites, random genetic drift, linkage, and recombination. By comparing two sensitive measures of linkage disequilibrium (LD) and the number of diverse sites measured in simulation to patient data from one-time samples of pol gene obtained by single-genome sequencing from representative untreated patients, we estimate the effective recombination rate and the average selection coefficient to be on the order of 1% per genome per generation (10−5 per base per generation) and 0.5%, respectively. The adaptation rate is twofold higher and fourfold lower than predicted in the absence of recombination and in the limit of very frequent recombination, respectively. The level of LD and the number of diverse sites observed in data also range between the values predicted in simulation for these two limiting cases. These results demonstrate the critical importance of finite population size, linkage, and recombination in HIV evolution. PMID:21436045

  10. 31 CFR 205.23 - What requirements apply to estimates?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What requirements apply to estimates... Treasury-State Agreement § 205.23 What requirements apply to estimates? The following requirements apply when we and a State negotiate a mutually agreed upon funds transfer procedure based on an estimate...

  11. Homology-based prediction of interactions between proteins using Averaged One-Dependence Estimators

    PubMed Central

    2014-01-01

    Background Identification of protein-protein interactions (PPIs) is essential for a better understanding of biological processes, pathways and functions. However, experimental identification of the complete set of PPIs in a cell/organism (“an interactome”) is still a difficult task. To circumvent limitations of current high-throughput experimental techniques, it is necessary to develop high-performance computational methods for predicting PPIs. Results In this article, we propose a new computational method to predict interaction between a given pair of protein sequences using features derived from known homologous PPIs. The proposed method is capable of predicting interaction between two proteins (of unknown structure) using Averaged One-Dependence Estimators (AODE) and three features calculated for the protein pair: (a) sequence similarities to a known interacting protein pair (FSeq), (b) statistical propensities of domain pairs observed in interacting proteins (FDom) and (c) a sum of edge weights along the shortest path between homologous proteins in a PPI network (FNet). Feature vectors were defined to lie in a half-space of the symmetrical high-dimensional feature space to make them independent of the protein order. The predictability of the method was assessed by a 10-fold cross validation on a recently created human PPI dataset with randomly sampled negative data, and the best model achieved an Area Under the Curve of 0.79 (pAUC0.5% = 0.16). In addition, the AODE trained on all three features (named PSOPIA) showed better prediction performance on a separate independent data set than a recently reported homology-based method. Conclusions Our results suggest that FNet, a feature representing proximity in a known PPI network between two proteins that are homologous to a target protein pair, contributes to the prediction of whether the target proteins interact or not. PSOPIA will help identify novel PPIs and estimate complete PPI networks. The method

  12. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  13. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
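
    For reference, the time-averaged elevated point-source concentration that such measurement models build on is the standard Gaussian plume formula; the sketch below uses fixed dispersion parameters and invented inputs rather than the fluctuating-plume statistics of Gifford (1959):

```python
import numpy as np

def gaussian_plume(q, u, h, y, z, sigma_y, sigma_z):
    """Time-averaged concentration (g/m^3) from an elevated point source.

    q: emission rate (g/s), u: wind speed (m/s), h: effective source
    height (m), y, z: crosswind and vertical receptor coordinates (m),
    sigma_y, sigma_z: dispersion parameters (m) at the receptor's
    downwind distance. Includes the ground-reflection image term.
    """
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline value for a 100 g/s source at 50 m height.
print(f"{gaussian_plume(100.0, 5.0, 50.0, 0.0, 0.0, 80.0, 40.0):.2e} g/m^3")
```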

  14. Sharp spherically averaged Strichartz estimates for the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Guo, Zihua

    2016-05-01

    We prove generalized Strichartz estimates with weaker angular integrability for the Schrödinger equation. Our estimates are sharp except for some endpoints. We then apply these new estimates to prove scattering for the 3D Zakharov system with small data in the energy space with low angular regularity. Our results improve those obtained recently in Guo Z. et al. (2014, Generalized Strichartz estimates and scattering for 3D Zakharov system, Commun. Math. Phys. 331, 239–59).

  15. The Average Distance between Item Values: A Novel Approach for Estimating Internal Consistency

    ERIC Educational Resources Information Center

    Sturman, Edward D.; Cribbie, Robert A.; Flett, Gordon L.

    2009-01-01

    This article presents a method for assessing the internal consistency of scales that works equally well with short and long scales, namely, the average proportional distance. The method provides information on the average distance between item scores for a particular scale. In this article, we sought to demonstrate how this relatively simple…
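
    Under a straightforward reading of the description, the statistic is the mean absolute distance between item scores expressed as a proportion of the response range; the sketch below follows that reading (the authors' exact scaling may differ), with random responses as placeholder data:

```python
import numpy as np
from itertools import combinations

def average_proportional_distance(scores, scale_range):
    """Mean absolute difference between all item pairs, per respondent,
    expressed as a proportion of the possible response range."""
    n_items = scores.shape[1]
    dists = [np.abs(scores[:, i] - scores[:, j])
             for i, j in combinations(range(n_items), 2)]
    return np.mean(dists) / scale_range

# Five respondents answering a 4-item scale on a 1-7 Likert range.
rng = np.random.default_rng(1)
x = rng.integers(1, 8, size=(5, 4))
print(f"APD: {average_proportional_distance(x, 6):.3f}")  # lower = more consistent
```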

  16. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2012) (a)...

  17. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2012) (a)...

  18. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (FEB 2012) (a)...

  19. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2006) (a)...

  20. Estimates of Adequate School Spending by State Based on National Average Service Levels.

    ERIC Educational Resources Information Center

    Miner, Jerry

    1983-01-01

    Proposes a method for estimating expenditures per student needed to provide educational adequacy in each state. Illustrates the method using U.S., Arkansas, New York, Texas, and Washington State data, covering instruction, special needs, operations and maintenance, administration, and other costs. Estimates ratios of "adequate" to actual spending…

  1. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
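
    The sensitivity arises because the mass transfer velocity is nonlinear in wind speed, so the flux computed from an averaged wind differs from the average of instantaneous fluxes. The toy demonstration below assumes a quadratic (Wanninkhof-type) transfer velocity and synthetic winds:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 20-minute wind speeds over a month (Rayleigh-like variability).
u = rng.rayleigh(scale=6.0, size=30 * 72)  # m/s

# Transfer velocity proportional to u^2 (Wanninkhof-type). Flux is k times
# a concentration difference, held constant here, so it cancels in the
# relative bias.
def k(w):
    return 0.31 * w ** 2

flux_instant = np.mean(k(u))   # average of instantaneous fluxes
flux_monthly = k(np.mean(u))   # flux from the monthly mean wind

print(f"bias from monthly averaging: "
      f"{100 * (flux_instant - flux_monthly) / flux_instant:.0f}% low")
```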

  2. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGESBeta

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  3. Average fetal depth in utero: data for estimation of fetal absorbed radiation dose

    SciTech Connect

    Ragozzino, M.W.; Breckle, R.; Hill, L.M.; Gray, J.E.

    1986-02-01

    To estimate fetal absorbed dose from radiographic examinations, the depth from the anterior maternal surface to the midline of the fetal skull and abdomen was measured by ultrasound in 97 pregnant women. The relationships between fetal depth, fetal presentation, and maternal parameters of height, weight, anteroposterior (AP) thickness, gestational age, placental location, and bladder volume were analyzed. Maternal AP thickness (MAP) can be estimated from gestational age, maternal height, and maternal weight. Fetal midskull and abdominal depths were nearly equal. Fetal depth normalized to MAP was independent or nearly independent of maternal parameters and fetal presentation. These data enable a reasonable estimation of absorbed dose to fetal brain, abdomen, and whole body.

  4. Estimation of the average surface heat flux over an inhomogeneous terrain from the vertical velocity variance

    NASA Technical Reports Server (NTRS)

    Eilts, M. D.; Sundara-Rajan, A.; Evans, R. J.

    1987-01-01

    An indirect method of estimating the surface heat flux from observations of vertical velocity variance at the lower mid-levels of the convective atmospheric boundary layer is described. Comparison of surface heat flux estimates with those from boundary-layer heating rates is good, and this method seems to be especially suitable for inhomogeneous terrain for which the surface-layer profile method cannot be used.

  5. Another Failure to Replicate Lynn's Estimate of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    In his comment on our literature review of data on the performance of sub-Saharan Africans on Raven's Progressive Matrices, Lynn (this issue) criticized our selection of samples of primary and secondary school students. On the basis of the samples he deemed representative, Lynn concluded that the average IQ of sub-Saharan Africans stands at 67…

  6. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...

  7. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  8. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) by applying an N-point moving average to the radial or brachial artery waveform has been reported. We investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using this method and the C-SBP measured invasively with a catheter. C-SBP was calculated by applying an N/6 moving average to the scaled right brachial artery pressure waveforms acquired with the VaSera VS-1500, and was compared with the invasively measured C-SBP obtained within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), invasively measured C-SBP was significantly lower than right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated with the N/6 moving average from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not differ significantly (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method, applied to the non-invasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP, was an accurate and convenient method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which makes the method highly feasible in routine clinical practice. PMID:25693855
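
    A minimal sketch of the filtering step described above, assuming a waveform sampled at fs Hz and already calibrated to cuff pressures; the function name and toy waveform are hypothetical.

```python
import numpy as np

def npma_csbp(brachial_wave, fs):
    """Estimate central SBP with the N-point moving average described above:
    smooth the cuff-calibrated brachial waveform with an N/6-point window
    (N = sampling frequency in Hz) and take the peak of the smoothed wave."""
    n = max(1, round(fs / 6))                 # N/6-point window
    smoothed = np.convolve(brachial_wave, np.ones(n) / n, mode="same")
    return smoothed.max()

# Toy waveform: a peaked pulse at 1 kHz; real use needs a measured waveform.
t = np.linspace(0, 1, 1000)
wave = 90 + 50 * np.exp(-((t - 0.3) / 0.05) ** 2)
print(f"estimated C-SBP: {npma_csbp(wave, fs=1000):.1f} mm Hg")
```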

  9. Does the orbit-averaged theory require a scale separation between periodic orbit size and perturbation correlation length?

    SciTech Connect

    Zhang, Wenlu; Lin, Zhihong

    2013-10-15

    Using the canonical perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions and verifies the widely accepted notion that orbit averaging effects greatly reduce the microturbulent transport of energetic particles in a tokamak. Therefore, a recent claim [Hauff and Jenko, Phys. Rev. Lett. 102, 075004 (2009); Jenko et al., ibid. 107, 239502 (2011)] stating that the orbit-averaged theory requires a scale separation between equilibrium orbit size and perturbation correlation length is erroneous.

  10. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  11. The effect of antagonistic pleiotropy on the estimation of the average coefficient of dominance of deleterious mutations.

    PubMed

    Fernández, B; García-Dorado, A; Caballero, A

    2005-12-01

    We investigate the impact of antagonistic pleiotropy on the most widely used methods of estimation of the average coefficient of dominance of deleterious mutations from segregating populations. A proportion of the deleterious mutations affecting a given studied fitness component are assumed to have an advantageous effect on another one, generating overdominance in global fitness. Using diffusion approximations and transition matrix methods, we obtain the distribution of gene frequencies for nonpleiotropic and pleiotropic mutations in populations at the mutation-selection-drift balance. From these distributions we build homozygous and heterozygous chromosomes and assess the behavior of the estimators of dominance. A very small number of deleterious mutations with antagonistic pleiotropy produces substantial increases in the estimate of the average degree of dominance of mutations affecting the fitness component under study. For example, estimates are increased three- to fivefold when 2% of segregating loci are overdominant for fitness. In contrast, strengthening pleiotropy, where pleiotropic effects are assumed to be also deleterious, has little effect on the estimates of the average degree of dominance, supporting previous results. The antagonistic pleiotropy model considered, applied under mutational parameters described in the literature, produces patterns for the distribution of chromosomal viabilities, levels of genetic variance, and homozygous mutation load generally consistent with those observed empirically for viability in Drosophila melanogaster. PMID:16118193

  12. Areally averaged estimates of surface heat flux from ARM field studies

    SciTech Connect

    Coulter, R.L.; Martin, T.J.; Cook, D.R.

    1993-08-01

    The determination of areally averaged surface fluxes is a problem of fundamental interest to the Atmospheric Radiation Measurement (ARM) program. The Cloud And Radiation Testbed (CART) sites central to the ARM program will provide high-quality data for input to and verification of General Circulation Models (GCMs). The extension of several point measurements of surface fluxes within the heterogeneous CART sites to an accurate representation of the areally averaged surface fluxes is not straightforward. Two field studies designed to investigate these problems, implemented by ARM science team members, took place near Boardman, Oregon, during June of 1991 and 1992. The site was chosen to provide strong contrasts in surface moisture while minimizing the differences in topography. The region consists of a substantial dry steppe (desert) upwind of an extensive area of heavily irrigated farm land, 15 km in width and divided into 800-m-diameter circular fields in a close packed array, in which wheat, alfalfa, corn, or potatoes were grown. This region provides marked contrasts, not only on the scale of farm-desert (10--20 km) but also within the farm (0.1--1 km), because different crops transpire at different rates, and the pivoting irrigation arms provide an ever-changing pattern of heavy surface moisture throughout the farm area. This paper primarily discusses results from the 1992 field study.

  13. Estimation of heat load in waste tanks using average vapor space temperatures

    SciTech Connect

    Crowe, R.D.; Kummerer, M.; Postma, A.K.

    1993-12-01

    This report describes a method for estimating the total heat load in a high-level waste tank with passive ventilation. This method relates the total heat load in the tank to the vapor space temperature and the depth of waste in the tank: Q_total = C_f (T_vapor - T_air), where C_f = conversion factor = (R_o * k_soil * area) / (z_tank - z_surface); R_o = ratio of total heat load to heat out the top of the tank (a function of waste height); area = cross-sectional area of the tank; k_soil = thermal conductivity of soil; (z_tank - z_surface) = effective depth of soil covering the top of the tank; and (T_vapor - T_air) = mean temperature difference between the vapor space and the ambient air at the surface. Three terms -- depth, area and ratio -- can be developed from geometrical considerations. The temperature difference is measured for each individual tank. The remaining term, the thermal conductivity, is estimated from the time-dependent component of the temperature signals coming from the periodic oscillations in the vapor space temperatures. Finally, using this equation, the total heat load for each of the ferrocyanide Watch List tanks is estimated. This provides a consistent way to rank ferrocyanide tanks according to heat load.
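
    A direct transcription of this relation; the numbers below are illustrative stand-ins, not data for any actual tank.

```python
import math

def tank_heat_load(r_o, k_soil, area_m2, soil_depth_m, t_vapor, t_air):
    """Q_total = C_f * (T_vapor - T_air) with
    C_f = (R_o * k_soil * area) / (z_tank - z_surface)."""
    c_f = (r_o * k_soil * area_m2) / soil_depth_m    # W/K
    return c_f * (t_vapor - t_air)                   # W

# Illustrative values: 23-m-diameter tank, 2 m of soil cover,
# k_soil ~ 1 W/(m K), R_o ~ 2, and a 15 K vapor-space/air difference.
q = tank_heat_load(r_o=2.0, k_soil=1.0, area_m2=math.pi * 11.5 ** 2,
                   soil_depth_m=2.0, t_vapor=30.0, t_air=15.0)
print(f"estimated total heat load: {q / 1000:.1f} kW")
```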

  14. Estimation of the diffuse radiation fraction for hourly, daily and monthly-average global radiation

    NASA Astrophysics Data System (ADS)

    Erbs, D. G.; Klein, S. A.; Duffie, J. A.

    1982-01-01

    Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index. This relationship is compared to the relationship established by Orgill and Hollands (1977) and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.
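
    The hourly correlation from this paper is usually quoted as a piecewise polynomial in the clearness index k_T. The abstract does not print the coefficients; the values below are the form commonly attributed to Erbs et al. (1982) and should be checked against the original before use.

```python
def erbs_diffuse_fraction(kt):
    """Hourly diffuse fraction f_d as a function of clearness index k_T,
    using the piecewise polynomial commonly attributed to Erbs et al. (1982)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

for kt in (0.1, 0.5, 0.9):
    print(f"k_T = {kt:.1f}  ->  diffuse fraction = {erbs_diffuse_fraction(kt):.3f}")
```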

  15. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
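
    The resampling logic can be sketched as follows, with a hypothetical reference population and the 10%-accuracy / 80%-confidence criterion quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_required(reference_lengths, stat=np.mean, rel_err=0.10,
               confidence=0.80, n_boot=1000):
    """Smallest sample size whose bootstrap estimates of `stat` fall within
    +/- rel_err of the reference value with the requested confidence.
    Mirrors the resampling logic described above; not the paper's exact code."""
    target = stat(reference_lengths)
    for n in range(25, len(reference_lengths), 25):
        boots = np.array([stat(rng.choice(reference_lengths, size=n))
                          for _ in range(n_boot)])
        if np.mean(np.abs(boots - target) <= rel_err * abs(target)) >= confidence:
            return n
    return None

# Hypothetical reference population of fish lengths (cm).
lengths = rng.gamma(shape=9.0, scale=4.0, size=5000)
print("fish needed to estimate mean length:", n_required(lengths))
```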

  16. Generalized propensity score for estimating the average treatment effect of multiple treatments.

    PubMed

    Feng, Ping; Zhou, Xiao-Hua; Zou, Qing-Ming; Fan, Ming-Yu; Li, Xiao-Song

    2012-03-30

    The propensity score method is widely used in clinical studies to estimate the effect of a treatment with two levels on patient's outcomes. However, due to the complexity of many diseases, an effective treatment often involves multiple components. For example, in the practice of Traditional Chinese Medicine (TCM), an effective treatment may include multiple components, e.g. Chinese herbs, acupuncture, and massage therapy. In clinical trials involving TCM, patients could be randomly assigned to either the treatment or control group, but they or their doctors may make different choices about which treatment component to use. As a result, treatment components are not randomly assigned. Rosenbaum and Rubin proposed the propensity score method for binary treatments, and Imbens extended their work to multiple treatments. These authors defined the generalized propensity score as the conditional probability of receiving a particular level of the treatment given the pre-treatment variables. In the present work, we adopted this approach and developed a statistical methodology based on the generalized propensity score in order to estimate treatment effects in the case of multiple treatments. Two methods were discussed and compared: propensity score regression adjustment and propensity score weighting. We used these methods to assess the relative effectiveness of individual treatments in the multiple-treatment IMPACT clinical trial. The results reveal that both methods perform well when the sample size is moderate or large. PMID:21351291
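
    A minimal sketch of the weighting estimator for more than two treatment levels, assuming scikit-learn is available; the data layout and function name are hypothetical, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_means(X, treatment, outcome):
    """Propensity-score weighting for multiple treatments: fit a multinomial
    model for P(T = t | X) (the generalized propensity score), then compute
    weighted outcome means, weighting each subject by 1 / GPS of the
    treatment actually received."""
    model = LogisticRegression(max_iter=1000).fit(X, treatment)
    gps = model.predict_proba(X)                      # n x n_treatments
    means = {}
    for j, t in enumerate(model.classes_):
        w = (treatment == t) / gps[:, j]              # inverse-probability weights
        means[t] = np.sum(w * outcome) / np.sum(w)    # Hajek-style estimate
    return means
```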

  17. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  18. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155

  19. Estimation of Average Shear Strength Parameters along the Slip Surface Based on the Shear Strength Diagram of Landslide Soils

    NASA Astrophysics Data System (ADS)

    Kimura, Sho; Gibo, Seiichi; Nakamura, Shinya

    The average shear strength parameters along the slip surface (c´, φ´) of four Shimajiri-mudstone landslides with different slide patterns were obtained by two methods: an estimation method using the shear strength diagram of landslide soils, and an ordinary method using the results of laboratory shear tests of soil samples. The difference between the two average shear strengths was small for the landslides where the residual and fractured-mudstone peak strengths had been mobilized, and the two methods produced close agreement for the landslides where the residual and fully softened strengths had been mobilized. Although the determination of appropriate c´, φ´ should, as a rule, be based on the measured shear strength of slip-surface soil, when this is difficult due to practical restrictions, c´, φ´ can be effectively estimated using the shear strength diagram.

  20. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    PubMed

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  2. Application of Network-averaged Teleseismic P-wave Spectra to Seismic Yield Estimation of Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Murphy, J. R.; Barker, B. W.

    A set of procedures is described for estimating network-averaged teleseismic P-wave spectra for underground nuclear explosions and for analytically inverting these spectra to obtain estimates of mb/yield relations and individual yields for explosions at previously uncalibrated test sites. These procedures are then applied to the analyses of explosions at the former Soviet test sites at Shagan River, Degelen Mountain, Novaya Zemlya and Azgir, as well as at the French Sahara, U.S. Amchitka and Chinese Lop Nor test sites. It is demonstrated that the resulting seismic estimates of explosion yield and mb/yield relations are remarkably consistent with a variety of other available information for a number of these test sites. These results lead us to conclude that the network-averaged teleseismic P-wave spectra provide considerably more diagnostic information regarding the explosion seismic source than do the corresponding narrowband magnitude measures such as mb, Ms and mb(Lg), and, therefore, that they are to be preferred for applications to seismic yield estimation for explosions at previously uncalibrated test sites.

  3. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    PubMed

    Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest

    2009-12-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.

  4. Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching.

    PubMed

    Balzer, Laura B; Petersen, Maya L; van der Laan, Mark J

    2016-09-20

    In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27087478

  5. Estimating average dissolved-solids yield from basins drained by ephemeral and intermittent streams, Green River basin, Wyoming

    USGS Publications Warehouse

    DeLong, L.L.; Wells, D.K.

    1988-01-01

    A method was developed to determine the average dissolved-solids yield contributed by small basins characterized by ephemeral and intermittent streams in the Green River basin in Wyoming. The method is different from that commonly used for perennial streams. Estimates of dissolved-solids discharge at eight water quality sampling stations operated by the U.S. Geological Survey in cooperation with the U.S. Bureau of Land Management range from less than 2 to 95 tons/day. The dissolved-solids yield upstream from the sampling stations ranges from 0.023 to 0.107 tons/day/sq mi. However, estimates of dissolved solids yield contributed by drainage areas between paired stations on Bitter, Salt Wells, Little Muddy, and Muddy creeks, based on dissolved-solids discharge versus drainage area, range only from 0.081 to 0.092 tons/day/sq mi. (USGS)

  6. Estimated average annual ground-water pumpage in the Portland Basin, Oregon and Washington 1987-88

    USGS Publications Warehouse

    Collins, C.A.; Broad, T.M.

    1993-01-01

    Data for ground-water pumpage were collected during an inventory of wells in 1987-88 in the Portland Basin located in northwestern Oregon and southwestern Washington. Estimates of annual ground-water pumpage were made for the three major categories of use: public supply, industry, and irrigation. A large rapidly expanding metropolitan area is situated within the Portland Basin, along with several large industries that use significant quantities of ground water. The estimated total average annual ground-water pumpage for 1987 was about 127,800 acre-feet. Of this quantity, about 50 percent was pumped for industrial use, about 40 percent for public supply and about 10 percent for irrigation. Domestic use from individual wells is a small part of the total and is not included.

  7. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. The method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validation of the technique in semi-arid catchments; it has already been verified as effective in humid settings.

  8. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    PubMed

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-01

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose electricity generation changes according to the monthly average daily solar radiation (MADSR). The MADSR distribution in South Korea shows very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, a hybrid methodology combining CBR with an artificial neural network, multiregression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for owners or construction managers in charge of determining whether to introduce a PV system and where to install it. It would also allow contractors in a competitive bidding process to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life-cycle perspective. PMID:23548030

  9. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. PMID:27494960

  10. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances and deposit in sensitive regions, potentially harming local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  11. Is the Whole Really More than the Sum of Its Parts? Estimates of Average Size and Orientation Are Susceptible to Object Substitution Masking

    ERIC Educational Resources Information Center

    Jacoby, Oscar; Kamke, Marc R.; Mattingley, Jason B.

    2013-01-01

    We have a remarkable ability to accurately estimate average featural information across groups of objects, such as their average size or orientation. It has been suggested that, unlike individual object processing, this process of "feature averaging" occurs automatically and relatively early in the course of perceptual processing, without the need…

  12. Estimation of the average exchanges in momentum and latent heat between the atmosphere and the oceans with Seasat observations

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1983-01-01

    Ocean-surface momentum flux and latent heat flux are determined from Seasat-A data from 1978 and compared with ship observations. Momentum flux was measured using the Seasat-A scatterometer system (SASS); latent heat flux, with the scanning multichannel microwave radiometer (SMMR). Ship measurements were quality-selected and averaged to increase their reliability. The fluxes were computed using a bulk parameterization technique. It is found that although SASS effectively measures momentum flux, variations in atmospheric stability and sea-surface temperature cause deviations which are not accounted for by the present data-processing algorithm. The SMMR latent-heat-flux algorithm, while needing refinement, is shown to give estimates to within 35 W/sq m in its present form, which removes systematic error and uses an empirically determined transfer coefficient.
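
    The bulk parameterization referred to here has the standard form sketched below; the transfer coefficients are generic neutral-stability values, which is precisely the simplification the abstract says leads to deviations under varying stability and sea-surface temperature.

```python
RHO_AIR = 1.2      # air density, kg/m^3
LV = 2.45e6        # latent heat of vaporization, J/kg

def bulk_fluxes(u10, q_sea, q_air, cd=1.3e-3, ce=1.2e-3):
    """Standard bulk formulas:
    momentum flux  tau = rho * C_D * U^2
    latent heat    LE  = rho * L_v * C_E * U * (q_s - q_a)
    cd and ce are illustrative neutral transfer coefficients."""
    tau = RHO_AIR * cd * u10**2
    le = RHO_AIR * LV * ce * u10 * (q_sea - q_air)
    return tau, le

# Example: 8 m/s wind, sea/air specific humidities of 18 and 14 g/kg.
tau, le = bulk_fluxes(u10=8.0, q_sea=0.018, q_air=0.014)
print(f"tau = {tau:.3f} N/m^2, LE = {le:.0f} W/m^2")
```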

  13. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data.
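
    A compact sketch of the four-stage sequence, assuming numpy, scipy, and PyWavelets are available; the segment size, wavelet choice, and threshold rule are illustrative placeholders, not the paper's exact settings. The shaft angle is assumed monotonic and starting at zero.

```python
import numpy as np
import pywt
from scipy.interpolate import interp1d
from scipy.signal import hilbert

def envelope_order_spectrum(vib, shaft_angle, revs_per_segment=1,
                            samples_per_rev=1024, wavelet="db4", level=4):
    """Four stages: (1) resample the vibration signal onto uniform shaft-angle
    increments, (2) synchronously average short angle segments, (3) wavelet
    de-noise the average, and (4) take the order spectrum of its envelope."""
    # 1) time domain -> angle domain resampling
    angles = np.arange(0, shaft_angle[-1], 2 * np.pi / samples_per_rev)
    vib_ang = interp1d(shaft_angle, vib)(angles)
    # 2) short-duration angle-synchronous average
    seg = revs_per_segment * samples_per_rev
    n_seg = len(vib_ang) // seg
    avg = vib_ang[: n_seg * seg].reshape(n_seg, seg).mean(axis=0)
    # 3) wavelet de-noised estimate (soft universal threshold)
    coeffs = pywt.wavedec(avg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(avg)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    den = pywt.waverec(coeffs, wavelet)[: len(avg)]
    # 4) envelope (Hilbert transform) + order spectrum
    env = np.abs(hilbert(den - den.mean()))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    orders = np.fft.rfftfreq(len(env), d=1.0 / samples_per_rev)
    return orders, spec   # peaks at bearing fault orders and their harmonics
```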

  14. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation supplied an additional 1.3 mm of water per occurrence, with a frequency of 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  15. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonant frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide a conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by differences in body shape. PMID:23202273

  16. Estimated water requirements for gold heap-leach operations

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (requires water in potable form and for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as for agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.

  17. Calcium requirement: new estimations for men and women by cross-sectional statistical analyses of metabolic calcium balance data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To provide new estimates of the average Ca requirement for men and women, we determined the dietary Ca intake required to maintain neutral Ca balance. Ca balance data (Ca intake - [fecal Ca + urinary Ca]) were collected from 154 subjects (females: n=73, weight=77.1±18.5 kg, age=47.0±18.5 y [range: 2...

  18. Estimates of galactic cosmic ray shielding requirements during solar minimum

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.

    1990-01-01

    Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.

  19. Estimation of Rate of Strain Magnitude and Average Viscosity in Turbulent Flow of Shear Thinning and Yield Stress Fluids

    NASA Astrophysics Data System (ADS)

    Sawko, Robert; Thompson, Chris P.

    2010-09-01

    This paper presents a series of numerical simulations of non-Newtonian fluids in high Reynolds number flows in circular pipes. The fluids studied in the computations have shear-thinning and yield stress properties. Turbulence is described using the Reynolds-Averaged Navier-Stokes (RANS) equations with the Boussinesq eddy viscosity hypothesis. The evaluation of standard, two-equation models led to some observations regarding the order of magnitude as well as probabilistic information about the rate of strain. We argue that an accurate estimate of the rate of strain tensor is essential in capturing important flow features. It is first recognised that an apparent viscosity comprises two flow-dependent components: one originating from rheology and the other from the turbulence model. To establish the relative significance of the terms involved, an order of magnitude analysis has been performed. The main observation supporting further discussion is that in high Reynolds number regimes the magnitudes of fluctuating rates of strain and fluctuating vorticity dominate the magnitudes of their respective averages. Since these quantities are included in the rheological law, the values of viscosity obtained from the fluctuating and mean velocity fields are different. Validation against Direct Numerical Simulation data shows at least an order of magnitude discrepancy in some regions of the flow. Moreover, the predictions of the probabilistic analysis show a favourable agreement with statistics computed from DNS data. A variety of experimental, as well as computational data has been collected. Data come from the latest experiments by Escudier et al. [1], DNS from Rudman et al. [2] and zeroth-order turbulence models of Pinho [3]. The fluid rheologies are described by standard power-law and Herschel-Bulkley models which make them suitable for steady state calculations of shear flows. Suitable regularisations are utilised to secure numerical stability. Two new models have been

  20. Estimating resource costs of compliance with EU WFD ecological status requirements at the river basin scale

    NASA Astrophysics Data System (ADS)

    Riegels, Niels; Jensen, Roar; Bensasson, Lisa; Banou, Stella; Møller, Flemming; Bauer-Gottwein, Peter

    2011-01-01

    Resource costs of meeting EU WFD ecological status requirements at the river basin scale are estimated by comparing net benefits of water use given ecological status constraints to baseline water use values. Resource costs are interpreted as opportunity costs of water use arising from water scarcity. An optimization approach is used to identify economically efficient ways to meet WFD requirements. The approach is implemented using a river basin simulation model coupled to an economic post-processor; the simulation model and post-processor are run from a central controller that iterates until an allocation is found that maximizes net benefits given WFD requirements. Water use values are estimated for urban/domestic, agricultural, industrial, livestock, and tourism water users. Ecological status is estimated using metrics that relate average monthly river flow volumes to the natural hydrologic regime. Ecological status is only estimated with respect to hydrologic regime; other indicators are ignored in this analysis. The decision variable in the optimization is the price of water, which is used to vary demands using consumer and producer water demand functions. The price-based optimization approach minimizes the number of decision variables in the optimization problem and provides guidance for pricing policies that meet WFD objectives. Results from a real-world application in northern Greece show the suitability of the approach for use in complex, water-stressed basins. The impact of uncertain input values on model outcomes is estimated using the Info-Gap decision analysis framework.
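
    Because price is the only decision variable, the iterate-until-feasible controller reduces to a one-dimensional search. The sketch below uses bisection and a hypothetical aggregate demand curve in place of the river basin simulation model.

```python
def solve_wfd_price(demand, min_flow, inflow, p_lo=0.0, p_hi=10.0, tol=1e-4):
    """Find the lowest water price at which the abstraction implied by the
    demand curve leaves at least the ecological minimum flow in the river.
    `demand(p)` is a hypothetical aggregate demand function (m^3/s),
    decreasing in price; a real application would call the basin model."""
    while p_hi - p_lo > tol:
        p = 0.5 * (p_lo + p_hi)
        residual_flow = inflow - demand(p)
        if residual_flow >= min_flow:
            p_hi = p      # constraint met; try a lower price
        else:
            p_lo = p      # constraint violated; price must rise
    return p_hi

# Example: linear demand, 20 m^3/s inflow, 8 m^3/s ecological requirement.
price = solve_wfd_price(demand=lambda p: max(0.0, 15.0 - 2.0 * p),
                        min_flow=8.0, inflow=20.0)
print(f"lowest price meeting the flow constraint: {price:.3f}")
```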

  1. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in the Kallar watershed. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is lost through the processes of sheet and rill erosion. Land degradation and the resulting soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (the Revised Universal Soil Loss Equation, RUSLE), can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers for the rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t/ha/y. Based on the results, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors were grouped into two categories, and both soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP) were computed. Since C and P are factors that can be controlled, soil loss can be greatly reduced through management and conservation measures.
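
    Once the GIS layers are built, the factor product is computed per grid cell; the sketch below shows the susceptibility/hazard grouping described above, with illustrative values that are not taken from the Kallar watershed layers.

```python
def rusle_soil_loss(r, k, ls, c=None, p=None):
    """Average annual soil loss A (t/ha/y) from the RUSLE factor product.
    With c and p omitted this is the susceptibility A = R*K*LS; with them
    included, the hazard A = R*K*LS*C*P."""
    a = r * k * ls
    if c is not None:
        a *= c
    if p is not None:
        a *= p
    return a

# Illustrative per-cell values only.
print(rusle_soil_loss(r=620, k=0.28, ls=2.1))                 # susceptibility
print(rusle_soil_loss(r=620, k=0.28, ls=2.1, c=0.3, p=0.8))   # hazard
```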

  2. Estimates of the best approximations of periodic functions by trigonometric polynomials in terms of averaged differences and the multidimensional Jackson's theorem

    NASA Astrophysics Data System (ADS)

    Pustovoitov, N. N.

    1997-10-01

    In the first section, the best approximations of periodic functions of one real variable by trigonometric polynomials are studied. Estimates of these approximations in terms of averaged differences are obtained. A multidimensional generalization of these estimates is presented in the second section. As a consequence, the multidimensional Jackson theorem is proved.

  3. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanisms to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In the context of multicore jobs this presents a particular problem, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimates given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint, so it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm for history-based prediction is developed. The aim is to learn over time how job runtime and memory consumption evolve under changes in experiment conditions and software versions. It is shown that the estimates can be notably improved if experiment conditions are taken into account.
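
    One way to realize such a history-based prediction, assuming scikit-learn is available; the feature set and the synthetic training table below are stand-ins for the real LHCb job history, not the paper's actual features or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical job-history table: features of the kind discussed above
# (number of events, application version id, number of input files) and the
# observed runtime in seconds as the regression target.
rng = np.random.default_rng(2)
n_events = rng.integers(1_000, 100_000, size=5_000)
app_version = rng.integers(0, 5, size=5_000)
n_files = rng.integers(1, 50, size=5_000)
X = np.column_stack([n_events, app_version, n_files])
runtime = 0.02 * n_events * (1 + 0.1 * app_version) + rng.normal(0, 60, 5_000)

model = GradientBoostingRegressor().fit(X, runtime)

# The prediction replaces the fixed default the WMS would otherwise request;
# retraining on a rolling window lets the estimate track changes in
# experiment conditions and software versions.
print(model.predict([[50_000, 3, 10]]))
```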

  4. A Combined Approach for Estimating Health Staff Requirements

    PubMed Central

    FAKHRI, Ali; SEYEDIN, Hesam; DAVIAUD, Emmanuelle

    2014-01-01

    Background: Many studies have been carried out and many methods have been used for estimating health staff requirements in health facilities or systems, each with different advantages and disadvantages. Differences in the extent to which utilization matches needs in different conditions intensify the limitations of each approach when used in isolation. Is the utilization-based approach efficient in a situation of over-servicing? Is it sufficient in a situation of under-utilization? These questions can similarly be asked about the needs-based approach. This study looks for a flexible approach to estimate health staff requirements efficiently under these different conditions. Method: This study was carried out in 2011 in several stages. It was conducted in order to identify the formulas used in the different approaches. The basic formulas used in the utilization-based approach and the needs-based approach were identified and then combined using simple mathematical principles to develop a new formula. Finally, the new formula was piloted by assessing family health staff requirements in the health posts in Kashan City, Iran. Results: Comparison of the two formulas showed that the basic formulas used in the two approaches can be combined by including the variable 'Coverage'. The pilot study confirmed the role of coverage in the suggested combined approach. Conclusions: The variables in the developed formula allow combining needs-based, target-based and utilization-based approaches. A limitation of this approach is applicability to a given service package. PMID:26060687
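
    The abstract does not print the combined formula, so the sketch below is an illustrative reconstruction of its likely shape: a needs-based workload (population times per-capita service need) scaled by the 'Coverage' variable that links need to actual utilization, divided by a staff member's available working time. All parameter names and numbers are hypothetical.

```python
def staff_required(population, coverage, services_per_person_year,
                   minutes_per_service, minutes_available_per_staff_year):
    """Illustrative reconstruction of a combined needs/utilization formula:
    workload = population * coverage * per-capita service need * time per
    service, divided by annual working minutes per staff member. Not the
    authors' exact formula."""
    workload_minutes = (population * coverage * services_per_person_year
                        * minutes_per_service)
    return workload_minutes / minutes_available_per_staff_year

# Example: 20,000 people, 70% coverage, 3 visits/person/year, 15-minute
# visits, ~96,000 working minutes per staff member per year -> ~6.6 staff.
print(round(staff_required(20_000, 0.7, 3, 15, 96_000), 1))
```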

  5. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    SciTech Connect

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition had been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16cm CTDI phantoms using a 0.6cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average dose using both the red and green pixel color calibration curves were within 10% agreement of the planar average dose estimated using the Dw method of film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
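
    For reference, the Dw benchmark used above is simply the CTDIw weighting applied to the five dose-bore readings; the numbers below are illustrative, not measurements from the study.

```python
import statistics

def weighted_planar_dose(peripheral, central):
    """Dw as described above: the standard CTDIw weighting applied to C-arm
    CBCT dose-bore readings, Dw = (2/3) * mean(peripheral) + (1/3) * central."""
    return (2.0 / 3.0) * statistics.fmean(peripheral) + (1.0 / 3.0) * central

# Illustrative ionization-chamber readings (mGy) at the four peripheral
# bores and the central bore of stacked 16-cm CTDI phantoms.
print(round(weighted_planar_dose([4.1, 3.8, 2.9, 3.5], 3.0), 2))
```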

  6. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5 billion year old rocks, and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planets, and interfered with the origination of life, life must have originated in the time interval between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the time that occurred between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10 to the 34th and 2 x 10 to the 35th were able to vaporize the oceans, using the most probable impact flux, it is found that the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that life may have originated more than once.

  7. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy. PMID:24420554
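    Since the NPMA is a uniform low-pass filter whose length is tied to the acquisition sampling frequency, the estimator described in this abstract can be written compactly. The following is a minimal sketch, assuming a calibrated waveform array and the reported N/6 denominator; names and defaults are illustrative, not from the study:

    ```python
    import numpy as np

    def npma_central_sap(waveform, fs, denominator=6):
        """Estimate central aortic systolic pressure from a calibrated
        brachial pressure waveform with an N-point moving average.

        N is the acquisition sampling frequency (samples/s); the filter
        length is N/denominator, with N/6 the optimal denominator reported
        above for brachial waves (N/4 for radial waves).
        """
        waveform = np.asarray(waveform, dtype=float)
        n_points = max(1, int(round(fs / denominator)))
        kernel = np.ones(n_points) / n_points          # uniform low-pass
        smoothed = np.convolve(waveform, kernel, mode="same")
        return float(smoothed.max())                   # peak = systolic
    ```

    At a 256 Hz acquisition rate, for example, the filter length would be round(256/6), i.e. about 43 samples.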

  8. Maps to estimate average streamflow and headwater limits for streams in U.S. Army Corps of Engineers, Mobile District, Alabama and adjacent states

    USGS Publications Warehouse

    Nelson, George H., Jr.

    1984-01-01

    U.S. Army Corps of Engineers permits are required for discharges of dredged or fill material downstream from the 'headwaters' of specified streams. The term 'headwaters' is defined as the point of a freshwater (non-tidal) stream above which the average flow is less than 5 cu ft/s. Maps of the Mobile District area showing (1) lines of equal average streamflow, and (2) lines of equal drainage areas required to produce an average flow of 5 cu ft/s are contained in this report. These maps are for use by the Corps of Engineers in their permitting program. (USGS)

  9. Estimation of the path-averaged wind velocity by cross-correlation of the received power and the shift of laser beam centroid

    NASA Astrophysics Data System (ADS)

    Marakasov, Dmitri A.; Tsvyk, Ruvim S.

    2015-11-01

    We consider the problem of estimating the average wind speed on an atmospheric path from measured time series of the average power of laser radiation detected through the receiving aperture and of the position of the centroid of the laser beam image. It is shown that the cross-correlation function of these series has a maximum whose position characterizes the average speed of the cross wind on the path. The dependence of the coordinates and magnitude of this maximum on the size of the receiving aperture and on the distribution of turbulence along the atmospheric path is also examined.
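    The lag-to-velocity step described here can be sketched as follows. The conversion scale that maps the correlation-peak lag to a wind speed depends on the aperture size and the turbulence distribution along the path, so it is left as an assumed input; all names are illustrative:

    ```python
    import numpy as np

    def crosswind_speed(power, centroid, dt, effective_baseline):
        """Locate the peak of the cross-correlation between the received-
        power and beam-centroid series and convert its lag to a cross-wind
        speed. `effective_baseline` (m) is the turbulence-weighted scale
        that maps lag to velocity; it is assumed known here.
        """
        p = np.asarray(power, dtype=float)
        c = np.asarray(centroid, dtype=float)
        p = p - p.mean()
        c = c - c.mean()
        xcorr = np.correlate(p, c, mode="full")
        lag = (int(np.argmax(xcorr)) - (len(c) - 1)) * dt   # seconds
        return np.inf if lag == 0 else effective_baseline / lag
    ```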

  10. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Contractor's policies, procedures, and practices for budgeting and planning controls, and generating...) Flow of work, coordination, and communication; and (5) Budgeting, planning, estimating methods... personnel have sufficient training, experience, and guidance to perform estimating and budgeting tasks...

  11. Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA.

    PubMed

    Wagner, Todd H; Chen, Shuo; Barnett, Paul G

    2003-09-01

    The U.S. Department of Veterans Affairs (VA) maintains discharge abstracts, but these do not include cost information. This article describes the methods the authors used to estimate the costs of VA medical-surgical hospitalizations in fiscal years 1998 to 2000. They estimated a cost regression with 1996 Medicare data restricted to veterans receiving VA care in an earlier year. The regression accounted for approximately 74 percent of the variance in cost-adjusted charges, and it proved to be robust to outliers and the year of input data. The beta coefficients from the cost regression were used to impute costs of VA medical-surgical hospital discharges. The estimated aggregate costs were reconciled with VA budget allocations. In addition to the direct medical costs, their cost estimates include indirect costs and physician services; both of these were allocated in proportion to direct costs. They discuss the method's limitations and application in other health care systems. PMID:15095543
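    A minimal sketch of the imputation-and-reconciliation logic described above, assuming a design matrix of discharge characteristics; the variable names and the single-step proportional reconciliation are illustrative simplifications, not the authors' exact specification:

    ```python
    import numpy as np

    def impute_va_costs(X_medicare, cost_adj_charges, X_va, va_budget_total):
        """Impute VA discharge costs from a Medicare cost regression.

        1) Estimate coefficients on Medicare cost-adjusted charges (OLS);
        2) apply them to VA discharge characteristics;
        3) rescale so aggregate imputed costs match budget allocations.
        """
        X = np.column_stack([np.ones(len(X_medicare)), X_medicare])
        beta, *_ = np.linalg.lstsq(X, cost_adj_charges, rcond=None)
        Xv = np.column_stack([np.ones(len(X_va)), X_va])
        imputed = Xv @ beta
        return imputed * (va_budget_total / imputed.sum())  # reconcile
    ```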

  12. Technical Methods Report: Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs. NCEE 2009-4040

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley

    2009-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…

  13. Estimating reach-averaged discharge for the River Severn from measurements of river water surface elevation and slope

    NASA Astrophysics Data System (ADS)

    Durand, Michael; Neal, Jeffrey; Rodríguez, Ernesto; Andreadis, Konstantinos M.; Smith, Laurence C.; Yoon, Yeosang

    2014-04-01

    An algorithm is presented that calculates a best estimate of river bathymetry, roughness coefficient, and discharge based on input measurements of river water surface elevation (h) and slope (S) using the Metropolis algorithm in a Bayesian Markov Chain Monte Carlo scheme, providing an inverse solution to the diffusive approximation to the shallow water equations. This algorithm has potential application to river h and S measurements from the forthcoming Surface Water and Ocean Topography (SWOT) satellite mission. The algorithm was tested using in situ data as a proxy for satellite measurements along a 22.4 km reach of the River Severn, UK. First, the algorithm was run with gage measurements of h and S during a small, in-bank event in June 2007. Second, the algorithm was run with measurements of h and S estimated from four remote sensing images during a major out-of-bank flood event in July 2007. River width was assumed to be known for both events. Algorithm-derived estimates of river bathymetry were validated using in situ measurements, and estimates of roughness coefficient were compared to those used in an operational hydraulic model. Algorithm-derived estimates of river discharge were evaluated using gaged discharge. For the in-bank event, when lateral inflows from smaller tributaries were assumed to be known, the method provided an accurate discharge estimate (10% RMSE). When lateral inflows were assumed unknown, discharge RMSE increased to 36%. Finally, if just one of the three river reaches was assumed to have known bathymetry, solutions for bathymetry, roughness and discharge for all three reaches were accurately retrieved, with a corresponding discharge RMSE of 15.6%. For the out-of-bank flood event, the lateral inflows were unknown, and the final discharge RMSE was 19%. These results suggest that it should be possible to estimate river discharge via SWOT observations of river water surface elevation, slope and width.
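    The inversion machinery named in this abstract is a standard random-walk Metropolis sampler; a generic sketch follows. The log-posterior, which would wrap the diffusive-wave forward model and score predicted h and S against the observations, is problem specific and only assumed here:

    ```python
    import numpy as np

    def metropolis(log_post, theta0, n_iter=10_000, step=0.1, seed=0):
        """Random-walk Metropolis sampler over (bathymetry, roughness, ...).

        `log_post(theta)` is the user-supplied log-posterior combining the
        forward hydraulic model with priors; it is assumed, not provided.
        """
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        lp = log_post(theta)
        chain = [theta.copy()]
        for _ in range(n_iter):
            proposal = theta + step * rng.standard_normal(theta.shape)
            lp_prop = log_post(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
                theta, lp = proposal, lp_prop
            chain.append(theta.copy())
        return np.array(chain)
    ```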

  14. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    SciTech Connect

    Ishida, Hideshi

    2014-06-15

    In this study, a family of local quantities defined on each partition, and their averages over a macroscopically small region (a site), are defined on a multibaker chain system. For these averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to obtain the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its independence of the partitioning. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.

  15. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    NASA Astrophysics Data System (ADS)

    Ishida, Hideshi

    2014-06-01

    In this study, a family of local quantities defined on each partition, and their averages over a macroscopically small region (a site), are defined on a multibaker chain system. For these averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to obtain the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its independence of the partitioning. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.

  16. Towards the estimation of reach-averaged discharge from SWOT data using a Manning's equation derived algorithm. Application to the Garonne River between Tonneins-La Reole

    NASA Astrophysics Data System (ADS)

    Berthon, Lucie; Biancamaria, Sylvain; Goutal, Nicole; Ricci, Sophie; Durand, Michael

    2014-05-01

    The future NASA-CNES-CSA Surface Water and Ocean Topography (SWOT) satellite mission will be launched in 2020 and will deliver maps of water surface elevation, slope and extent with an unprecedented resolution of 100 m. A river discharge algorithm was proposed by Durand et al. 2013, based on Manning's equation, to estimate reach-averaged discharge from SWOT data. In the present study, this algorithm was applied to a 50 km reach of the Garonne River between Tonneins and La Reole, with an average slope of 2.8 m per 10,000 m and an average width of 180 m. The dynamics of this reach is satisfactorily represented by the 1D model MASCARET and validated against in-situ water level observations in Marmande. The choice of Manning's equation rests on the major assumptions of steady and uniform flow. Here, we aim at highlighting the limits of validity of these assumptions for the Garonne River during a typical flood event in order to assess the applicability of the discharge algorithm over an averaged reach. Manning-estimated and MASCARET discharges are compared for unsteady and steady flow for different reach averaging lengths (100 m to 10 km). The Manning equation increasingly overestimates the MASCARET discharge as the reach averaging length increases; this overestimate is due to the effect of the sub-reach parameter covariances. To further explain these results, the comparison was carried out for a simplified case study with a parametric bathymetry described either by a flat bottom, a constant slope, or local slope variations.
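    A minimal sketch of the reach-averaged Manning computation underlying the algorithm, assuming SI units and a rectangular cross-section (an illustrative simplification; the study's reference model is the full MASCARET hydraulics):

    ```python
    def manning_discharge(n, width, depth, slope):
        """Manning's equation for a rectangular channel (SI units):
        Q = (1/n) * A * R^(2/3) * S^(1/2), with A = w*h and
        R = A / (w + 2h).
        """
        area = width * depth
        hydraulic_radius = area / (width + 2.0 * depth)
        return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
    ```

    With the reach values quoted above (slope 2.8 m per 10,000 m, width 180 m) and illustrative values n = 0.03 and depth 3 m, this returns roughly 610 m3/s.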

  17. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  18. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-01-01

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is the part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that automate the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by averaging the cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.

  19. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation

    PubMed Central

    Lee, Eugenia E.; Stewart, Barclay; Zha, Yuanting A.; Groen, Thomas A.; Burkle, Frederick M.; Kushner, Adam L.

    2016-01-01

    Background: Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. Methods: The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually against the estimated number of surgical procedures required for climate-related natural disasters was performed. Results: Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People’s Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. Conclusion: As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands, given critical human and physical resource deficiencies. PMID:27617165
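    The headline figure follows directly from the stated minimum rate; as a quick check:

    ```python
    affected_per_year = 140e6       # people affected annually (abstract)
    min_rate = 5000 / 100000        # minimum procedures per person-year
    print(affected_per_year * min_rate)   # 7000000.0, i.e. 7.0 million
    ```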

  20. How well can we estimate areal-averaged spectral surface albedo from ground-based transmission in the Atlantic coastal area?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina

    2015-10-01

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores, supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  1. Estimating average base flow at low-flow partial-record stations on the south shore of Long Island, New York

    USGS Publications Warehouse

    Buxton, H.T.

    1985-01-01

    Base flows of the 29 major streams in southeast Nassau and southwest Suffolk Counties, New York, were statistically analyzed to discern the correlation among flows of adjacent streams. Concurrent base-flow data from a partial-record and a nearby continuous-record station were related; the data were from 1968-75, a period near hydrologic equilibrium on Long Island. The average base flow at each partial-record station was estimated from a regression equation and average measured base flow for the period at the continuous-record stations. Regression analyses are presented for the 20 streams with partial-record stations. Average base flow of the nine streams with a continuous record totaled 90 cu ft/sec; the predicted average base flow for the 20 streams with a partial record was 73 cu ft/sec (with a 95% confidence interval of 63 to 84 cu ft/sec). Results indicate that this method provides reliable estimates of average low flow for streams such as those on Long Island, which consist mostly of base flow and are geomorphically similar. (USGS)

  2. How Well Can We Estimate Areal-Averaged Spectral Surface Albedo from Ground-Based Transmission in an Atlantic Coastal Area?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.

    2015-10-15

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  3. FIRST ORDER ESTIMATES OF ENERGY REQUIREMENTS FOR POLLUTION CONTROL

    EPA Science Inventory

    This report presents estimates of the energy demand attributable to environmental control of pollution from 'stationary point sources.' This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes 'mobile s...

  4. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps. PMID:26243476

  5. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis

    PubMed Central

    Kamal, Izdihar; Chelliah, Kanaga K.; Mustafa, Nawal

    2015-01-01

    Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milliGray (mGy) for two dimension (2D) and 2.48 mGy for three dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower against a high kilovolt peak and the use of auto-filter mode was more practical for quick acquisition while limiting the probability of operator error. PMID:26052465

  6. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.

  7. Using cone-beam CT projection images to estimate the average and complete trajectory of a fiducial marker moving with respiration

    NASA Astrophysics Data System (ADS)

    Becker, N.; Smith, W. L.; Quirk, S.; Kay, I.

    2010-12-01

    Stereotactic body radiotherapy of lung cancer often makes use of a static cone-beam CT (CBCT) image to localize a tumor that moves during the respiratory cycle. In this work, we developed an algorithm to estimate the average and complete trajectory of an implanted fiducial marker from the raw CBCT projection data. After labeling the CBCT projection images based on the breathing phase of the fiducial marker, the average trajectory was determined by backprojecting the fiducial position from images of similar phase. To approximate the complete trajectory, a 3D fiducial position is estimated from its position in each CBCT projection image as the point on the source-image ray closest to the average position at the same phase. The algorithm was tested with computer simulations as well as phantom experiments using a gold seed implanted in a programmable phantom capable of variable motion. Simulation testing was done on 120 realistic breathing patterns, half of which contained hysteresis. The average trajectory was reconstructed with an average root mean square (rms) error of less than 0.1 mm in all three directions, and a maximum error of 0.5 mm. The complete trajectory reconstruction had a mean rms error of less than 0.2 mm, with a maximum error of 4.07 mm. The phantom study was conducted using five different respiratory patterns with amplitudes of 1.3 and 2.6 cm programmed into the motion phantom. These complete trajectories were reconstructed with an average rms error of 0.4 mm. Motion information present in the raw CBCT dataset can thus be exploited, with the use of an implanted fiducial marker, to sub-millimeter accuracy. This algorithm could ultimately supply the internal motion of a lung tumor at the treatment unit from the same dataset currently used for patient setup.
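    The geometric core of the complete-trajectory step, taking the 3D fiducial position as the point on the source-to-image backprojection ray nearest the phase-matched average position, can be sketched as follows (names illustrative):

    ```python
    import numpy as np

    def closest_point_on_ray(source, image_point, reference):
        """Point on the source->image backprojection ray nearest to
        `reference` (here, the phase-matched average fiducial position)."""
        source = np.asarray(source, dtype=float)
        direction = np.asarray(image_point, dtype=float) - source
        reference = np.asarray(reference, dtype=float)
        t = np.dot(reference - source, direction) / np.dot(direction, direction)
        return source + t * direction
    ```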

  8. Estimating the Average Diameter of a Population of Spheres from Observed Diameters of Random Two-Dimensional Sections

    NASA Technical Reports Server (NTRS)

    Kong, Maiying; Bhattacharya, Rabi N.; James, Christina; Basu, Abhijit

    2003-01-01

    Size distributions of chondrules, volcanic fire-fountain or impact glass spherules, or of immiscible globules in silicate melts (e.g., in basaltic mesostasis, agglutinitic glass, impact melt sheets) are imperfectly known because the spherical objects are usually so strongly embedded in the bulk samples that they are nearly impossible to separate. Hence, measurements are confined to two-dimensional sections, e.g. polished thin sections that are commonly examined under reflected light optical or backscattered electron microscopy. Three kinds of approaches exist in the geologic literature for estimating the mean real diameter of a population of 3D spheres from 2D observations: (1) a stereological approach with complicated calculations; (2) an empirical approach in which independent 3D size measurements of a population of spheres separated from their parent sample and their 2D cross sectional diameters in thin sections have produced an array of somewhat contested conversion equations; and (3) measuring pairs of 2D diameters of the upper and lower surfaces of cross sections of each sphere in thin sections using transmitted light microscopy. We describe an entirely probabilistic approach and propose a simple factor of 4/π (approximately equal to 1.27) to convert the 2D mean size to the 3D mean size.
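    The 4/π factor follows from sectioning a single sphere of diameter D = 2R with a plane whose offset x from the center is uniform on [0, R]; extending this identity to population means is what the abstract proposes:

    ```latex
    \mathbb{E}[d] \;=\; \frac{2}{R}\int_{0}^{R}\sqrt{R^{2}-x^{2}}\,dx
    \;=\; \frac{\pi}{4}\,D
    \qquad\Longrightarrow\qquad
    \bar{D}\;\approx\;\frac{4}{\pi}\,\bar{d}\;\approx\;1.27\,\bar{d}
    ```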

  9. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...

  10. FIELD INFORMATION-BASED SYSTEM FOR ESTIMATING FISH TEMPERATURE REQUIREMENTS

    EPA Science Inventory

    In 1979, Biesinger et al. described a technique for spatial and temporal matching of records of stream temperatures and fish sampling events to obtain estimates of yearly temperature regimes for freshwater fishes of the United States. his article describes the state of this Fish ...

  11. 48 CFR 2452.216-77 - Estimated quantities-requirements contract.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Estimated quantities... Provisions and Clauses 2452.216-77 Estimated quantities—requirements contract. As prescribed in 2416.506-70(c), insert the following provision: Estimated Quantities—Requirements Contract (FEB 2006) In accordance...

  12. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches: a) rate based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) rate calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models have similar good performance; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.
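    The AIC-weight model averaging referred to above uses the standard Akaike weights; a minimal sketch:

    ```python
    import numpy as np

    def aic_weights(aic_values):
        """Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min)),
        normalized to sum to 1. The lowest-AIC model gets the largest weight."""
        aic = np.asarray(aic_values, dtype=float)
        w = np.exp(-0.5 * (aic - aic.min()))
        return w / w.sum()
    ```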

  13. Calcium requirement: new estimations for men and women by cross-sectional statistical analyses of calcium balance data from metabolic studies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: Low intakes of calcium (Ca) are associated with increased risk of both osteoporosis and cardiovascular disease. Objective: To provide new estimates of the average Ca requirement for men and women, we determined the dietary Ca intake required to maintain neutral Ca balance. Design: Ca bal...

  14. Comparison of pooled standard deviation and standardized-t bootstrap methods for estimating uncertainty about average methane emission from rice cultivation

    NASA Astrophysics Data System (ADS)

    Kang, Namgoo; Jung, Min-Ho; Jeong, Hyun-Cheol; Lee, Yung-Seop

    2015-06-01

    The general sample standard deviation and Monte-Carlo methods are frequently used to estimate confidence intervals for greenhouse gas emission estimates, based on the critical assumption that a given data set follows a normal (Gaussian) or otherwise statistically known probability distribution. However, uncertainties estimated using those methods are severely limited in practical applications where it is challenging to assume the probability distribution of a data set or where the real data distribution appears to deviate significantly from known probability distribution models. In order to solve these issues, encountered especially in the reasonable estimation of uncertainty about average greenhouse gas emission, we present two statistical methods, the pooled standard deviation method (PSDM) and the standardized-t bootstrap method (STBM), based upon statistical theories. We also report results on the uncertainties about the averages of data sets of methane (CH4) emission from rice cultivation under four different irrigation conditions in Korea, measured by gas sampling and subsequent gas analysis. Results from the applications of the PSDM and the STBM to these rice cultivation methane emission data sets clearly demonstrate that the uncertainties estimated by the PSDM were significantly smaller than those by the STBM. We found that the PSDM should be adopted in cases where the data probability distribution appears to follow an assumed normal distribution with both spatial and temporal variations taken into account. The STBM, however, is a more appropriate method, widely applicable to practical situations where it is realistically impossible to reasonably assume or determine a probability distribution model for a given data set showing evidence of fairly asymmetric distribution that deviates severely from known probability distribution models.
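    A minimal sketch of the standardized-t bootstrap idea for a mean, which avoids assuming any known probability distribution for the emission data; the resampling scheme and defaults here are illustrative, not the authors' exact implementation:

    ```python
    import numpy as np

    def bootstrap_t_ci(x, n_boot=9999, alpha=0.05, seed=0):
        """Standardized-t (bootstrap-t) confidence interval for the mean."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        n, m, s = len(x), x.mean(), x.std(ddof=1)
        t_stats = np.empty(n_boot)
        for b in range(n_boot):
            xb = rng.choice(x, size=n, replace=True)      # resample
            sb = xb.std(ddof=1)
            t_stats[b] = (xb.mean() - m) / (sb / np.sqrt(n)) if sb > 0 else 0.0
        lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
        # Invert the studentized statistic to get the interval for the mean.
        return m - hi * s / np.sqrt(n), m - lo * s / np.sqrt(n)
    ```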

  15. EURRECA-Estimating zinc requirements for deriving dietary reference values.

    PubMed

    Lowe, Nicola M; Dykes, Fiona C; Skinner, Anna-Louise; Patel, Sujata; Warthon-Medina, Marisol; Decsi, Tamás; Fekete, Katalin; Souverein, Olga W; Dullemeijer, Carla; Cavelaars, Adriënne E; Serra-Majem, Lluis; Nissensohn, Mariela; Bel, Silvia; Moreno, Luis A; Hermoso, Maria; Vollhardt, Christiane; Berti, Cristiana; Cetin, Irene; Gurinovic, Mirjana; Novakovic, Romana; Harvey, Linda J; Collings, Rachel; Hall-Moran, Victoria

    2013-01-01

    Zinc was selected as a priority micronutrient for EURRECA, because there is significant heterogeneity in the Dietary Reference Values (DRVs) across Europe. In addition, the prevalence of inadequate zinc intakes was thought to be high among all population groups worldwide, and the public health concern is considerable. In accordance with the EURRECA consortium principles and protocols, a series of literature reviews were undertaken in order to develop best practice guidelines for assessing dietary zinc intake and zinc status. These were incorporated into subsequent literature search strategies and protocols for studies investigating the relationships between zinc intake, status and health, as well as studies relating to the factorial approach (including bioavailability) for setting dietary recommendations. EMBASE (Ovid), Cochrane Library CENTRAL, and MEDLINE (Ovid) databases were searched for studies published up to February 2010 and collated into a series of Endnote databases that are available for the use of future DRV panels. Meta-analyses of data extracted from these publications were performed where possible in order to address specific questions relating to factors affecting dietary recommendations. This review has highlighted the need for more high quality studies to address gaps in current knowledge, in particular the continued search for a reliable biomarker of zinc status and the influence of genetic polymorphisms on individual dietary requirements. In addition, there is a need to further develop models of the effect of dietary inhibitors of zinc absorption and their impact on population dietary zinc requirements. PMID:23952091

  16. Estimated water requirements for the conventional flotation of copper ores

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water used by a conventional copper flotation plant. Water is required for many activities at a mine-mill site, including ore production and beneficiation, dust and fire suppression, drinking and sanitation, and minesite reclamation. The water required to operate a flotation plant may outweigh all of the other uses of water at a mine site, however, and the need to maintain a water balance is critical for the plant to operate efficiently. Process water may be irretrievably lost or not immediately available for reuse in the beneficiation plant because it has been used in the production of backfill slurry from tailings to provide underground mine support; because it has been entrapped in the tailings stored in the tailings storage facility (TSF), evaporated from the TSF, or leaked from pipes and (or) the TSF; and because it has been retained as moisture in the concentrate. Water retained in the interstices of the tailings and the evaporation of water from the surface of the TSF are the two most significant contributors to water loss at a conventional flotation circuit facility.

  17. Quantitative Estimates of Temporal Mixing across a 4th-order Depositional Sequence: Variation in Time-averaging along the Holocene Marine Succession of the Po Plain, Italy

    NASA Astrophysics Data System (ADS)

    Scarponi, D.; Kaufman, D.; Bright, J.; Kowalewski, M.

    2009-04-01

    Single fossiliferous beds contain biotic remnants that commonly vary in age over a time span of hundreds to thousands of years. Multiple recent studies suggest that such temporal mixing is a widespread phenomenon in marine depositional systems. This research focuses on quantitative estimates of temporal mixing obtained by direct dating of individual corbulid bivalve shells (Lentidium mediterraneum and Corbula gibba) from Po plain marine units of the Holocene 4th-order depositional sequence, including the Transgressive Systems Tract [TST] and Highstand Systems Tract [HST]. These units display a distinctive succession of facies consisting of brackish to marginal marine retrogradational deposits (early TST), overlain by fully marine fine to coarse gray sands (late TST), and capped with progradational deltaic clays and sands (HST). More than 300 corbulid specimens, representing 19 shell-rich horizons evenly distributed along the depositional sequence and sampled from 9 cores, have been dated by means of aspartic acid racemization calibrated using 23 AMS-radiocarbon dates (14 dates for Lentidium mediterraneum and 9 dates for Corbula gibba). The results indicate that the scale of time-averaging is comparable when similar depositional environments from the same systems tract are compared across cores. However, time-averaging is notably different when similar depositional environments from the TST and HST segments of the sequence are compared. Specifically, late HST horizons (n=8) display relatively low levels of time-averaging: the mean within-horizon range of shell ages is 537 years, and standard deviations average 165 years. In contrast, late TST horizons (n=7) are dramatically more time-averaged, with a mean range of 5104 years and mean standard deviations of 1420 years. Thus, late TST horizons experience time-averaging one order of magnitude greater than environmentally comparable late HST horizons. In conclusion, the HST and TST systems tracts of the Po Plain display

  18. Estimation of daily average net radiation from MODIS data and DEM over the Baiyangdian watershed in North China for clear sky days

    NASA Astrophysics Data System (ADS)

    Long, Di; Gao, Yanchun; Singh, Vijay P.

    2010-07-01

    Daily average net radiation (DANR) is a critical variable for estimation of daily evapotranspiration (ET) by remote sensing techniques at watershed or regional scales, and in turn for hydrological modeling and water resources management. This study attempts to comprehensively analyze the physical mechanisms governing the variation of each component of DANR during a day, with the objective of improving parameterization schemes for daily average net shortwave radiation (DANSR) and daily average net longwave radiation (DANLR) using MODIS (MODerate Resolution Imaging Spectroradiometer) data products, a DEM, and minimum meteorological data, in order to map spatially consistent and reasonably distributed DANR at watershed scales for clear sky days. First, a geometric model for simulating daily average direct solar radiation by accounting for the effects of terrain factors (slope, azimuth and elevation) on the availability of direct solar radiation for sloping land surfaces is adopted. Specifically, the magnitudes of sunrise and sunset angles, the frequencies of a sloping surface being illuminated, and the potential sunshine duration for a given sloping surface are computed on a daily basis. The geometric model is applied to the Baiyangdian watershed in North China, showing the capability to distinctly characterize the spatial pattern of daily average direct solar radiation for sloping land surfaces. DANSR can then be successfully derived from daily average direct solar radiation simulated by the geometric model, together with the near-invariance of diffuse solar radiation during daytime, in conjunction with MCD43A1 albedo products. Second, four observations of Terra-MODIS and Aqua-MODIS land surface temperature (LST) and surface emissivities in band 31 and band 32 from MOD11A1, MYD11A1 and MOD11_L2 data products for six clear sky days from April to September in the year 2007 are utilized to simulate daily average LST to improve the accuracy of

  19. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  20. Ground-water pumpage and artificial recharge estimates for calendar year 2000 and average annual natural recharge and interbasin flow by hydrographic area, Nevada

    USGS Publications Warehouse

    Lopes, Thomas J.; Evetts, David M.

    2004-01-01

    Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth

  1. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  2. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  3. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  4. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  5. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  6. Estimating pollutant removal requirements for landfills in the UK: II. Model development.

    PubMed

    Hall, D H; Drury, D; Gronow, J R; Rosevear, A; Pollard, S J T; Smith, R

    2006-12-01

    A modelling methodology using a leachate source term has been produced for estimating the timescales for achieving environmental equilibrium status for landfilled waste. Results are reported as the period of active management required for modelled scenarios of non-flushed and flushed sites for a range of pre-filling treatments. The base scenario against which results were evaluated was raw municipal solid waste (MSW), for which only cadmium failed to reach equilibrium. Flushed raw MSW met our criteria for stabilisation with active leachate management for 40 years, subject to each of the leachate species being present at or below their average UK concentrations. Stable non-reactive wastes, meeting EU waste acceptance criteria, fared badly in the non-flushed scenario, with only two species stabilising within a management period of 1000 years and the majority requiring > 2000 years of active leachate management. The flushing scenarios showed only a marginal improvement, with arsenic still persisting beyond 2000 years of management even with an additional 500 mm y(-1) of infiltration. The stabilisation time for mechanically sorted organic residues (without flushing) was high, and even with flushing, arsenic and chromium appeared to remain a problem. Two mechanical biological treatment (MBT) scenarios were examined, with medium and high intensity composting. Both were subjected to the non-flushing and flushing scenarios. The non-flushing case of both options fell short of the basic requirement of achieving equilibrium within decades. The intense composting option with minimal flushing appeared to create a scenario where equilibrium could be achieved. For incinerator bottom ash (raw and subjected to various treatments), antimony, copper, chloride and sulphate were the main controls on achieving equilibrium, irrespective of treatment type. Flushing at higher rates (500 mm y(-1)) failed to demonstrate a significant reduction in the management period required. PMID

  7. Feasibility of non-invasive temperature estimation by the assessment of the average gray-level content of B-mode images.

    PubMed

    Teixeira, C A; Alvarenga, A V; Cortela, G; von Krüger, M A; Pereira, W C A

    2014-08-01

    This paper assesses the potential of the average gray level (AVGL) of ultrasonographic (B-mode) images to estimate temperature changes in time and space in a non-invasive way. Experiments were conducted on a homogeneous bovine muscle sample, with temperature variations induced by a temperature-regulated water bath and by therapeutic ultrasound. B-mode images and temperatures were recorded simultaneously. After data collection, regions of interest (ROIs) were defined and the average gray-level variation computed. For the selected ROIs, the AVGL-temperature relations were determined and studied. Based on uniformly distributed image partitions, two-dimensional temperature maps were developed for homogeneous regions. The color-coded temperature estimates were first obtained from an AVGL-temperature relation extracted from a specific partition (where temperature was independently measured by a thermocouple) and then extended to the other partitions. This procedure aimed to analyze the AVGL sensitivity to changes not only in time but also in space. Linear and quadratic relations were obtained, depending on the heating modality. We found that the AVGL-temperature relation is reproducible over successive heating and cooling cycles. One important result was that the AVGL-temperature relations extracted from one region could be used to estimate temperature in other regions (errors below 0.5 °C) when therapeutic ultrasound was applied as the heating source. Based on this result, two-dimensional temperature maps were developed for samples heated in the water bath and by therapeutic ultrasound. The maps were obtained from a linear relation for water-bath heating and from a quadratic model for therapeutic ultrasound heating. The maps for the water bath experiment reproduce an acceptable heating/cooling pattern, and for the therapeutic ultrasound heating experiment, the maps seem to reproduce temperature profiles
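    A minimal sketch of the estimation step, assuming calibration coefficients already fitted from AVGL-temperature pairs (linear for water-bath heating, quadratic for therapeutic ultrasound, per the abstract); names are illustrative:

    ```python
    import numpy as np

    def temperature_from_avgl(roi, coeffs):
        """Map the average gray level of a B-mode ROI to temperature using
        polynomial calibration coefficients (highest order first). The
        coefficient values come from a prior calibration against
        thermocouple readings and are assumed known here.
        """
        avgl = float(np.mean(roi))
        return float(np.polyval(coeffs, avgl))
    ```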

  8. A comparative study of two-dimensional multifractal detrended fluctuation analysis and two-dimensional multifractal detrended moving average algorithm to estimate the multifractal spectrum

    NASA Astrophysics Data System (ADS)

    Xi, Caiping; Zhang, Shunning; Xiong, Gang; Zhao, Huichang

    2016-07-01

    Multifractal detrended fluctuation analysis (MFDFA) and the multifractal detrended moving average (MFDMA) algorithm have been established as two important methods for estimating the multifractal spectrum of one-dimensional random fractal signals. They have been generalized to deal with two-dimensional and higher-dimensional fractal signals. This paper gives a brief introduction to the two-dimensional multifractal detrended fluctuation analysis (2D-MFDFA) and the two-dimensional multifractal detrended moving average (2D-MFDMA) algorithm, and a detailed description of how the two methods are applied to two-dimensional fractal signal processing. By applying 2D-MFDFA and 2D-MFDMA to series generated from the two-dimensional multiplicative cascading process, we systematically compare the two algorithms for the first time in six respects: the similarities and differences of the algorithm models, statistical accuracy, sensitivity to sample size, selection of the scaling range, choice of the q-orders, and computational cost. The results provide a valuable reference on how to choose between 2D-MFDFA and 2D-MFDMA, and how to set the parameters of the two algorithms when dealing with specific signals in practical applications.

  9. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the {sup 134}Cs/{sup 137}Cs ratio method

    SciTech Connect

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method for measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured ¹³⁴Cs/¹³⁷Cs ratio from the contaminated soil is 0.996±0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of damaged fuels is approximately 17.2±1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated values of the ¹³⁴Cs/¹³⁷Cs ratio with the same evaluated nuclear data library (ENDF-B/VII.0). The void fraction effect in the depletion calculation has a major impact on the ¹³⁴Cs/¹³⁷Cs ratio compared with the differences between JENDL-4.0 and ENDF-B/VII.0. (authors)
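    A sketch of the decay-correction step implicit in the ratio method, bringing a measured ¹³⁴Cs/¹³⁷Cs activity ratio back to a common reference date using the two well-known half-lives; the subsequent ratio-to-burnup conversion relies on depletion calculations (as compared across the codes cited above) and is not reproduced here:

    ```python
    import numpy as np

    def decay_corrected_ratio(measured_ratio, elapsed_years,
                              t_half_134=2.065, t_half_137=30.17):
        """Correct a measured 134Cs/137Cs activity ratio back to a
        reference date (e.g., reactor shutdown), given half-lives in years.
        """
        lam_134 = np.log(2.0) / t_half_134
        lam_137 = np.log(2.0) / t_half_137
        return measured_ratio * np.exp((lam_134 - lam_137) * elapsed_years)
    ```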

  10. Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users

    ERIC Educational Resources Information Center

    Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu

    2004-01-01

    Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…

  11. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry, and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification, and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims to determine whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments applied to small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic dispersion model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing, and Gaussian-type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
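
    The inversion step lends itself to a compact linear-algebra sketch: given a matrix of modelled sensor responses per unit emission, the source strengths follow from (non-negative) least squares. All numbers below are synthetic placeholders, not values from the study:

      import numpy as np
      from scipy.optimize import nnls

      # Forward model: c = D @ s, where D[i, j] is the modelled concentration at
      # sensor i per unit emission of source j (from a dispersion model run in
      # forward mode), s the source strengths, c the time-averaged measurements.
      rng = np.random.default_rng(1)
      n_sensors, n_sources = 12, 9              # e.g. 3 x 3 square fields
      D = rng.uniform(0.01, 0.2, size=(n_sensors, n_sources))
      s_true = rng.uniform(0.5, 2.0, size=n_sources)
      c = D @ s_true + rng.normal(0, 0.005, size=n_sensors)   # noisy "measurements"

      # Invert with non-negative least squares (emissions cannot be negative).
      s_hat, _ = nnls(D, c)
      print(np.round(s_hat / s_true, 2))        # recovery ratios near 1 = success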

  12. Estimation of the hydraulic conductivity of a two-dimensional fracture network using effective medium theory and power-law averaging

    NASA Astrophysics Data System (ADS)

    Zimmerman, R. W.; Leung, C. T.

    2009-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures provides the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model the rock with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances and using various upscaling methods. Kirkpatrick’s effective medium approximation, which works well for pore networks on a core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick’s approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (arithmetic mean). The appropriate exponent can be correlated with statistical parameters that characterize the fracture density.
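
    A minimal sketch of Desbarats-style power-law averaging, whose exponent interpolates between the harmonic mean (p = -1), geometric mean (p -> 0), and arithmetic mean (p = 1); the lognormal conductance sample is synthetic:

      import numpy as np

      def power_law_average(k, p):
          """K_eff(p) = (mean(k^p))^(1/p); the p -> 0 limit is the geometric mean."""
          k = np.asarray(k, dtype=float)
          if abs(p) < 1e-12:
              return np.exp(np.mean(np.log(k)))   # geometric mean (p -> 0 limit)
          return np.mean(k ** p) ** (1.0 / p)

      k = np.random.default_rng(2).lognormal(mean=0.0, sigma=1.0, size=10_000)
      for p in (-1.0, 0.0, 1.0):                  # harmonic, geometric, arithmetic
          print(p, round(power_law_average(k, p), 3))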

  13. Quaternary estimates of average slip-rates for active faults in the Mongolian Altay Mountains: the advantages and assumptions of multiple dating techniques

    NASA Astrophysics Data System (ADS)

    Gregory, L. C.; Walker, R. T.; Thomas, A. L.; Amgaa, T.; Bayasgalan, G.; Amgalan, B.; West, A.

    2010-12-01

    Active faults in the Altay Mountains, western Mongolia, produce surface expressions that are generally well preserved due to the arid central-Asian climate. Motion along the right-lateral strike-slip and oblique-reverse faults has displaced major river systems by kilometres over millions of years, and there are clear scarps and linear features in the landscape along the surface traces of active fault strands. With combined remote sensing and field work, we have identified sites with surface features that have been displaced by tens of metres as a result of cumulative motion along faults. In an effort to accurately quantify an average slip-rate for the faults, we used multiple dating techniques to provide age constraints for the displaced landscapes. At one site on the Olgiy fault, we applied 10Be terrestrial cosmogenic nuclide (TCN) and uranium-series geochronology on boulder tops and in-situ formed carbonate rinds, respectively. Based on a displacement of approximately 17 m, and geochronology results that range from 20 to 60 ky, we resolve a slip-rate of less than 1 mm/yr. We have also applied optically stimulated luminescence (OSL), 10Be TCN, and U-series methods on the Ar Hotol fault. Each of these dating techniques provides unique constraints on the relationship between the ‘age’ of a displaced surface and the actual amount of displacement, and each has inherent assumptions. We consider the advantages and assumptions made in utilising these techniques in western Mongolia, e.g., U-series dating of carbonate rinds can provide a minimum age for alluvial fan deposition, and inheritance must be considered when using TCN techniques on boulder tops. This will be put into the context of estimating accurate and geologically relevant slip-rates, and improving our understanding of the active deformation of the Mongolian Altay.
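
    The slip-rate arithmetic above (displacement divided by surface age) propagates cleanly through a small Monte Carlo draw; the uniform age distribution and the 1 m displacement spread below are illustrative assumptions only:

      import numpy as np

      rng = np.random.default_rng(3)
      disp_m = rng.normal(17.0, 1.0, size=100_000)       # displacement (m), assumed spread
      age_ky = rng.uniform(20.0, 60.0, size=100_000)     # surface age (ky), assumed uniform
      rate_mm_yr = disp_m * 1000.0 / (age_ky * 1000.0)   # mm/yr
      print(np.round(np.percentile(rate_mm_yr, [2.5, 50, 97.5]), 2))  # < 1 mm/yr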

  14. ESTIMATED DAILY AVERAGE PER CAPITA WATER INGESTION BY CHILD AND ADULT AGE CATEGORIES BASED ON USDA'S 1994-96 AND 1998 CONTINUING SURVEY OF FOOD INTAKES BY INDIVIDUALS (JOURNAL ARTICLE)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...

  15. [Estimating the impacts of future climate change on water requirement and water deficit of winter wheat in Henan Province, China].

    PubMed

    Ji, Xing-jie; Cheng, Lin; Fang, Wen-song

    2015-09-01

    Based on an analysis of the water requirement and water deficit during the development stages of winter wheat over the past 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with the climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement, and water deficit (WD) of winter wheat were estimated. The climatic factors affecting ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentages of effective precipitation, water requirement, and water deficit of winter wheat during the whole growing period compared with the 1981-2010 average. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main driver of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement, and water deficit of winter wheat during the whole growing period were heterogeneous because of differences in geographical and climatic environments. A possible water resource deficiency may occur in Henan Province in the future. PMID:26785550
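
    The bookkeeping behind these quantities reduces to a few lines: ETc from the single-crop-coefficient approach (ETc = Kc x ET0) and water deficit as ETc minus effective precipitation. The monthly values below are hypothetical, purely to show the chain:

      import numpy as np

      et0 = np.array([25.0, 30.0, 45.0, 70.0, 95.0, 110.0])  # reference ET (mm/month)
      kc  = np.array([0.7, 0.8, 1.05, 1.15, 1.0, 0.4])       # FAO-style crop coefficients
      pe  = np.array([10.0, 12.0, 20.0, 30.0, 45.0, 60.0])   # effective precipitation (mm)

      etc = kc * et0                   # crop water requirement per month (mm)
      wd = np.maximum(etc - pe, 0.0)   # water deficit (mm); no negative deficit
      print(etc.sum(), wd.sum())       # season totals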

  16. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  17. Estimating N requirements for corn using indices developed from a canopy reflectance sensor

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the increasing cost of fertilizer N, there is a renewed emphasis on developing new technologies for quantifying in-season N requirements for corn. The objectives of this research are (i) to evaluate different vegetative indices derived from an active reflectance sensor in estimating in-season N...

  18. Evaluating Multiple Indices from a Canopy Reflectance Sensor to Estimate Corn N Requirements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the increasing cost of fertilizer N, there is a renewed emphasis on developing new technologies for quantifying in-season N requirements for corn. The objectives of this research are (i) to evaluate different vegetative indices derived from an active reflectance sensor in estimating in-season N...

  19. Brief to the Committee on University Affairs. Estimates of Operating Grant Requirements for 1970-71.

    ERIC Educational Resources Information Center

    Committee of Presidents of Universities of Ontario, Toronto.

    This brief contains a refinement and amplification of preliminary estimates of operating fund requirements of the provincially assisted universities of Ontario for 1970-71. Part B of the report contains quantitative descriptors of university operations including budgeted operating expenditures for 1969-70, faculty income unit ratios in 1969-70,…

  20. Use Of Crop Canopy Size To Estimate Water Requirements Of Vegetable Crops

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Planting time, plant density, variety, and cultural practices vary widely for horticultural crops. It is difficult to estimate crop water requirements for crops with these variations. Canopy size, or fractional ground cover, as an indicator of intercepted sunlight, is related to crop water use. We...

  1. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs based on parametric cost-estimate data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  2. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Astrophysics Data System (ADS)

    Peffley, Al F.

    1991-04-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs based on parametric cost-estimate data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  3. Averaging the inhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2012-03-01

    A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue, however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.

  4. On the Berdichevsky average

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi

    2016-04-01

    Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain the regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of the determinant invariants of the observed impedances. This method, proposed by Berdichevsky and coworkers, is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of the regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of present knowledge on galvanic distortion. This paper therefore reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of a distorted impedance, and hence its Berdichevsky average, is always biased downward by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from the distortion parameters; we therefore conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of the determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
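
    Both invariants are one-liners on a 2x2 impedance tensor, and the Berdichevsky-style site average is a geometric mean; the synthetic near-1D impedances below are illustrative only:

      import numpy as np

      def z_det(Z):    # determinant invariant
          return np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])

      def z_ssq(Z):    # sum-of-squared-elements invariant
          return np.sqrt((Z[0, 0]**2 + Z[0, 1]**2 + Z[1, 0]**2 + Z[1, 1]**2) / 2)

      def geometric_average(vals):
          return np.exp(np.mean(np.log(np.asarray(vals, dtype=complex))))

      rng = np.random.default_rng(4)
      z1d = 1.0 + 1.0j                       # idealized regional 1-D impedance
      sites = [np.array([[0, z1d], [-z1d, 0]])
               + 0.05 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
               for _ in range(50)]           # near-1D impedances with scatter

      print(geometric_average([z_det(Z) for Z in sites]))   # ~ z1d
      print(geometric_average([z_ssq(Z) for Z in sites]))   # ~ z1d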

  5. Spent fuel disassembly hardware and other non-fuel bearing components: characterization, disposal cost estimates, and proposed repository acceptance requirements

    SciTech Connect

    Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.

    1986-10-01

    There are two categories of waste considered in this report. The first is the spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly, including end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components that the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per average metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste; its relatively large contribution of waste hardware consists mostly of non-fuel-bearing components, primarily the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.

  6. Data requirements for using combined conductivity mass balance and recursive digital filter method to estimate groundwater recharge in a small watershed, New Brunswick, Canada

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Xing, Zisheng; Danielescu, Serban; Li, Sheng; Jiang, Yefang; Meng, Fan-Rui

    2014-04-01

    Estimation of baseflow and groundwater recharge rates is important for hydrological analysis and modelling. A new approach that combines a recursive digital filter (RDF) model with the conductivity mass balance (CMB) method is considered reliable for baseflow separation, because the combined method takes advantage of the reduced data requirement of the RDF method and the reliability of the CMB method. However, it is not clear what the minimum data requirements are for producing acceptable estimates of the RDF model parameters. In this study, a 19-year record of stream discharge and water conductivity collected from the Black Brook Watershed (BBW), NB, Canada was used to test the combined baseflow separation method and assess the variability of the model parameters over seasons. The data requirements and potential bias in the estimated baseflow index (BFI) were evaluated using conductivity data for different seasons and/or resampled data segments at various sampling durations. Results indicated that data collected during the ground-frozen season are more suitable for estimating baseflow conductivity (Cbf), and data from the snow-melting period are more suitable for estimating runoff conductivity (Cro). Relative errors of the baseflow estimates were inversely proportional to the number of conductivity records. A minimum of six months of discharge and conductivity data is required to obtain reliable parameters for the current method with acceptable errors. We further found that the average annual recharge rate for the BBW was 322 mm over the past twenty years.
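
    The CMB step is two-component mixing, as sketched below with hypothetical discharge and conductivity values (Cbf and Cro are the end-members the study estimates from the ground-frozen and snow-melt seasons, respectively):

      import numpy as np

      # Baseflow fraction = (C_stream - C_runoff) / (C_baseflow - C_runoff).
      q = np.array([2.0, 5.0, 12.0, 7.0, 3.0])      # streamflow (m3/s)
      c = np.array([240., 180., 110., 150., 220.])  # stream conductivity (uS/cm)
      c_bf, c_ro = 260.0, 80.0                      # end-member conductivities

      bf = q * np.clip((c - c_ro) / (c_bf - c_ro), 0.0, 1.0)  # baseflow (m3/s)
      print(np.round(bf, 2), round(bf.sum() / q.sum(), 2))    # series and BFI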

  7. Establishing a method for estimating crop water requirements using the SEBAL method in Cyprus

    NASA Astrophysics Data System (ADS)

    Papadavid, G.; Toulios, L.; Hadjimitsis, D.; Kountios, G.

    2014-08-01

    Water allocation to crops has always been of great importance in the agricultural process. In this context, and with Cyprus having faced a severe drought over the last five years, the purpose of this study is to estimate crop water requirements in support of irrigation management and systematic irrigation monitoring in Cyprus using remote sensing techniques. The use of satellite images supported by ground measurements has provided quite accurate results. The specific aim of this paper is to estimate the evapotranspiration (ET) of specific crops, which is the basis for irrigation scheduling, and to establish a procedure for monitoring and managing irrigation water over Cyprus, using remotely sensed data from Landsat TM/ETM+ and a sound methodology used worldwide, the Surface Energy Balance Algorithm for Land (SEBAL). The methodology set out in this paper relates to COST Action ES1106 (Agri-Wat) for determining crop water requirements as part of the water footprint and virtual water trade.
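
    At its core, SEBAL obtains the latent heat flux as the residual of the surface energy balance, LE = Rn - G - H, which converts directly to an ET depth; the flux values below are hypothetical midday numbers, not results from this study:

      # Latent heat flux as the energy-balance residual, then an ET depth.
      rn, g, h = 520.0, 60.0, 180.0           # net radiation, soil, sensible (W/m^2)
      le = rn - g - h                         # latent heat flux (W/m^2)
      lambda_v = 2.45e6                       # latent heat of vaporization (J/kg)
      et_mm_per_hour = le / lambda_v * 3600   # kg/m^2/h, i.e. mm/h
      print(round(et_mm_per_hour, 2))         # ~0.41 mm/h for these fluxes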

  8. Bioenergetics model for estimating food requirements of female Pacific walruses (Odobenus rosmarus divergens)

    USGS Publications Warehouse

    Noren, S.R.; Udevitz, M.S.; Jay, C.V.

    2012-01-01

    Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–9% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.

  9. The effects of the variations in sea surface temperature and atmospheric stability in the estimation of average wind speed by SEASAT-SASS

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1984-01-01

    The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.

  10. Space Station: Estimated total US funding requirements. Report to Congressional Requesters

    NASA Astrophysics Data System (ADS)

    1995-06-01

    This report reviews current estimated costs of the NASA space station, in particular the total U.S. funding requirements for the program and program uncertainties that may affect those requirements. U.S. funds required to design, launch, and operate the International Space Station will total about $94 billion through 2012 (about $77 billion in fiscal year 1995 constant dollars). This total may decrease to the extent NASA accomplishes its goal for achieving station operational efficiencies over the period 2003 to 2012, or efficiencies currently being studied in the space shuttle program materialize. Despite major progress, the program faces formidable challenges in completing all its tasks on schedule and within its budget. The program estimates through fiscal year 1997 show limited annual financial reserves - about 6 percent to 11 percent of estimated costs. Inadequate reserves would hinder program managers' ability to cope with unanticipated technical problems. In addition, the space station's current launch and assembly schedule is ambitious, and the shuttle program may have difficulty supporting it. Moreover, the prime contract target cost could increase if the contractor is unable to negotiate subcontractor agreements for the expected price.

  11. Model requirements for estimating and reporting soil C stock changes in national greenhouse gas inventories

    NASA Astrophysics Data System (ADS)

    Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent

    2016-04-01

    Globally, soils are the largest terrestrial store of carbon (C), and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at the large scale are important, as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC, and also under the Kyoto Protocol, C balances must be reported annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representatively at national scales. The use of models to obtain the relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed, but few are suitable for consistent application across larger scales. Consistency is often limited by the lack of input data for the models, which can result in biased estimates; thus, the reporting criterion of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet the criteria established for GHG reporting under the UNFCCC, including accuracy, consistency, comparability, completeness, and transparency, we assessed the suitability of commonly used simulation models for estimating annual C stock changes in the mineral soil of European forests. Among the six simulation models discussed, we found a clear trend toward models providing quantitatively precise site-specific estimates, which may lead to biased estimates across space. To meet the reporting needs of national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests

  12. Irrigation Requirement Estimation using MODIS Vegetation Indices and Inverse Biophysical Modeling; A Case Study for Oran, Algeria

    NASA Technical Reports Server (NTRS)

    Bounoua, L.; Imhoff, M.L.; Franks, S.

    2008-01-01

    the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence with an average interval between applications of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, drip irrigation resulted in less frequent irrigation events, with an average water requirement about 57% less than that simulated in the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of canopy interception loss compared with the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country level estimates downward to 17% or less

  13. Capital requirements for the transportation of energy materials: 1979 arc estimates

    SciTech Connect

    Not Available

    1980-08-29

    Summaries of transportation investment requirements through 1990 are given for the low, medium and high scenarios. Total investment requirements for the three modes and the three energy commodities can accumulate to a $46.3 to $47.0 billion range depending on the scenario. The high price of oil, following the evidence of the last year, is projected to hold demand for oil below the recent past. Despite the overall decrease in traffic some investment in crude oil and LPG pipelines is necessary to reach new sources of supply. Although natural gas production and consumption is projected to decline through 1990, new investments in carrying capacity also are required due to locational shifts in supply. The Alaska Natural Gas Transportation System is the dominant investment for energy transportation in the next ten years. This year's report focuses attention on waterborne coal transportation to the northeast states in keeping with a return to significant coal consumption projected for this area. A resumption of such shipments will require a completely new fleet. The investment estimates given in this report identify capital required to transport projected energy supplies to market. The requirement is strategic in the sense that other reasonable alternatives do not exist or that a shared load of new growth can be expected. Not analyzed or forecasted are investments in transportation facilities made in response to local conditions. The total investment figures, therefore, represent a minimum necessary capital improvement to respond to changes in interregional supply conditions.

  14. An examination of population exposure to traffic related air pollution: Comparing spatially and temporally resolved estimates against long-term average exposures at the home location.

    PubMed

    Shekarrizfard, Maryam; Faghih-Imani, Ahmadreza; Hatzopoulou, Marianne

    2016-05-01

    Air pollution in metropolitan areas is mainly caused by traffic emissions. This study presents the development of a model chain consisting of a transportation model, an emissions model, and an atmospheric dispersion model, applied to dynamically evaluate individuals' exposure to air pollution by intersecting the daily trajectories of individuals with the hourly spatial variations of air pollution across the study domain. This dynamic approach is implemented in Montreal, Canada to highlight the advantages of the method for exposure analysis. The results for nitrogen dioxide (NO2), a marker of traffic-related air pollution, reveal significant differences when relying on spatially and temporally resolved concentrations combined with individuals' daily trajectories rather than on a long-term average NO2 concentration at the home location. We observe that NO2 exposures based on trips and activity locations visited throughout the day were often higher than daily NO2 concentrations at the home location. For 89.6% of individuals, the 24-hour daily average at home was lower than their 24-hour mobility exposure; among them, 31% increased their exposure by more than 10% by leaving the home. On average, individuals increased their exposure by 23-44% while commuting and conducting activities out of home (compared with the daily concentration at home), regardless of air quality at their home location. We conclude that our proposed dynamic modelling approach significantly improves on traditional methods that rely on a long-term average concentration at the home location, and we shed light on the importance of using individual daily trajectories to understand exposure. PMID:26970897
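
    The dynamic-exposure calculation is, in miniature, a time-weighted average over visited locations; the hours and concentrations below are hypothetical:

      import numpy as np

      hours = np.array([14.0, 1.5, 8.0, 0.5])     # home, commute, work, errands
      conc  = np.array([18.0, 45.0, 27.0, 38.0])  # NO2 (ppb) at each location

      dynamic = np.sum(hours * conc) / hours.sum()        # mobility-based exposure
      home_only = conc[0]                                 # home-location-only estimate
      print(round(dynamic, 1), home_only,
            round(100 * (dynamic / home_only - 1), 1))    # % increase from mobility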

  15. Preliminary estimate of environmental flow requirements of the Rusape River, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Love, Faith; Madamombe, Elisha; Marshall, Brian; Kaseke, Evans

    Environmental flow requirements (EFR) for the Rusape River, a tributary of the Save River in Zimbabwe, were estimated using a rapid-results approach. Thirty years of daily hydrological data from gauging stations upstream and downstream of the Rusape Dam were analysed using the DRIFT software. The dam appears to have increased intra-annual and inter-annual flood events downstream relative to upstream, including significant dry-season releases, while inter-annual floods were larger. The water releases from the dam differ from the natural flow in both volume and frequency, especially in the dry season, and may have had a negative impact on the local ecosystem and subsistence farmers. The building block method (BBM) was applied, using the hydrological analyses performed, to estimate environmental flow requirements, which are presented as mean monthly flows. The flow regime recommended for the Rusape River should reduce or reverse these impacts, whilst ensuring sufficient water resources are released for economic needs. The proposed EFR can be achieved within the observed mean monthly flows. However, it should be stressed that the proposed EFR was developed with a rapid method and is only a first estimate of the EFR for the Rusape River. This study represents a step in developing a management plan for the Save Basin, shared between Zimbabwe and Mozambique.

  16. Determining the required accuracy of LST products for estimating surface energy fluxes

    NASA Astrophysics Data System (ADS)

    Pinheiro, A. C.; Reichle, R.; Sujay, K.; Arsenault, K.; Privette, J. L.; Yu, Y.

    2006-12-01

    Land Surface Temperature (LST) is an important parameter to assess the energy state of a surface. Synoptic satellite observations of LST must be used when attempting to estimate fluxes over large spatial scales. Due to the close coupling between LST, root level water availability, and mass and energy fluxes at the surface, LST is particularly useful over agricultural areas to help determine crop water demands and facilitate water management decisions (e.g., irrigation). Further, LST can be assimilated into land surface models to help improve estimates of latent and sensible heat fluxes. However, the accuracy of LST products and its impact on surface flux estimation is not well known. In this study, we quantify the uncertainty limits in LST products for accurately estimating latent heat fluxes over agricultural fields in the Rio Grande River basin of central New Mexico. We use the Community Land Model (CLM) within the Land Information Systems (LIS), and adopt an Ensemble Kalman Filter approach to assimilate the LST fields into the model. We evaluate the LST and assimilation performance against field measurements of evapotranspiration collected at two eddy-covariance towers in semi-arid cropland areas. Our results will help clarify sensor and LST product requirements for future remote sensing systems.

  17. Estimating sugarcane water requirements for biofuel feedstock production in Maui, Hawaii using satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Anderson, R. G.; Wang, D.

    2011-12-01

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirements is to multiply the daily potential evapotranspiration (ETo), calculated from meteorological parameters, by a crop coefficient (Kc) to obtain the actual crop evapotranspiration (ETc). Generic Kc values are available for many crop types but not for sugarcane in Maui, Hawaii, which grows on a relatively unstudied biennial cycle. In this study, an algorithm is being developed to estimate sugarcane Kc using the normalized difference vegetation index (NDVI) derived from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A series of ASTER NDVI maps was used to depict canopy development over time, i.e., fractional canopy cover (fc), which was measured with a handheld multispectral camera in the fields on satellite overpass days. Canopy cover was correlated with NDVI values, and the NDVI-based canopy cover was then used to estimate Kc curves for the sugarcane plants. The remotely estimated Kc and ETc values were compared and validated against ground-truth ETc measurements. The approach is a promising tool for large-scale estimation of evapotranspiration of sugarcane and other biofuel crops.
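
    A sketch of that NDVI -> fc -> Kc -> ETc chain; the scaling endpoints and Kc limits below are illustrative placeholders, not the study's fitted values:

      import numpy as np

      ndvi = np.array([0.18, 0.35, 0.55, 0.72, 0.80])
      ndvi_soil, ndvi_full = 0.15, 0.85            # assumed bare-soil / full-cover NDVI
      fc = np.clip((ndvi - ndvi_soil) / (ndvi_full - ndvi_soil), 0.0, 1.0)

      kc_min, kc_max = 0.15, 1.25                  # assumed Kc at bare soil / full cover
      kc = kc_min + fc * (kc_max - kc_min)         # linear Kc-cover relation

      eto = 6.0                                    # reference ET for the day (mm)
      print(np.round(kc * eto, 2))                 # ETc (mm/day) along the canopy curve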

  18. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  19. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  20. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging...). Multiply the result of Step (1) with the result of Step (3). Step (6). Multiply the result of Step (2) with the result of Step (4). Step (7). Add the result of Step (5) and the result of Step (6). This is...

  1. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. However, to fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
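
    In the same spirit, though on a toy curve-fitting objective rather than the Courtemanche ion-current formulations, the sketch below couples a particle-swarm global search with a trust-region-reflective local refinement in each iteration, via scipy.optimize.least_squares; all parameter values are illustrative:

      import numpy as np
      from scipy.optimize import least_squares

      def model(p, t):
          return p[0] * np.exp(-p[1] * t) + p[2]

      def residuals(p, t, y):
          return model(p, t) - y

      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 5.0, 100)
      p_true = np.array([2.0, 1.3, 0.4])
      y = model(p_true, t) + 0.01 * rng.normal(size=t.size)   # synthetic "data"

      cost = lambda p: 0.5 * np.sum(residuals(p, t, y) ** 2)
      lo, hi = np.zeros(3), np.array([5.0, 5.0, 2.0])
      n_particles, n_iter, w, c1, c2 = 20, 30, 0.6, 1.5, 1.5

      x = rng.uniform(lo, hi, size=(n_particles, 3))          # swarm positions
      v = np.zeros_like(x)                                    # swarm velocities
      pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
      gbest = pbest[np.argmin(pbest_f)].copy()
      gbest_f = pbest_f.min()

      for _ in range(n_iter):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)
          f = np.array([cost(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          # Gradient-based trust-region-reflective refinement of the current best.
          res = least_squares(residuals, pbest[np.argmin(pbest_f)],
                              args=(t, y), bounds=(lo, hi), method="trf")
          for cand in (res.x, pbest[np.argmin(pbest_f)]):
              if cost(cand) < gbest_f:
                  gbest, gbest_f = cand.copy(), cost(cand)

      print(np.round(gbest, 3))   # should be close to p_true = [2.0, 1.3, 0.4]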

  2. Toxic Release Inventory reporting requirement: Estimating volatile organic compound releases from industrial wastewater treatment facilities

    SciTech Connect

    Hall, F.E. Jr.

    1997-12-31

    In production and maintenance processes at the Oklahoma City Air Logistics Center, industrial wastewater streams are generated that contain organic compounds. These wastewaters are collected and treated in a variety of ways. Some of these collection and treatment steps result in the release of volatile organic compounds (VOC) from the wastewater to the ambient air. This paper discusses the potential VOC emission sources and presents emission estimates for an Industrial Wastewater Treatment Plant (IWTP). As regulatory reporting requirements become increasingly stringent, Air Force installations are being required to quantify and report VOC releases to the environment. The computer software described in this paper was used to identify and quantify VOC discharges to the environment. The magnitude of VOC emissions depends greatly on factors such as the physical properties of the pollutants, the temperature of the wastewater, and the design of the individual collection and treatment process units. IWTP VOC releases can be estimated using a computer model designed by the Environmental Protection Agency. The Surface Impoundment Model System (SIMS) utilizes equipment information to predict air emissions discharged from each individual process unit. SIMS uses mass transfer expressions and process unit information, in addition to chemical/physical property data for the chemicals of interest. By inputting process conditions and constraints, SIMS determines the effluent concentrations along with the air emissions discharged from each individual process unit. The software is user-friendly and capable of estimating effluent concentrations and ambient air releases. The SIMS software was used by Tinker AFB chemical engineers to predict VOC releases to satisfy Toxic Release Inventory reporting requirements.

  3. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
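
    The distinction the article draws can be checked in a couple of lines: for equal distances driven at different speeds, the correct average rate is the harmonic mean of the speeds, not the arithmetic mean:

      # A car covering the same distance at 30 mph and then at 60 mph averages
      # 40 mph (harmonic mean), not 45 mph (arithmetic mean): equal distances
      # mean unequal times, so the slower leg is weighted more heavily.
      speeds = [30.0, 60.0]
      harmonic = len(speeds) / sum(1.0 / s for s in speeds)
      print(harmonic)   # 40.0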

  4. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
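
    The quoted probabilities follow from the Poisson relation P(at least one event) = 1 - exp(-lambda). The rate lambda = 7 per decade for VEI>=4 comes from the text; the VEI>=5 and VEI>=6 rates below are assumed values chosen to reproduce the quoted 49% and 18% probabilities:

      import math

      for label, lam in [("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.20)]:
          print(label, round(1.0 - math.exp(-lam), 3))   # 0.999, 0.488, 0.181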

  5. Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.

  6. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  7. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict
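
    The core arithmetic linking individual energetics to fishery-scale numbers is simple division; the demand and caloric values below are hypothetical placeholders, not the study's estimates:

      # Fish-per-day arithmetic: daily energy demand divided by energy per fish.
      daily_demand_kcal = 200_000.0   # hypothetical adult field energy demand
      kcal_per_chinook = 12_000.0     # hypothetical energy content of one salmon
      population = 87                 # southern resident count quoted above

      fish_per_whale_day = daily_demand_kcal / kcal_per_chinook
      print(round(fish_per_whale_day, 1),                   # ~16.7 fish/whale/day
            round(fish_per_whale_day * population * 365))   # population total per year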

  8. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  9. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions.

    PubMed

    Ware, Colin; Trites, Andrew W; Rosen, David A S; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion-an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric-Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal-and they should also

  10. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions

    PubMed Central

    Ware, Colin; Trites, Andrew W.; Rosen, David A. S.; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion—an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric—Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal—and they should

  11. Preliminary estimates of galactic cosmic ray shielding requirements for manned interplanetary missions

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Nealy, John E.

    1988-01-01

    Estimates of radiation risk to the blood forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shield is required to reduce annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.
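
    The quoted thicknesses follow directly from dividing areal density by material density; the check below, in LaTeX notation, uses the standard densities of water (1.00 g/cm^3) and aluminum (2.70 g/cm^3):

      t = \frac{\sigma}{\rho}, \qquad
      t_{\mathrm{H_2O}} = \frac{5\,\mathrm{g/cm^2}}{1.00\,\mathrm{g/cm^3}} = 5\,\mathrm{cm}, \qquad
      t_{\mathrm{Al}} = \frac{6.5\,\mathrm{g/cm^2}}{2.70\,\mathrm{g/cm^3}} \approx 2.4\,\mathrm{cm}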

  12. Estimation of crop water requirements using remote sensing for operational water resources management

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Spiliotopoulos, Marios; Tzabiras, John; Loukas, Athanasios; Mylopoulos, Nikitas

    2015-06-01

    An integrated modeling system, developed in the framework of "Hydromentor" research project, is applied to evaluate crop water requirements for operational water resources management at Lake Karla watershed, Greece. The framework includes coupled components for operation of hydrotechnical projects (reservoir operation and irrigation works) and estimation of agricultural water demands at several spatial scales using remote sensing. The study area was sub-divided into irrigation zones based on land use maps derived from Landsat 5 TM images for the year 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used to derive actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat TM imagery. Agricultural water needs were estimated using the FAO method for each zone and each control node of the system for a number of water resources management strategies. Two operational strategies of hydro-technical project development (present situation without operation of the reservoir and future situation with the operation of the reservoir) are coupled with three water demand strategies. In total, eight (8) water management strategies are evaluated and compared. The results show that, under the existing operational water resources management strategies, the crop water requirements are quite large. However, the operation of the proposed hydro-technical projects in Lake Karla watershed coupled with water demand management measures, like improvement of existing water distribution systems, change of irrigation methods, and changes of crop cultivation could alleviate the problem and lead to sustainable and ecological use of water resources in the study area.
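
    As a rough illustration of the FAO-style bookkeeping mentioned above, the sketch below computes a seasonal net irrigation requirement for one zone as the accumulated daily deficit between crop evapotranspiration (Kc times ET0) and effective rainfall. The daily values and the Kc curve are hypothetical stand-ins, not the study's inputs.

      import numpy as np

      # Hedged sketch of a single-zone seasonal water requirement:
      # ETc = Kc * ET0; net requirement = sum of daily deficits.
      et0 = np.full(180, 5.0)        # reference ET, mm/day (assumed)
      kc = np.interp(np.arange(180), [0, 40, 120, 180], [0.4, 1.15, 1.15, 0.5])
      rain_eff = np.full(180, 0.8)   # effective rainfall, mm/day (assumed)

      etc = kc * et0                                            # crop ET, standard conditions
      net_requirement = np.clip(etc - rain_eff, 0, None).sum()  # mm/season
      print(f"net irrigation requirement = {net_requirement:.0f} mm")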

  13. Updated estimates of long-term average dissolved-solids loading in streams and rivers of the Upper Colorado River Basin

    USGS Publications Warehouse

    Tillman, Fred D; Anning, David W.

    2014-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigate over 4.5 million acres of farmland, and annually generate about 12 billion kilowatt hours of hydroelectric power. The Upper Colorado River Basin, part of the Colorado River Basin, encompasses more than 110,000 mi2 and is the source of much of the more than 9 million tons of dissolved solids that annually flow past Hoover Dam. High dissolved-solids concentrations in the river are the cause of substantial economic damages to users, primarily in reduced agricultural crop yields and corrosion, with damages estimated to be greater than 300 million dollars annually. In 1974, the Colorado River Basin Salinity Control Act created the Colorado River Basin Salinity Control Program to investigate and implement a broad range of salinity control measures. A 2009 study by the U.S. Geological Survey, supported by the Salinity Control Program, used the Spatially Referenced Regressions on Watershed Attributes surface-water quality model to examine dissolved-solids supply and transport within the Upper Colorado River Basin. Dissolved-solids loads developed for 218 monitoring sites were used to calibrate the 2009 Upper Colorado River Basin Spatially Referenced Regressions on Watershed Attributes dissolved-solids model. This study updates and develops new dissolved-solids loading estimates for 323 Upper Colorado River Basin monitoring sites using streamflow and dissolved-solids concentration data through 2012, to support a planned Spatially Referenced Regressions on Watershed Attributes modeling effort that will investigate the contributions to dissolved-solids loads from irrigation and rangeland practices.
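
    Site loading estimates of this kind are commonly built from a concentration-streamflow rating curve. The sketch below shows the simplest log-log variant; the actual USGS work uses more elaborate regression estimators with bias corrections, and all sample values here are hypothetical.

      import numpy as np

      # Illustrative log-log rating curve for dissolved-solids loading.
      q_sampled = np.array([10., 25., 60., 120., 300.])     # streamflow, ft^3/s (assumed)
      c_sampled = np.array([900., 700., 520., 430., 300.])  # concentration, mg/L (assumed)
      load_sampled = q_sampled * c_sampled * 0.0027         # tons/day (standard unit factor)

      b, a = np.polyfit(np.log(q_sampled), np.log(load_sampled), 1)  # ln L = a + b ln Q
      q_daily = np.random.lognormal(mean=4.0, sigma=0.6, size=365)   # synthetic flow record
      daily_load = np.exp(a) * q_daily**b                   # tons/day, no bias correction
      print(f"annual load = {daily_load.sum():,.0f} tons")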

  14. Capital requirements for the transportation of energy materials: 1979 ARC estimates. Draft final report

    SciTech Connect

    Not Available

    1980-08-13

    This report contains TERA's estimates of capital requirements to transport natural gas, crude oil, petroleum products, and coal in the United States by 1990. The low, medium, and high world-oil-price scenarios from the EIA's Mid-range Energy Forecasting System (MEFS), as used in the 1979 Annual Report to Congress (ARC), were provided as a basis for the analysis and represent three alternative futures. TERA's approach varies by energy commodity to make best use of the information and analytical tools available. Summaries of transportation investment requirements through 1990 are given. Total investment requirements for the three modes (pipelines, rails, waterways) and the three energy commodities can accumulate to a $49.9 to $50.9 billion range depending on the scenario. The scenarios are distinguished primarily by the world price of oil which, given deregulation of domestic oil prices, affects US oil prices even more profoundly than in the past. The high price of oil, following the evidence of the last year, is projected to hold demand for oil below that of the recent past.

  15. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way. Some authors have defined the crop-water production function between yield and the total amount of water applied, whereas others have defined it as a relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change of soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When efficiency is low, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd regions of Russia. The study was conducted during the 2012 vegetation period, from April to September. All operations, from importing the data to calculating the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web
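
    The efficiency logic described above amounts to a one-line seasonal water balance; the sketch below uses hypothetical field totals in mm.

      # Seasonal water-balance classification, as described in the abstract.
      applied = 620.0   # irrigation water applied, mm (assumed)
      pet = 700.0       # potential evapotranspiration, mm (assumed)

      if applied <= pet:
          # high-efficiency case: applied water roughly equals crop ET,
          # assuming negligible change in soil moisture storage
          et_estimate, excess = applied, 0.0
      else:
          # low-efficiency case: the excess goes to storage, runoff,
          # or deep percolation beyond the root zone
          et_estimate, excess = pet, applied - pet
      print(et_estimate, excess)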

  16. A Method for Automated Classification of Parkinson's Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI.

    PubMed

    Banerjee, Monami; Okun, Michael S; Vaillancourt, David E; Vemuri, Baba C

    2016-01-01

    Parkinson's disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may possibly be applied to larger population sizes and other parkinsonian syndromes in the near future.

  17. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    PubMed Central

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may possibly be applied to larger population sizes and other parkinsonian syndromes in the near future.

  18. Assessing potential of vertical average soil moisture (0-40cm) estimation for drought monitoring using MODIS data: a case study

    NASA Astrophysics Data System (ADS)

    Ma, Jianwei; Huang, Shifeng; Li, Jiren; Li, Xiaotao; Song, Xiaoning; Leng, Pei; Sun, Yayong

    2015-12-01

    Soil moisture is an important parameter in research on hydrology, agriculture, and meteorology. The present study is designed to produce a near real-time soil moisture estimation algorithm by linking optical/IR measurements to ground-measured soil moisture, which is then used to monitor regional drought. It has been found that the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST) are related to surface soil moisture. Therefore, a relationship between ground-measured soil moisture and NDVI and LST can be developed. Six days of NDVI and LST data calculated from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) over Shandong province, from October 2009 to May 2010, were combined with ground-measured volumetric soil moisture at different depths (10 cm, 20 cm, 40 cm, and the vertical mean over 0-40 cm) and for different soil types to determine regression relationships at a 1 km scale. Based on these regression relationships, mean volumetric soil moisture over the vertical profile (0-40 cm) at 1 km resolution was calculated over Shandong province, and drought maps were then obtained. The results show that a significant relationship exists between NDVI, LST, and soil moisture at different soil depths, and that the regression relationships are soil-type dependent. Moreover, the drought monitoring results agree well with the actual situation.
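
    The regression step described above is an ordinary least-squares fit of soil moisture on NDVI and LST; a minimal sketch follows, with hypothetical sample values for one soil type.

      import numpy as np

      # Fit sm = a + b*NDVI + c*LST from paired ground/satellite samples,
      # then apply the coefficients to a new pixel. Values are hypothetical.
      ndvi = np.array([0.21, 0.35, 0.44, 0.52, 0.60, 0.31])
      lst = np.array([305., 298., 296., 292., 290., 301.])   # K
      sm = np.array([0.12, 0.18, 0.22, 0.26, 0.30, 0.16])    # m^3/m^3, 0-40 cm mean

      X = np.column_stack([np.ones_like(ndvi), ndvi, lst])
      coef, *_ = np.linalg.lstsq(X, sm, rcond=None)
      sm_pixel = coef[0] + coef[1] * 0.5 + coef[2] * 295.0   # one 1-km pixel
      print(coef, sm_pixel)

    In the study, a separate fit of this kind would be maintained per soil type, since the abstract reports that the relationships are soil-type dependent.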

  19. EURRECA-Estimating vitamin D requirements for deriving dietary reference values.

    PubMed

    Cashman, Kevin D; Kiely, Mairead

    2013-01-01

    The time course of the EURRECA project, from 2008 to 2012, overlapped considerably with the timeframe of the process undertaken by the North American Institute of Medicine (IOM) to revise dietary reference intakes for vitamin D and calcium (published November 2010). Therefore the aims of the vitamin D-related activities in EURRECA were formulated to address knowledge requirements that would complement the activities undertaken by the IOM and provide additional resources for risk assessors and risk management agencies charged with the task of setting dietary reference values for vitamin D. A total of three systematic reviews were carried out. The first, which pre-dated the IOM review process, identified and evaluated existing and novel biomarkers of vitamin D status and confirmed that circulating 25-hydroxyvitamin D (25(OH)D) concentration is a robust and reliable marker of vitamin D status. The second systematic review conducted a meta-analysis of the dose-response of serum 25(OH)D to vitamin D intake from randomized controlled trials (RCT) among adults to explore the most appropriate model of the vitamin D intake-serum 25(OH)D relationship to estimate requirements. The third review also carried out a meta-analysis to evaluate evidence of efficacy from RCT using foods fortified with vitamin D, and found they increased circulating 25(OH)D concentrations in a dose-dependent manner but identified a need for stronger data on the efficacy of vitamin D-fortified food on deficiency prevention and potential health outcomes, including adverse effects. Finally, narrative reviews provided estimates of the prevalence of inadequate intakes of vitamin D in adults and children from international dietary surveys, as well as a compilation of research requirements for vitamin D to inform current and future assessments of vitamin D requirements. [Supplementary materials are available for this article. Go to the publisher's online edition of Critical Reviews in Food Science and Nutrition for

  20. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands along the U.S.-Mexico border and in Central Asia and compared to estimates of irrigation water used.
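
    The "add water until field capacity" step described above can be sketched as a simple bucket loop. All values below are hypothetical; in the actual study this logic sits inside the SiB2 photosynthesis-conductance model rather than a standalone bucket.

      import numpy as np

      # At each time step, top the soil column up to field capacity and
      # count the added water as the minimum supplemental requirement.
      field_capacity = 150.0                 # plant-available storage, mm (assumed)
      soil = 90.0                            # initial storage, mm (assumed)
      rain = np.random.gamma(0.5, 4.0, 120)  # synthetic daily precipitation, mm
      et = np.full(120, 4.5)                 # modeled daily ET demand, mm (assumed)

      supplement = 0.0
      for p, e in zip(rain, et):
          soil = soil + p - e
          if soil < field_capacity:
              supplement += field_capacity - soil
              soil = field_capacity
      print(f"minimum supplemental water = {supplement:.0f} mm")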

  1. Estimation of crop water requirements: extending the one-step approach to dual crop coefficients

    NASA Astrophysics Data System (ADS)

    Lhomme, J. P.; Boudhina, N.; Masmoudi, M. M.; Chehbouni, A.

    2015-07-01

    Crop water requirements are commonly estimated with the FAO-56 methodology based upon a two-step approach: first a reference evapotranspiration (ET0) is calculated from weather variables with the Penman-Monteith equation, then ET0 is multiplied by a tabulated crop-specific coefficient (Kc) to determine the water requirement (ETc) of a given crop under standard conditions. This method has been challenged to the benefit of a one-step approach, where crop evapotranspiration is directly calculated from a Penman-Monteith equation, its surface resistance replacing the crop coefficient. Whereas the transformation of the two-step approach into a one-step approach has been well documented when a single crop coefficient (Kc) is used, the case of dual crop coefficients (Kcb for the crop and Ke for the soil) has not been treated yet. The present paper examines this specific case. Using a full two-layer model as a reference, it is shown that the FAO-56 dual crop coefficient approach can be translated into a one-step approach based upon a modified combination equation. This equation has the basic form of the Penman-Monteith equation but its surface resistance is calculated as the parallel sum of a foliage resistance (replacing Kcb) and a soil surface resistance (replacing Ke). We also show that the foliage resistance, which depends on leaf stomatal resistance and leaf area, can be inferred from the basal crop coefficient (Kcb) in a way similar to the Matt-Shuttleworth method.
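
    In symbols, the parallel combination described above takes the familiar conductance-addition form (the notation here is generic, not necessarily the paper's):

      \frac{1}{r_s} = \frac{1}{r_f} + \frac{1}{r_{ss}}
      \quad\Longleftrightarrow\quad
      r_s = \left( \frac{1}{r_f} + \frac{1}{r_{ss}} \right)^{-1}

    where r_f is the foliage resistance (playing the role of Kcb) and r_{ss} the soil surface resistance (playing the role of Ke); r_s then enters the Penman-Monteith equation in place of the usual surface resistance.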

  2. A new remote sensing procedure for the estimation of crop water requirements

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, M.; Loukas, A.; Mylopoulos, N.

    2015-06-01

    The objective of this work is the development of a new approach for the estimation of water requirements for the most important crops located at Karla Watershed, central Greece. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used as a basis for the derivation of actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat ETM+ imagery. MODIS imagery has been also used, and a spatial downscaling procedure is followed between the two sensors for the derivation of a new NDVI product with a spatial resolution of 30 m x 30 m. GER 1500 spectro-radiometric measurements are additionally conducted during 2012 growing season. Cotton, alfalfa, corn and sugar beets fields are utilized, based on land use maps derived from previous Landsat 7 ETM+ images. A filtering process is then applied to derive NDVI values after acquiring Landsat ETM+ based reflectance values from the GER 1500 device. ETrF vs NDVI relationships are produced and then applied to the previous satellite based downscaled product in order to finally derive a 30 m x 30 m daily ETrF map for the study area. CropWat model (FAO) is then applied, taking as an input the new crop coefficient values with a spatial resolution of 30 m x 30 m available for every crop. CropWat finally returns daily crop water requirements (mm) for every crop and the results are analyzed and discussed.
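
    The final mapping step described above is a linear ETrF-NDVI fit applied pixel-wise to the downscaled NDVI product; a minimal sketch with hypothetical samples follows.

      import numpy as np

      # Fit a linear ETrF-NDVI relation from field samples, then apply it
      # to a 30 m NDVI raster. All values are hypothetical stand-ins.
      ndvi_samples = np.array([0.25, 0.40, 0.55, 0.70, 0.80])
      etrf_samples = np.array([0.30, 0.52, 0.71, 0.92, 1.02])

      slope, intercept = np.polyfit(ndvi_samples, etrf_samples, 1)
      ndvi_30m = np.random.uniform(0.2, 0.85, size=(100, 100))  # stand-in raster
      etrf_30m = slope * ndvi_30m + intercept                   # daily ETrF map
      print(etrf_30m.mean())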

  3. MANPOWER REQUIREMENTS AND DEMAND IN AGRICULTURE BY REGIONS AND NATIONALLY, WITH ESTIMATION OF VOCATIONAL TRAINING AND EDUCATIONAL NEEDS AND PRODUCTIVITY.

    ERIC Educational Resources Information Center

    ARCUS, PETER; HEADY, EARL O.

    THE PURPOSE OF THIS STUDY IS TO ESTIMATE THE MANPOWER REQUIREMENTS FOR THE NATION AND FOR 144 REGIONS, THE TYPES OF SKILLS AND WORK ABILITIES REQUIRED BY AGRICULTURE IN THE NEXT 15 YEARS, AND THE TYPES AND AMOUNTS OF EDUCATION NEEDED. THE QUANTITATIVE ANALYSIS IS BEING MADE BY METHODS APPROPRIATE TO THE PHASES OF THE STUDY--(1) INTERRELATIONS AMONG…

  4. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G:F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be
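
    A factorial requirement of the kind described above is typically a maintenance term scaled on metabolic weight plus a deposition term scaled on protein gain. The sketch below shows the shape of such a calculation only; both coefficients are illustrative assumptions, not the study's calibrated values.

      # Hedged sketch of a factorial daily SID Lys requirement.
      def sid_lys_requirement(bw_kg, protein_deposition_g):
          maintenance = 0.036 * bw_kg**0.75          # g/day (assumed coefficient)
          deposition = 0.12 * protein_deposition_g   # g/day (assumed Lys content/efficiency)
          return maintenance + deposition

      print(sid_lys_requirement(50.0, 140.0))        # e.g., a 50 kg pig depositing 140 g/d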

  5. Assessment of radar resolution requirements for soil moisture estimation from simulated satellite imagery. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.

    1982-01-01

    Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.

  6. Electrofishing effort required to estimate biotic condition in Southern Idaho Rivers

    USGS Publications Warehouse

    Maret, T.R.; Ott, D.S.; Herlihy, A.T.

    2007-01-01

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions. © Copyright by the American Fisheries Society 2007.

  7. Estimating the Reliability of Dynamic Variables Requiring Rater Judgment: A Generalizability Paradigm.

    ERIC Educational Resources Information Center

    Webber, Larry; And Others

    Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool. Systolic…

  8. Number of trials required to estimate a free-energy difference, using fluctuation relations.

    PubMed

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations. PMID:27300866
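
    For orientation, the two standard definitions behind this abstract, written in LaTeX notation, are the Jarzynski equality used to estimate ΔF from nonequilibrium work measurements and the order-∞ Rényi entropy used in the bound:

      e^{-\Delta F / k_{\mathrm{B}} T} = \left\langle e^{-W / k_{\mathrm{B}} T} \right\rangle,
      \qquad
      H_{\infty}(p) = -\ln \max_{x} p(x)

    where W is the work recorded in one trial and the average runs over repeated trials; the paper's contribution is to bound the expected number of trials in terms of H_∞ and a tolerance parameter.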

  9. Number of trials required to estimate a free-energy difference, using fluctuation relations

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations.

  10. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  11. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  12. Physically-based Methods for the Estimation of Crop Water Requirements from E.O. Optical Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The estimation of evapotranspiration (ET) represents the basic information for the evaluation of crop water requirements. A widely used method to compute ET is based on the so-called "crop coefficient" (Kc), defined as the ratio of total evapotranspiration to reference evapotranspiration (ET0). The val...

  13. Reported energy intake by weight status, day and estimated energy requirement among adults: NHANES 2003-2008

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Objective: To describe energy intake reporting by gender, weight status, and interview sequence and to compare reported intakes to the Estimated Energy Requirement at different levels of physical activity. Methods: Energy intake was self-reported by 24-hour recall on two occasions (day 1 and day 2)...

  14. Utility of multi temporal satellite images for crop water requirements estimation and irrigation management in the Jordan Valley

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Identifying the spatial and temporal distribution of crop water requirements is key to successful management of water resources in the dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ETo) in the Jordan Valley according to the...

  15. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model

    PubMed Central

    Jiang, Shengyu; Wang, Chun; Weiss, David J

    2016-01-01

    Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root mean square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
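
    The three recovery criteria named above are simple to compute once true and estimated parameters are paired; a minimal sketch with hypothetical discrimination parameters follows.

      import numpy as np

      # Correlation, bias, and root mean square error between true and
      # estimated item parameters (hypothetical values).
      true = np.array([1.2, 0.8, 1.5, 1.0, 1.8])
      est = np.array([1.15, 0.92, 1.41, 1.07, 1.71])

      r = np.corrcoef(true, est)[0, 1]
      bias = np.mean(est - true)
      rmse = np.sqrt(np.mean((est - true) ** 2))
      print(f"r={r:.3f}, bias={bias:+.3f}, RMSE={rmse:.3f}")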

  16. Estimated quantitative amino acid requirements for Florida pompano reared in low-salinity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As with most marine carnivores, Florida pompano require relatively high crude protein diets to obtain optimal growth. Precision formulations to match the dietary indispensable amino acid (IAA) pattern to a species’ requirements can be used to lower the overall dietary protein. However IAA requirem...

  17. Minimizing instrumentation requirement for estimating crop water stress index and transpiration of maize

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...

  18. EVALUATION OF SAMPLING FREQUENCIES REQUIRED TO ESTIMATE NUTRIENT AND SUSPENDED SEDIMENT LOADS IN LARGE RIVERS

    EPA Science Inventory

    Nutrients and suspended sediments in streams and large rivers are two major issues facing state and federal agencies. Accurate estimates of nutrient and sediment loads are needed to assess a variety of important water-quality issues including total maximum daily loads, aquatic ec...

  19. A Method to Estimate the Number of House Officers Required in Teaching Hospitals.

    ERIC Educational Resources Information Center

    Chan, Linda S.; Bernstein, Sol

    1980-01-01

    A method of estimating the number of house officers needed for direct patient care in teaching hospitals is discussed. An application of the proposed method is illustrated for 11 clinical services at the Los Angeles County-University of Southern California Medical Center. (Author/MLW)

  20. Development of Procedures for Generating Alternative Allied Health Manpower Requirements and Supply Estimates.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    This report presents results of a project to assess the adequacy of existing data sources on the supply of 21 allied health occupations in order to develop improved data collection strategies and improved procedures for estimation of manpower needs. Following an introduction, chapter 2 provides a discussion of the general phases of the project and…

  1. SAMPLING AND CALIBRATION REQUIREMENTS FOR SOIL PROPERTY ESTIMATION USING NIR SPECTROSCOPY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil physical and chemical properties are important in crop production since they control the availability of plant water and nutrients. Optical diffuse reflectance sensing is a potential approach for rapid and reliable on-site estimation of soil properties. One issue with this sensing approach is w...

  2. Sampling and Calibration Requirements for Soil Property Estimation Using Reflectance Spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Optical diffuse reflectance sensing is a potential approach for rapid and reliable on-site estimation of soil properties. One issue with this sensing approach is whether additional calibration is necessary when the sensor is applied under conditions (e.g., soil types or ambient conditions) different...

  3. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schrödinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  4. Shadow Radiation Shield Required Thickness Estimation for Space Nuclear Power Units

    NASA Astrophysics Data System (ADS)

    Voevodina, E. V.; Martishin, V. M.; Ivanovsky, V. A.; Prasolova, N. O.

    The paper concerns the theoretical possibility, from the perspective of radiation safety, of astronauts visiting orbital transport vehicles based on a nuclear power unit and an electric propulsion system in Earth orbit, in order to carry out work with the payload. The possible duration of the crew's stay in the payload area of the orbital transport vehicle has been estimated for different reactor powers; the reactor is an integral part of the nuclear power unit.

  5. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
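
    The Tikhonov-regularized minimum norm estimate has a closed form; the sketch below applies it to synthetic data. Matrix sizes, the noise level, and lambda are arbitrary assumptions, and real MEG pipelines use a noise covariance matrix rather than the identity.

      import numpy as np

      # Generic minimum norm estimate: x_hat = G^T (G G^T + lambda^2 I)^{-1} y,
      # where G is the gain (lead field) matrix mapping sources to sensors.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((102, 500))        # sensors x sources (assumed sizes)
      x_true = np.zeros(500); x_true[42] = 1.0   # one active source
      y = G @ x_true + 0.05 * rng.standard_normal(102)

      lam = 0.1                                  # Tikhonov regularization parameter
      x_hat = G.T @ np.linalg.solve(G @ G.T + lam**2 * np.eye(102), y)
      print(int(np.abs(x_hat).argmax()))         # peak should land near source 42

    In the spirit of the abstract, one would sweep lam separately for power detection and for coherence detection, expecting a much smaller optimum for coherence.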

  6. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  7. Biology, population structure, and estimated forage requirements of lake trout in Lake Michigan

    USGS Publications Warehouse

    Eck, Gary W.; Wells, LaRue

    1983-01-01

    Data collected during successive years (1971-79) of sampling lake trout (Salvelinus namaycush) in Lake Michigan were used to develop statistics on lake trout growth, maturity, and mortality, and to quantify seasonal lake trout food and food availability. These statistics were then combined with data on lake trout year-class strengths and age-specific food conversion efficiencies to compute production and forage fish consumption by lake trout in Lake Michigan during the 1979 growing season (i.e., 15 May-1 December). An estimated standing stock of 1,486 metric tons (t) at the beginning of the growing season produced an estimated 1,129 t of fish flesh during the period. The lake trout consumed an estimated 3,037 t of forage fish, to which alewives (Alosa pseudoharengus) contributed about 71%, rainbow smelt (Osmerus mordax) 18%, and slimy sculpins (Cottus cognatus) 11%. Seasonal changes in bathymetric distributions of lake trout with respect to those of forage fish of a suitable size for prey were major determinants of the size and species compositions of fish in the seasonal diet of lake trout.
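
    The figures quoted above imply a seasonal gross conversion efficiency of roughly 0.37 t of production per tonne consumed, and the diet fractions partition total consumption by species; a quick arithmetic consistency check:

      # Back-of-envelope check of the consumption figures in the abstract.
      production_t = 1129.0
      consumption_t = 3037.0
      gross_efficiency = production_t / consumption_t   # ~0.37

      diet = {"alewife": 0.71, "rainbow smelt": 0.18, "slimy sculpin": 0.11}
      by_species_t = {sp: frac * consumption_t for sp, frac in diet.items()}
      print(round(gross_efficiency, 2), by_species_t)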

  8. Estimating resting energy expenditure in patients requiring nutritional support: a survey of dietetic practice.

    PubMed

    Green, A J; Smith, P; Whelan, K

    2008-01-01

    Estimation of resting energy expenditure (REE) involves predicting basal metabolic rate (BMR) plus adjustment for metabolic stress. The aim of this study was to investigate the methods used to estimate REE and to identify the impact of the patient's clinical condition and the dietitians' work profile on the stress factor assigned. A random sample of 115 dietitians from the United Kingdom with an interest in nutritional support completed a postal questionnaire regarding the estimation of REE for 37 clinical conditions. The Schofield equation was used by the majority (99%) of dietitians to calculate BMR; however, the stress factors assigned varied considerably with coefficients of variation ranging from 18.5 (cancer with cachexia) to 133.9 (HIV). Dietitians specializing in gastroenterology assigned a higher stress factor to decompensated liver disease than those not specializing in gastroenterology (19.3 vs 10.7, P=0.004). The results of this investigation strongly suggest that there is wide inconsistency in the assignment of stress factors within specific conditions and gives rise to concern over the potential consequences in terms of under- or overfeeding that may ensue. PMID:17311053
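
    The two-step estimate described above is BMR from a Schofield weight-based equation multiplied by a stress factor. The sketch below treats the abstract's stress factors as percentage increments, and the Schofield coefficients shown are the commonly cited values for men aged 18-30 years; both readings are assumptions to check against the source tables.

      # Hedged sketch: REE = Schofield BMR * (1 + stress factor).
      def ree_mj_per_day(weight_kg, stress_factor_pct):
          bmr = 0.063 * weight_kg + 2.896        # Schofield, males 18-30 y (MJ/day)
          return bmr * (1.0 + stress_factor_pct / 100.0)

      # e.g., a 70 kg man with the 19.3% factor assigned to decompensated
      # liver disease by the gastroenterology specialists in the survey
      print(ree_mj_per_day(70.0, 19.3))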

  9. A comparison of methods to estimate nutritional requirements from experimental data.

    PubMed

    Pesti, G M; Vedenov, D; Cason, J A; Billard, L

    2009-01-01

    1. Research papers use a variety of methods for evaluating experiments designed to determine nutritional requirements of poultry. Growth trials result in a set of ordered pairs of data. Often, point-by-point comparisons are made between treatments using analysis of variance. This approach ignores that response variables (body weight, feed efficiency, bone ash, etc.) are continuous rather than discrete. Point-by-point analyses harvest much less than the total amount of information from the data. Regression models are more effective at gleaning information from data, but the concept of "requirements" is poorly defined by many regression models. 2. Response data from a study of the lysine requirements of young broilers were used to compare methods of determining requirements. In this study, multiple range tests were compared with quadratic polynomials (QP), broken line models with linear (BLL) or quadratic (BLQ) ascending portions, the saturation kinetics model (SK), a logistic model (LM) and a compartmental (CM) model. 3. The sum of total residuals squared was used to compare the models. The SK and LM were the best fit models, followed by the CM, BLL, BLQ, and QP models. A plot of the residuals versus nutrient intake showed clearly that the BLQ and SK models fitted the data best in the important region where the ascending portion meets the plateau. 4. The BLQ model clearly defines the technical concept of nutritional requirements as typically defined by nutritionists. However, the SK, LM and CM models better depict the relationship typically defined by economists as the "law of diminishing marginal productivity". The SK model was used to demonstrate how the law of diminishing marginal productivity can be applied to poultry nutrition, and how the "most economical feeding level" may replace the concept of "requirements". PMID:19234926
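
    Of the models compared above, the broken-line model with a linear ascending portion (BLL) is the simplest to fit and makes the "requirement" explicit as the breakpoint; a minimal sketch with hypothetical dose-response data follows.

      import numpy as np
      from scipy.optimize import curve_fit

      # BLL model: response rises linearly up to a breakpoint (the
      # "requirement") and is flat beyond it. Data are hypothetical.
      def bll(x, plateau, slope, breakpoint):
          return plateau - slope * np.maximum(breakpoint - x, 0.0)

      lys = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2])         # % SID lysine (assumed)
      gain = np.array([480., 530., 575., 600., 603., 601.])  # g/day (assumed)

      popt, _ = curve_fit(bll, lys, gain, p0=[600., 300., 1.0])
      print(f"estimated requirement = {popt[2]:.2f}% SID Lys")

    Swapping in a quadratic ascending portion (BLQ) or the saturation kinetics model changes only the model function passed to curve_fit, which is what makes the residual-based comparison in the abstract straightforward.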

  10. Estimating Sugarcane Water Requirements for Biofuel Feedstock Production in Maui, Hawaii Using Satellite Imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop eva...