Science.gov

Sample records for estimated average requirement

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that the structural uncertainty among reference evapotranspiration models is far more important than the parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
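
    The sketch below illustrates the general idea of a reliability-weighted ensemble average in the spirit of REA: instead of weighting all ensemble members equally, weights are set inversely to each member's distance from the current consensus and iterated until the consensus stabilizes. This is a minimal, simplified illustration, not the exact scheme used in the paper; the function name, iteration rule, and the irrigation-water-requirement values are hypothetical.

    ```python
    import numpy as np

    def rea_weighted_average(predictions, n_iter=20, eps=1e-6):
        """Reliability-weighted ensemble average (simplified REA-style scheme).

        predictions : one irrigation-water-requirement estimate per ensemble member.
        Weights are proportional to the inverse distance of each member from the
        current weighted consensus, iterated until the consensus stabilizes.
        """
        preds = np.asarray(predictions, dtype=float)
        weights = np.full(preds.shape, 1.0 / preds.size)   # start equally weighted
        for _ in range(n_iter):
            consensus = np.sum(weights * preds)
            reliability = 1.0 / (np.abs(preds - consensus) + eps)
            weights = reliability / reliability.sum()
        return np.sum(weights * preds), weights

    # Hypothetical requirements (mm) from six ET models with one crop-coefficient set
    iwr = [380, 410, 455, 520, 395, 605]
    mean_rea, w = rea_weighted_average(iwr)
    print(f"equal-weight mean: {np.mean(iwr):.0f} mm, REA-style mean: {mean_rea:.0f} mm")
    ```

    Members far from the consensus (here the 605 mm outlier) receive small weights, which is how a reliability-based average can shift the exceedance probability of a fixed threshold relative to the equally weighted mean.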

  3. Dietary Protein Requirement of Men >65 Years Old Determined by the Indicator Amino Acid Oxidation Technique Is Higher than the Current Estimated Average Requirement.

    PubMed

    Rafii, Mahroukh; Chapman, Karen; Elango, Rajavel; Campbell, Wayne W; Ball, Ronald O; Pencharz, Paul B; Courtney-Martin, Glenda

    2016-03-09

    The current estimated average requirement (EAR) and RDA for protein of 0.66 and 0.8 g ⋅ kg(-1) ⋅ d(-1), respectively, for adults, including older men, are based on nitrogen balance data analyzed by monolinear regression. Recent studies in young men and older women that used the indicator amino acid oxidation (IAAO) technique suggest that those values may be too low. This observation is supported by 2-phase linear crossover analysis of the nitrogen balance data. The main objective of this study was to determine the protein requirement for older men by using the IAAO technique. Six men aged >65 y were studied; each individual was tested 7 times with protein intakes ranging from 0.2 to 2.0 g ⋅ kg(-1) ⋅ d(-1) in random order for a total of 42 studies. The diets provided energy at 1.5 times the resting energy expenditure and were isocaloric. Protein was consumed hourly for 8 h as an amino acid mixture with the composition of egg protein with l-[1-(13)C]phenylalanine as the indicator amino acid. The group mean protein requirement was determined by applying a mixed-effects change-point regression analysis to F(13)CO2 (label tracer oxidation in breath (13)CO2), which identified a breakpoint in F(13)CO2 in response to graded intakes of protein. The estimated protein requirement and RDA for older men were 0.94 and 1.24 g ⋅ kg(-1) ⋅ d(-1), respectively, which are not different from values we published using the same method in young men and older women. The current intake recommendations for older adults for dietary protein of 0.66 g ⋅ kg(-1) ⋅ d(-1) for the EAR and 0.8 g ⋅ kg(-1) ⋅ d(-1) for the RDA appear to be underestimated by ∼30%. Future longer-term studies should be conducted to validate these results. This trial was registered at clinicaltrials.gov as NCT01948492. © 2016 American Society for Nutrition.
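
    The requirement in this kind of study is read off as the breakpoint in the indicator-oxidation response to graded protein intakes. The paper used a mixed-effects change-point regression; the sketch below shows only the simpler fixed-effects version of the same idea, a two-phase ("broken-stick") fit located by grid search. All data values and the candidate grid are hypothetical.

    ```python
    import numpy as np

    def fit_breakpoint(x, y, candidates):
        """Two-phase linear fit: sloped segment below the breakpoint, flat plateau
        above it; the breakpoint minimising the residual sum of squares is taken
        as the estimated requirement."""
        best = None
        for bp in candidates:
            below, above = x < bp, x >= bp
            if below.sum() < 2 or above.sum() < 2:
                continue
            slope, intercept = np.polyfit(x[below], y[below], 1)
            plateau = y[above].mean()
            pred = np.where(below, intercept + slope * x, plateau)
            sse = np.sum((y - pred) ** 2)
            if best is None or sse < best[0]:
                best = (sse, bp)
        return best[1]

    # Hypothetical F13CO2 responses to graded protein intakes (g/kg/d)
    intake = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
    f13co2 = np.array([9.8, 8.9, 8.1, 7.2, 6.6, 6.4, 6.5, 6.4])
    print("estimated requirement:", fit_breakpoint(intake, f13co2,
                                                   np.linspace(0.5, 1.6, 45)))
    ```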

  4. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    PubMed

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; P<.001). The difference between EER(PAR) and EER(IOM) values ranged from 40 kcal/d (1.2% higher than EER(IOM)) in obese (body mass index [BMI] ≥30) men to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal weight (BMI <25) women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals (P<.001). The PAR protocol is an accurate method for deriving nationally representative estimates of EER and APAR values. These descriptive data provide novel quantitative baseline values for future investigations into associations of physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
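
    The factorial relation underlying this kind of protocol is simply that total daily energy requirement equals resting energy expenditure multiplied by the average physical activity ratio. The snippet below is a minimal hedged illustration of that arithmetic using the population APAR reported in the abstract; the resting expenditure value is hypothetical and the function is not the paper's actual PAR protocol.

    ```python
    def factorial_eer(resting_energy_kcal_per_day, apar):
        """Factorial estimate: daily energy requirement = resting energy
        expenditure x average physical activity ratio (APAR), i.e. total
        expenditure expressed as a multiple of rest."""
        return resting_energy_kcal_per_day * apar

    # Hypothetical resting expenditure of 1900 kcal/d with the reported male APAR of 1.53
    print(factorial_eer(1900, 1.53))   # ~2907 kcal/d, near the 2940 kcal/d population EER for men
    ```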

  5. Clinical validity of the estimated energy requirement and the average protein requirement for nutritional status change and wound healing in older patients with pressure ulcers: A multicenter prospective cohort study.

    PubMed

    Iizaka, Shinji; Kaitani, Toshiko; Nakagami, Gojiro; Sugama, Junko; Sanada, Hiromi

    2015-11-01

    Adequate nutritional intake is essential for pressure ulcer healing. Recently, the estimated energy requirement (30 kcal/kg) and the average protein requirement (0.95 g/kg) necessary to maintain metabolic balance have been reported. The purpose was to evaluate the clinical validity of these requirements in older hospitalized patients with pressure ulcers by assessing nutritional status and wound healing. This multicenter prospective study, carried out as a secondary analysis of a clinical trial, included 194 patients with pressure ulcers aged ≥65 years from 29 institutions. Nutritional status, including anthropometry and biochemical tests, and wound status, assessed by a structured severity tool, were evaluated over 3 weeks. Energy and protein intake were determined from medical records on a typical day and dichotomized by whether they met the estimated average requirement. Longitudinal data were analyzed with a multivariate mixed-effects model. Meeting the energy requirement was associated with changes in weight (P < 0.001), arm muscle circumference (P = 0.003) and serum albumin level (P = 0.016). Meeting the protein requirement was associated with changes in weight (P < 0.001) and serum albumin level (P = 0.043). These markers decreased in patients who did not meet the requirement, but were stable or increased in those who did. Energy and protein intake were associated with wound healing for deep ulcers (P = 0.013 for both), improving exudates and necrotic tissue, but not for superficial ulcers. The estimated energy requirement and average protein requirement were clinically validated for prevention of nutritional decline and of impaired healing of deep pressure ulcers. © 2014 Japan Geriatrics Society.

  6. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  7. Phase-based direct average strain estimation for elastography.

    PubMed

    Ara, Sharmin R; Mohsin, Faisal; Alam, Farzana; Rupa, Sharmin Akhtar; Awwal, Rayhana; Lee, Soo Yeol; Hasan, Md Kamrul

    2013-11-01

    In this paper, a phase-based direct average strain estimation method is developed. A mathematical model is presented to calculate axial strain directly from the phase of the zero-lag cross-correlation function between the windowed precompression and stretched post-compression analytic signals. Unlike phase-based conventional strain estimators, for which strain is computed from the displacement field, strain in this paper is computed in one step using the secant algorithm by exploiting the direct phase-strain relationship. To maintain strain continuity, instead of using the instantaneous phase of the interrogative window alone, an average phase function is defined using the phases of the neighboring windows with the assumption that the strain is essentially similar in a close physical proximity to the interrogative window. This method accounts for the effect of lateral shift but without requiring a prior estimate of the applied strain. Moreover, the strain can be computed both in the compression and relaxation phases of the applied pressure. The performance of the proposed strain estimator is analyzed in terms of the quality metrics elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe), and mean structural similarity (MSSIM), using a finite element modeling simulation phantom. The results reveal that the proposed method performs satisfactorily in terms of all the three indices for up to 2.5% applied strain. Comparative results using simulation and experimental phantom data, and in vivo breast data of benign and malignant masses also demonstrate that the strain image quality of our method is better than the other reported techniques.

  8. Sample size bias in retrospective estimates of average duration.

    PubMed

    Smith, Andrew R; Rule, Shanon; Price, Paul C

    2017-03-25

    People often estimate the average duration of several events (e.g., on average, how long does it take to drive from one's home to his or her office). While there is a great deal of research investigating estimates of duration for a single event, few studies have examined estimates when people must average across numerous stimuli or events. The current studies were designed to fill this gap by examining how people's estimates of average duration were influenced by the number of stimuli being averaged (i.e., the sample size). Based on research investigating the sample size bias, we predicted that participants' judgments of average duration would increase as the sample size increased. Across four studies, we demonstrated a sample size bias for estimates of average duration with different judgment types (numeric estimates and comparisons), study designs (between and within-subjects), and paradigms (observing images and performing tasks). The results are consistent with the more general notion that psychological representations of magnitudes in one dimension (e.g., quantity) can influence representations of magnitudes in another dimension (e.g., duration).

  9. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
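
    As a minimal illustration of the basic estimator discussed here (the simplest of the interpretations, with equal panel weights), the sketch below averages the most recent annual panel estimates; the window length and the panel values are hypothetical.

    ```python
    import numpy as np

    def moving_average_estimate(panel_estimates, window=5):
        """Simple moving-average estimator for an annual inventory design:
        average the most recent `window` annual panel estimates, giving each
        panel equal weight (one common interpretation of the estimator)."""
        panels = np.asarray(panel_estimates, dtype=float)
        return panels[-window:].mean()

    # Hypothetical annual panel estimates of growing-stock volume (m3/ha)
    panels = [112.0, 115.5, 118.2, 117.0, 121.4]
    print(moving_average_estimate(panels))   # 116.82
    ```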

  10. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  12. Estimating a weighted average of stratum-specific parameters.

    PubMed

    Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul

    2008-10-30

    This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
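
    The two baseline estimators contrasted in this abstract are easy to state: the design-weighted sum of stratum means using the known frame weights, and the precision (inverse-variance) weighted mean that ignores them. The sketch below computes both for hypothetical strata; the adaptive shrinkage estimators and the design-based MSE tuning described in the paper are not shown.

    ```python
    import numpy as np

    # Hypothetical stratum means, variances of those means, and known frame weights
    means = np.array([3.8, 4.2, 2.9, 4.6])
    variances = np.array([0.10, 0.02, 0.30, 0.05])    # Var(stratum mean)
    pop_weights = np.array([0.40, 0.35, 0.15, 0.10])  # from the population sampling frame

    # Design-weighted estimator: honours the known weights (heterogeneous strata)
    design_weighted = np.sum(pop_weights * means)

    # Precision-weighted estimator: ignores the frame weights (homogeneity assumed)
    prec = 1.0 / variances
    precision_weighted = np.sum(prec * means) / prec.sum()

    print(design_weighted, precision_weighted)
    ```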

  13. Estimating Health Services Requirements

    NASA Technical Reports Server (NTRS)

    Alexander, H. M.

    1985-01-01

    In the computer program NOROCA, population statistics from the National Center for Health Statistics are used with a computational procedure to estimate health service utilization rates, physician demands (by specialty), and hospital bed demands (by type of service). The computational procedure is applicable to a health service area of any size and can even be used to estimate statewide demands for health services.

  14. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

    We demonstrate that the high-frequency acceleration spectral level a0 of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern Fs, that is, a0 = 1.37 Fs (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
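
    Restating the quoted relation and solving it for the average slip (symbols as defined in the abstract) makes explicit how Δu follows once the spectral level, hypocentral distance, shear velocity and radiation pattern are known:

    ```latex
    a_0 \;=\; 1.37\, F_s\, \frac{\beta}{r}\, \Delta u
    \qquad\Longrightarrow\qquad
    \Delta u \;=\; \frac{a_0\, r}{1.37\, F_s\, \beta}
    ```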

  15. Advising Students about Required Grade-Point Averages

    ERIC Educational Resources Information Center

    Moore, W. Kent

    2006-01-01

    Sophomores interested in professional colleges with grade-point average (GPA) standards for admission to upper division courses will need specific and realistic information concerning the requirements. Specifically, those who fall short of the standard must assess the likelihood of achieving the necessary GPA for professional program admission.…

  16. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach to weighted rotation averaging that estimates absolute rotations from the relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results is needed other than the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
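
    The core operation behind such an adjustment is averaging rotations in the tangent space (the Lie algebra so(3)) and mapping back to the manifold with the exponential map. The sketch below shows only that building block, a weighted intrinsic mean of several noisy estimates of a single rotation, using SciPy's Rotation class; the relative-rotation graph, outlier removal, and the full least-squares adjustment from the paper are omitted, and all rotations and weights are hypothetical.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def lie_average(rotations, weights, n_iter=10):
        """Weighted intrinsic mean of rotations: map residuals to the tangent
        space (rotation vectors) at the current estimate, take the weighted
        Euclidean mean there, and map back with the exponential map."""
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        mean = rotations[0]
        for _ in range(n_iter):
            residuals = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
            delta = (w[:, None] * residuals).sum(axis=0)
            mean = mean * R.from_rotvec(delta)
        return mean

    # Hypothetical noisy estimates of one absolute camera rotation
    rots = [R.from_euler("zyx", [10 + e, 5, 2], degrees=True) for e in (-1.0, 0.5, 2.0)]
    avg = lie_average(rots, weights=[1.0, 2.0, 1.0])
    print(avg.as_euler("zyx", degrees=True))
    ```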

  17. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  18. Variations in Nimbus-7 cloud estimates. Part I: Zonal averages

    SciTech Connect

    Weare, B.C.

    1992-12-01

    Zonal averages of low, middle, high, and total cloud amount estimates derived from measurements from Nimbus-7 have been analyzed for the six-year period April 1979 through March 1985. The globally and zonally averaged values of six-year annual means and standard deviations of total cloud amount and a proxy of cloud-top height are illustrated. Separate means for day and night and land and sea are also shown. The globally averaged value of intra-annual variability of total cloud amount is greater than 7%, and that for cloud height is greater than 0.3 km. Those of interannual variability are more than one-third of these values. Important latitudinal differences in variability are illustrated. The dominant empirical orthogonal analyses of the intra-annual variations of total cloud amount and heights show strong annual cycles, indicating that in the tropics increases in total cloud amount of up to about 30% are often accompanied by increases in cloud height of up to 1.2 km. This positive link is also evident in the dominant empirical orthogonal function of interannual variations of a total cloud/cloud height complex. This function shows a large coherent variation in total cloud cover of about 10% coupled with changes in cloud height of about 1.1 km associated with the 1982-83 El Niño-Southern Oscillation event.

  19. Estimating storm areal average rainfall intensity in field experiments

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, Christa D.; Wood, Eric F.

    1994-07-01

    Estimates of areal mean precipitation intensity derived from rain gages are commonly used to assess the performance of rainfall radars and satellite rainfall retrieval algorithms. Areal mean precipitation time series collected during short-duration climate field studies are also used as inputs to water and energy balance models which simulate land-atmosphere interactions during the experiments. In two recent field experiments (1987 First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) and the Multisensor Airborne Campaign for Hydrology 1990 (MAC-HYDRO '90)) designed to investigate the climatic signatures of land-surface forcings and to test airborne sensors, rain gages were placed over the watersheds of interest. These gages provide the sole means for estimating storm precipitation over these areas, and the gage densities present during these experiments indicate that there is a large uncertainty in estimating areal mean precipitation intensity for single storm events. Using a theoretical model of time- and area-averaged space-time rainfall and a model rainfall generator, the error structure of areal mean precipitation intensity is studied for storms statistically similar to those observed in the FIFE and MAC-HYDRO field experiments. Comparisons of the error versus gage density trade-off curves to those calculated using the storm observations show that the rainfall simulator can provide good estimates of the expected measurement error given only the expected intensity, coefficient of variation, and rain cell diameter or correlation length scale, and that these errors can quickly become very large (in excess of 20%) for certain storms measured with a network whose size is below a "critical" gage density. Because the mean storm rainfall error is particularly sensitive to the correlation length, it is important that future field experiments include radar and/or dense rain gage networks capable of accurately characterizing the

  20. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  1. Optimal Estimation of the Average Areal Rainfall and Optimal Selection of Rain Gauge Locations

    NASA Astrophysics Data System (ADS)

    Bastin, G.; Lorent, B.; Duqué, C.; Gevers, M.

    1984-04-01

    We propose a simple procedure for the real-time estimation of the average rainfall over a catchment area. The rainfall is modeled as a two-dimensional random field. The average areal rainfall is computed by a linear unbiased minimum variance estimation method (kriging) which requires knowledge of the variogram of the random field. We propose a time-varying estimator for the variogram which takes into account the influences of both the seasonal variations and the rainfall intensity. Our average areal rainfall estimator has been implemented in practice. We illustrate its application to real data in two river basins in Belgium. Finally, it is shown how the method can be used for the optimal selection of the rain gauge locations in a basin.
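
    The linear unbiased minimum variance estimation step mentioned here amounts to solving the kriging system built from a variogram. The sketch below shows ordinary kriging weights at a single target point for hypothetical gauge locations; the paper's time-varying variogram estimator and the areal (block) averaging over the whole catchment are not reproduced, and the variogram model and parameters are assumptions for illustration only.

    ```python
    import numpy as np

    def spherical_variogram(h, sill=1.0, rng=30.0):
        """Spherical variogram model gamma(h) with a hypothetical sill and range."""
        h = np.minimum(h, rng)
        return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

    def ordinary_kriging_weights(gauge_xy, target_xy):
        """Solve the ordinary-kriging system [Gamma 1; 1' 0][w; mu] = [gamma0; 1]."""
        d = np.linalg.norm(gauge_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
        gamma = spherical_variogram(d)
        d0 = np.linalg.norm(gauge_xy - target_xy, axis=-1)
        gamma0 = spherical_variogram(d0)
        n = len(gauge_xy)
        A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma; A[n, n] = 0.0
        b = np.append(gamma0, 1.0)
        return np.linalg.solve(A, b)[:n]

    # Hypothetical gauge locations (km) and rainfall depths (mm)
    xy = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [12.0, 12.0]])
    rain = np.array([12.0, 18.0, 15.0, 22.0])
    w = ordinary_kriging_weights(xy, np.array([6.0, 6.0]))
    print("kriged estimate:", w @ rain, "weights sum:", w.sum())
    ```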

  2. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(Adn), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time.

  3. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and the BMA-TIE prediction has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood obtained with TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
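
    Two of the estimators named above have very compact definitions: the arithmetic mean estimator averages the likelihood over draws from the prior, while the harmonic mean estimator takes the harmonic mean of the likelihood over draws from the posterior. The sketch below illustrates both on a toy conjugate normal-mean model (not the groundwater model of the paper); SHME and TIE are omitted, and all data and priors are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    y = rng.normal(1.0, 1.0, size=20)          # hypothetical data, sigma known = 1

    def log_lik(theta):
        return stats.norm(theta, 1.0).logpdf(y).sum()

    prior = stats.norm(0.0, 2.0)               # prior on the unknown mean

    # Arithmetic mean estimator: E_prior[ p(y | theta) ]
    theta_prior = prior.rvs(5000, random_state=1)
    ame = np.mean([np.exp(log_lik(t)) for t in theta_prior])

    # Harmonic mean estimator: 1 / E_post[ 1 / p(y | theta) ], posterior is conjugate normal
    post_var = 1.0 / (1.0 / prior.var() + len(y) / 1.0)
    post_mean = post_var * (prior.mean() / prior.var() + y.sum() / 1.0)
    theta_post = rng.normal(post_mean, np.sqrt(post_var), size=5000)
    hme = 1.0 / np.mean([np.exp(-log_lik(t)) for t in theta_post])

    print(ame, hme)
    ```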

  4. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
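
    For intuition, the standard IV (Wald-type) CACE estimator referenced above divides the intention-to-treat effect on the outcome by the effect of assignment on treatment uptake. The sketch below shows that ratio on simulated individual-level data; it ignores the stepped-wedge time structure and clustering used in the paper, and all variable names and compliance rates are hypothetical.

    ```python
    import numpy as np

    def cace_iv(assigned, treated, outcome):
        """CACE via the Wald/IV estimator:
        (ITT effect on outcome) / (ITT effect on treatment uptake)."""
        z = np.asarray(assigned, dtype=bool)
        itt_outcome = outcome[z].mean() - outcome[~z].mean()
        itt_uptake = treated[z].mean() - treated[~z].mean()
        return itt_outcome / itt_uptake

    # Hypothetical data: random assignment, imperfect uptake, binary outcome
    rng = np.random.default_rng(3)
    assigned = rng.integers(0, 2, 1000)
    treated = assigned & (rng.random(1000) < 0.7)          # ~70% compliance when assigned
    outcome = rng.random(1000) < np.where(treated, 0.05, 0.10)
    print("ITT RD: ", outcome[assigned == 1].mean() - outcome[assigned == 0].mean())
    print("CACE RD:", cace_iv(assigned, treated, outcome))
    ```

    Because only ~70% of those assigned actually take up the intervention, the CACE risk difference is larger in magnitude than the intention-to-treat risk difference, mirroring the pattern reported in the abstract.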

  5. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    DTIC Science & Technology

    1977-02-01

    maximizing the same have been proposed (i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by... moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Astrom, K. J. (1970), Introduction to

  6. Asymptotic Properties of Some Estimators in Moving Average Models

    DTIC Science & Technology

    1975-09-08

    consider a different approach due to Durbin (1959), based on approximating the moving average of order q by an autoregression of order k (k ~ q). This... method shows good statistical properties. The paper by Durbin does not treat in detail the role of k in the parameters of the limiting normal... k) confirming some of the examples presented by Durbin. The parallel analysis with k = k(T) was also attempted... but at this point no complete

  7. Effect of wind averaging time on wind erosivity estimation

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...

  8. Propensity scores based methods for estimating average treatment effect and average treatment effect among treated: A comparative study.

    PubMed

    Abdia, Younathan; Kulasekera, K B; Datta, Somnath; Boakye, Maxwell; Kong, Maiying

    2017-09-01

    Propensity score based statistical methods, such as matching, regression, stratification, inverse probability weighting (IPW), and doubly robust (DR) estimating equations, have become popular for estimating the average treatment effect (ATE) and the average treatment effect among the treated (ATT) in observational studies. The propensity score is the conditional probability of receiving the treatment given the covariates, and it is usually estimated by logistic regression. However, a misspecification of the propensity score model may result in biased estimates of ATT and ATE. As an alternative, the generalized boosting method (GBM) has been proposed to estimate the propensity score. GBM uses regression trees as weak predictors and captures nonlinear and interactive effects of the covariates. For GBM-based propensity scores, only IPW methods have been investigated in the literature. In this article, we provide a comparative study of the commonly used propensity score based methods for estimating ATT and ATE, and examine their performance when the propensity score is estimated by logistic regression and by GBM, respectively. Extensive simulation results indicate that the estimators of ATE and ATT may vary greatly across methods. We conclude that (i) regression may not be suitable for estimating ATE and ATT regardless of how the propensity score is estimated; (ii) IPW and stratification usually provide reliable estimates of ATT when the propensity score model is correctly specified; and (iii) the estimators of ATE based on stratification, IPW, and DR are close to the underlying true value of ATE when the propensity score is correctly specified by logistic regression or estimated using GBM. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
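
    The sketch below illustrates two of the estimators compared in the abstract, IPW for ATE and for ATT, with a logistic-regression propensity score fitted via scikit-learn on simulated data. It is a minimal illustration under assumed data-generating values (true effect = 2); the GBM, stratification, matching, and doubly robust estimators from the paper are not shown.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.normal(size=(n, 2))                               # covariates
    p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))
    t = rng.binomial(1, p_true)                               # treatment assignment
    y = 2.0 * t + x[:, 0] + rng.normal(size=n)                # true treatment effect = 2

    # Propensity score from logistic regression
    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

    # IPW estimator of the average treatment effect (ATE)
    ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))

    # IPW estimator of the average treatment effect among the treated (ATT):
    # treated mean minus odds-weighted control mean
    w_control = ps / (1 - ps)
    att = y[t == 1].mean() - np.sum(w_control[t == 0] * y[t == 0]) / np.sum(w_control[t == 0])

    print(ate, att)
    ```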

  9. Estimating the average time for inter-continental transport of air pollutants

    NASA Astrophysics Data System (ADS)

    Liu, Junfeng; Mauzerall, Denise L.

    2005-06-01

    We estimate the average time required for inter-continental transport of atmospheric tracers based on simulations with the global chemical tracer model MOZART-2 driven with NCEP meteorology. We represent the average transport time by a ratio of the concentration of two tracers with different lifetimes. We find that average transport times increase with tracer lifetimes. With tracers of 1- and 2-week lifetimes the average transport time from East Asia (EA) to the surface of western North America (NA) in April is 2-3 weeks, approximately a half week longer than transport from NA to western Europe (EU) and from EU to EA. We develop an `equivalent circulation' method to estimate a timescale which has little dependence on tracer lifetimes and obtain similar results to those obtained with short-lived tracers. Our findings show that average inter-continental transport times, even for tracers with short lifetimes, are on average 1-2 weeks longer than rapid transport observed in plumes.
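
    As a hedged illustration of why a two-tracer concentration ratio can act as a transport clock: for tracers released together with e-folding lifetimes τ1 and τ2 and subject only to exponential decay during transport, the downwind ratio fixes the elapsed time. The paper's actual definition uses model-simulated, averaged concentrations, so the expression below is an idealization:

    ```latex
    \frac{C_1(t)}{C_2(t)}
      = \frac{C_1(0)\,e^{-t/\tau_1}}{C_2(0)\,e^{-t/\tau_2}}
    \quad\Longrightarrow\quad
    t = \frac{\ln\!\bigl[C_1(0)/C_2(0)\bigr] - \ln\!\bigl[C_1(t)/C_2(t)\bigr]}
            {1/\tau_1 - 1/\tau_2}
    ```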

  10. Average Lifetime of an Intelligent Civilization Estimated on its

    NASA Astrophysics Data System (ADS)

    Kompanichenko, Vladimir

    In the cycle of existence of all natural systems possessing an excess of free energy with respect to the environment (stars, living organisms, social systems, etc.), four universal stages can be distinguished: growth, internal development, stationary state, and ageing (Kompanichenko, Futures, 1994, 26/5). In the context of this approach, human civilization, which originated 10,000 years ago, is going through the natural cycle of its development. An analogy is drawn between two active systems at different hierarchical levels: a human (constructed of 60 trillion autonomous living systems, cells) and the human community (consisting of 6 billion autonomous living systems, people). In the cycle of the existence of a human, each of the four stages accounts for roughly 25% of the lifespan: growth 0-18 years, internal development and reaching maturity 18-36 years, stationary state 36-54 years, ageing 54-72 years. Humankind is now approaching the limits of its growth and consequently corresponds to a 16-17-year-old person. Thus, we can assume that during the 10,000 years of its existence human civilization has passed through about 25% of its cycle of development, leaving 30,000 years of normative existence ahead (middle estimate). Actual existence can range between 300 years and 3 million years, depending on the reasonable, conscious approach of humankind to its future. According to the Drake equation, with the normative estimate L = 30,000 years there should exist at least several thousand intelligent civilizations in our Galaxy.

  11. Analysis of the estimators of the average coefficient of dominance of deleterious mutations.

    PubMed

    Fernández, B; García-Dorado, A; Caballero, A

    2004-10-01

    We investigate the sources of bias that affect the most commonly used methods of estimation of the average degree of dominance (h) of deleterious mutations, focusing on estimates from segregating populations. The main emphasis is on the effect of the finite size of the populations, but other sources of bias are also considered. Using diffusion approximations to the distribution of gene frequencies in finite populations as well as stochastic simulations, we assess the behavior of the estimators obtained from populations at mutation-selection-drift balance under different mutational scenarios and compare averages of h for newly arisen and segregating mutations. Because of genetic drift, the inferences concerning newly arisen mutations based on the mutation-selection balance theory can have substantial upward bias depending upon the distribution of h. In addition, estimates usually refer to h weighted by the homozygous deleterious effect in different ways, so that inferences are complicated when these two variables are negatively correlated. Due to both sources of bias, the widely used regression of heterozygous on homozygous means underestimates the arithmetic mean of h for segregating mutations, in contrast to their repeatedly assumed equality in the literature. We conclude that none of the estimators from segregating populations provides, under general conditions, a useful tool to ascertain the properties of the degree of dominance, either for segregating or for newly arisen deleterious mutations. Direct estimates of the average h from mutation-accumulation experiments are shown to suffer some bias caused by purging selection but, because they do not require assumptions on the causes maintaining segregating variation, they appear to give a more reliable average dominance for newly arisen mutations.

  12. Analysis of the Estimators of the Average Coefficient of Dominance of Deleterious Mutations

    PubMed Central

    Fernández, B.; García-Dorado, A.; Caballero, A.

    2004-01-01

    We investigate the sources of bias that affect the most commonly used methods of estimation of the average degree of dominance (h) of deleterious mutations, focusing on estimates from segregating populations. The main emphasis is on the effect of the finite size of the populations, but other sources of bias are also considered. Using diffusion approximations to the distribution of gene frequencies in finite populations as well as stochastic simulations, we assess the behavior of the estimators obtained from populations at mutation-selection-drift balance under different mutational scenarios and compare averages of h for newly arisen and segregating mutations. Because of genetic drift, the inferences concerning newly arisen mutations based on the mutation-selection balance theory can have substantial upward bias depending upon the distribution of h. In addition, estimates usually refer to h weighted by the homozygous deleterious effect in different ways, so that inferences are complicated when these two variables are negatively correlated. Due to both sources of bias, the widely used regression of heterozygous on homozygous means underestimates the arithmetic mean of h for segregating mutations, in contrast to their repeatedly assumed equality in the literature. We conclude that none of the estimators from segregating populations provides, under general conditions, a useful tool to ascertain the properties of the degree of dominance, either for segregating or for newly arisen deleterious mutations. Direct estimates of the average h from mutation-accumulation experiments are shown to suffer some bias caused by purging selection but, because they do not require assumptions on the causes maintaining segregating variation, they appear to give a more reliable average dominance for newly arisen mutations. PMID:15514075

  13. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) in which the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and

  14. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  15. A diameter distribution approach to estimating average stand dominant height in Appalachian hardwoods

    Treesearch

    John R. Brooks

    2007-01-01

    A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...

  16. Comparison of Average Energy Slope Estimation Formulas for One-dimensional Steady Gradually Varied Flow

    NASA Astrophysics Data System (ADS)

    Artichowicz, Wojciech; Mikos-Studnicka, Patrycja

    2014-12-01

    To find the steady flow water surface profile, it is possible to use Bernoulli's equation, which is a discrete form of the differential energy equation. Such an approach requires the average energy slope between cross-sections to be estimated. In the literature, many methods are proposed for estimating the average energy slope in this case, such as the arithmetic mean, resulting in the standard step method, the harmonic mean and the geometric mean. Also hydraulic averaging by means of conveyance is commonly used. In this study, water surface profiles numerically computed using different formulas for expressing the average slope were compared with exact analytical solutions of the differential energy equation. Maximum relative and mean square errors between numerical and analytical solutions were used as measures of the quality of numerical models. Experiments showed that all methods gave solutions useful for practical engineering purposes. For every method, the numerical solution was very close to the analytical one. However, from the numerical viewpoint, the differences between the methods were significant, as the errors differed up to two orders of magnitude.
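
    The averaging formulas compared in studies like this one are easy to state for the energy (friction) slope between two neighbouring cross-sections; the sketch below evaluates the arithmetic, geometric, and harmonic means for hypothetical slope values. It is an illustration of the formulas only, not of the paper's full gradually varied flow computation.

    ```python
    import numpy as np

    def average_slope(s1, s2, method="arithmetic"):
        """Average energy slope between two cross-sections."""
        if method == "arithmetic":
            return 0.5 * (s1 + s2)                 # used in the standard step method
        if method == "geometric":
            return np.sqrt(s1 * s2)
        if method == "harmonic":
            return 2.0 * s1 * s2 / (s1 + s2)
        raise ValueError(method)

    s_up, s_down = 4.0e-4, 9.0e-4                  # hypothetical energy slopes
    for m in ("arithmetic", "geometric", "harmonic"):
        print(m, average_slope(s_up, s_down, m))
    ```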

  17. Estimation of Daily Area-Average Rainfall during the CaPE Experiment in Central Florida.

    NASA Astrophysics Data System (ADS)

    Duchon, Claude E.; Renkevens, Thomas M.; Crosson, William L.

    1995-12-01

    The principal component of the water cycle over land areas is precipitation. Knowledge of the accuracy of areal precipitation estimates, therefore, is imperative. The manifold observations of hydrometeorological quantities in the Convection and Precipitation/Electrification Experiment (CaPE) that took place in central Florida during the summer of 1991 have provided an opportunity to examine the various water fluxes. This paper deals with only one of them, the daily area-average rainfall as derived from rain gauges. The theory for spatial sampling errors for random gauge design is extended to include rain gauge bias and random errors. The requirement for randomly located gauges turns out to be quite well met with the approximately 100 gauges available each day over the 17 300-km2 CaPE study area. Rain gauge bias, due mainly to wind effects, is estimated to be 6% based on a previous study, and random measurement error is calculated from seven pairs of collocated gauges. For the category of daily area-average rainfall less than 1 mm, the standard error was found to be 0.27 mm or a 53% variation with respect to the category mean. For the highest rainfall category, 11-15 mm, the standard error was 1.49 mm or 11%. For this application, the standard error is essentially a consequence of the limited number of gauges. Given that the rain gauges are randomly distributed and arithmetic averaging is used to estimate the daily area-average rainfall, application of the appropriate standard error provides a useful measure of the best accuracy achievable in assessing the daily water budget of this area.

  18. Estimation of the Area of a Reverberant Plate Using Average Reverberation Properties

    NASA Astrophysics Data System (ADS)

    Achdjian, Hossep; Moulin, Emmanuel; Benmeddour, Farouk; Assaad, Jamal

    This paper aims to present an original method for the estimation of the area of thin plates of arbitrary geometrical shapes. This method relies on the acquisition and ensemble processing of reverberated elastic signals on a few sensors. The acoustical Green's function in a reverberant solid medium is modeled by a nonstationary random process based on the image-sources method. In that way, mathematical expectations of the signal envelopes can be analytically related to reverberation properties and structural parameters such as plate area, group velocity, or source-receiver distance. Then, a simple curve fitting applied to an ensemble average over N realizations of the late envelopes allows estimation of a global term involving the values of the structural parameters. From simple statistical modal arguments, it is shown that the obtained relation depends on the plate area and not on the plate shape. Finally, by considering an additional relation obtained from the early characteristics (treated in a deterministic way) of the reverberation signals, it is possible to deduce the area value. This estimation is performed without geometrical measurements and requires access to only a small portion of the plate. Furthermore, this method does not require any time measurement or trigger synchronization between the input channels of the instrumentation (between measured signals), thus implying low hardware constraints. Experimental results obtained on metallic plates with free boundary conditions and embedded window glasses are presented. Areas of up to several square meters are correctly estimated with a relative error of a few percent.

  19. Experimental Estimation of Average Fidelity of a Clifford Gate on a 7-Qubit Quantum Processor

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Li, Hang; Trottier, Denis-Alexandre; Li, Jun; Brodutch, Aharon; Krismanich, Anthony P.; Ghavami, Ahmad; Dmitrienko, Gary I.; Long, Guilu; Baugh, Jonathan; Laflamme, Raymond

    2015-04-01

    One of the major experimental achievements in the past decades is the ability to control quantum systems to high levels of precision. To quantify the level of control we need to characterize the dynamical evolution. Full characterization via quantum process tomography is impractical and often unnecessary. For most practical purposes, it is enough to estimate more general quantities such as the average fidelity. Here we use a unitary 2-design and twirling protocol for efficiently estimating the average fidelity of Clifford gates, to certify a 7-qubit entangling gate in a nuclear magnetic resonance quantum processor. Compared with more than 10^8 experiments required by full process tomography, we conducted 1656 experiments to satisfy a statistical confidence level of 99%. The average fidelity of this Clifford gate in experiment is 55.1%, and rises to at least 87.5% if the signal's decay due to decoherence is taken into account. The entire protocol of certifying Clifford gates is efficient and scalable, and can easily be extended to any general quantum information processor with minor modifications.

  20. Estimates of zonally averaged tropical diabatic heating in AMIP GCM simulations. PCMDI report No. 25

    SciTech Connect

    Boyle, J.S.

    1995-07-01

    An understanding of the processes that generate the atmospheric diabatic heating rates is basic to an understanding of the time-averaged general circulation of the atmosphere and also of circulation anomalies. Knowledge of the sources and sinks of atmospheric heating enables a fuller understanding of the nature of the atmospheric circulation. An actual assessment of the diabatic heating rates in the atmosphere is a difficult problem that has been approached in a number of ways. One way is to estimate the total diabatic heating by estimating the individual components associated with the radiative fluxes, the latent heat release, and the sensible heat fluxes. An example of this approach is provided by Newell. Another approach is to estimate the net heating rates from consideration of the balance required of the mass and wind variables as routinely observed and analyzed. This budget computation has been done using the thermodynamic equation and, more recently, by using the vorticity and thermodynamic equations together. Schaak and Johnson compute the heating rates through the integration of the isentropic mass continuity equation. The heating estimates arrived at by all these methods are severely handicapped by uncertainties in the observational data and analyses. In addition, the estimates of the individual heating components suffer from an additional source of error arising from the parameterizations used to approximate these quantities.

  1. The Estimation of Theta in the Integrated Moving Average Time-Series Model.

    ERIC Educational Resources Information Center

    Martin, Gerald R.

    Through Monte Carlo procedures, three different techniques for estimating the parameter theta (proportion of the "shocks" remaining in the system) in the Integrated Moving Average (0,1,1) time-series model are compared in terms of (1) the accuracy of the estimates, (2) the independence of the estimates from the true value of theta, and…

  2. Improved pilot-aided optical carrier phase recovery using average processing for pilot phase estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Fangzheng; Wu, Jian; Li, Yan; Lin, Jintong

    2012-12-01

    We propose and numerically investigate an improved pilot-aided (PA) optical carrier phase recovery (CPR) method using average processing for pilot phase estimation in order to suppress the amplified spontaneous emission (ASE) noise in PA coherent transmission systems. Extensive simulations for 28 Gbaud QPSK systems are implemented. The performance of the proposed PA-CPR with averaging is comprehensively investigated and compared with PA-CPR without averaging and Viterbi & Viterbi phase estimation (VVPE). Results show that PA-CPR with averaging can significantly improve the performance of PA-CPR without averaging, and the best averaging length in the average operation is determined by trading off between ASE noise and laser phase noise (PN). By comparing the optical-signal-to-noise-ratio (OSNR) penalties at a bit error rate (BER) of 1 × 10⁻³ using PA-CPR without averaging, PA-CPR with averaging, and VVPE, the advantage of PA-CPR with averaging is further confirmed. Quantitatively, the tolerable linewidth-symbol-duration products (Δf · T) at 1-dB OSNR penalty using PA-CPR without averaging, VVPE, and PA-CPR with averaging are 6 × 10⁻⁵, 1.2 × 10⁻⁴, and 5 × 10⁻⁴, respectively.

  3. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    SciTech Connect

    Verdin, Kristine L.

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
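
    As a rough illustration of the power-potential calculation described above, the sketch below applies the standard hydropower relation P = ρgQH to a single reach; the flow and head values are hypothetical, and the EDNA-based regression equations for streamflow are not reproduced here.

```python
def hydro_power_kw(flow_cms, head_m, efficiency=1.0):
    """Theoretical hydropower potential P = rho * g * Q * H, in kW.

    flow_cms : average annual streamflow (m^3/s)
    head_m   : hydraulic head over the stream reach (m)
    """
    rho, g = 1000.0, 9.81  # water density (kg/m^3) and gravitational acceleration (m/s^2)
    return rho * g * flow_cms * head_m * efficiency / 1000.0

# Hypothetical reach: 20 m^3/s of flow over a 15 m elevation drop.
print(hydro_power_kw(flow_cms=20.0, head_m=15.0))  # ~2943 kW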

  4. A comparison of spatial averaging and Cadzow's method for array wavenumber estimation

    SciTech Connect

    Harris, D.B.; Clark, G.A.

    1989-10-31

    We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.

  5. Weighted interframe averaging-based channel estimation for orthogonal frequency division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan

    2015-10-01

    Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
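
    As a rough illustration of the idea of averaging channel estimates across adjacent frames, the sketch below combines per-frame least-squares estimates with user-supplied weights; the actual WIFA weighting used in the paper is not reproduced, and the function and variable names are hypothetical.

```python
import numpy as np

def interframe_average_ce(h_per_frame, weights=None):
    """Average per-frame channel estimates across adjacent frames.

    h_per_frame : complex array (n_frames, n_subcarriers), e.g. per-frame
                  least-squares estimates H = Y_pilot / X_pilot.
    weights     : optional per-frame weights; uniform averaging if None.
    """
    h = np.asarray(h_per_frame)
    w = np.ones(h.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Averaging over frames reduces the noise variance of the estimate
    # roughly in proportion to the effective number of frames combined.
    return np.tensordot(w, h, axes=1)

# Toy example: four noisy estimates of a three-subcarrier channel.
rng = np.random.default_rng(1)
true_h = np.array([1 + 0.2j, 0.8 - 0.1j, 1.1 + 0.05j])
noisy = true_h + 0.1 * (rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3)))
print(interframe_average_ce(noisy))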

  6. Alternative Estimates of the Reliability of College Grade Point Averages. Professional File. Article 130, Spring 2013

    ERIC Educational Resources Information Center

    Saupe, Joe L.; Eimers, Mardy T.

    2013-01-01

    The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…

  7. A fast algorithm for the estimation of statistical error in DNS (or experimental) time averages

    NASA Astrophysics Data System (ADS)

    Russo, Serena; Luchini, Paolo

    2017-10-01

    Time and space averaging of the instantaneous results of DNS (or experimental measurements) represents a standard final step, necessary for the estimation of their means, correlations, or other statistical properties. These averages are necessarily performed over a finite time and space window, and are therefore more correctly just estimates of the 'true' statistical averages. The choice of the appropriate window size is most often based subjectively on individual experience, but as subtler statistics enter the focus of investigation, an objective criterion becomes desirable. Here a modification of the classical estimator of the averaging error of finite time series, the 'batch means' algorithm, will be presented, which retains its speed while removing its biasing error. As a side benefit, an automatic determination of the batch size is also included. Examples will be given involving both an artificial time series of known statistics and an actual DNS of turbulence.
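
    For context, here is a minimal sketch of the classical (uncorrected) batch-means estimator that the abstract refers to; the paper's bias-removing modification and automatic batch-size selection are not reproduced.

```python
import numpy as np

def batch_means_stderr(x, batch_size):
    """Classical batch-means estimate of the standard error of the mean
    of a correlated time series, using non-overlapping batches."""
    n_batches = len(x) // batch_size
    if n_batches < 2:
        raise ValueError("need at least two full batches")
    batch_avgs = x[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    # If batches are long relative to the correlation time, their means are
    # nearly independent and their scatter estimates the error of the overall mean.
    return np.sqrt(batch_avgs.var(ddof=1) / n_batches)

# Example: an AR(1) series with strong temporal correlation.
rng = np.random.default_rng(0)
x = np.zeros(100_000)
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.normal()
print(batch_means_stderr(x, batch_size=1000))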

  8. [Adaptive moving averaging based estimation of single event-related potentials].

    PubMed

    Qi, C; Liang, D; Jiang, X

    2001-03-01

    Event-related potentials (ERPs) are pertinent to medical research and clinical diagnosis. Estimation of single event-related potentials (sERP) is the objective of ERP processing. A new technique, an adaptive-moving-averaging-based method for estimating sERP, is presented. After analyzing the properties of the background noise from its zero crossings, the window length of the moving average is set adaptively according to the maximum width of the impulse noise for each recorded raw data set. Experiments with real recorded data demonstrate that the sERP estimation performance is excellent, so the proposed method is suitable for sERP processing.

  9. Estimation of the global average temperature with optimally weighted point gauges

    NASA Technical Reports Server (NTRS)

    Hardin, James W.; Upson, Robert B.

    1993-01-01

    This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.

  10. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. And for a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.

  11. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    SciTech Connect

    Liu, Y.; Liu, Z.; Zhang, S.; Rong, X.; Jacob, R.; Wu, S.; Lu, F.

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. And for a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
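
    A minimal sketch of the selection-then-average idea described above, assuming the ensemble spread is used as a simple quantile-based quality filter; the ASA algorithm's actual selection criterion and iterative updating are not reproduced here, and the names are hypothetical.

```python
import numpy as np

def adaptive_spatial_average(param_posterior, ensemble_spread, keep_quantile=0.5):
    """Collapse spatially varying posterior parameter estimates into one
    global value, keeping only grid points whose ensemble spread falls in
    the lowest `keep_quantile` fraction (i.e. the better-constrained points)."""
    p = np.asarray(param_posterior, dtype=float).ravel()
    s = np.asarray(ensemble_spread, dtype=float).ravel()
    keep = s <= np.quantile(s, keep_quantile)
    return p[keep].mean()

# Toy example: noisy per-gridpoint estimates of a parameter whose true value is 2.0.
rng = np.random.default_rng(3)
spread = rng.uniform(0.1, 1.0, size=500)
estimates = 2.0 + spread * rng.normal(size=500)
print(adaptive_spatial_average(estimates, spread))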

  12. Robust estimation for class averaging in cryo-EM Single Particle Reconstruction.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2014-01-01

    Single Particle Reconstruction (SPR) for Cryogenic Electron Microscopy (cryo-EM) aligns and averages the images extracted from micrographs to improve the Signal-to-Noise ratio (SNR). Outliers compromise the fidelity of the averaging. We propose a robust cross-correlation-like w-estimator for combating the effect of outliers on the average images in cryo-EM. The estimator accounts for the natural variation of signal contrast among the images and eliminates the need for a threshold for outlier rejection. We show that the influence function of our estimator is asymptotically bounded. Evaluations of the estimator on simulated and real cryo-EM images show good performance in the presence of outliers.

  13. Noninvasive average flow and differential pressure estimation for an implantable rotary blood pump using dimensional analysis.

    PubMed

    Lim, Einly; Karantonis, Dean M; Reizes, John A; Cloherty, Shaun L; Mason, David G; Lovell, Nigel H

    2008-08-01

    Accurate noninvasive average flow and differential pressure estimation for implantable rotary blood pumps (IRBPs) is an important practical element of their physiological control. While most attempts at developing flow and differential pressure estimation models have involved purely empirical techniques, dimensional analysis utilizes theoretical principles of fluid mechanics that provide valuable insights into parameter relationships. Based on data obtained from a steady-flow mock loop under a wide range of pump operating points and fluid viscosities, flow and differential pressure estimation models were thus obtained using dimensional analysis. The algorithm was then validated using data from two other VentrAssist IRBPs. Linear correlations between estimated and measured pump flow over a flow range of 0.5 to 8.0 L/min resulted in a slope of 0.98 (R² = 0.9848). The average flow error was 0.20 +/- 0.14 L/min (mean +/- standard deviation) and the average percentage error was 5.79%. Similarly, linear correlations between estimated and measured pump differential pressure resulted in a slope of 1.027 (R² = 0.997) over a pressure range of 60 to 180 mmHg. The average differential pressure error was 1.84 +/- 1.54 mmHg and the average percentage error was 1.51%.

  14. Relating Point to Area Average Rainfall in Semiarid West Africa and the Implications for Rainfall Estimates Derived from Satellite Data.

    NASA Astrophysics Data System (ADS)

    Flitcroft, I. D.; Milford, J. R.; Dugdale, G.

    1989-04-01

    The variability of rainfall over small areas (100 km²) in the West African Sahel has been investigated using a dense network of raingages in the Republic of Niger. Over small distances, rainfall was as well correlated as, or better correlated than, rainfall in other semiarid climates. A regression model was developed to specify the systematic correction required when a raingage measurement is used to represent an area-average rainfall, and to quantify the errors involved. It was found that in this particular climate large point measurements are not representative of even small areas and need to be reduced by a factor that depends on the size of the area. Rainfall estimates based on satellite observations often use point rainfall to calibrate and validate estimates of area-average (pixel) rainfall. The effect of satellite location errors on the correct pairing of point measurements and pixel estimates, and the subsequent effect on the assessment of the accuracy of rainfall estimates based on satellite data, are discussed.

  15. Spread Spectrum Signal Characteristic Estimation Using Exponential Averaging and an AD-HOC Chip rate Estimator

    DTIC Science & Technology

    2007-03-01

    ...performance as a function of the exponential coefficient, the combining method, the probability of false alarm, signal-to-AWGN ratio, and signal-to-interference ratio. The second method of SS signal... versus SNR with standard ACRE for data durations from four to thirty-two ms with the associated upper-estimate of the probability of false alarm for...

  16. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method, and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.

  17. A Study of the Sampling Error in Satellite Rainfall Estimates Using Optimal Averaging of Data and a Stochastic Model.

    NASA Astrophysics Data System (ADS)

    Bell, Thomas L.; Kundu, Prasun K.

    1996-06-01

    A method of combining satellite estimates of rainfall into gridded monthly averages suitable for climatological studies is examined. Weighted averages of the satellite estimates are derived that minimize the mean squared error of the grid-box averages. A spectral model with nonlocal, scaling, diffusive behavior at small distances, tuned to tropical Atlantic (GATE) statistics, is developed to study the optimal weighting method. Using it, the effect of optimal weighting for averaging data similar to what will be provided by the Tropical Rainfall Measuring Mission (TRMM) satellite is examined. The improvement in the accuracy of the averages is found to be small except for higher-latitude grid boxes near the edges of the satellite coverage. The averages of data from a combination of TRMM and a polar orbiting instrument such as SSM/I, however, are substantially improved using the method. A simple formula for estimating sampling error for each grid box is proposed, requiring only the local rain rate and a measure of the sample volume provided by the satellite.

  18. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that would otherwise be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
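
    The base-flow-to-recharge conversion described above amounts to dividing the average annual base flow by the drainage area; a small sketch with the unit conversions (the values in the example are hypothetical).

```python
def recharge_inches_per_year(avg_base_flow_cfs, drainage_area_sq_mi):
    """Approximate average annual recharge as base flow divided by drainage
    area, converting from cubic feet per second and square miles to inches/year."""
    volume_cu_ft = avg_base_flow_cfs * 86400 * 365.25   # one year of base flow
    area_sq_ft = drainage_area_sq_mi * 5280.0 ** 2
    return volume_cu_ft / area_sq_ft * 12.0              # feet -> inches

# Hypothetical basin: 50 cfs of average base flow over 100 square miles.
print(recharge_inches_per_year(50.0, 100.0))  # ~6.8 inches/year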

  19. Estimation of genetic parameters for average daily gain using models with competition effects

    USDA-ARS?s Scientific Manuscript database

    Components of variance for ADG with models including competition effects were estimated from data provided by Pig Improvement Company on 11,235 pigs from 4 selected lines of swine. Fifteen pigs with average age of 71 d were randomly assigned to a pen by line and sex and taken off test after approxi...

  20. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  1. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  2. Assessment of estimated daily intakes of benzoates for average and high consumers in Korea.

    PubMed

    Yoon, Hae Jung; Cho, Yang Hee; Park, Juyeon; Lee, Chang Hee; Park, Sung Kwan; Cho, Young Ju; Han, Ki Won; Lee, Jong Ok; Lee, Chul Won

    2003-02-01

    A study was performed to evaluate the estimated daily intakes (EDI) of benzoates for the average and high (90th percentile) consumers by age and sex categories in Korea. The estimation of daily intakes of benzoates was based on individual dietary intake data from the National Health and Nutrition Survey in 1998 and on the determination of benzoates in eight food categories. The EDI of benzoates for average consumers of different age groups ranged from 0.009 to 0.025 mg kg⁻¹ bw day⁻¹. For high consumers, the range of EDI of benzoates was 0.195-1.878 mg kg⁻¹ bw day⁻¹. The intakes represented 0.18-0.50% of the acceptable daily intake (ADI) of benzoates for average consumers and 3.9-37.6% of the ADI for high consumers. Foods that contributed most to the daily intakes of benzoates were mixed beverages and soy sauce in Korea.
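
    The EDI calculation described above is essentially a sum of per-food intakes times additive concentrations, normalized by body weight; a small sketch with hypothetical numbers (not the survey data used in the study).

```python
def estimated_daily_intake(intakes_g, conc_mg_per_kg, body_weight_kg):
    """EDI of a food additive in mg per kg body weight per day.

    intakes_g      : daily intake of each food category (g/day)
    conc_mg_per_kg : additive concentration in each category (mg/kg food)
    """
    total_mg = sum(i / 1000.0 * c for i, c in zip(intakes_g, conc_mg_per_kg))
    return total_mg / body_weight_kg

# Two hypothetical food categories for a 60 kg consumer.
print(estimated_daily_intake([200.0, 15.0], [50.0, 600.0], 60.0))  # ~0.32 mg/kg bw/day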

  3. How ants use quorum sensing to estimate the average quality of a fluctuating resource.

    PubMed

    Franks, Nigel R; Stuttard, Jonathan P; Doran, Carolina; Esposito, Julian C; Master, Maximillian C; Sendova-Franks, Ana B; Masuda, Naoki; Britton, Nicholas F

    2015-07-08

    We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures.

  4. Limitations of the spike-triggered averaging for estimating motor unit twitch force: a theoretical analysis.

    PubMed

    Negro, Francesco; Yavuz, Ş Utku; Farina, Dario

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have previously been discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, we first show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates obtained by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that selecting the triggers that are most distant from the previous and next ones, which is often suggested, is not an effective way of improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit twitch force.
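
    For illustration, a minimal sketch of the conventional spike-triggered averaging operation analyzed in the paper (the frequency-domain analysis itself is not reproduced); the array names and window lengths are hypothetical.

```python
import numpy as np

def spike_triggered_average(force, spike_idx, pre=50, post=500):
    """Average force-signal segments around each motor-unit discharge.

    force     : 1-D joint force signal (samples)
    spike_idx : sample indices of the motor-unit discharges (triggers)
    pre, post : samples kept before and after each trigger
    """
    segments = []
    for i in spike_idx:
        if i - pre >= 0 and i + post < len(force):
            seg = force[i - pre:i + post]
            segments.append(seg - seg[:pre].mean())  # remove pre-trigger baseline
    return np.mean(segments, axis=0)  # estimated twitch-like waveform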

  5. Path-averaged rainfall estimation using microwave links: Uncertainty due to spatial rainfall variability

    NASA Astrophysics Data System (ADS)

    Berne, A.; Uijlenhoet, R.

    2007-04-01

    Microwave links can be used to estimate the path-averaged rain rate along the link when precipitation occurs. They take advantage of the near proportionality between the specific attenuation affecting the link signal and the rain rate. This paper deals with the influence of the spatial variability of rainfall along the link on the accuracy of the rainfall estimates. We focus on single-polarization, single-frequency links operating at frequencies ranging from 5 to 50 GHz and with path lengths ranging from 500 m to 30 km. A stochastic simulation framework is used to investigate the frequency at which the linearity occurs (found to be about 30 GHz) for intense Mediterranean precipitation. In addition, the error associated with path-averaged rain rate estimates is quantified. For instance, the bias is found to be on the order of -15%, with an uncertainty of about 10%, for microwave links described in the recent literature.
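
    A minimal sketch of the retrieval principle (path-integrated attenuation converted to a path-average rain rate through a power law between specific attenuation and rain rate); the coefficients a and b below are placeholders that in practice depend on frequency, polarization, and the drop-size distribution.

```python
def path_average_rain_rate(p_rx_dbm, p_base_dbm, path_km, a=1.0, b=1.0):
    """Path-average rain rate (mm/h) from a microwave link.

    Assumes R = a * k**b, with k the specific attenuation in dB/km obtained
    from the drop in received power relative to the dry-weather base level.
    The defaults a=1, b=1 are placeholders; real values must be derived from
    drop-size-distribution measurements for the actual link frequency.
    """
    attenuation_db = max(p_base_dbm - p_rx_dbm, 0.0)  # path-integrated attenuation
    k = attenuation_db / path_km                      # specific attenuation, dB/km
    return a * k ** b

# Hypothetical 10 km link with 30 dB of rain-induced attenuation.
print(path_average_rain_rate(p_rx_dbm=-60.0, p_base_dbm=-30.0, path_km=10.0))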

  6. Path-average rainfall estimation from optical extinction measurements using a large-aperture scintillometer

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, Remko; Cohard, Jean-Martial; Gosset, Marielle

    2010-05-01

    The potential of a near-infrared large-aperture boundary layer scintillometer as a path-average rain gauge is investigated. The instrument was installed over a 2.4 km path in Benin as part of the AMMA Enhanced Observation Period during 2006 and 2007. Measurements of the one-minute average received signal intensity were collected for 6 rainfall events during the dry season and 16 events during the wet season. Using estimates of the signal base level just before the onset of the rain events, the optical extinction coefficient is estimated from the path-integrated attenuation for each minute. The corresponding path-average rain rates are computed using a power-law relation between the optical extinction coefficient and rain rate obtained from measurements of raindrop size distributions with an optical spectropluviometer. Comparisons of five-minute rainfall estimates with measurements from two nearby rain gauges show that the temporal dynamics are generally captured well by the scintillometer. However, the instrument has a tendency to underestimate rain rates and event-total rain amounts with respect to the gauges. It is shown that this underestimation can be explained partly by systematic differences between the actual and the employed mean power-law relation between rain rate and specific attenuation, and partly by unresolved spatial and temporal rainfall variations along the scintillometer path. Occasionally, the signal may even be lost completely. It is demonstrated that if these effects are properly accounted for, by employing appropriate relations between rain rate and specific attenuation and by adapting the path length to the local rainfall climatology, scintillometer-based rainfall estimates can be within 20% of those obtained using rain gauges. These results demonstrate the potential of large-aperture scintillometers to estimate path-average rain rates at hydrologically relevant scales.

  7. Optimal estimators and asymptotic variances for nonequilibrium path-ensemble averages

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Chodera, John D.

    2009-10-01

    Existing optimal estimators of nonequilibrium path-ensemble averages are shown to fall within the framework of extended bridge sampling. Using this framework, we derive a general minimal-variance estimator that can combine nonequilibrium trajectory data sampled from multiple path-ensembles to estimate arbitrary functions of nonequilibrium expectations. The framework is also applied to obtain asymptotic variance estimates, which are a useful measure of statistical uncertainty. In particular, we develop asymptotic variance estimates pertaining to Jarzynski's equality for free energies and to the Hummer-Szabo expressions for the potential of mean force, calculated from uni- or bidirectional path samples. These estimators are demonstrated on a model single-molecule pulling experiment. In these simulations, the asymptotic variance expression is found to characterize accurately the confidence intervals around estimators when the bias is small. Hence, the confidence intervals are described inaccurately for unidirectional estimates with large bias, but for this model they largely reflect the true error for a bidirectional estimator derived by Minh and Adib.
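
    As a concrete point of reference for the quantities discussed above, here is a minimal (unidirectional, non-optimal) estimator based on Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩; the extended-bridge-sampling estimators and asymptotic variance expressions of the paper are not reproduced.

```python
import numpy as np
from scipy.special import logsumexp

def jarzynski_free_energy(work, kT=1.0):
    """Free-energy difference from nonequilibrium work values via
    Jarzynski's equality, dF = -kT * ln <exp(-W/kT)>, computed stably."""
    w = np.asarray(work, dtype=float) / kT
    log_mean = logsumexp(-w) - np.log(w.size)   # log of the sample mean of exp(-W/kT)
    return -kT * log_mean

# Gaussian work distribution with mean 5 kT and std 1 kT; the exact answer
# for this case is mean - var/(2 kT) = 4.5 kT.
rng = np.random.default_rng(0)
print(jarzynski_free_energy(rng.normal(5.0, 1.0, size=200_000)))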

  8. Average sound speed estimation using speckle analysis of medical ultrasound data.

    PubMed

    Qu, Xiaolei; Azuma, Takashi; Liang, Jack T; Nakajima, Yoshikazu

    2012-11-01

    Most ultrasound imaging systems assume a pre-determined sound propagation speed for imaging. However, a mismatch between the assumed and real sound speeds can lead to spatial shift and defocus of the ultrasound image, which may limit the applicability of ultrasound imaging. The estimation of the real sound speed is important for improving the positioning accuracy and focus quality of the ultrasound image. A novel method using speckle analysis of ultrasound images is proposed for average sound speed estimation. First, dynamic receive beamforming is employed to form ultrasound images. These ultrasound images are formed from the same pre-beamformed radio frequency data but using different assumed sound speeds. Second, an improved speckle analysis method is proposed to evaluate the focus quality of these ultrasound images. Third, an iteration strategy is employed to locate the desired sound speed that corresponds to the best-focused image. For quantitative evaluation, a group of ultrasound data sets with 20 different structure patterns is simulated. The comparison of estimated and simulated sound speeds shows speed estimation errors of -0.7 ± 2.54 m/s and -1.30 ± 5.15 m/s for ultrasound data obtained with linear arrays of 128 and 64 active elements, respectively. Furthermore, we validate the method via phantom experiments, where the sound speed estimation error is -1.52 ± 8.81 m/s. Quantitative evaluation shows that the proposed method can estimate the average sound speed accurately using a single transducer and a single scan.
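
    The search strategy described above can be summarized as "rebeamform the same raw data under different assumed speeds and keep the speed that maximizes focus quality"; in the sketch below, `beamform` and `focus_quality` are hypothetical callables standing in for the paper's dynamic receive beamforming and speckle-analysis metric. The paper uses an iteration strategy rather than an exhaustive grid, but the selection criterion is the same in spirit.

```python
def estimate_sound_speed(candidate_speeds, beamform, focus_quality):
    """Grid search over assumed sound speeds.

    beamform(c)      : forms an image from the same pre-beamformed RF data
                       using assumed sound speed c (user-supplied callable).
    focus_quality(im): scores the focus quality of an image (user-supplied callable).
    """
    best_speed, best_score = None, float("-inf")
    for c in candidate_speeds:
        score = focus_quality(beamform(c))
        if score > best_score:
            best_speed, best_score = c, score
    return best_speed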

  9. [Estimation of average traffic emission factor based on synchronized incremental traffic flow and air pollutant concentration].

    PubMed

    Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng

    2014-04-01

    On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter that reflects the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and air pollutant concentration during the morning rush hour, while meteorological conditions and background air pollution concentrations remain relatively stable, the relationship between the increase in traffic and the increase in air pollution concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data, and carbon monoxide (CO) concentrations were collected to estimate average vehicle emission factors for CO. The results were compared with simulated emission factors from the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and by COPERT4 in August were 2.0 g·km⁻¹ and 1.2 g·km⁻¹, respectively, and in December were 5.5 g·km⁻¹ and 5.2 g·km⁻¹, respectively. The emission factors from the proposed approach and COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.

  10. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen and clusters are constructed using an average linkage technique. The average linkage technique requires a distance between clusters, which is calculated as the average distance over all pairs of points, one from each of the two groups. In calculating the average distance, the distance is not robust when there is an outlier. To overcome this problem, an outlier detection rule based on the MADn criterion is used and the average distance is recalculated without the outliers. Next, the distance in average linkage is calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the constructed branches. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when there is an outlier; the two techniques perform similarly when there is no outlier.
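
    A minimal sketch of a MADn-screened average-linkage distance in the spirit of the method described above; the cutoff constant k and the exact screening rule are assumptions, and the paper's MOM-based variant is not reproduced.

```python
import numpy as np

def madn(x):
    """Normalized median absolute deviation (consistent with sigma for normal data)."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def robust_average_linkage(cluster_a, cluster_b, k=2.24):
    """Average-linkage distance between two clusters, ignoring pairwise
    distances flagged as outliers by the rule |d - median(d)| > k * MADn(d)."""
    a = np.asarray(cluster_a, dtype=float)
    b = np.asarray(cluster_b, dtype=float)
    # All pairwise Euclidean distances between points of the two clusters.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).ravel()
    s = madn(d)
    keep = d if s == 0 else d[np.abs(d - np.median(d)) <= k * s]
    return keep.mean()

# Example: the third point of cluster_b is an outlier and is screened out.
a = np.array([[0.0, 0.0], [0.1, 0.2]])
b = np.array([[1.0, 1.0], [1.2, 0.9], [15.0, 20.0]])
print(robust_average_linkage(a, b))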

  11. Path-Average Rainfall Estimation From Optical Extinction Measurements Using a Large- Aperture Scintillometer

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Cohard, J.; Gosset, M.

    2008-12-01

    We employ a Scintec BLS900 near-infrared (880 nm) large-aperture boundary layer scintillometer as a path-average rain gauge. The instrument was installed over a 2.4 km path in Benin as part of the AMMA CATCH (African Monsoon Multidisciplinary Analysis) intensive observation period during 2006 and 2007. Measurements of the one-minute average and variance of the received signal intensity from two transmitter disks of 462 LEDs each, operating at a pulse repetition rate of 5 Hz (i.e. 300 samples per minute), were collected for a few rainfall events that occurred during the dry season and several events during the wet season. Using estimates of the signal base level just before the start of the rain events, the optical extinction coefficient was estimated from the path-integrated signal attenuation for each minute. The corresponding one-minute path-average rain rates were computed using a power-law relation between the optical extinction coefficient and rain rate obtained from measurements of raindrop size spectra with an optical spectropluviometer. The estimated rain rates are compared to measurements from nearby rain gauges. Our results demonstrate the potential of optical extinction measurements from large-aperture boundary layer scintillometers to obtain estimates of rainfall variability at high temporal resolution for hydrologically relevant spatial scales.

  12. Bayesian and Frequentist Estimation of the Average Capacity of Log Normal Wireless Optical Channels

    NASA Astrophysics Data System (ADS)

    Katsis, A.; Nistazakis, H. E.; Tombras, G. S.

    2008-11-01

    We investigate the average (ergodic) capacity of practical free-space optical communication channels using the frequentist and the Bayesian approach. We concentrate on the cases of weak and moderate atmospheric turbulence, leading to channels modeled by log-normally distributed intensity fading, and derive closed-form expressions and estimation procedures for their achievable capacity and the important parameters of interest. Each methodology is reviewed in terms of its analytic convenience, and its accuracy is also discussed.

  13. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A38-25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  14. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A38-25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  15. Estimation of annual average daily traffic for off-system roads in Florida. Final report

    SciTech Connect

    Shen, L.D.; Zhao, F.; Ospina, D.I.

    1999-07-28

    Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination of roadway geometry, congestion management, pavement design, safety considerations, etc. AADT is also used to estimate statewide vehicle miles traveled on all roads and is used by local governments and environmental protection agencies to determine compliance with the 1990 Clean Air Act Amendment. Additionally, AADT is reported annually by the Florida Department of Transportation (FDOT) to the Federal Highway Administration. In the past, considerable effort has been made to obtain traffic counts for estimating AADT on state roads. However, traffic counts are often not available on off-system roads, and less attention has been paid to the estimation of AADT in the absence of counts. Current estimates rely on comparisons with roads that are subjectively considered to be similar. Such comparisons are inherently subject to large errors and also may not be repeated often enough to remain current. Therefore, a better method is needed for estimating AADT for off-system roads in Florida. This study investigates the possibility of establishing one or more models for estimating AADT for off-system roads in Florida.

  16. Model-averaged benchmark concentration estimates for continuous response data arising from epidemiological studies

    SciTech Connect

    Noble, R.B.; Bailer, A.J.; Park, R.

    2009-04-15

    Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints all provide relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that only a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9, depending on the concentration metric and covariates. The model-averaged BMC captures this uncertainty and provides a useful strategy for addressing model uncertainty.
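
    The weighted-average construction described above can be illustrated with information-criterion weights; the filtering step and the specific weighting scheme used in the paper may differ, and the numbers below are hypothetical.

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: relative support for each model from its AIC."""
    delta = np.asarray(aic_values, dtype=float)
    delta = delta - delta.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged_bmc(bmc_values, aic_values):
    """Model-averaged benchmark concentration as a weighted average of
    model-specific BMCs."""
    return float(np.dot(akaike_weights(aic_values), bmc_values))

# Three hypothetical surviving models with different fits and BMCs.
print(model_averaged_bmc(bmc_values=[1.2, 2.5, 0.9], aic_values=[104.1, 103.0, 108.7]))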

  17. Estimation of Area-Averaged Rainfall over Tropical Oceans from Microwave Radiometry: A Single Channel Approach.

    NASA Astrophysics Data System (ADS)

    Shin, Kyung-Sup; Riba, Phil E.; North, Gerald R.

    1990-10-01

    This paper presents a new, simple retrieval algorithm for estimating area-time averaged rain rates over tropical oceans by using single-channel microwave measurements from satellites. The algorithm was tested by using the Nimbus-5 Electrically Scanning Microwave Radiometer (ESMR-5) and a simple microwave radiative transfer model to retrieve seasonal 5° × 5° area-averaged rain rates over the tropical Atlantic and Pacific from December 1973 to November 1974. The brightness temperatures were collected and analyzed into histograms for each season and each grid box from December 1973 to November 1974. The histograms suggest a normal distribution of background noise plus a skewed rain distribution at the higher brightness temperatures. By using a statistical estimation procedure based upon normally distributed background noise, the rain distribution was separated from the raw histogram. The radiative transfer model was applied to the rain-only distribution to retrieve area-time averaged rain rates throughout the tropics. An adjustment for the beam-filling error was incorporated in the procedure. Despite the limitations of single-channel information, the retrieved seasonal rain rates agree well in the open ocean with expectations based upon previous estimates of tropical rainfall over the oceans. We suggest that the beam-filling correction factor is the most important, but least understood, parameter in the retrieval process.

  18. Maximum one-day point rainfall estimation for North Indian plains using district average rainfall ratios

    NASA Astrophysics Data System (ADS)

    Dhar, O. N.; Kulkarni, A. K.; Rakhecha, P. R.

    1980-09-01

    A quick and simple procedure has been developed for evaluating maximum point rainfall for different return periods for any location in the plains of north India. According to this procedure, the 2-year one-day rainfall of a location is estimated from the 2-year generalized chart of the region. Average district rainfall ratios of the higher return periods of 5, 10, 25 and 50 years to the 2-year return period are obtained with the help of (i) the district average ratio map of the 100-year to 2-year rainfall and (ii) a frequency interpolation nomogram. The magnitudes of the 5, 10, 25, 50 and 100-year rainfall are then obtained by multiplying the 2-year value by the corresponding district average ratios for the different return periods. The estimates of point rainfall obtained by this procedure are quite comparable with those obtained directly by the Gumbel method. By using the procedure given in this study, a design engineer or a hydrologist can estimate point rainfall for different return periods for any station in the north Indian plains without undertaking elaborate statistical calculations.
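
    The procedure reduces to multiplying the 2-year one-day rainfall by the district-average ratios for the desired return periods; a small sketch with hypothetical values (the actual charts and nomograms are not reproduced).

```python
def point_rainfall_by_return_period(two_year_rain_mm, district_ratios):
    """Scale the 2-year one-day rainfall by district-average ratios
    (keyed by return period in years) to get higher-return-period estimates."""
    return {T: two_year_rain_mm * r for T, r in district_ratios.items()}

# Hypothetical district: 2-year rainfall of 80 mm and illustrative ratios.
print(point_rainfall_by_return_period(80.0, {5: 1.3, 10: 1.5, 25: 1.8, 50: 2.0, 100: 2.2}))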

  19. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528

  20. Coherent radar estimates of average high-latitude ionospheric Joule heating

    SciTech Connect

    Kosch, M.J.; Nielsen, E.

    1995-07-01

    The Scandinavian Twin Auroral Radar Experiment (STARE) and Sweden and Britain Radar Experiment (SABRE) bistatic coherent radar systems have been employed to estimate the spatial and temporal variation of the ionospheric Joule heating in the combined geographic latitude range 63.8° - 72.6° (corrected geomagnetic latitude 61.5° - 69.3°) over Scandinavia. The 173 days of good observations with all four radars have been analyzed during the period 1982 to 1986 to estimate the average ionospheric electric field versus time and latitude. The AE-dependent empirical model of ionospheric Pedersen conductivity of Spiro et al. (1982) has been used to calculate the Joule heating. The latitudinal and diurnal variation of Joule heating as well as the estimated mean hemispherical heating of 1.7 × 10¹¹ W are in good agreement with earlier results. Average Joule heating was found to vary linearly with the AE, AU, and AL indices and as a second-order power law with Kp. The average Joule heating was also examined as a function of the direction and magnitude of the interplanetary magnetic field. It has been shown for the first time that the ionospheric electric field magnitude as well as the Joule heating increase with increasingly negative (southward) Bz.
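
    The quantity being estimated is the height-integrated Joule heating rate, Q = Σ_P |E|²; a one-line sketch with illustrative values (the AE-dependent conductivity model itself is not reproduced).

```python
def joule_heating_rate(pedersen_conductance_S, e_field_v_per_m):
    """Height-integrated Joule heating rate Q = Sigma_P * |E|^2, in W/m^2."""
    return pedersen_conductance_S * e_field_v_per_m ** 2

# Illustrative values: 10 S Pedersen conductance and a 50 mV/m electric field.
print(joule_heating_rate(10.0, 0.05))  # 0.025 W/m^2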

  1. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects.

    PubMed

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of "big data" are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model's ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online.

  2. An estimate of the average number of recessive lethal mutations carried by humans.

    PubMed

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-04-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes.

  3. An Estimate of the Average Number of Recessive Lethal Mutations Carried by Humans

    PubMed Central

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-01-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes. PMID:25697177

  4. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There is a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, and civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g. pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration) or are "too dangerous, dull or dirty" for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating the total ownership cost of UAV systems, including hardware components, software design, and operations. The challenge of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element is discussed. The autonomous operation of UAVs is especially challenging from a software perspective.

  5. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  6. A temperature-based model for estimating monthly average daily global solar radiation in China.

    PubMed

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China.
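
    For reference, the Hargreaves-Samani form that the new model builds on estimates global radiation from the diurnal temperature range and extraterrestrial radiation; the coefficient below is a typical literature value, not the calibration from this study.

```python
import math

def hargreaves_samani_radiation(t_max_c, t_min_c, ra, krs=0.17):
    """Hargreaves-Samani estimate of global solar radiation,
    Rs = krs * sqrt(Tmax - Tmin) * Ra.

    t_max_c, t_min_c : monthly average daily max/min air temperature (deg C)
    ra               : extraterrestrial radiation (same units as the result)
    krs              : empirical coefficient (~0.16 inland, ~0.19 coastal)
    """
    return krs * math.sqrt(t_max_c - t_min_c) * ra

# Example month with Ra = 35 MJ m-2 day-1.
print(hargreaves_samani_radiation(31.0, 17.0, 35.0))  # ~22.3 MJ m-2 day-1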

  7. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by the available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  8. Comparison of geostatistical methods for estimating the areal average climatological rainfall mean using data on precipitation and topography

    NASA Astrophysics Data System (ADS)

    Pardo-Igúzquiza, Eulogio

    1998-07-01

    The results of estimating the areal average climatological rainfall mean in the Guadalhorce river basin in southern Spain are presented in this paper. The classical Thiessen method and three different geostatistical approaches (ordinary kriging, cokriging and kriging with an external drift) have been used as estimators and their results are compared and discussed. The first two methods use only rainfall information, while cokriging and kriging with an external drift use both precipitation data and orographic information (easily accessible from topographic maps). In the case study presented, kriging with an external drift seems to give the most coherent results in accordance with cross-validation statistics. If there is a correlation between climatological rainfall mean and altitude, it seems logical that the inclusion of topographic information should improve the estimates. Kriging with an external drift has the advantage of requiring a less demanding variogram analysis than cokriging.

  9. Characterization of the temporal sampling error in space-time-averaged rainfall estimates from satellites

    NASA Astrophysics Data System (ADS)

    Gebremichael, Mekonnen; Krajewski, Witold F.

    2004-06-01

    The authors investigate the sampling error of space-time-averaged rain rates due to temporal sampling by satellite, based on 5 years of radar-based rainfall estimates over the Mississippi River Basin, and use the data to estimate the sampling uncertainty. The two approaches used in this estimation consist of a parametric approach that ignores the diurnal cycle in the rain rate statistics and a nonparametric approach that accounts for it. Results show that the parametric approach yields uncertainty estimates that are generally smaller than those obtained by the nonparametric approach. At a sampling interval of 12 hours, the parametric approach typically underestimates the sampling uncertainty for monthly rainfall by about 28%, 25%, and 14% at averaging areas of 512 × 512 km2, 256 × 256 km2, and 32 × 32 km2, respectively. Results verify the power law scaling characteristics of the sampling uncertainty with respect to the space-time scales of measurement and the large-scale precipitation observables suggested by simple models of rain statistics and revealed in previous studies with smaller data sets. With respect to the spatial scale and the mean rain rate, the sampling uncertainty behaves significantly differently from the inverse square-root behavior predicted by many models. The authors compare the resulting scaling exponents to those obtained from previous studies and identify the factors that need to be addressed in sampling uncertainty comparisons. The main new finding is that the power law exponent governing the dependence of the uncertainty estimate on the mean rain rate appears to exhibit seasonal variations.

  10. Path-averaged Rainfall Estimation Using a 27 GHz Microwave Link: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Bijkerk, A.; Stricker, H.; Uijlenhoet, R.

    2003-04-01

    Between May and July 1999 a 27 GHz microwave link has been operated over a 5 km path between the towns of Rhenen and Wageningen in The Netherlands. The instrument, which was built at Eindhoven University of Technology, measures the power arriving at the receiving antenna with a frequency of 18 Hz (i.e. 18 samples per second). During dry weather conditions, it can be used as a microwave scintillometer, i.e. the (turbulent) fluxes of sensible and latent heat can be estimated from the variance of the received power fluctuations. Here we focus on the use of the instrument during rainy conditions, where it can be used to measure the path-integrated attenuation of the microwave signal due to intervening rain between the transmitting and the receiving antenna. Owing to the fact that the specific attenuation at this particular microwave frequency (in dB/km) is closely proportional to the rainfall rate (in mm/h), this instrument is well suited for path-averaged rainfall estimation. This parameter is highly relevant for various hydrological and meteorological applications. We present preliminary analyses for several rainfall events during the mentioned period, where we have compared the path-averaged rainfall estimates from the microwave link with rainfall measurements from a co-located line configuration of six tipping bucket rain gauges.
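
    As a rough illustration of the retrieval principle (not the processing used for this link), the path-average rain rate can be obtained by inverting a power-law relation between specific attenuation and rain rate; the coefficients below are placeholders, not values fitted for 27 GHz.

      def rain_rate_from_attenuation(path_attenuation_db, path_length_km,
                                     a=0.2, b=1.0):
          """Invert the k-R power law k = a * R**b (k in dB/km, R in mm/h) to
          obtain a path-average rain rate from a path-integrated attenuation
          measurement. The coefficients a and b are frequency dependent; the
          defaults here are illustrative placeholders."""
          k = path_attenuation_db / path_length_km   # specific attenuation, dB/km
          return (k / a) ** (1.0 / b)                # path-average rain rate, mm/h

      # Example: 5 dB of rain-induced attenuation over a 5 km path
      print(rain_rate_from_attenuation(5.0, 5.0))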

  11. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  12. Fetal cardiac time intervals estimated on fetal magnetocardiograms: single cycle analysis versus average beat inspection.

    PubMed

    Comani, Silvia; Alleva, Giovanna

    2007-01-01

    Fetal cardiac time intervals (fCTI) are dependent on fetal growth and development, and may reveal useful information for fetuses affected by growth retardation, structural cardiac defects or long QT syndrome. Fetal cardiac signals with a signal-to-noise ratio (SNR) of at least 15 dB were retrieved from fetal magnetocardiography (fMCG) datasets with a system based on independent component analysis (ICA). An automatic method was used to detect the onset and offset of the cardiac waves on single cardiac cycles of each signal, and the fCTI were quantified for each heartbeat; long rhythm strips were used to calculate average fCTI and their variability for single fetal cardiac signals. The aim of this work was to compare the outcomes of this system with the estimates of fCTI obtained with a classical method based on the visual inspection of averaged beats. No fCTI variability can be measured from averaged beats. A total of 25 fMCG datasets (fetal age from 22 to 37 weeks) were evaluated, and 1768 cardiac cycles were used to compute fCTI. The real differences between the values obtained with a single cycle analysis and visual inspection of averaged beats were very small for all fCTI. They were comparable with signal resolution (+/-1 ms) for QRS complex and QT interval, and always <5 ms for the PR interval, ST segment and T wave. The coefficients of determination between the fCTI estimated with the two methods ranged between 0.743 and 0.917. Conversely, inter-observer differences were larger, and the related coefficients of determination ranged between 0.463 and 0.807, assessing the high performance of the automated single cycle analysis, which is also rapid and unaffected by observer-dependent bias.

  13. Estimating eye care workforce supply and requirements.

    PubMed

    Lee, P P; Jackson, C A; Relles, D A

    1995-12-01

    To estimate the workforce supply and requirements for eye care in the United States. Three models were constructed for analysis: supply of providers, public health need for eye care, and demand (utilization) for eye care. Ophthalmologists, other physicians, and optometrists were included in the models. Public health need was determined by applying condition-specific prevalence and incidence rates from population-based and other epidemiologic studies. Demand was determined by use of national databases, such as the National Ambulatory Care Survey, National Hospital Discharge Survey, and Medicare Part B. Time requirements for care were obtained through a stratified sample survey of the membership of the American Academy of Ophthalmology. Under modeling assumptions that use a work-time ratio of one between optometrists and ophthalmologists and between specialist and generalist ophthalmologists, a significant excess of eye care providers exists relative to both public health need and demand. Changes in the work-time ratio, work-hours per year per provider, care patterns for the same condition, or other factors could significantly reduce or eliminate the surplus relative to need. If optometrists are the preferred primary eye care provider, ophthalmologists would be in excess under all demand scenarios and all need scenarios where the optometrist to ophthalmologist work-time ratio is greater than 0.6. No excess of ophthalmologists would exist if ophthalmologists are the preferred primary eye care provider. Data on the appropriate work time ratio will help refine estimates of the imbalance between supply and requirements.

  14. A method of estimating physician requirements.

    PubMed

    Scitovsky, A A; McCall, N

    1976-01-01

    This article describes and applies a method of estimating physician requirements for the United States based on physician utilization rates of members of two comprehensive prepaid plans of medical care providing first-dollar coverage for practically all physician services. The plan members' physician utilization rates by age and sex and by field of specialty of the physician were extrapolated to the entire population of the United States. On the basis of data for 1966, it was found that 34 percent more physicians than were available would have been required to give the entire population the amount and type of care received by the plan members. The "shortage" of primary care physicians (general practice, internal medicine, and pediatrics combined) was found to be considerably greater than of physicians in the surgical specialties taken together (41 percent as compared to 21 percent). The paper discusses in detail the various assumptions underlying this method and stresses the need for careful evaluation of all methods of estimating physician requirements.

  15. Calculation of weighted averages approach for the estimation of Ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
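
    A minimal sketch of the weighted-averages step is given below, assuming a taxon's tolerance value is its abundance-weighted position along a single chemical gradient, rescaled linearly to 0-10; the paper's exact treatment of the six constituents is not described in this record.

      import numpy as np

      def weighted_average_tolerance(abundance, gradient, low=0.0, high=10.0):
          """Abundance-weighted mean position of one taxon along a pollution
          gradient (e.g., BOD), rescaled to a 0-10 tolerance value.
          abundance and gradient are per-site arrays for that taxon."""
          abundance = np.asarray(abundance, dtype=float)
          gradient = np.asarray(gradient, dtype=float)
          optimum = np.sum(abundance * gradient) / np.sum(abundance)
          # Linear rescaling of the optimum onto the 0-10 scale (assumed).
          scaled = (optimum - gradient.min()) / (gradient.max() - gradient.min())
          return low + scaled * (high - low)

      # Example: a taxon sampled at five sites along a BOD gradient (mg/L)
      print(weighted_average_tolerance(abundance=[12, 30, 8, 2, 0],
                                       gradient=[1.0, 2.5, 4.0, 6.0, 9.0]))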

  16. Combining Rain Gages With Satellite Measurements for Optimal Estimates of Area-Time Averaged Rain Rates

    NASA Astrophysics Data System (ADS)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert B.

    1991-10-01

    The problem of optimally combining data from an array of point rain gages with those from a low Earth-orbiting satellite to obtain space-time averaged rain rates is considered. The mean square error due to sampling gaps in space-time can be expressed as an integral of a filter multiplied by the space-time spectral density of the rain rate field. It is shown that for large numbers of gages or large numbers of overpasses the two estimates can be regarded as orthogonal in the sense that the optimal weighting is the same as for independent estimators, i.e., the weights are inversely proportional to the error variances that would occur in the single-component case. The result involving point gages and satellite overpasses appears to hold under quite general conditions. The result is interesting since most other design combinations do not exhibit the orthogonality property.
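
    A minimal sketch of the orthogonal-combination result described above: when the gauge and satellite estimates can be treated as independent, the optimal weights are inversely proportional to the error variances each estimator would have on its own.

      def combine_estimates(gauge_mean, gauge_var, satellite_mean, satellite_var):
          """Optimal linear combination of two approximately orthogonal rain-rate
          estimates, with weights inversely proportional to the single-component
          error variances. Returns the combined estimate and its variance."""
          w_gauge = (1.0 / gauge_var) / (1.0 / gauge_var + 1.0 / satellite_var)
          w_sat = 1.0 - w_gauge
          combined = w_gauge * gauge_mean + w_sat * satellite_mean
          combined_var = 1.0 / (1.0 / gauge_var + 1.0 / satellite_var)
          return combined, combined_var

      # Example: gauge estimate 4.2 mm/h (var 0.8), satellite estimate 3.6 mm/h (var 0.4)
      print(combine_estimates(4.2, 0.8, 3.6, 0.4))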

  17. Estimation of muscle fatigue by ratio of mean frequency to average rectified value from surface electromyography.

    PubMed

    Fernando, Jeffry Bonar; Yoshioka, Mototaka; Ozawa, Jun

    2016-08-01

    A new method to estimate muscle fatigue quantitatively from surface electromyography (EMG) is proposed. The ratio of mean frequency (MNF) to average rectified value (ARV) is used as the index of muscle fatigue, and fatigue is detected when MNF/ARV falls below a pre-determined or pre-calculated baseline. MNF/ARV provides a clearer distinction between fatigued and non-fatigued muscle. Experimental results show that the method estimates muscle fatigue more accurately than conventional methods. An early evaluation based on the initial value of MNF/ARV and the subjective time at which subjects start feeling fatigue also indicates that the baseline can be calculated from the initial value of MNF/ARV.
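
    A minimal sketch of the index for one analysis window is shown below, assuming MNF is the power-spectrum-weighted mean frequency and ARV the mean rectified amplitude; window length, filtering, and the baseline rule are not specified in this record.

      import numpy as np

      def mnf_arv_ratio(emg, fs):
          """Mean frequency (MNF) divided by average rectified value (ARV) for
          one analysis window of a surface EMG signal.
          emg : 1-D array of EMG samples; fs : sampling rate in Hz."""
          arv = np.mean(np.abs(emg))                           # average rectified value
          spectrum = np.abs(np.fft.rfft(emg - np.mean(emg))) ** 2
          freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
          mnf = np.sum(freqs * spectrum) / np.sum(spectrum)    # spectral mean frequency
          return mnf / arv

      # Example: a 1-s window of synthetic EMG-like noise sampled at 1 kHz
      rng = np.random.default_rng(0)
      window = rng.normal(scale=0.1, size=1000)
      print(mnf_arv_ratio(window, fs=1000))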

  18. Beyond intent to treat (ITT): A complier average causal effect (CACE) estimation primer.

    PubMed

    Peugh, James L; Strotman, Daniel; McGrady, Meghan; Rausch, Joseph; Kashikar-Zuck, Susmita

    2017-02-01

    Randomized control trials (RCTs) have long been the gold standard for allowing causal inferences to be made regarding the efficacy of a treatment under investigation, but traditional RCT data analysis perspectives do not take into account a common reality: imperfect participant compliance to treatment. Recent advances in both maximum likelihood parameter estimation and mixture modeling methodology have enabled treatment effects to be estimated, in the presence of less than ideal levels of participant compliance, via a Complier Average Causal Effect (CACE) structural equation mixture model. CACE is described in contrast to "intent to treat" (ITT), "per protocol", and "as treated" RCT data analysis perspectives. CACE model assumptions, specification, estimation, and interpretation will all be demonstrated with simulated data generated from a randomized controlled trial of cognitive-behavioral therapy for Juvenile Fibromyalgia. CACE analysis model figures, linear model equations, and Mplus estimation syntax examples are all provided. Data needed to reproduce analyses in this article are available as supplemental materials (online only) in the Appendix of this article.

  19. Estimating average annual percent change for disease rates without assuming constant change.

    PubMed

    Fay, Michael P; Tiwari, Ram C; Feuer, Eric J; Zou, Zhaohui

    2006-09-01

    The annual percent change (APC) is often used to measure trends in disease and mortality rates, and a common estimator of this parameter uses a linear model on the log of the age-standardized rates. Under the assumption of linearity on the log scale, which is equivalent to a constant change assumption, APC can be equivalently defined in three ways as transformations of either (1) the slope of the line that runs through the log of each rate, (2) the ratio of the last rate to the first rate in the series, or (3) the geometric mean of the proportional changes in the rates over the series. When the constant change assumption fails then the first definition cannot be applied as is, while the second and third definitions unambiguously define the same parameter regardless of whether the assumption holds. We call this parameter the percent change annualized (PCA) and propose two new estimators of it. The first, the two-point estimator, uses only the first and last rates, assuming nothing about the rates in between. This estimator requires fewer assumptions and is asymptotically unbiased as the size of the population gets large, but has more variability since it uses no information from the middle rates. The second estimator is an adaptive one and equals the linear model estimator with a high probability when the rates are not significantly different from linear on the log scale, but includes fewer points if there are significant departures from that linearity. For the two-point estimator we can use confidence intervals previously developed for ratios of directly standardized rates. For the adaptive estimator, we show through simulation that the bootstrap confidence intervals give appropriate coverage.
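
    A minimal sketch of the two-point PCA estimator defined above (the adaptive estimator and its confidence intervals are not reproduced here); the example rates are hypothetical.

      def percent_change_annualized(rates):
          """Two-point PCA estimator: annualize the ratio of the last rate to the
          first rate in the series, making no assumption about the rates in
          between. Equivalent to the geometric mean of the year-to-year
          proportional changes."""
          n = len(rates)
          return 100.0 * ((rates[-1] / rates[0]) ** (1.0 / (n - 1)) - 1.0)

      # Example: age-standardized rates per 100,000 over six years
      print(percent_change_annualized([52.1, 50.3, 49.8, 47.0, 46.2, 44.9]))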

  20. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-09-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.

  1. [Estimation of the Average Glandular Dose Using the Mammary Gland Image Analysis in Mammography].

    PubMed

    Otsuka, Tomoko; Teramoto, Atsushi; Asada, Yasuki; Suzuki, Shoichi; Fujita, Hiroshi; Kamiya, Satoru; Anno, Hirofumi

    2016-05-01

    Currently, the glandular dose is evaluated quantitatively on the basis of data measured with a phantom, not on the basis of the mammary gland structure of an individual patient. However, mammary gland structures differ from patient to patient, and the mammary gland dose of an individual patient cannot be obtained by existing methods. In this study, we present an automated method for estimating the mammary gland dose from the mammary gland structure, which is measured automatically on the mammogram. In this method, the mammary gland structure is extracted by a Gabor filter and the mammary region is segmented by automated thresholding. For the evaluation, mammograms of 100 patients diagnosed as category 1 were collected. Using these mammograms, we compared the mammary gland ratio measured by the proposed method with visual evaluation; 78% of the cases matched. Furthermore, the mammary gland ratio and the average glandular dose agreed well among patients with the same breast thickness. These results show that the proposed method may be useful for estimating the average glandular dose of individual patients.

  2. Spontaneous BOLD event triggered averages for estimating functional connectivity at resting state

    PubMed Central

    Tagliazucchi, Enzo; Balenzuela, Pablo; Fraiman, Daniel; Montoya, Pedro; Chialvo, Dante R.

    2010-01-01

    Recent neuroimaging studies have demonstrated that the spontaneous brain activity reflects, to a large extent, the same activation patterns measured in response to cognitive and behavioral tasks. This correspondence between activation and rest has been explored with a large repertoire of computational methods, ranging from analysis of pairwise interactions between areas of the brain to the global brain networks yielded by independent component analysis. In this paper we describe an alternative method based on the averaging of the BOLD signal at a region of interest (target) triggered by spontaneous increments in activity at another brain area (seed). The resting BOLD event triggered averages (“rBeta”) can be used to estimate functional connectivity at resting state. Using two simple examples, here we illustrate how the analysis of the average response triggered by spontaneous increases/decreases in the BOLD signal is sufficient to capture the aforementioned correspondence in a variety of circumstances. The computation of the non linear response during rest here described allows for a direct comparison with results obtained during task performance, providing an alternative measure of functional interaction between brain areas. PMID:21078369

  3. Planning and Estimation of Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not yet launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work.

  4. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
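
    A minimal sketch of the minimum-length solution of Am = d is given below, with a toy row-normalized moving-average matrix standing in for the averaging imposed by amelogenesis and sampling; it is not the averaging geometry of a real tooth.

      import numpy as np

      # Minimum-length (minimum-norm) solution of A m = d.
      n = 12
      true_input = np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # e.g., a seasonal isotope signal
      A = np.zeros((n, n))
      for i in range(n):
          idx = list(range(max(0, i - 1), min(n, i + 2)))
          A[i, idx] = 1.0 / len(idx)                          # row-normalized 3-point averaging

      d = A @ true_input                # time-averaged "measured" profile
      m = np.linalg.pinv(A) @ d         # minimum-norm estimate of the input
      print(np.max(np.abs(m - true_input)))   # small when A is well conditioned and d is noise-free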

  5. Noise estimation from averaged diffusion weighted images: Can unbiased quantitative decay parameters assist cancer evaluation?

    PubMed Central

    Dikaios, Nikolaos; Punwani, Shonit; Hamy, Valentin; Purpura, Pierpaolo; Rice, Scott; Forster, Martin; Mendes, Ruheena; Taylor, Stuart; Atkinson, David

    2014-01-01

    Purpose Multiexponential decay parameters are estimated from diffusion-weighted-imaging that generally have inherently low signal-to-noise ratio and non-normal noise distributions, especially at high b-values. Conventional nonlinear regression algorithms assume normally distributed noise, introducing bias into the calculated decay parameters and potentially affecting their ability to classify tumors. This study aims to accurately estimate noise of averaged diffusion-weighted-imaging, to correct the noise induced bias, and to assess the effect upon cancer classification. Methods A new adaptation of the median-absolute-deviation technique in the wavelet-domain, using a closed form approximation of convolved probability-distribution-functions, is proposed to estimate noise. Nonlinear regression algorithms that account for the underlying noise (maximum probability) fit the biexponential/stretched exponential decay models to the diffusion-weighted signal. A logistic-regression model was built from the decay parameters to discriminate benign from metastatic neck lymph nodes in 40 patients. Results The adapted median-absolute-deviation method accurately predicted the noise of simulated (R2 = 0.96) and neck diffusion-weighted-imaging (averaged once or four times). Maximum probability recovers the true apparent-diffusion-coefficient of the simulated data better than nonlinear regression (up to 40%), whereas no apparent differences were found for the other decay parameters. Conclusions Perfusion-related parameters were best at cancer classification. Noise-corrected decay parameters did not significantly improve classification for the clinical data set though simulations show benefit for lower signal-to-noise ratio acquisitions. PMID:23913479
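
    For context, the sketch below shows the classical wavelet-domain median-absolute-deviation noise estimator that the study adapts, using plain Haar detail coefficients; it does not include the closed-form correction for averaged images described in the record.

      import numpy as np

      def mad_noise_estimate(signal):
          """Classical wavelet-domain noise estimate: finest-scale Haar detail
          coefficients scaled by the Gaussian MAD-to-sigma factor 1/0.6745."""
          x = np.asarray(signal, dtype=float)
          pairs = len(x) // 2
          detail = (x[1:2 * pairs:2] - x[0:2 * pairs:2]) / np.sqrt(2.0)  # Haar details
          return np.median(np.abs(detail - np.median(detail))) / 0.6745

      # Example: a smooth diffusion-weighted decay plus Gaussian noise of sigma = 0.05
      rng = np.random.default_rng(1)
      b_values = np.linspace(0.0, 1000.0, 256)
      decay = np.exp(-0.0015 * b_values) + rng.normal(scale=0.05, size=b_values.size)
      print(mad_noise_estimate(decay))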

  6. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering the population density. In China, however, such data are not always available, especially in the very urgent relief situation following a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls of earthquakes. The present paper employs the average population density to predict the final death tolls of earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death-toll forecasts with a qualitative and quantitative approach. Limitations and future research are also discussed in the conclusion.

  7. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of the following types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  8. Estimation of Average Rainfall Areal Reduction Factors in Texas Using NEXRAD Data

    NASA Astrophysics Data System (ADS)

    Choi, J.; Kim, D.; Olivera, F.

    2005-12-01

    In general, larger catchments are less likely than smaller catchments to experience high intensity storms over the whole of the catchment area. Therefore, the conversion of point precipitation into area-averaged precipitation is necessary whenever an area, large enough for rainfall not to be uniform, is to be modeled. However, while point precipitation has been well recorded because of the availability of rain gauge data, areal precipitation cannot be measured and its estimation has been a subject of research in recent decades. With the understanding that the NEXt generation RADar (NEXRAD) precipitation data distributed by the U.S. National Weather Service (NWS) is the best data with spatial coverage available for large areas, this paper addresses the estimation of areal reduction factors (ARFs) using this type of data. The study site is the 685,000-km2 area of the state of Texas. Storms were assumed to be elliptical in shape, with different aspect ratios and orientations. It was found that, in addition to the storm duration and area, already considered in previous studies, ARFs depend also on the geographic region, the season of the year (summer or winter), and the precipitation depth. Storm shape and orientation were also studied.

  9. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition.

    PubMed

    Taylor, Brian A; Hwang, Ken-Pin; Hazle, John D; Stafford, R Jason

    2009-03-01

    The authors investigated the performance of the iterative Steiglitz-McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer-Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥5 for echo train lengths (ETLs) ≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems.

  10. Translating the A1C assay into estimated average glucose values.

    PubMed

    Nathan, David M; Kuenen, Judith; Borg, Rikke; Zheng, Hui; Schoenfeld, David; Heine, Robert J

    2008-08-01

    The A1C assay, expressed as the percent of hemoglobin that is glycated, measures chronic glycemia and is widely used to judge the adequacy of diabetes treatment and adjust therapy. Day-to-day management is guided by self-monitoring of capillary glucose concentrations (milligrams per deciliter or millimoles per liter). We sought to define the mathematical relationship between A1C and average glucose (AG) levels and determine whether A1C could be expressed and reported as AG in the same units as used in self-monitoring. A total of 507 subjects, including 268 patients with type 1 diabetes, 159 with type 2 diabetes, and 80 nondiabetic subjects from 10 international centers, was included in the analyses. A1C levels obtained at the end of 3 months and measured in a central laboratory were compared with the AG levels during the previous 3 months. AG was calculated by combining weighted results from at least 2 days of continuous glucose monitoring performed four times, with seven-point daily self-monitoring of capillary (fingerstick) glucose performed at least 3 days per week. Approximately 2,700 glucose values were obtained by each subject during 3 months. Linear regression analysis between the A1C and AG values provided the tightest correlations (AG(mg/dl) = 28.7 x A1C - 46.7, R(2) = 0.84, P < 0.0001), allowing calculation of an estimated average glucose (eAG) for A1C values. The linear regression equations did not differ significantly across subgroups based on age, sex, diabetes type, race/ethnicity, or smoking status. A1C levels can be expressed as eAG for most patients with type 1 and type 2 diabetes.
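
    The reported regression can be applied directly; the sketch below uses the equation from the record, with the standard glucose mg/dL-to-mmol/L conversion added for convenience.

      def estimated_average_glucose(a1c_percent):
          """eAG (mg/dL) from A1C (%) using the regression reported above:
          AG = 28.7 * A1C - 46.7. Also returns the value in mmol/L."""
          eag_mg_dl = 28.7 * a1c_percent - 46.7
          eag_mmol_l = eag_mg_dl / 18.016          # glucose: mg/dL -> mmol/L
          return eag_mg_dl, eag_mmol_l

      # Example: an A1C of 7.0% corresponds to an eAG of about 154 mg/dL (8.6 mmol/L)
      print(estimated_average_glucose(7.0))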

  11. A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra

    NASA Astrophysics Data System (ADS)

    Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.

    2015-04-01

    A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the “Windowed, MuLTiple-Peak, averaged-spectrum” or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method, which employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure, which is based upon 6366 modes that we computed using the WMLTP method on the 66 day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle 24 during mid-2010.

  12. Homology-based prediction of interactions between proteins using Averaged One-Dependence Estimators.

    PubMed

    Murakami, Yoichi; Mizuguchi, Kenji

    2014-06-23

    Identification of protein-protein interactions (PPIs) is essential for a better understanding of biological processes, pathways and functions. However, experimental identification of the complete set of PPIs in a cell/organism ("an interactome") is still a difficult task. To circumvent limitations of current high-throughput experimental techniques, it is necessary to develop high-performance computational methods for predicting PPIs. In this article, we propose a new computational method to predict interaction between a given pair of protein sequences using features derived from known homologous PPIs. The proposed method is capable of predicting interaction between two proteins (of unknown structure) using Averaged One-Dependence Estimators (AODE) and three features calculated for the protein pair: (a) sequence similarities to a known interacting protein pair (FSeq), (b) statistical propensities of domain pairs observed in interacting proteins (FDom) and (c) a sum of edge weights along the shortest path between homologous proteins in a PPI network (FNet). Feature vectors were defined to lie in a half-space of the symmetrical high-dimensional feature space to make them independent of the protein order. The predictability of the method was assessed by a 10-fold cross validation on a recently created human PPI dataset with randomly sampled negative data, and the best model achieved an Area Under the Curve of 0.79 (pAUC0.5% = 0.16). In addition, the AODE trained on all three features (named PSOPIA) showed better prediction performance on a separate independent data set than a recently reported homology-based method. Our results suggest that FNet, a feature representing proximity in a known PPI network between two proteins that are homologous to a target protein pair, contributes to the prediction of whether the target proteins interact or not. PSOPIA will help identify novel PPIs and estimate complete PPI networks. The method proposed in this article is freely available.

  13. Estimating monthly-averaged air-sea transfers of heat and momentum using the bulk aerodynamic method

    NASA Technical Reports Server (NTRS)

    Esbensen, S. K.; Reynolds, R. W.

    1980-01-01

    Air-sea transfers of sensible heat, latent heat, and momentum are computed from twenty-five years of middle-latitude and subtropical ocean weather ship data in the North Atlantic and North Pacific using the bulk aerodynamic method. The results show that monthly-averaged wind speeds, temperatures, and humidities can be used to estimate the monthly-averaged sensible and latent heat fluxes computed from the bulk aerodynamic equations to within a relative error of approximately 10%. The estimates of monthly-averaged wind stress under the assumption of neutral stability are shown to be within approximately 5% of the monthly-averaged non-neutral values.
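
    A minimal sketch of the bulk aerodynamic relations evaluated in the study is given below, using constant placeholder exchange coefficients of typical open-ocean magnitude rather than the coefficients used by the authors.

      def bulk_fluxes(wind_speed, t_sea, t_air, q_sea, q_air,
                      rho=1.22, cp=1004.0, lv=2.45e6,
                      cd=1.3e-3, ch=1.3e-3, ce=1.3e-3):
          """Bulk aerodynamic estimates of wind stress (N m-2), sensible heat flux
          (W m-2), and latent heat flux (W m-2) from monthly-averaged wind speed
          (m/s), sea/air temperatures, and sea/air specific humidities (kg/kg).
          The constant exchange coefficients are illustrative placeholders."""
          tau = rho * cd * wind_speed ** 2
          sensible = rho * cp * ch * wind_speed * (t_sea - t_air)
          latent = rho * lv * ce * wind_speed * (q_sea - q_air)
          return tau, sensible, latent

      # Example: 8 m/s wind, 1.5 K sea-air temperature difference, 2 g/kg humidity difference
      print(bulk_fluxes(8.0, 20.0, 18.5, 0.0145, 0.0125))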

  14. A framework for estimating forest disturbance intensity from successive remotely sensed biomass maps: moving beyond average biomass loss estimates.

    PubMed

    Hill, T C; Ryan, C M; Williams, M

    2015-12-01

    The success of satellites in mapping deforestation has been invaluable for improving our understanding of the impacts and nature of land cover change and carbon balance. However, current satellite approaches struggle to quantify the intensity of forest disturbance, i.e. whether the average rate of biomass loss for a region arises from heavy disturbance focused in a few locations, or the less severe disturbance of a wider area. The ability to distinguish between these very different disturbance regimes remains critical for forest managers and ecologists. We put forward a framework for describing all intensities of forest disturbance, from deforestation, to widespread low intensity disturbance. By grouping satellite observations into ensembles with a common disturbance regime, the framework is able to mitigate the impacts of poor signal-to-noise ratio that limits current satellite observations. Using an observation system simulation experiment we demonstrate that the framework can be applied to provide estimates of the mean biomass loss rate, as well as distinguish the intensity of the disturbance. The approach is robust despite the large random and systematic errors typical of biomass maps derived from radar. The best accuracies are achieved with ensembles of ≥1600 pixels (≥1 km2 with 25 by 25 m pixels). The framework we describe provides a novel way to describe and quantify the intensity of forest disturbance, which could help to provide information on the causes of both natural and anthropogenic forest loss; such information is vital for effective forest and climate policy formulation.

  15. Remote Sensing Precision Requirements For FIA Estimation

    Treesearch

    Mark H. Hansen

    2001-01-01

    In this study the National Land Cover Data (NLCD) available from the Multi-Resolution Land Characteristics Consortium (MRLC) is used for stratification in the estimation of forest area, timberland area, and growing-stock volume from the first year (1999) of annual FIA data collected in Indiana, Iowa, Minnesota, and Missouri. These estimates show that with improvements...

  16. Homology-based prediction of interactions between proteins using Averaged One-Dependence Estimators

    PubMed Central

    2014-01-01

    Background Identification of protein-protein interactions (PPIs) is essential for a better understanding of biological processes, pathways and functions. However, experimental identification of the complete set of PPIs in a cell/organism (“an interactome”) is still a difficult task. To circumvent limitations of current high-throughput experimental techniques, it is necessary to develop high-performance computational methods for predicting PPIs. Results In this article, we propose a new computational method to predict interaction between a given pair of protein sequences using features derived from known homologous PPIs. The proposed method is capable of predicting interaction between two proteins (of unknown structure) using Averaged One-Dependence Estimators (AODE) and three features calculated for the protein pair: (a) sequence similarities to a known interacting protein pair (FSeq), (b) statistical propensities of domain pairs observed in interacting proteins (FDom) and (c) a sum of edge weights along the shortest path between homologous proteins in a PPI network (FNet). Feature vectors were defined to lie in a half-space of the symmetrical high-dimensional feature space to make them independent of the protein order. The predictability of the method was assessed by a 10-fold cross validation on a recently created human PPI dataset with randomly sampled negative data, and the best model achieved an Area Under the Curve of 0.79 (pAUC0.5% = 0.16). In addition, the AODE trained on all three features (named PSOPIA) showed better prediction performance on a separate independent data set than a recently reported homology-based method. Conclusions Our results suggest that FNet, a feature representing proximity in a known PPI network between two proteins that are homologous to a target protein pair, contributes to the prediction of whether the target proteins interact or not. PSOPIA will help identify novel PPIs and estimate complete PPI networks. The method proposed in this article is freely available.

  17. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  18. Potential for path-average rainfall estimation from combined microwave and optical attenuation measurements

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Leijnse, H.

    2009-04-01

    On previous occasions we have demonstrated the potential and limitations of both microwave and optical links as path-average rain gauges. In this presentation we investigate the potential of combined microwave and optical attenuation measurements for improved path-average rainfall monitoring through theoretical analysis and numerical experiments.

  19. The Average Distance between Item Values: A Novel Approach for Estimating Internal Consistency

    ERIC Educational Resources Information Center

    Sturman, Edward D.; Cribbie, Robert A.; Flett, Gordon L.

    2009-01-01

    This article presents a method for assessing the internal consistency of scales that works equally well with short and long scales, namely, the average proportional distance. The method provides information on the average distance between item scores for a particular scale. In this article, we sought to demonstrate how this relatively simple…

  20. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  1. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
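
    A toy illustration of why the averaging period matters: with a transfer velocity that depends nonlinearly on wind speed (here a quadratic dependence, as in common gas-exchange parameterizations), a value computed from the averaged wind is smaller than the average of values computed from instantaneous winds. The numbers below are illustrative, not from the study.

      import numpy as np

      rng = np.random.default_rng(7)
      winds = rng.weibull(2.0, size=30 * 24 * 3) * 8.0   # ~20-min wind samples for one month (m/s)

      def transfer_velocity(u):
          return 0.31 * u ** 2                           # quadratic wind-speed dependence (cm/h)

      k_mean_of_instantaneous = transfer_velocity(winds).mean()
      k_from_mean_wind = transfer_velocity(winds.mean())
      print(k_mean_of_instantaneous / k_from_mean_wind)  # ratio > 1: averaged winds bias the flux low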

  2. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGES

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  3. Path-average rainfall estimation using attenuation measurements: combining microwave and optical links

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Leijnse, H.

    2009-09-01

    On previous occasions we have demonstrated the potential and limitations of single microwave and optical links as path-average rain gauges. In this presentation we investigate the potential of combined microwave and optical attenuation measurements for improved path-average rainfall monitoring through theoretical analyses and numerical experiments. We show theoretically that path-average rain rate can in principle be retrieved from the specific attenuations at microwave and optical wavelengths through a double power-law relation, of which the exponents are independent and the coefficient is only weakly dependent on the raindrop size distribution. Our calculations indicate further that the main gain of adding an optical link to an existing microwave link is in reducing the root-mean-square-error of the retrieved rain rate, particularly for microwave frequencies below 35 GHz.

  4. Estimation of Critical Population Support Requirements.

    DTIC Science & Technology

    1984-05-30

    Prepared for the Federal Emergency Management Agency (report date May 30, 1984). The study addresses planning to ensure the availability of industrial production required to support the population, maintain national defense capabilities, and perform command and control activities during a national emergency.

  5. Average landslide erosion rate at the watershed scale in southern Taiwan estimated from magnitude and frequency of rainfall

    NASA Astrophysics Data System (ADS)

    Chen, Yi-chin; Chang, Kang-tsung; Lee, Hong-yuan; Chiang, Shou-hao

    2015-01-01

    This study calculated the long-term average landslide erosion rate in the Kaoping River watershed in southern Taiwan and investigated the relative importance of extreme rainfall events on landslide erosion. The method followed three steps: first, calculating landslide volumes for 10 rainfall events from a multi-temporal, event-based landslide inventory; second, estimating the frequency of landslide-generating rainfall by using hydrologic frequency analyses; and third, combining the two sets of data to estimate the average landslide erosion rate. Results of the study showed that the average landslide erosion rate is 2.65-5.17 mm yr-1, corresponding well to rates reported in other studies using other methods. The study also found that extreme-intensive rainfall events play a more important role in landslide erosion than frequent-moderate rainfall events. Extreme rainfall (maximum 24-h rainfall > 600 mm) contributes 64-79% of the average landslide erosion rate. Moreover, the natural variation of landslide erosion magnitudes is extremely large and can cause significant uncertainty in estimating the landslide erosion rate from total landslide volume. This study found ± 1.2 mm yr-1 of uncertainty based on simulation results involving a hypothetical 100-year landslide inventory. In summary, this study demonstrates the importance of extreme rainfall events on landslide erosion, and the method proposed in this study is capable of calculating a reliable estimate of average landslide erosion rate in areas with insufficient landslide records.

  6. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  7. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.

  9. Another Failure to Replicate Lynn's Estimate of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    In his comment on our literature review of data on the performance of sub-Saharan Africans on Raven's Progressive Matrices, Lynn (this issue) criticized our selection of samples of primary and secondary school students. On the basis of the samples he deemed representative, Lynn concluded that the average IQ of sub-Saharan Africans stands at 67…

  10. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...

  13. Does the orbit-averaged theory require a scale separation between periodic orbit size and perturbation correlation length?

    SciTech Connect

    Zhang, Wenlu; Lin, Zhihong

    2013-10-15

    Using the canonical perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions and verifies the widely accepted notion that orbit averaging effects greatly reduce the microturbulent transport of energetic particles in a tokamak. Therefore, a recent claim [Hauff and Jenko, Phys. Rev. Lett. 102, 075004 (2009); Jenko et al., ibid. 107, 239502 (2011)] stating that the orbit-averaged theory requires a scale separation between equilibrium orbit size and perturbation correlation length is erroneous.

  14. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average method applied to the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively with a catheter. C-SBP was calculated by applying an N/6 moving average to the scaled right brachial artery pressure waveforms acquired with the VaSera VS-1500, and this estimate was compared with the invasively measured C-SBP obtained within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), the invasively measured C-SBP was significantly lower than the right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated by the N/6 moving average method from the scaled right brachial artery pressure waveforms did not differ significantly from the invasively measured C-SBP (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method, applied to the noninvasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP, was an accurate, convenient, and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which is highly feasible in routine clinical practice.
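
    A minimal sketch of the moving-average step described above, assuming a cuff-calibrated brachial waveform sampled at a known rate; the function name, the synthetic waveform, and the use of heart rate to set the cardiac-cycle length N are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def estimate_csbp(waveform, fs, heart_rate_bpm, fraction=6):
            """Estimate central SBP as the peak of an N/6-point moving average
            of a cuff-calibrated brachial pressure waveform (sketch only)."""
            n_cycle = int(fs * 60.0 / heart_rate_bpm)   # samples per cardiac cycle (N)
            win = max(1, n_cycle // fraction)           # N/6-point window
            smoothed = np.convolve(waveform, np.ones(win) / win, mode="same")
            return smoothed.max()

        # Synthetic calibrated waveform sampled at 500 Hz, heart rate 60 bpm
        t = np.linspace(0, 1, 500)
        wave = 80 + 60 * np.clip(np.sin(2 * np.pi * t), 0, None)
        print(estimate_csbp(wave, fs=500, heart_rate_bpm=60))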

  15. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or...Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate

  16. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  17. A data-driven model for estimating industry average numbers of hospital security staff.

    PubMed

    Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M

    2015-01-01

    In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.

  18. Estimation of heat load in waste tanks using average vapor space temperatures

    SciTech Connect

    Crowe, R.D.; Kummerer, M.; Postma, A.K.

    1993-12-01

    This report describes a method for estimating the total heat load in a high-level waste tank with passive ventilation. The method relates the total heat load in the tank to the vapor space temperature and the depth of waste in the tank: Q_total = C_f (T_vapor − T_air), where C_f is a conversion factor, C_f = (R_o · k_soil · area)/(z_tank − z_surface); R_o is the ratio of the total heat load to the heat lost out the top of the tank (a function of waste height); area is the cross-sectional area of the tank; k_soil is the thermal conductivity of the soil; (z_tank − z_surface) is the effective depth of soil covering the top of the tank; and (T_vapor − T_air) is the mean temperature difference between the vapor space and the ambient air at the surface. Three terms -- the depth, the area, and the ratio -- can be developed from geometrical considerations. The temperature difference is measured for each individual tank. The remaining term, the thermal conductivity, is estimated from the time-dependent component of the temperature signals coming from the periodic oscillations in the vapor space temperatures. Finally, using this equation, the total heat load for each of the ferrocyanide Watch List tanks is estimated. This provides a consistent way to rank ferrocyanide tanks according to heat load.
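
    The relation above is simple enough to evaluate directly; the sketch below does so with placeholder values for the geometry, soil conductivity, and temperatures (none of them taken from the report).

        def tank_heat_load(t_vapor, t_air, r_o, k_soil, area, soil_depth):
            """Q_total = C_f * (T_vapor - T_air), with
            C_f = (R_o * k_soil * area) / (z_tank - z_surface).
            Illustrative units: temperatures in K, k_soil in W/(m K),
            area in m^2, soil_depth in m, giving Q in W."""
            c_f = r_o * k_soil * area / soil_depth
            return c_f * (t_vapor - t_air)

        # Placeholder inputs, not values from the report
        print(tank_heat_load(t_vapor=305.0, t_air=288.0, r_o=1.5,
                             k_soil=0.8, area=410.0, soil_depth=2.0))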

  19. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... historical costs, and other analyses used to generate cost estimates. (b) General. The Contractor shall... 252.215-7002 Cost estimating system requirements. As prescribed in...

  20. Areally averaged estimates of surface heat flux from ARM field studies

    SciTech Connect

    Coulter, R.L.; Martin, T.J.; Cook, D.R.

    1993-08-01

    The determination of areally averaged surface fluxes is a problem of fundamental interest to the Atmospheric Radiation Measurement (ARM) program. The Cloud And Radiation Testbed (CART) sites central to the ARM program will provide high-quality data for input to and verification of General Circulation Models (GCMs). The extension of several point measurements of surface fluxes within the heterogeneous CART sites to an accurate representation of the areally averaged surface fluxes is not straightforward. Two field studies designed to investigate these problems, implemented by ARM science team members, took place near Boardman, Oregon, during June of 1991 and 1992. The site was chosen to provide strong contrasts in surface moisture while minimizing the differences in topography. The region consists of a substantial dry steppe (desert) upwind of an extensive area of heavily irrigated farm land, 15 km in width and divided into 800-m-diameter circular fields in a close-packed array, in which wheat, alfalfa, corn, or potatoes were grown. This region provides marked contrasts, not only on the scale of farm-desert (10-20 km) but also within the farm (0.1-1 km), because different crops transpire at different rates, and the pivoting irrigation arms provide an ever-changing pattern of heavy surface moisture throughout the farm area. This paper primarily discusses results from the 1992 field study.

  1. The average structural evolution of massive galaxies can be reliably estimated using cumulative galaxy number densities

    NASA Astrophysics Data System (ADS)

    Clauwens, Bart; Hill, Allison; Franx, Marijn; Schaye, Joop

    2017-07-01

    Galaxy evolution can be studied observationally by linking progenitor and descendant galaxies through an evolving cumulative number density (CND) selection. This procedure can reproduce the expected evolution of the median stellar mass from abundance matching. However, models predict an increasing scatter in main progenitor masses at higher redshifts, which makes galaxy selection at the median mass unrepresentative. Consequently, there is no guarantee that the evolution of other galaxy properties deduced from this selection is reliable. Despite this concern, we show that this procedure approximately reproduces the evolution of the average stellar density profile of main progenitors of M ≈ 10^11.5 M⊙ galaxies, when applied to the EAGLE hydrodynamical simulation. At z ≳ 3.5, the aperture masses disagree by about a factor of 2, but this discrepancy disappears when we include the expected scatter in cumulative number densities. The evolution of the average density profile in EAGLE broadly agrees with observations from UltraVISTA and CANDELS, suggesting an inside-out growth history for these massive galaxies over 0 ≲ z ≲ 5. However, for z ≲ 2, the inside-out growth trend is stronger in EAGLE. We conclude that CND matching gives reasonably accurate results when applied to the evolution of the mean density profile of massive galaxies.

  2. The estimation of average areal rainfall by percentage weighting polygon method in Southeastern Anatolia Region, Turkey

    NASA Astrophysics Data System (ADS)

    Bayraktar, Hanefi; Turalioglu, F. Sezer; Şen, Zekai

    2005-01-01

    The percentage weighting polygon (PWP) method is proposed as an alternative to the Thiessen method for calculating the average areal rainfall (AAR) over a given catchment area. The basis of the method is to divide the study area into subareas by considering the rainfall percentages obtained at three adjacent station locations. This method is more reliable and flexible than the Thiessen polygon procedure, in which the subareas remain the same regardless of the measured rainfall amounts. In this paper, the PWP method is applied to the Southeastern Anatolia Region of Turkey for the first time, using 10 meteorological stations. In the PWP method, higher rainfall values are represented by smaller subareas than in the Thiessen and other conventional methods. It is observed that the PWP method yields an AAR value 13.5% smaller than those of the other conventional methods.

  3. Estimation of the diffuse radiation fraction for hourly, daily and monthly-average global radiation

    NASA Astrophysics Data System (ADS)

    Erbs, D. G.; Klein, S. A.; Duffie, J. A.

    1982-01-01

    Hourly pyrheliometer and pyranometer data from four U.S. locations are used to establish a relationship between the hourly diffuse fraction and the hourly clearness index. This relationship is compared to the relationship established by Orgill and Hollands (1977) and to a set of data from Highett, Australia, and agreement is within a few percent in both cases. The transient simulation program TRNSYS is used to calculate the annual performance of solar energy systems using several correlations. For the systems investigated, the effect of simulating the random distribution of the hourly diffuse fraction is negligible. A seasonally dependent daily diffuse correlation is developed from the data, and this daily relationship is used to derive a correlation for the monthly-average diffuse fraction.
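
    The hourly correlation described above is a piecewise function of the clearness index; the sketch below uses the widely reproduced Erbs-type coefficients purely as an illustration, and they should be checked against the paper before any serious use.

        def diffuse_fraction_hourly(kt):
            """Hourly diffuse fraction as a piecewise function of the clearness
            index kt (coefficients quoted from secondary sources, illustrative)."""
            if kt <= 0.22:
                return 1.0 - 0.09 * kt
            if kt <= 0.80:
                return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                        - 16.638 * kt**3 + 12.336 * kt**4)
            return 0.165

        for kt in (0.1, 0.5, 0.9):
            print(kt, round(diffuse_fraction_hourly(kt), 3))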

  4. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.

  5. Modeling distribution of temporal sampling errors in area-time-averaged rainfall estimates

    NASA Astrophysics Data System (ADS)

    Gebremichael, Mekonnen; Krajewski, Witold F.

    2005-02-01

    In this paper, the authors examine models of probability distributions for sampling error in rainfall estimates obtained from discrete satellite sampling in time based on 5 years of 15-min radar rainfall data in the central United States. The sampling errors considered include all combinations of 3, 6, 12, or 24 h sampling of rainfall over 32, 64, 128, 256, or 512 km square domains, and 1, 5, or 30 day rainfall accumulations. Results of this study reveal that the sampling error distribution depends strongly on the rain rate; hence the conditional distribution of sampling error is more informative than its marginal distribution. The distribution of sampling error conditional on rain rate is strongly affected by the sampling interval. At sampling intervals of 3 or 6 h, the logistic distribution appears to fit the conditional sampling error quite well, while the shifted-gamma, shifted-Weibull, shifted-lognormal, and normal distributions fit poorly. At sampling intervals of 12 or 24 h, the shifted-gamma, shifted-Weibull, or shifted-lognormal distribution fits the conditional sampling error better than the logistic or normal distribution. These results are vital to understanding the accuracy of satellite rainfall products, for performing validation assessment of these products, and for analyzing the effects of rainfall-related errors in hydrological models.

  6. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
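
    A much-simplified sketch of the standardization step with Bayesian bootstrap weights is given below: a single outcome regression is fit, potential outcomes are predicted under treatment and control, and Dirichlet weights over units are drawn to integrate over the covariate distribution. The data, model form, and variable names are all hypothetical, and the paper's confounder/interaction selection machinery is deliberately omitted.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Hypothetical data: binary treatment A, confounders X, binary outcome Y
        n = 500
        X = rng.normal(size=(n, 3))
        A = rng.binomial(1, 0.5, size=n)
        logit = -0.5 + 0.8 * A + X @ np.array([0.4, -0.3, 0.2])
        Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        design = sm.add_constant(np.column_stack([A, X]))
        fit = sm.GLM(Y, design, family=sm.families.Binomial()).fit()

        def predicted_outcome(a):
            """Predicted outcome with treatment set to a for every unit."""
            d = sm.add_constant(np.column_stack([np.full(n, a), X]), has_constant="add")
            return fit.predict(d)

        diff = predicted_outcome(1) - predicted_outcome(0)

        # Bayesian bootstrap: Dirichlet weights over units, weighted mean risk difference
        draws = [np.sum(rng.dirichlet(np.ones(n)) * diff) for _ in range(200)]
        print(np.mean(draws), np.percentile(draws, [2.5, 97.5]))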

  7. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Simple Model.

    NASA Astrophysics Data System (ADS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.

    2001-05-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote sensing error and, especially in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that rms random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain gauge and radar data. This relationship is examined using Special Sensor Microwave Imager (SSM/I) satellite data obtained over the western equatorial Pacific during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment. Rms error inferred directly from SSM/I rainfall estimates is found to be larger than was predicted from surface data and to depend less on local rain rate than was predicted. Preliminary examination of Tropical Rainfall Measuring Mission (TRMM) microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be computed directly from the satellite data.

  8. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  9. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement.

    PubMed

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
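
    The estimator described above is straightforward to implement; a minimal sketch for a one-dimensional trajectory follows (the lag range and the synthetic trajectory are arbitrary choices for illustration).

        import numpy as np

        def tamsd(x, lags):
            """Time-averaged mean-square displacement of a 1-D trajectory x."""
            return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

        def scaling_exponent(x, max_lag=20):
            """Estimate the anomalous exponent as the slope of a linear fit of
            log(TAMSD) against log(lag)."""
            lags = np.arange(1, max_lag + 1)
            slope, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
            return slope

        # Ordinary Brownian motion should give an exponent close to 1
        rng = np.random.default_rng(1)
        print(scaling_exponent(np.cumsum(rng.normal(size=10_000))))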

  10. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    NASA Astrophysics Data System (ADS)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.

  11. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  12. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors and the resulting analysis is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduces the MSE in small area estimation.

  13. Estimates of the average strength of natural selection are not inflated by sampling error or publication bias.

    PubMed

    Knapczyk, Frances N; Conner, Jeffrey K

    2007-10-01

    Kingsolver et al.'s review of phenotypic selection gradients from natural populations provided a glimpse of the form and strength of selection in nature and how selection on different organisms and traits varies. Because this review's underlying database could be a key tool for answering fundamental questions concerning natural selection, it has spawned discussion of potential biases inherent in the review process. Here, we explicitly test for two commonly discussed sources of bias: sampling error and publication bias. We model the relationship between variance among selection gradients and sample size that sampling error produces by subsampling large empirical data sets containing measurements of traits and fitness. We find that this relationship was not mimicked by the review data set and therefore conclude that sampling error does not bias estimations of the average strength of selection. Using graphical tests, we find evidence for bias against publishing weak estimates of selection only among very small studies (N<38). However, this evidence is counteracted by excess weak estimates in larger studies. Thus, estimates of average strength of selection from the review are less biased than is often assumed. Devising and conducting straightforward tests for different biases allows concern to be focused on the most troublesome factors.

  14. Estimating sustaining base-hospital personnel requirements during extended operations.

    PubMed

    Fulton, Lawrence V; Devore, Raymond B; McMurry, Pat M

    2010-04-01

    This case study provides a unique method for estimating sustaining base military hospital personnel requirements during combat and stability operations while underscoring the need for such analysis before the commencement of combat operations. The requirement estimates are based on a major combat operation (MCO) scenario, which was extended to simulate stability operations. The scenario selected derived from Department of Defense strategic planning guidance as modeled by the Total Army Analysis (TAA). Since casualties experienced in combat result in additional workload for military hospitals, a mechanism for estimating that workload is required. A single scenario generated as part of an analysis for the acting Army surgeon general produced a median requirement of 1,299 additional full-time equivalents (FTEs) over the course of 36 months, highlighting a significant gap between capabilities and requirements.

  15. A statistical approach to resolve incompatibilities between measured runoff data and daily estimates of spatially averaged rainfall

    NASA Astrophysics Data System (ADS)

    Langousis, A.; Kaleris, V.

    2012-04-01

    For many hydrological applications, such as calibration of rainfall-runoff models, estimation of river discharges at the outlet of a basin, and quantification of runoff extremes, one needs accurate estimates of spatial rainfall averages. When a relatively dense raingauge network is available, simple methods like Thiessen polygons and Kriging can be effectively used to weight point rainfall measurements at different locations inside the catchment, to calculate spatial rainfall averages. In the case of catchments covered by a single raingauge (i.e. a frequent case for medium- and large-sized catchments in Greece), one approximates spatially averaged rainfall intensities using point rainfall measurements. Since the marginal and joint statistics of the two processes are quite different, one faces important problems when calibrating hydrological models and calculating annual water-budgets. Those problems are amplified by measurement errors, incompleteness of the historical records and topographic influences. In this work, we develop an approach to adjust point rainfall measurements to better resemble the statistical structure of spatial rainfall averages. This is done by developing a statistical tool that a) identifies incompatibilities between daily rainfall measurements and river discharges, and b) adjusts rainfall measurements to better resemble the observed changes of daily river runoff. The latter incorporate important information on the occurrence and amount of spatially averaged rainfalls. The suggested model adjusts rainfall time-series by minimally operating on the fraction of dry days, while reproducing the distribution of rainfall intensities on wet days conditional on the same- and previous-day river discharges. In an application study to a 19-year record of daily rainfalls and river discharges, we find that the suggested statistical approach efficiently identifies and resolves rainfall-runoff incompatibilities at daily level, while respecting the seasonal

  16. Estimating force and power requirements for crosscut shearing of roundwood.

    Treesearch

    Rodger A. Arola

    1972-01-01

    Presents a procedure which, through the use of nomographs, permits rapid estimation of the force required to crosscut shear logs of various species and diameters with shear blades ranging in thickness from 1/4 to 7/8 inch. In addition, nomographs are included to evaluate hydraulic cylinder sizes, pump capacities, and motor horsepower requirements to effect the cut....

  17. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.

  18. Estimating physician requirements for neurology: a needs-based approach.

    PubMed

    Garrison, L P; Bowman, M A; Perrin, E B

    1984-09-01

    Applying the adjusted needs-based model developed by the Graduate Medical Education National Advisory Committee (GMENAC), physician requirements in neurology were estimated for the year 1990. A Delphi panel of physician experts estimated appropriate patterns of treatment for 56 neurologic conditions. Their median estimates implied a need for 14,500 neurologists in 1990, suggesting a shortage relative to the projected supply. An advisory panel of former GMENAC members reviewed those estimates and recommended certain adjustments to ensure internal consistency and compatibility with those for other specialties. Adoption of these adjustments significantly reduces requirements, implying a total need for 8,400 neurologists--a figure in near balance with the projected supply of 8,650. The difference between the Delphi and Advisory Panel estimates reflects divergent views, apparent as well among the Delphi panelists, of the appropriate role of neurologists--consultants versus principal care providers.

  19. A Budyko framework for estimating how spatial heterogeneity and lateral moisture redistribution affect average evapotranspiration rates as seen from the atmosphere

    NASA Astrophysics Data System (ADS)

    Rouholahnejad Freund, Elham; Kirchner, James W.

    2017-01-01

    Most Earth system models are based on grid-averaged soil columns that do not communicate with one another, and that average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). These models also typically ignore topographically driven lateral redistribution of water (either as groundwater or surface flows), both within and between model grid cells. Here, we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged evapotranspiration (ET) as seen from the atmosphere over heterogeneous landscapes. Our approach uses Budyko curves, as a simple model of ET as a function of atmospheric forcing by P and PET. From these Budyko curves, we derive a simple sub-grid closure relation that quantifies how spatial heterogeneity affects average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimations of average ET. For a sample high-relief grid cell in the Himalayas, this overestimation bias is shown to be roughly 12 %; for adjacent lower-relief grid cells, it is substantially smaller. We use a similar approach to derive sub-grid closure relations that quantify how lateral redistribution of water could alter average ET as seen from the atmosphere. We derive expressions for the maximum possible effect of lateral redistribution on average ET, and the amount of lateral redistribution required to achieve this effect, using only estimates of P and PET in possible source and recipient locations as inputs. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET (and models that overlook lateral redistribution will underestimate average ET). Conversely, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average
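
    The overestimation described above follows from the concavity of the Budyko relationship; the sketch below illustrates it with the classical Budyko curve and two hypothetical sub-grid cells (the numbers are invented and the curve choice is an assumption, since the paper derives its own closure relations).

        import numpy as np

        def budyko_et(p, pet):
            """Classical Budyko curve: actual ET from P and PET (same units)."""
            phi = pet / p                                   # aridity index PET/P
            return p * np.sqrt(phi * np.tanh(1 / phi) * (1 - np.exp(-phi)))

        # Two equal-area sub-grid cells with contrasting forcing (mm/yr)
        p = np.array([2000.0, 400.0])
        pet = np.array([800.0, 1600.0])

        et_subgrid_mean = budyko_et(p, pet).mean()           # average of sub-grid ET
        et_of_grid_mean = budyko_et(p.mean(), pet.mean())    # ET of grid-averaged forcing

        # Averaging the forcing first overestimates ET, as argued in the abstract
        print(et_subgrid_mean, et_of_grid_mean)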

  20. Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching

    PubMed Central

    Balzer, Laura B.; Petersen, Maya L.; van der Laan, Mark J.

    2016-01-01

    In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well-defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the five-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. PMID:27087478

  1. Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching.

    PubMed

    Balzer, Laura B; Petersen, Maya L; van der Laan, Mark J

    2016-09-20

    In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Noninvasive average flow estimation for an implantable rotary blood pump: a new algorithm incorporating the role of blood viscosity.

    PubMed

    Malagutti, Nicolò; Karantonis, Dean M; Cloherty, Shaun L; Ayre, Peter J; Mason, David G; Salamonsen, Robert F; Lovell, Nigel H

    2007-01-01

    The effect of blood hematocrit (HCT) on a noninvasive flow estimation algorithm was examined in a centrifugal implantable rotary blood pump (iRBP) used for ventricular assistance. An average flow estimator, based on three parameters, input electrical power, pump speed, and HCT, was developed. Data were collected in a mock loop under steady flow conditions for a variety of pump operating points and for various HCT levels. Analysis was performed using three-dimensional polynomial surfaces to fit the collected data for each different HCT level. The polynomial coefficients of the surfaces were then analyzed as a function of HCT. Linear correlations between estimated and measured pump flow over a flow range from 1.0 to 7.5 L/min resulted in a slope of 1.024 L/min (R2=0.9805). Early patient data tested against the estimator have shown promising consistency, suggesting that consideration of HCT can improve the accuracy of existing flow estimation algorithms.

  3. Estimation of daily average downward shortwave radiation from MODIS data using principal components regression method: Fars province case study

    NASA Astrophysics Data System (ADS)

    Barzin, Razieh; Shirvani, Amin; Lotfi, Hossein

    2017-01-01

    Downward shortwave radiation is a key quantity in the land-atmosphere interaction. Since the moderate resolution imaging spectroradiometer data has a coarse temporal resolution, which is not suitable for estimating daily average radiation, many efforts have been undertaken to estimate instantaneous solar radiation using moderate resolution imaging spectroradiometer data. In this study, the principal components analysis technique was applied to capture the information of moderate resolution imaging spectroradiometer bands, extraterrestrial radiation, aerosol optical depth, and atmospheric water vapour. A regression model based on the principal components was used to estimate daily average shortwave radiation for ten synoptic stations in the Fars province, Iran, for the period 2009-2012. The Durbin-Watson statistic and autocorrelation function of the residuals of the fitted principal components regression model indicated that the residuals were serially independent. The results indicated that the fitted principal components regression models accounted for about 86-96% of total variance of the observed shortwave radiation values and the root mean square error was about 0.9-2.04 MJ m⁻² d⁻¹. Also, the results indicated that the model accuracy decreased as the aerosol optical depth increased and extraterrestrial radiation was the most important predictor variable among all.
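
    A compact sketch of a principal components regression of this kind is shown below, with a random stand-in predictor matrix for the MODIS bands and ancillary variables; the number of retained components and all data are illustrative only.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)

        # Stand-ins for MODIS band values plus extraterrestrial radiation, aerosol
        # optical depth and water vapour (columns), and for the observed daily
        # average shortwave radiation (target)
        X = rng.normal(size=(300, 10))
        y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=300)

        pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
        pcr.fit(X, y)
        print(pcr.score(X, y))   # R^2 of the fitted principal components regression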

  4. Estimated manpower requirements for psychiatrists in Australia 1980-91.

    PubMed

    Burvill, P W; German, G A

    1984-03-01

    A postal survey of all practising psychiatrists in Australia, conducted in 1980 with the purpose of establishing psychiatric manpower requirements for 1980-91, is described. It is estimated that in 1980 there was one Whole Time Equivalent psychiatrist, aged 65 years or less, per 19,400 population in Australia. The relativity of manpower requirements is emphasised. A number of different models for estimating future requirements are proposed. The two models most favoured estimate a shortfall for the whole of Australia of 294 and 349 psychiatrists, respectively, in 1980, and a need for an additional 621 or 692 by 1991. A major maldistribution of psychiatrists is identified, viz between states, between capital cities and country areas, and between general psychiatry and specialised areas. The two issues of the number of trainees required and of how to train psychiatrists in special areas of expertise are discussed.

  5. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  6. A doubly robust estimator for the average treatment effect in the context of a mean-reverting measurement error.

    PubMed

    Lenis, David; Ebnesajjad, Cyrus F; Stuart, Elizabeth A

    2017-04-01

    One of the main limitations of causal inference methods is that they rely on the assumption that all variables are measured without error. A popular approach for handling measurement error is simulation-extrapolation (SIMEX). However, its use for estimating causal effects has been examined only in the context of an additive, non-differential, and homoscedastic classical measurement error structure. In this article we extend the SIMEX methodology, in the context of a mean-reverting measurement error structure, to a doubly robust estimator of the average treatment effect when a single covariate is measured with error but the outcome and treatment indicator are not. Throughout this article we assume that an independent validation sample is available. Simulation studies suggest that our method performs better than a naive approach that simply uses the covariate measured with error. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
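
    For orientation, the sketch below shows the generic SIMEX recipe (re-estimate under successively inflated measurement error, then extrapolate back to zero error at lambda = -1) for a plain linear slope; the mean-reverting error structure and the doubly robust estimator developed in the paper are not reproduced here, and all data are simulated.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated data: true covariate X, error-prone proxy W = X + U, outcome Y
        n, sigma_u = 2000, 0.8
        X = rng.normal(size=n)
        W = X + rng.normal(scale=sigma_u, size=n)
        Y = 1.0 + 2.0 * X + rng.normal(size=n)

        def naive_slope(w, y):
            return np.polyfit(w, y, 1)[0]

        # SIMEX: add extra error at levels lambda, average the naive estimates,
        # then extrapolate the trend back to lambda = -1
        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        estimates = []
        for lam in lambdas:
            reps = [naive_slope(W + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), Y)
                    for _ in range(50)]
            estimates.append(np.mean(reps))

        quad = np.polyfit(lambdas, estimates, 2)     # quadratic extrapolant
        print(np.polyval(quad, -1.0))                # SIMEX-corrected slope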

  7. Enhancement of accuracy and reproducibility of parametric modeling for estimating abnormal intra-QRS potentials in signal-averaged electrocardiograms.

    PubMed

    Lin, Chun-Cheng

    2008-09-01

    This work analyzes and attempts to enhance the accuracy and reproducibility of parametric modeling in the discrete cosine transform (DCT) domain for the estimation of abnormal intra-QRS potentials (AIQP) in signal-averaged electrocardiograms. One hundred sets of white noise with a flat frequency response were introduced to simulate the unpredictable, broadband AIQP when quantitatively analyzing estimation error. Further, a high-frequency AIQP parameter was defined to minimize estimation error caused by the overlap between normal QRS and AIQP in low-frequency DCT coefficients. Seventy-two patients from Taiwan were recruited for the study, comprising 30 patients with ventricular tachycardia (VT) and 42 without VT. Analytical results showed that VT patients had a significant decrease in the estimated AIQP. The global diagnostic performance (area under the receiver operating characteristic curve) of AIQP rose from 73.0% to 84.2% in lead Y, and from 58.3% to 79.1% in lead Z, when the high-frequency range fell from 100% to 80%. The combination of AIQP and ventricular late potentials further enhanced performance to 92.9% (specificity=90.5%, sensitivity=90%). Therefore, the significantly reduced AIQP in VT patients, possibly also including dominant unpredictable potentials within the normal QRS complex, may be new promising evidence of ventricular arrhythmias.

  8. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. This method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validate this technique as being successful in humid and semi-arid catchments as the method has already been verified as effective in the former setting.
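
    A rough sketch of the underlying idea (inferring catchment-average inputs from streamflow fluctuations via a storage-discharge sensitivity function) is given below; the power-law recession fit, the synthetic flow series, and the neglect of data screening are simplifying assumptions and do not reproduce the authors' workflow.

        import numpy as np

        def hydrology_backwards(q):
            """Infer catchment-average (P - ET) in mm/day from daily streamflow q
            (mm/day), following the 'doing hydrology backwards' logic (sketch)."""
            dqdt = np.gradient(q)

            # 1. Sensitivity function g(Q) = dQ/dS from recession periods, where
            #    rainfall and ET are assumed negligible: dS/dt ~ -Q, so
            #    g(Q) ~ -(dQ/dt)/Q. Fit a power law through those points.
            rec = dqdt < 0
            coef = np.polyfit(np.log(q[rec]), np.log(-dqdt[rec] / q[rec]), 1)
            g = np.exp(np.polyval(coef, np.log(q)))

            # 2. Invert the water balance dQ/dt = g(Q) * (P - ET - Q)
            return dqdt / g + q

        # Synthetic daily streamflow series (mm/day), purely illustrative
        rng = np.random.default_rng(4)
        q = 1.0 + 0.5 * np.abs(np.sin(np.linspace(0, 20, 400))) + 0.05 * rng.random(400)
        print(hydrology_backwards(q)[:5])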

  9. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    PubMed

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-07

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). It is revealed that the MADSR distribution in South Korea has very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for the location without the measured MADSR data, using an advanced case based reasoning (CBR) model, which is a hybrid methodology combining CBR with artificial neural network, multiregression analysis, and genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for owner or construction manager in charge of determining whether or not to introduce the PV system and where to install it. Also, it would benefit contractors in a competitive bidding process to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life cycle perspective.

  10. Sampling-related uncertainty of satellite rainfall averages: Comparison of two estimation approaches based on a large radar data set

    NASA Astrophysics Data System (ADS)

    Steiner, M.; Bell, T. L.; Zhang, Y.; Wood, E. F.

    2003-04-01

    Estimation of rainfall from spaceborne platforms is burdened by uncertainty resulting from the intermittence of rainfall and a limited sampling in space and time. Experience from past studies indicates that this sampling-related uncertainty is a function of the rainfall intensity, domain size, time integration, and sampling frequency. The uncertainty appears to be directly proportional to the sampling interval, but inversely related to the other parameters. The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar-mosaic data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  11. Estimating Wartime Support Resource Requirements. Statistical and Related Policy Issues.

    DTIC Science & Technology

    1984-07-01

    Fitzgerald of Headquarters, Air Force Logistics Command explained the engine requirements computation and identified a body of past work that suggests that...C-5 and other aircraft, as well as the more general problem of estimating wartime support resource requirements. "Cannibalization" is the use of...service. The missing parts should eventually be replaced, but some TF39 engines have been cannibalized so extensively that it might be cheaper to

  12. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
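
    As a rough illustration of how a Bayesian model averaging step can pool a single pollution-health coefficient across candidate regression models, the sketch below uses approximate posterior model weights derived from BIC; the actual weighting scheme and models used in the study above are not specified here, so the weighting rule and variable names are assumptions.

    ```python
    import numpy as np

    def bma_pool(estimates, std_errors, bics):
        """Pool one effect estimate (e.g., an NO2 log-relative-risk) across models
        using approximate posterior model weights w_m proportional to exp(-0.5 * dBIC_m)."""
        estimates, std_errors, bics = map(np.asarray, (estimates, std_errors, bics))
        w = np.exp(-0.5 * (bics - bics.min()))
        w /= w.sum()
        mean = np.sum(w * estimates)
        # Total variance = within-model variance + between-model disagreement
        var = np.sum(w * (std_errors**2 + (estimates - mean)**2))
        return mean, np.sqrt(var), w

    # e.g. bma_pool([0.021, 0.018, 0.030], [0.006, 0.007, 0.008], [2410.2, 2411.5, 2414.9])
    ```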

  13. Sample Size Requirements for Estimating Pearson, Spearman and Kendall Correlations.

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Wright, Thomas A.

    2000-01-01

    Reviews interval estimates of the Pearson, Kendall tau-alpha, and Spearman correlations and proposes an improved standard error for the Spearman correlation. Examines the sample size required to yield a confidence interval having the desired width. Findings show accurate results from a two-stage approximation to the sample size. (SLD)
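
    A minimal way to reproduce this kind of sample-size planning is to invert the Fisher-z confidence interval for a Pearson correlation and search for the smallest n whose expected interval width meets the target. This is the textbook approximation, not necessarily the two-stage procedure proposed in the article, and the example numbers are illustrative.

    ```python
    import math
    from scipy import stats

    def pearson_ci_width(r, n, conf=0.95):
        """Width of the Fisher-z confidence interval for a Pearson correlation."""
        z = math.atanh(r)
        half = stats.norm.ppf(0.5 + conf / 2) / math.sqrt(n - 3)
        return math.tanh(z + half) - math.tanh(z - half)

    def n_for_width(r_planned, target_width, conf=0.95):
        """Smallest n whose expected CI width does not exceed the target."""
        n = 4
        while pearson_ci_width(r_planned, n, conf) > target_width:
            n += 1
        return n

    # e.g. a planned r of 0.30 and a desired CI width of 0.20 at 95% confidence
    # n_for_width(0.30, 0.20)  -> roughly 320 observations
    ```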

  14. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  15. [A study on estimating health workers requirement (author's transl)].

    PubMed

    Park, I H

    1980-11-01

    Four methodological approaches to the assessment of future demand for health manpower are considered: health needs, service targets, health demand, and the ratio of health manpower to population. The service targets approach appears to be the most appropriate for family planning and maternal and child health services. Once targets are determined, they are converted into manpower requirements on the basis of personnel ratios and productivity assumptions. Requirements for family planning workers are estimated by the fertility decline approach, based on national demographic targets established in quantitative terms. The number of family planning workers is estimated at 2619 in 1981 (over 14% of current workers in the field) and 2214 in 1986. To meet the established minimum requirements of the maternal and child health program (3 antepartum contacts and direct or indirect services at delivery), more than 4500 workers will be needed during 1981-86. (Author's modified)

  16. Multi-model predictions of local climate change with uncertainty assessment using generalized likelihood uncertainty estimation and Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Huang, Y.

    2013-12-01

    A number of general circulation models (GCMs) have been developed to project future global climate change, and their outputs are widely used to represent local climate conditions to predict the effect of climate change on hydrology and water quality. Unfortunately, projections of future climate change differ among models, and it is not known which set of GCM data is better than the others. The objective of this work is to present a Bayesian approach consisting of generalized likelihood uncertainty estimation (GLUE) and Bayesian model averaging (BMA) for the estimation of local climate change with uncertainty assessment. This method is applied to the Cannonsville Reservoir watershed. GCM data contributing to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4), under a range of emission scenarios (20C3M, A1B, A2, and B1), are used. The GCM data for the 20C3M scenario are used to calculate the posterior probability using GLUE, while outputs for future scenarios (A1B, A2, and B1) are then processed using BMA, a statistical procedure that infers a consensus prediction by weighting individual predictions based on the posterior probabilities obtained by GLUE, with the better performing predictions receiving higher weights than the worse performing ones. The method has the advantage of generating more reliable predictions than the original GCM data. The results also clearly indicate the high reliability of the GCM data for daily average, maximum, and minimum temperatures, but the reliability for daily precipitation and wind speed is low. The application supports the method presented.
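
    A schematic of the two-step idea described above might look like the following: GLUE-style likelihood weights are computed for each GCM from its skill over the historical (20C3M) period, and the future-scenario ensemble is then combined with those weights in BMA fashion. The inverse-sum-of-squares likelihood and the array shapes are assumptions for illustration; the study's actual likelihood measure may differ.

    ```python
    import numpy as np

    def glue_weights(simulated_hist, observed):
        """Informal GLUE-style likelihood weights for each GCM based on how well
        its 20C3M (historical) run reproduces observations."""
        sims = np.asarray(simulated_hist)          # shape (n_models, n_times)
        obs = np.asarray(observed)                 # shape (n_times,)
        sse = ((sims - obs) ** 2).sum(axis=1)
        like = 1.0 / sse                           # assumed likelihood measure
        return like / like.sum()

    def bma_projection(future_runs, weights):
        """Consensus future projection as the weight-averaged ensemble."""
        return np.average(np.asarray(future_runs), axis=0, weights=weights)
    ```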

  17. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearing operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and its harmonics in the spectrum. We use experimental data1
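
    A loose sketch of the four-stage pipeline named above (angle-domain resampling/order tracking, short-duration angle synchronous averaging, wavelet de-noising, and an envelope order spectrum) is given below. It should be read as one plausible interpretation rather than the authors' algorithm: the block length, wavelet, thresholding rule, and all parameters are assumptions.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def to_angle_domain(signal, shaft_angle, samples_per_rev=1024):
        """Resample a time series onto a uniform shaft-angle grid
        (shaft_angle is the cumulative, monotonically increasing angle in rad)."""
        n_rev = int(shaft_angle[-1] // (2 * np.pi))
        grid = np.linspace(0, n_rev * 2 * np.pi, n_rev * samples_per_rev, endpoint=False)
        return np.interp(grid, shaft_angle, signal), samples_per_rev

    def short_duration_asa(x, samples_per_rev, n_rev_avg=4):
        """Average short blocks of consecutive revolutions (partial synchronous
        averaging), keeping one averaged revolution per block."""
        block = samples_per_rev * n_rev_avg
        n_blocks = len(x) // block
        blocks = x[: n_blocks * block].reshape(n_blocks, n_rev_avg, samples_per_rev)
        return blocks.mean(axis=1).reshape(-1)

    def wavelet_denoise(x, wavelet="db4", level=4):
        """Soft-threshold detail coefficients with a universal threshold."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    def envelope_order_spectrum(x, samples_per_rev):
        """Order spectrum of the signal envelope; bearing fault orders appear as peaks."""
        env = np.abs(hilbert(x - x.mean()))
        spec = np.abs(np.fft.rfft(env - env.mean()))
        orders = np.fft.rfftfreq(len(env), d=1.0 / samples_per_rev)
        return orders, spec

    # x_ang, spr = to_angle_domain(acc, angle)
    # orders, spec = envelope_order_spectrum(wavelet_denoise(short_duration_asa(x_ang, spr)), spr)
    ```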

  18. Is the Whole Really More than the Sum of Its Parts? Estimates of Average Size and Orientation Are Susceptible to Object Substitution Masking

    ERIC Educational Resources Information Center

    Jacoby, Oscar; Kamke, Marc R.; Mattingley, Jason B.

    2013-01-01

    We have a remarkable ability to accurately estimate average featural information across groups of objects, such as their average size or orientation. It has been suggested that, unlike individual object processing, this process of "feature averaging" occurs automatically and relatively early in the course of perceptual processing,…

  19. [Level of stress during pregnancy estimated by mothers and average weight of newborns and frequency of low birth weight].

    PubMed

    Steplewski, Z; Buczyńska, G; Rogoszewski, M; Kuban, T; Steplewska-Mazur, K; Jaskólecki, H; Kasperczyk, J

    1998-02-01

    Low birth weight remains an important health problem in many countries. It increases mortality, damages the central nervous system and somatic health, and interferes with intellectual and emotional development. Low birth weight occurs frequently in Poland, in 7-9% of live births. There are many risk factors, among them behavioural and environmental ones. In Poland, attention has been focused on chemical and physical environmental factors, while behavioural factors such as stress have been disregarded. The present paper examines the relationship between stress during pregnancy (as estimated by the pregnant woman), child birth weight, and the frequency of low birth weight. The research was carried out with a questionnaire using a case-control design. It involved 450 mothers of newborn children (the case group: premature delivery or birth weight below 2500 g) and 450 mothers of newborn children (control group: physiological delivery). Mothers were asked about their attitude toward the pregnancy, and professional and personal stress during pregnancy was estimated. The results were analysed by calculating the risk ratio (RR) and correlation coefficients. The research showed no relation between acceptance of the pregnancy or stress and the frequency of low birth weight or the average birth weight. The study did not demonstrate an unfavourable influence of stress reactions caused by professional and personal stressors on intrauterine foetal development.

  20. Origins for the estimations of water requirements in adults.

    PubMed

    Vivanti, A P

    2012-12-01

    Water homeostasis generally occurs without conscious effort; however, estimating requirements can be necessary in settings such as health care. This review investigates the derivation of equations for estimating water requirements. Published literature was reviewed for water estimation equations and original papers sought. Equation origins were difficult to ascertain and original references were often not cited. One equation (% of body weight) was based on just two human subjects and another equation (ml water/kcal) was reported for mammals and not specifically for humans. Other findings include that some equations: for children were subsequently applied to adults; had undergone modifications without explicit explanation; had adjusted for the water from metabolism or food; and had undergone conversion to simplify application. The primary sources for equations are rarely mentioned or, when located, lack details conventionally considered important. The sources of water requirement equations are rarely made explicit and historical studies do not satisfy more rigorous modern scientific method. Equations are often applied without appreciating their derivation, or adjusting for the water from food or metabolism as acknowledged by original authors. Water requirement equations should be used as a guide only while employing additional means (such as monitoring short-term weight changes, physical or biochemical parameters and urine output volumes) to ensure the adequacy of water provision in clinical or health-care settings.

  1. Lava discharge rate estimates from thermal infrared satellite data for Pacaya volcano, Guatemala: Implications for time-averaged eruption processes

    NASA Astrophysics Data System (ADS)

    Morgan, H. A.; Harris, A. J.; Rose, W. I.

    2011-12-01

    The Pacaya volcanic complex has been producing lava flows nearly continuously since 1961. Matías (2009) compiled a detailed database including information such as length, surface area, volume, duration, and effusion rates for each of the 248 lava flows that occurred during this time. In this investigation, time-averaged discharge rates (TADR) were estimated for a subset of lava flows using a satellite-based method initially applied to infrared satellite data for Etna by Harris et al. (1997). Satellite-based estimates potentially provide a quicker, safer, and less expensive alternative to ground-based measurements and are therefore valuable for hazard mitigation. The excellent record of recent activity at Pacaya provides a unique opportunity to calibrate results from the satellite-based method by comparing them with reliable ground-based measurements. Imagery from two sensors of differing temporal and spatial resolutions were analyzed in order to produce a comprehensive dataset: MODIS (one image every 6 hours, 1-km pixels) and GOES (one image every 15 minutes, 4-km pixels). As of August 2011, 2403 MODIS and 2642 GOES images have been analyzed. Due to the relatively low intensity of Pacaya's effusive activity, each image was searched manually for volcanic "hot spots". It was found that MODIS data allowed better estimations of TADR than did GOES data. We suggested that the very small, sub-resolution flows typical of Pacaya may have surpassed the limits of low-resolution GOES imagery for this particular application. TADR derived from MODIS data were used to describe and parameterize eruptive cycles, as well as to explore conduit models. A pattern was found over the past two decades of short high-TADR periods followed by longer low-TADR periods. We suggested that the low TADR experienced during longer "bleeding" of the conduit may approximate the magma supply rate to the shallow system, while high TADR eruptions may represent the release of volumes collected during

  2. Using seed purity data to estimate an average pollen mediated gene flow from crops to wild relatives.

    PubMed

    Lavigne, C; Klein, E K; Couvet, D

    2002-01-01

    Gene flow from crops to wild related species has been recently under focus in risk-assessment studies of the ecological consequences of growing transgenic crops. However, experimental studies addressing this question are usually temporally or spatially limited. Indirect population-structure approaches can provide more global estimates of gene flow, but their assumptions appear inappropriate in an agricultural context. In an attempt to help the committees providing advice on the release of transgenic crops, we present a new method to estimate the quantity of genes migrating from crops to populations of related wild plants by way of pollen dispersal. This method provides an average estimate at a landscape level. Its originality is based on the measure of the inverse gene flow, i.e. gene flow from the wild plants to the crop. Such gene flow results in an observed level of impurities from wild plants in crop seeds. This level of impurity is usually known by the seed producers and, in any case, its measure is easier than a direct screen of wild populations because crop seeds are abundant and their genetic profile is known. By assuming that wild and cultivated plants have a similar individual pollen dispersal function, we infer the level of pollen-mediated gene flow from a crop to the surrounding wild populations from this observed level of impurity. We present an example for sugar beet data. Results suggest that under conditions of seed production in France (isolation distance of 1,000 m) wild beets produce high numbers of seeds fathered by cultivated plants.

  3. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonant frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide the conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by the body shape.

  4. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium, balance between vegetation and climate, and non-equilibrium, water added through irrigation. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use, when soil salinity is not important and 66% in saline lands.

  5. Estimating supply requirements for forward medical treatment facilities.

    PubMed

    Konoske, P J; Galarneau, M R; Pang, G; Emens-Hesslink, K E; Gauker, E D; Tropeano, A

    2000-11-01

    The Naval Health Research Center designed, developed, and used a systematic process to review Marine Corps medical supply requirements. This approach consisted of identifying the medical tasks required to treat patients with specific injuries and illnesses and determining the supplies and equipment required to perform each task. Subject matter experts reviewed treatment briefs, tasks, supplies, and equipment and examined their value to Marine Corps medical providers in forward areas of care. By establishing the clinical requirement for each item pushed forward, the Naval Health Research Center model was able to reduce the logistical burden carried by Marine Corps units and enhance far-forward clinical capability. The result of this effort is a model to estimate supplies and equipment based on a given casualty stream distribution. This approach produces an audit trail for each item and allows current authorized medical allowance list configurations to be revised using information such as type of conflict anticipated, expected duration, and changes in medical doctrine.

  6. Estimated water requirements for gold heap-leach operations

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (requires water in potable form and for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as for agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.

  7. Effect of confounding variables on hemodynamic response function estimation using averaging and deconvolution analysis: An event-related NIRS study.

    PubMed

    Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice

    2017-07-15

    Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNR, especially above -5 dB regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of
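
    For concreteness, the two estimators being compared above can be sketched as follows: conventional averaging simply averages event-locked epochs, while the deconvolution method with an FIR basis builds a lagged stick-function design matrix and solves a least-squares problem so that overlapping responses in rapid designs are separated. The sampling rate, window length, and variable names are illustrative.

    ```python
    import numpy as np

    def conventional_average(signal, onsets, fs, window_s=20.0):
        """Average fixed-length epochs time-locked to each event onset (seconds)."""
        n = int(window_s * fs)
        epochs = [signal[int(t * fs): int(t * fs) + n] for t in onsets
                  if int(t * fs) + n <= len(signal)]
        return np.mean(epochs, axis=0)

    def fir_deconvolution(signal, onsets, fs, window_s=20.0):
        """Estimate the HRF with an FIR basis: lagged stick-function design matrix,
        solved by least squares, which unmixes overlapping responses."""
        n_lags = int(window_s * fs)
        stick = np.zeros(len(signal))
        stick[(np.asarray(onsets) * fs).astype(int)] = 1.0
        X = np.column_stack([np.concatenate([np.zeros(lag), stick[:len(stick) - lag]])
                             for lag in range(n_lags)])
        hrf, *_ = np.linalg.lstsq(X, signal, rcond=None)
        return hrf
    ```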

  8. Probabilistic correction of precipitation measurement errors using a Bayesian Model Average Approach applied for the estimation of glacier accumulation

    NASA Astrophysics Data System (ADS)

    Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

    2013-04-01

    Precipitation is a major component of the water cycle that returns atmospheric water to the ground. Without precipitation there would be no water cycle, all the water would run down the rivers and into the seas, then the rivers would dry up with no fresh water from precipitation. Although precipitation measurement seems an easy and simple procedure, it is affected by several systematic errors which lead to underestimation of the actual precipitation. Hence, precipitation measurements should be corrected before their use. Different correction approaches were already suggested in order to correct precipitation measurements. Nevertheless, focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model average (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. Comparing rain gauge measurements with totalisators rain measurements it was possible to estimate the rain underestimation. First, different deterministic models were optimized for the correction of precipitation considering wind effect and precipitation intensities. Then, probabilistic BMA correction was performed. The corrected precipitation was then separated into rainfall and snowfall considering typical Andean temperature thresholds of -1°C and 3°C. Hence, precipitation was separated into rainfall, snowfall and mixed precipitation. Then, relating the total snowfall with the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, results confirm that in tropical glaciers winter is not accumulation period, but a low ablation one. Results show that neglecting such correction may induce an underestimation higher than 35 % of total precipitation. Besides, the uncertainty range may induce differences up

  9. Estimation of temporal variations in path-averaged atmospheric refractive index gradient from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven; Przelomski, Jared

    2016-09-01

    The sea level vertical refractive index gradient in the U.S. Standard Atmosphere model is -2.7×10-8 m-1 at 500 nm. At any particular location, the actual refractive index gradient varies due to turbulence and local weather conditions. An imaging experiment was conducted to measure the temporal variability of this gradient. A tripod mounted digital camera captured images of a distant building every minute. Atmospheric turbulence caused the images to wander quickly, randomly, and statistically isotropically and changes in the average refractive index gradient along the path caused the images to move vertically and more slowly. The temporal variations of the refractive index gradient were estimated from the slow, vertical motion of the building over a period of several days. Comparisons with observational data showed the gradient variations derived from the time-lapse imagery correlated well with solar heating and other weather conditions. The time-lapse imaging approach has the potential to be used as a validation tool for numerical weather models. These validations will benefit directed energy simulation tools and applications.

  10. Estimates of galactic cosmic ray shielding requirements during solar minimum

    SciTech Connect

    Townsend, L.W.; Nealy, J.E.; Wilson, J.W.; Simonsen, L.C.

    1990-02-01

    Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, dose and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.
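
    The thicknesses quoted above follow directly from dividing areal density by material density; the densities used in this quick check are nominal assumptions.

    ```python
    # Thickness implied by an areal density: t [cm] = areal density [g/cm^2] / density [g/cm^3].
    # Densities below are nominal values for water, aluminum, and liquid hydrogen.
    for material, areal_density, rho in [("water", 3.5, 1.00),
                                         ("aluminum", 6.5, 2.70),
                                         ("liquid hydrogen", 1.0, 0.07)]:
        print(f"{material}: {areal_density / rho:.1f} cm")
    # -> 3.5 cm, 2.4 cm, and roughly 14 cm, matching the values quoted above.
    ```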

  11. Estimating dental manpower requirements on a statewide basis.

    PubMed

    DeFriese, G H; Konrad, T R

    1981-01-01

    The North Carolina Dental Manpower Study undertook to use epidemiological data on dental disease, data on practice productivity, and estimates of treatment needs to arrive at more useful measures of dental manpower requirements on a statewide and substate regional basis. As in all attempts to plan for health manpower, the North Carolina study relied upon a measure of subjective judgment pertaining to the treatment/services required to deal with certain dental conditions. These judgments, however, were representative of the conventional standards of practice in North Carolina at the time of the study. Though certain important factors necessary to a complete assessment of dental manpower requirements were not directly measured in the study (e.g., consumer demand), estimates of the volume increase in dental-office practice-productivity were derived for the major categories of dental procedures and conditions. In North Carolina this technic is thought to represent a more meaningful approach to dental manpower planning than the conventional manpower-to-population ratio.

  12. Occurrence of aflatoxin M1 in human milk samples in Vojvodina, Serbia: Estimation of average daily intake by babies.

    PubMed

    Radonić, Jelena R; Kocić Tanackov, Sunčica D; Mihajlović, Ivana J; Grujić, Zorica S; Vojinović Miloradov, Mirjana B; Škrinjar, Marija M; Turk Sekulić, Maja M

    2017-01-02

    The objectives of the study were to determine the aflatoxin M1 content in human milk samples in Vojvodina, Serbia, and to assess the risk of infants' exposure to aflatoxin contamination of food. The growth of Aspergillus flavus and production of aflatoxin B1 in corn samples resulted in higher concentrations of AFM1 in milk and dairy products in 2013, and correspondingly higher concentrations of AFM1 in human milk samples in 2013 and 2014 in Serbia. A total of 60 samples of human milk (colostrum and breast milk collected 4-8 months after delivery) were analyzed for the presence of AFM1 using the Enzyme Linked Immunosorbent Assay method. The estimated daily intake of AFM1 through breastfeeding was calculated for the colostrum samples using an average intake of 60 mL/kg body weight (b.w.)/day on the third day of lactation. All breast milk samples collected 4-8 months after delivery and 36.4% of colostrum samples were contaminated with AFM1. Most of the contaminated colostrum samples (85%) and all samples of breast milk collected 4-8 months after delivery had AFM1 concentrations above the maximum allowable concentration according to the Regulation on health safety of dietetic products. The mean daily intake of AFM1 in colostrum was 2.65 ng/kg bw/day. The results of our study indicate a high exposure risk for infants, who are at an early stage of development and vulnerable to toxic contaminants.
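
    The exposure estimate described above is a simple product of concentration and intake; the sketch below restates it, with the concentration value shown only as an illustrative back-calculation from the reported mean intake.

    ```python
    def estimated_daily_intake(afm1_ng_per_ml, milk_intake_ml_per_kg_bw=60.0):
        """EDI (ng/kg bw/day) = AFM1 concentration (ng/mL) x milk intake (mL/kg bw/day)."""
        return afm1_ng_per_ml * milk_intake_ml_per_kg_bw

    # A colostrum concentration of ~0.044 ng/mL would give the reported mean
    # intake of about 2.65 ng/kg bw/day (illustrative back-calculation only).
    ```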

  13. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge in the Kallar watershed today. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is lost through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds, and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation, RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C), and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha-1 y-1. Based on the result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high, and critical. Further, the RUSLE factors were grouped into two products: soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP). The C and P factors can be controlled and thus can greatly reduce soil loss through management and conservation measures.
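
    The record uses the standard RUSLE product; on raster inputs this is a cell-by-cell multiplication, as in the minimal numpy sketch below. The severity class breaks are illustrative assumptions, not the thresholds used in the study.

    ```python
    import numpy as np

    def rusle(R, K, LS, C, P):
        """Average annual soil loss A = R * K * LS * C * P (cell by cell)."""
        return R * K * LS * C * P

    def severity_classes(A, breaks=(5, 10, 20, 40)):
        """Reclassify soil loss into five severity classes
        (very low, low, moderate, high, critical); break values are illustrative."""
        return np.digitize(A, breaks)

    # Susceptibility (A = R*K*LS) and hazard (A = R*K*LS*C*P) maps as in the record:
    # susceptibility = R * K * LS
    # hazard = rusle(R, K, LS, C, P)
    ```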

  14. Stochastic physical ecohydrologic-based model for estimating irrigation requirement

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods that can take this kind of uncertainty into account are of prime importance for the management of ecosystems, especially agricultural ecosystems. A well-known problem in these ecosystems is crop water requirement estimation under climatic uncertainty. Both deterministic physically based methods and stochastic time series modeling have been utilized in the literature. As in other fields of the hydroclimatic sciences, there is broad scope in irrigation process modeling for developing approaches that integrate the physics of the process with its statistical aspects. This study derives closed-form expressions for the probability density function (p.d.f.) of irrigation water requirement using a stochastic physically based model, which considers important aspects of plant, soil, atmosphere, and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics in the root zone, is employed as a basis for deriving the expressions, considering the temporal stochasticity of rainfall. Because the stochastic processes of micro- and traditional irrigation applications differ in nature, two different methodologies have been used. Micro-irrigation application has been modeled through a dichotomic process. The Chapman-Kolmogorov equation for the time integral of the dichotomic process under transient conditions has been solved to derive analytical expressions for the probability density function of the seasonal irrigation requirement. For traditional irrigation, irrigation application during the growing season has been modeled using a marked point process. Using renewal theory, the probability mass function of the seasonal irrigation requirement, a discrete-valued quantity, has been analytically derived. The methodology deals with estimation of statistical properties of the total water requirement in a growing season that

  15. Estimating resource costs of compliance with EU WFD ecological status requirements at the river basin scale

    NASA Astrophysics Data System (ADS)

    Riegels, Niels; Jensen, Roar; Bensasson, Lisa; Banou, Stella; Møller, Flemming; Bauer-Gottwein, Peter

    2011-01-01

    Resource costs of meeting EU WFD ecological status requirements at the river basin scale are estimated by comparing net benefits of water use given ecological status constraints to baseline water use values. Resource costs are interpreted as opportunity costs of water use arising from water scarcity. An optimization approach is used to identify economically efficient ways to meet WFD requirements. The approach is implemented using a river basin simulation model coupled to an economic post-processor; the simulation model and post-processor are run from a central controller that iterates until an allocation is found that maximizes net benefits given WFD requirements. Water use values are estimated for urban/domestic, agricultural, industrial, livestock, and tourism water users. Ecological status is estimated using metrics that relate average monthly river flow volumes to the natural hydrologic regime. Ecological status is only estimated with respect to hydrologic regime; other indicators are ignored in this analysis. The decision variable in the optimization is the price of water, which is used to vary demands using consumer and producer water demand functions. The price-based optimization approach minimizes the number of decision variables in the optimization problem and provides guidance for pricing policies that meet WFD objectives. Results from a real-world application in northern Greece show the suitability of the approach for use in complex, water-stressed basins. The impact of uncertain input values on model outcomes is estimated using the Info-Gap decision analysis framework.

  16. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In the context of multicore jobs in particular, this presents a major problem, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it is also necessary to study how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed for history-based prediction. The aim is to learn over time how job runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
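
    A minimal sketch of history-based requirement prediction of the kind described above might look like the following, with scikit-learn standing in for whatever learner the study actually used; the feature names, values, and the 20% safety margin are hypothetical.

    ```python
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical history of finished jobs: features describing the production
    # (number of events, software version, conditions) plus the observed outcome.
    history = pd.DataFrame({
        "n_events":    [1000, 5000, 2000, 8000, 1500],
        "app_version": [42, 42, 43, 43, 44],          # encoded software version
        "sim_type":    [0, 1, 0, 1, 0],               # encoded experiment conditions
        "cpu_seconds": [3.2e3, 1.6e4, 7.1e3, 2.9e4, 5.0e3],
    })

    X = history[["n_events", "app_version", "sim_type"]]
    y = history["cpu_seconds"]
    model = GradientBoostingRegressor().fit(X, y)

    # Requested CPU time for a new job = predicted runtime plus a safety margin,
    # so the WMS can size single- and multicore slots more realistically.
    new_job = pd.DataFrame({"n_events": [4000], "app_version": [44], "sim_type": [1]})
    requested_cpu = 1.2 * model.predict(new_job)[0]
    ```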

  17. SU-E-T-364: Estimating the Minimum Number of Patients Required to Estimate the Required Planning Target Volume Margins for Prostate Glands

    SciTech Connect

    Bakhtiari, M; Schmitt, J; Sarfaraz, M; Osik, C

    2015-06-15

    Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was performed using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean of the group-average margins converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
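
    The record applies the Van Herk margin recipe, commonly written as M = 2.5*Sigma + 0.7*sigma, where Sigma is the standard deviation of per-patient mean setup errors (systematic component) and sigma is the root-mean-square of per-patient standard deviations (random component). The sketch below computes this per axis from daily shift data and repeats it over groups of a given size, mirroring the convergence study; the data layout and function names are assumptions.

    ```python
    import numpy as np

    def van_herk_margin(shifts_by_patient):
        """PTV margin M = 2.5*Sigma + 0.7*sigma for one axis. `shifts_by_patient`
        is a list of arrays of daily CBCT-derived shifts (cm), one per patient."""
        means = np.array([np.mean(s) for s in shifts_by_patient])
        sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
        Sigma = np.std(means, ddof=1)          # systematic component
        sigma = np.sqrt(np.mean(sds**2))       # random component (RMS of SDs)
        return 2.5 * Sigma + 0.7 * sigma

    def margin_spread_by_group_size(shifts_by_patient, group_size, rng=None):
        """Recompute the margin over random groups of a given size to see how the
        group-to-group spread shrinks as more patients are included."""
        rng = rng or np.random.default_rng(0)
        patients = list(shifts_by_patient)
        rng.shuffle(patients)
        groups = [patients[i:i + group_size]
                  for i in range(0, len(patients) - group_size + 1, group_size)]
        margins = [van_herk_margin(g) for g in groups]
        return np.mean(margins), np.std(margins, ddof=1)
    ```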

  18. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    SciTech Connect

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3 peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average doses obtained with the red and green pixel color calibration curves were each within 10% of the planar average dose estimated by applying the Dw method to the film dose values at the bore locations. Additionally, the average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: Calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose at the central plane of a C-arm CBCT non-360 rotation. Research Grant, Siemens AG.
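
    The Dw estimate referred to above is the CTDIw-style weighting of the central and peripheral measurements; a one-line arithmetic sketch (with illustrative dose values) is:

    ```python
    def weighted_planar_dose(central_mGy, peripheral_mGy):
        """Dw = (1/3) * central + (2/3) * mean(peripheral), the CTDIw-style
        weighting applied here to C-arm CBCT dose-bore measurements."""
        return central_mGy / 3.0 + 2.0 * sum(peripheral_mGy) / (3.0 * len(peripheral_mGy))

    # e.g. weighted_planar_dose(4.1, [6.0, 5.7, 6.3, 5.9])  # values are illustrative
    ```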

  19. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5-billion-year-old rocks, and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the time intervals between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the interval between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on Earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10^34 and 2 x 10^35 were able to vaporize the oceans, then using the most probable impact flux, the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life may have originated more than once.

  20. Estimation of L-threonine requirements for Longyan laying ducks.

    PubMed

    Fouad, A M; Zhang, H X; Chen, W; Xia, W G; Ruan, D; Wang, S; Zheng, C T

    2017-02-01

    A study was conducted to test six threonine (Thr) levels (0.39%, 0.44%, 0.49%, 0.54%, 0.59%, and 0.64%) to estimate the optimal dietary Thr requirements for Longyan laying ducks from 17 to 45 wk of age. Nine hundred Longyan ducks aged 17 wk were assigned randomly to the six dietary treatments, where each treatment comprised six replicate pens with 25 ducks per pen. Increasing the Thr level enhanced egg production, egg weight, egg mass, and the feed conversion ratio (FCR) (linearly or quadratically; p<0.05). The Haugh unit score, yolk color, albumen height, and the weight, percentage, thickness, and breaking strength of the eggshell did not respond to increases in the Thr levels, but the albumen weight and its proportion increased significantly (p<0.05), whereas the yolk weight and its proportion decreased significantly as the Thr levels increased. According to a regression model, the optimal Thr requirement for egg production, egg mass, and FCR in Longyan ducks is 0.57%, while 0.58% is the optimal level for egg weight from 17 to 45 wk of age.

  1. Estimation of L-threonine requirements for Longyan laying ducks

    PubMed Central

    Fouad, A. M.; Zhang, H. X.; Chen, W.; Xia, W. G.; Ruan, D.; Wang, S.; Zheng, C. T.

    2017-01-01

    Objective A study was conducted to test six threonine (Thr) levels (0.39%, 0.44%, 0.49%, 0.54%, 0.59%, and 0.64%) to estimate the optimal dietary Thr requirements for Longyan laying ducks from 17 to 45 wk of age. Methods Nine hundred Longyan ducks aged 17 wk were assigned randomly to the six dietary treatments, where each treatment comprised six replicate pens with 25 ducks per pen. Results Increasing the Thr level enhanced egg production, egg weight, egg mass, and the feed conversion ratio (FCR) (linearly or quadratically; p<0.05). The Haugh unit score, yolk color, albumen height, and the weight, percentage, thickness, and breaking strength of the eggshell did not respond to increases in the Thr levels, but the albumen weight and its proportion increased significantly (p<0.05), whereas the yolk weight and its proportion decreased significantly as the Thr levels increased. Conclusion According to a regression model, the optimal Thr requirement for egg production, egg mass, and FCR in Longyan ducks is 0.57%, while 0.58% is the optimal level for egg weight from 17 to 45 wk of age. PMID:27282968

  2. A Comparison of Several Techniques For Estimating The Average Volume Per Acre For Multipanel Data With Missing Panels

    Treesearch

    Dave Gartner; Gregory A. Reams

    2001-01-01

    As Forest Inventory and Analysis changes from a periodic survey to a multipanel annual survey, a transition will occur where only some of the panels have been resurveyed. Several estimation techniques use data from the periodic survey in addition to the data from the partially completed multipanel data. These estimation techniques were compared using data from two...

  3. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy.
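
    A minimal sketch of the filtering step described above, assuming the brachial waveform has already been cuff-calibrated and that the central systolic estimate is read off as the peak of the smoothed wave; the N/6 denominator comes from the record, while everything else (per-record rather than per-beat processing, function names) is an assumption.

    ```python
    import numpy as np

    def npma_central_sbp(brachial_wave_mmHg, fs_hz, denominator=6):
        """Smooth a calibrated brachial pressure waveform with an N-point moving
        average of window N = fs/denominator (N/6 per the record) and take the
        peak of the smoothed wave as the central aortic SBP estimate."""
        n = max(1, int(round(fs_hz / denominator)))
        kernel = np.ones(n) / n
        smoothed = np.convolve(brachial_wave_mmHg, kernel, mode="same")
        return smoothed.max()
    ```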

  4. Estimating equilibrium ensemble averages using multiple time slices from driven nonequilibrium processes: theory and application to free energies, moments, and thermodynamic length in single-molecule pulling experiments.

    PubMed

    Minh, David D L; Chodera, John D

    2011-01-14

    Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.

  5. Technical Methods Report: Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs. NCEE 2009-4040

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley

    2009-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…

  6. Areal-Averaged Spectral Surface Albedo in an Atlantic Coastal Area: Estimation from Ground-Based Transmission

    DOE PAGES

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; ...

    2017-07-12

    Tower-based data combined with high-resolution satellite products have been used to produce surface albedo at various spatial scales over land. Because tower-based albedo data are available at only a few sites, surface albedos using these combined data are spatially limited. Moreover, tower-based albedo data are not representative of highly heterogeneous regions. To produce areal-averaged and spectrally-resolved surface albedo for regions with various degrees of surface heterogeneity, we have developed a transmission-based retrieval and demonstrated its feasibility for relatively homogeneous land surfaces. Here we demonstrate its feasibility for a highly heterogeneous coastal region. We use the atmospheric transmission measured during a 19-month period (June 2009 – December 2010) by a ground-based Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (0.415, 0.5, 0.615, 0.673 and 0.87 µm) at the Department of Energy’s Atmospheric Radiation Measurement (ARM) Mobile Facility (AMF) site located on Graciosa Island. We compare the MFRSR-retrieved areal-averaged surface albedo with albedo derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations, and also a composite-based albedo. Lastly, we demonstrate that these three methods produce similar spectral signatures of surface albedo; however, the MFRSR-retrieved albedo is higher on average (≤0.04) than the MODIS-based areal-averaged surface albedo, and the largest difference occurs in winter.

  7. Accelerated multiple-pass moving average: a novel algorithm for baseline estimation in CE and its application to baseline correction on real-time bases.

    PubMed

    Solis, Alejandro; Rex, Mathew; Campiglia, Andres D; Sojo, Pedro

    2007-04-01

    We present a novel algorithm for baseline estimation in CE. The new algorithm, which we have named accelerated multiple-pass moving average (AMPMA), is combined with three preexisting low-pass filters (spike removal, moving average, and a multi-pass moving average filter) to achieve real-time baseline correction with commercial instrumentation. The successful performance of AMPMA is demonstrated with simulated and experimental data. A straightforward comparison of experimental data clearly shows the improvement AMPMA provides to the linear fitting, LOD, and accuracy (absolute error) of CE analysis.
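
    The building blocks named above (spike removal, moving average, multi-pass moving average) can be sketched as an iterative clip-and-smooth baseline estimator. This illustrates the general multi-pass idea rather than the AMPMA acceleration itself, and the window size, number of passes, and median-filter spike removal are assumptions.

    ```python
    import numpy as np
    from scipy.signal import medfilt

    def moving_average(x, window):
        return np.convolve(x, np.ones(window) / window, mode="same")

    def multipass_baseline(x, window=51, n_passes=20):
        """Iteratively smooth and clip the signal toward its running estimate so
        that peaks are progressively suppressed and only the baseline remains."""
        baseline = medfilt(x, kernel_size=5)            # spike removal first
        for _ in range(n_passes):
            smoothed = moving_average(baseline, window)
            baseline = np.minimum(baseline, smoothed)   # clip peaks toward baseline
        return moving_average(baseline, window)

    # corrected = electropherogram - multipass_baseline(electropherogram)
    ```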

  8. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the estimating methods and rationale used in developing cost estimates and budgets; (v) Provide for... management systems; and (4) Is subject to applicable financial control systems. Estimating system means the Contractor's policies, procedures, and practices for budgeting and planning controls, and...

  9. Derivation and validation of a prediction rule for estimating advanced colorectal neoplasm risk in average-risk Chinese.

    PubMed

    Cai, Quan-Cai; Yu, En-Da; Xiao, Yi; Bai, Wen-Yuan; Chen, Xing; He, Li-Ping; Yang, Yu-Xiu; Zhou, Ping-Hong; Jiang, Xue-Liang; Xu, Hui-Min; Fan, Hong; Ge, Zhi-Zheng; Lv, Nong-Hua; Huang, Zhi-Gang; Li, You-Ming; Ma, Shu-Ren; Chen, Jie; Li, Yan-Qing; Xu, Jian-Ming; Xiang, Ping; Yang, Li; Lin, Fu-Lin; Li, Zhao-Shen

    2012-03-15

    No prediction rule is currently available for advanced colorectal neoplasms, defined as invasive cancer, an adenoma of 10 mm or more, a villous adenoma, or an adenoma with high-grade dysplasia, in average-risk Chinese. In this study between 2006 and 2008, a total of 7,541 average-risk Chinese persons aged 40 years or older who had complete colonoscopy were included. The derivation and validation cohorts consisted of 5,229 and 2,312 persons, respectively. A prediction rule was developed from a logistic regression model and then internally and externally validated. The prediction rule comprised 8 variables (age, sex, smoking, diabetes mellitus, green vegetables, pickled food, fried food, and white meat), with scores ranging from 0 to 14. Among the participants with low-risk (≤3) or high-risk (>3) scores in the validation cohort, the risks of advanced neoplasms were 2.6% and 10.0% (P < 0.001), respectively. If colonoscopy was used only for persons with high risk, 80.3% of persons with advanced neoplasms would be detected while the number of colonoscopies would be reduced by 49.2%. The prediction rule had good discrimination (area under the receiver operating characteristic curve = 0.74, 95% confidence interval: 0.70, 0.78) and calibration (P = 0.77) and, thus, provides accurate risk stratification for advanced neoplasms in average-risk Chinese.

  10. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  11. In vitro estimation of mean sound speed based on minimum average phase variance in medical ultrasound imaging.

    PubMed

    Yoon, Changhan; Lee, Yuhwa; Chang, Jin Ho; Song, Tai-Kyong; Yoo, Yangmo

    2011-10-01

    Effective receive beamforming in medical ultrasound imaging is important for enhancing spatial and contrast resolution. In current ultrasound receive beamforming, a constant sound speed (e.g., 1540 m/s) is assumed. However, the variations of sound speed in soft tissues could introduce phase distortions, leading to degradation in spatial and contrast resolution. This degradation becomes even more severe in imaging fatty tissues (e.g., breast) and with obese patients. In this paper, a mean sound speed estimation method is presented that evaluates the phase variance of radio-frequency channel data in the region of interest to improve spatial and contrast resolution. The proposed estimation method was validated by Field II simulations and tissue-mimicking phantom experiments. In the simulation, the sound speed of the medium was set to 1450 m/s and the proposed method was capable of capturing this value correctly. From the phantom experiments, the -18-dB lateral resolution of the point target at 50 mm obtained with the estimated mean sound speed was improved by a factor of 1.3, i.e., from 3.9 mm to 2.9 mm. The proposed estimation method also provides an improvement of 0.4 in the contrast-to-noise ratio, i.e., from 2.4 to 2.8. These results indicate that the proposed mean sound speed estimation method could enhance the spatial and contrast resolution in medical ultrasound imaging systems. Copyright © 2011 Elsevier B.V. All rights reserved.
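    A highly simplified sketch of the minimum-phase-variance idea is given below (Python); it assumes one-way delays, a single focal point, and hypothetical array and variable names, whereas a real beamformer would use two-way delays and average over a region of interest.

```python
import numpy as np
from scipy.signal import hilbert

def phase_spread(rf, fs, elem_x, focus, c):
    """Delay the analytic RF channel data (n_elements x n_samples) to one focal
    point assuming sound speed c, and return a circular measure of phase spread
    across channels (0 means the channels are perfectly in phase)."""
    xf, zf = focus
    analytic = hilbert(rf, axis=1)
    phasors = []
    for i, xe in enumerate(elem_x):
        delay = np.hypot(xf - xe, zf) / c               # one-way delay, illustrative
        idx = int(round(delay * fs))
        if idx < rf.shape[1]:
            phasors.append(np.exp(1j * np.angle(analytic[i, idx])))
    return 1.0 - np.abs(np.mean(phasors))

def estimate_mean_sound_speed(rf, fs, elem_x, focus):
    """Grid-search the candidate speed that minimizes the phase spread."""
    candidates = np.arange(1400.0, 1601.0, 5.0)
    spreads = [phase_spread(rf, fs, elem_x, focus, c) for c in candidates]
    return candidates[int(np.argmin(spreads))]
```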

  12. FFT averaging of multichannel BCG signals from bed mattress sensor to improve estimation of heart beat interval.

    PubMed

    Kortelainen, Juha M; Virkkala, Jussi

    2007-01-01

    A multichannel pressure-sensing Emfit foil was integrated into a bed mattress for measuring ballistocardiograph signals during sleep. We calculated the heart beat interval with the cepstrum method, applying the FFT to short time windows containing a pair of consecutive heart beats. We decreased the variance of the FFT by averaging the multichannel data in the frequency domain. The relative error of our method with respect to the electrocardiograph RR interval was only 0.35% for 15 night recordings with six normal subjects, when 12% of the data was automatically removed due to movement artifacts. Background motivation for this work comes from studies applying heart rate variability to sleep staging.
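    The following sketch (Python, with illustrative names and physiological limits) shows the core of the described approach: average the channel power spectra, take the cepstrum, and read the beat interval from the dominant cepstral peak.

```python
import numpy as np

def beat_interval_cepstrum(window_channels, fs, min_bpm=40, max_bpm=120):
    """Estimate the heart-beat interval from a short multichannel BCG window
    (n_channels x n_samples): average the channel power spectra, take the real
    cepstrum, and pick the dominant peak within a physiological lag range."""
    spectra = np.abs(np.fft.rfft(window_channels, axis=1)) ** 2
    avg_spectrum = spectra.mean(axis=0)                 # frequency-domain averaging
    cepstrum = np.fft.irfft(np.log(avg_spectrum + 1e-12))
    lag_min = int(fs * 60.0 / max_bpm)
    lag_max = min(int(fs * 60.0 / min_bpm), len(cepstrum) - 1)
    peak = lag_min + int(np.argmax(cepstrum[lag_min:lag_max]))
    return peak / fs                                    # interval in seconds
```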

  13. Towards the estimation of reach-averaged discharge from SWOT data using a Manning's equation derived algorithm. Application to the Garonne River between Tonneins-La Reole

    NASA Astrophysics Data System (ADS)

    Berthon, Lucie; Biancamaria, Sylvain; Goutal, Nicole; Ricci, Sophie; Durand, Michael

    2014-05-01

    The future NASA-CNES-CSA Surface Water and Ocean Topography (SWOT) satellite mission will be launched in 2020 and will deliver maps of water surface elevation, slope and extent with an unprecedented resolution of 100 m. A river discharge algorithm based on Manning's equation was proposed by Durand et al. (2013) to estimate reach-averaged discharge from SWOT data. In the present study, this algorithm was applied to a 50-km reach of the Garonne River between Tonneins and La Reole, with an average slope of 2.8 m per 10,000 m and an average width of 180 m. The dynamics of this reach are satisfactorily represented by the 1D model MASCARET and validated against in-situ water level observations at Marmande. The choice of Manning's equation rests on the major assumptions of steady flow and uniform conditions. Here, we aim to highlight the limits of validity of these assumptions for the Garonne River during a typical flood event in order to assess the applicability of the reach-averaged discharge algorithm. Manning-estimated and MASCARET discharges are compared for unsteady and steady flow and for different reach-averaging lengths (100 m to 10 km). The Manning equation increasingly overestimates the MASCARET discharge as the reach-averaging length increases, and the overestimate is shown to be due to the effect of the sub-reach parameter covariances. To further explain these results, the comparison was also carried out for a simplified case study with a parametric bathymetry described by either a flat bottom, a constant slope, or local slope variations.
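    For reference, a minimal Manning-equation discharge calculation for a wide rectangular channel is sketched below in Python; the depth and roughness values in the example comment are assumptions, not values from the study, and this is not the Durand et al. (2013) algorithm itself.

```python
def manning_discharge(width_m, depth_m, slope, n=0.03):
    """Reach-averaged discharge (m3/s) from Manning's equation for a wide
    rectangular channel: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = width_m * depth_m
    hydraulic_radius = area / (width_m + 2.0 * depth_m)
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative numbers only (depth and roughness are assumed, not from the study):
# manning_discharge(width_m=180.0, depth_m=4.0, slope=2.8e-4)
```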

  14. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (d) of this clause, and provides for a system that— (1) Is maintained, reliable, and consistently... provide for the use of appropriate source data, utilize sound estimating techniques and good judgment...) Identify and document the sources of data and the estimating methods and rationale used in developing cost...

  15. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York and covered the period of 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries where previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
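    A minimal sketch of the bookkeeping described above is given below in Python (function and argument names are illustrative): base flow from hydrograph separation serves as a recharge proxy, the base-flow index is base flow divided by streamflow, and evapotranspiration is estimated as precipitation minus streamflow.

```python
def annual_water_budget(precip_mm, streamflow_mm, base_flow_mm):
    """Water-budget bookkeeping for one gaged basin and one calendar year:
    base flow (from hydrograph separation) is a proxy for recharge, the
    base-flow index is base flow / streamflow, and evapotranspiration is
    estimated as precipitation minus streamflow."""
    return {
        "base_flow_index": base_flow_mm / streamflow_mm,
        "evapotranspiration_mm": precip_mm - streamflow_mm,
        "runoff_mm": streamflow_mm - base_flow_mm,
    }
```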

  16. How Well Can We Estimate Areal-Averaged Spectral Surface Albedo from Ground-Based Transmission in an Atlantic Coastal Area?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.

    2015-10-15

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  17. How well can we estimate areal-averaged spectral surface albedo from ground-based transmission in the Atlantic coastal area?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina

    2015-10-01

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) whitesky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  18. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis.

    PubMed

    Kamal, Izdihar; Chelliah, Kanaga K; Mustafa, Nawal

    2015-05-01

    The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. The lowest AGD for the 20/80 phantom with auto-time was 2.28 milliGray (mGy) for two-dimensional (2D) images and 2.48 mGy for three-dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. The AGD values for both phantoms were lower at a high kilovolt peak, and the use of auto-filter mode was more practical for quick acquisition while limiting the probability of operator error.

  19. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis

    PubMed Central

    Kamal, Izdihar; Chelliah, Kanaga K.; Mustafa, Nawal

    2015-01-01

    Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milliGray (mGy) for two-dimensional (2D) images and 2.48 mGy for three-dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower at a high kilovolt peak, and the use of auto-filter mode was more practical for quick acquisition while limiting the probability of operator error. PMID:26052465

  20. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps. Copyright © 2015 Elsevier Ltd. All rights reserved.
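    As a sketch of the quantity being estimated, the snippet below (Python, with hypothetical mask names) computes surface UHI intensity as the difference between the mean LST inside an urban extent and in its surrounding non-urban buffer; using an outdated urban mask changes both means and hence the estimate.

```python
import numpy as np

def uhi_intensity(lst, urban_mask, buffer_mask):
    """Surface UHI intensity: mean land surface temperature inside the urban
    extent minus the mean over its surrounding non-urban buffer. Both masks
    should come from an urban extent map concurrent with the LST data."""
    return np.nanmean(lst[urban_mask]) - np.nanmean(lst[buffer_mask])
```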

  1. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation.

    PubMed

    Lee, Eugenia E; Stewart, Barclay; Zha, Yuanting A; Groen, Thomas A; Burkle, Frederick M; Kushner, Adam L

    2016-08-10

    Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research of the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People's Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical care needs are particularly important for countries least equipped to meet surgical care demands given critical human and physical resource deficiencies.
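    The headline figure follows directly from the stated benchmark, as the short check below shows (Python).

```python
people_affected_per_year = 140_000_000   # average annually affected, 2004-2014
procedures_per_100k = 5_000              # minimum benchmark used in the study
procedures_needed = people_affected_per_year * procedures_per_100k / 100_000
print(procedures_needed)                 # 7,000,000 -> the 7.0 million quoted above
```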

  2. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation

    PubMed Central

    Lee, Eugenia E.; Stewart, Barclay; Zha, Yuanting A.; Groen, Thomas A.; Burkle, Frederick M.; Kushner, Adam L.

    2016-01-01

    Background: Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. Methods: The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research of the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Results: Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People’s Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. Conclusion: As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical care needs are particularly important for countries least equipped to meet surgical care demands given critical human and physical resource deficiencies. PMID:27617165

  3. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.

  4. Estimating the Average Diameter of a Population of Spheres from Observed Diameters of Random Two-Dimensional Sections

    NASA Technical Reports Server (NTRS)

    Kong, Maiying; Bhattacharya, Rabi N.; James, Christina; Basu, Abhijit

    2003-01-01

    Size distributions of chondrules, volcanic fire-fountain or impact glass spherules, or of immiscible globules in silicate melts (e.g., in basaltic mesostasis, agglutinitic glass, impact melt sheets) are imperfectly known because the spherical objects are usually so strongly embedded in the bulk samples that they are nearly impossible to separate. Hence, measurements are confined to two-dimensional sections, e.g. polished thin sections that are commonly examined under reflected light optical or backscattered electron microscopy. Three kinds of approaches exist in the geologic literature for estimating the mean real diameter of a population of 3D spheres from 2D observations: (1) a stereological approach with complicated calculations; (2) an empirical approach in which independent 3D size measurements of a population of spheres separated from their parent sample and their 2D cross sectional diameters in thin sections have produced an array of somewhat contested conversion equations; and (3) measuring pairs of 2D diameters of upper and lower surfaces of cross sections of each sphere in thin sections using transmitted light microscopy. We describe an entirely probabilistic approach and propose a simple factor of 4/π (approximately equal to 1.27) to convert the 2D mean size to 3D mean size.
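    In code, the proposed conversion is a one-liner (Python; the function name is illustrative):

```python
import numpy as np

def mean_3d_diameter_from_sections(section_diameters):
    """For spheres cut by random planes, the mean 2-D section diameter
    underestimates the mean true diameter by a factor of pi/4, so the mean 3-D
    diameter is recovered by multiplying by 4/pi (about 1.27)."""
    return (4.0 / np.pi) * np.mean(section_diameters)
```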

  5. Statistical Methods of Estimating Average Rainfall over Large Space-Timescales Using Data from the TRMM Precipitation Radar.

    NASA Astrophysics Data System (ADS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Kwiatkowski, J.

    2001-03-01

    Data from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar represent the first global rain-rate dataset acquired by a spaceborne weather radar. Because the radar operates at an attenuating wavelength, one of the principal issues concerns the accuracy of the attenuation correction algorithms. One way to test these algorithms is by means of a statistical method in which the probability distribution of rain rates at the high end is inferred by measurements at the low to intermediate range and by the assumption that the rain rates are lognormally distributed. Investigation of this method and the area-time integral methods using a global dataset provides an indication of how well methods of this kind can be expected to perform over different space-timescales and climatological regions using the sparsely sampled TRMM radar data. Identification of statistical relationships among the rain parameters and an understanding of the rain-rate distribution as a function of time and space may help to test the validity of the high-resolution rain-rate estimates.

  6. Estimated daily average per capita water ingestion by child and adult age categories based on USDA's 1994-1996 and 1998 continuing survey of food intakes by individuals.

    PubMed

    Kahn, Henry D; Stralka, Kathleen

    2009-05-01

    Water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for a number of age range groups. The age ranges, based on guidance from the US EPA's Risk Assessment Forum, are narrow for younger ages when development is rapid and wider for older ages when the rate of development decreases. Estimates are based on data from the United States Department of Agriculture's (USDA's) 1994-1996 and 1998 Continuing Survey of Food Intake by Individuals (CSFII). Water ingestion estimates include water ingested directly as a beverage and water added to foods and beverages during preparation at home or in local establishments. Water occurring naturally in foods or added by manufacturers to commercial products (beverage or food) is not included. Estimates are reported in milliliters (ml/person/day) and milliliters per kilogram of body weight (ml/kg/day). As a by-product of constructing estimates in terms of body weight of respondents, distributions of self-reported body weights based on the CSFII were estimated and are also reported here.

  7. Using cone-beam CT projection images to estimate the average and complete trajectory of a fiducial marker moving with respiration

    NASA Astrophysics Data System (ADS)

    Becker, N.; Smith, W. L.; Quirk, S.; Kay, I.

    2010-12-01

    Stereotactic body radiotherapy of lung cancer often makes use of a static cone-beam CT (CBCT) image to localize a tumor that moves during the respiratory cycle. In this work, we developed an algorithm to estimate the average and complete trajectory of an implanted fiducial marker from the raw CBCT projection data. After labeling the CBCT projection images based on the breathing phase of the fiducial marker, the average trajectory was determined by backprojecting the fiducial position from images of similar phase. To approximate the complete trajectory, a 3D fiducial position is estimated from its position in each CBCT projection image as the point on the source-image ray closest to the average position at the same phase. The algorithm was tested with computer simulations as well as phantom experiments using a gold seed implanted in a programmable phantom capable of variable motion. Simulation testing was done on 120 realistic breathing patterns, half of which contained hysteresis. The average trajectory was reconstructed with an average root mean square (rms) error of less than 0.1 mm in all three directions, and a maximum error of 0.5 mm. The complete trajectory reconstruction had a mean rms error of less than 0.2 mm, with a maximum error of 4.07 mm. The phantom study was conducted using five different respiratory patterns with amplitudes of 1.3 and 2.6 cm programmed into the motion phantom. These complete trajectories were reconstructed with an average rms error of 0.4 mm. There is motion information present in the raw CBCT dataset that can be exploited with the use of an implanted fiducial marker to sub-millimeter accuracy. This algorithm could ultimately supply the internal motion of a lung tumor at the treatment unit from the same dataset currently used for patient setup.
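    The geometric step used for the complete trajectory (finding the point on a source-to-image ray closest to the phase-matched average position) can be sketched as follows in Python; names and conventions are illustrative.

```python
import numpy as np

def closest_point_on_ray(source, image_point, target):
    """Point on the ray from the X-ray source through the detected fiducial
    location (image_point) that lies closest to a reference 3-D point (here the
    phase-matched average position). All arguments are length-3 arrays."""
    source = np.asarray(source, float)
    direction = np.asarray(image_point, float) - source
    direction /= np.linalg.norm(direction)
    t = np.dot(np.asarray(target, float) - source, direction)
    return source + t * direction
```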

  8. Application of random regression model to estimate genetic parameters for average daily gains in Lori-Bakhtiari sheep breed of Iran.

    PubMed

    Farhangfar, H; Naeemipour, H; Zinvand, B

    2007-07-15

    A random regression model was applied to estimate (co)variances, heritabilities and additive genetic correlations among average daily gains. The data comprised a total of 10,876 records belonging to 1828 lambs (progenies of 123 sires and 743 dams) born between 1995 and 2001 in a single large flock of the Lori-Bakhtiari sheep breed in Iran. In the model, fixed environmental effects of year-season of birth, sex, birth type, age of dam and random effects of direct and maternal additive genetic and permanent environment were included. Orthogonal polynomial regression (on the Legendre scale) of third order (cubic) was utilized to model the genetic and permanent environmental (co)variance structure throughout the growth trajectory. Direct and maternal heritability estimates of average daily gains ranged from 0.011 to 0.131 and 0.008 to 0.181, respectively; pre-weaning average daily gain (0-3 months) had the lowest direct and highest maternal heritability estimates among the age groups. The highest and lowest positive direct additive genetic correlations were found to be 0.993 and 0.118, between ADG (0-9) and ADG (0-12) and between ADG (0-3) and ADG (0-12), respectively. The direct additive genetic correlations between adjacent age groups were closer than those between remote age groups.

  9. 31 CFR 205.23 - What requirements apply to estimates?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE RULES AND PROCEDURES FOR... the State's immediate cash needs: (a) The State must ensure that the estimate reasonably represents the flow of Federal funds under the Federal assistance program or program component to which the...

  10. Space Station core resupply and return requirements estimation

    NASA Technical Reports Server (NTRS)

    Wissinger, D. B.

    1988-01-01

    A modular methodology has been developed to model both NASA Space Station onboard resupply/return requirements and Space Shuttle delivery/return capabilities. This approach divides nonpayload Space Station logistics into seven independent categories, each of which is a function of several rates multiplied by user-definable onboard usage scenarios and Shuttle resupply profiles. The categories are summed to arrive at an overall resupply or return requirement. Unused Shuttle resupply and return capacities are also evaluated. The method allows an engineer to evaluate the transportation requirements for a candidate Space Station operational scenario.

  11. Areal Average Albedo (AREALAVEALB)

    DOE Data Explorer

    Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni

    2008-01-01

    The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo.

  12. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches are considered: a) rate based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) rate calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models perform similarly well; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.
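    As a concrete illustration of the AIC-based model averaging mentioned above, the sketch below computes Akaike weights in Python (names are illustrative; this is not the specific tooling described in the report).

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights for model averaging: models with lower AIC receive larger
    weight, and the weights sum to one."""
    aic = np.asarray(aic_values, float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Averaged rate prediction from several candidate models (illustrative names):
# rate_avg = np.sum(akaike_weights([aic1, aic2, aic3]) * np.array([r1, r2, r3]))
```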

  13. A Study on Estimation of Average Power Output Fluctuation of Clustered Photovoltaic Power Generation Systems in Urban District of a Few km2

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Suzuoki, Yasuo

    The fluctuation of the total power output of clustered PV systems would be smaller than that of a single PV system because of the time difference in the power output fluctuation among PV systems at different locations. This effect, the so-called smoothing effect, must be taken into account properly when the impact of clustered PV systems on the electric power system is assessed. If the average power output of clustered PV systems can be estimated from the power output of a single PV system, it is very useful for the impact assessment. In this study, we propose a simple method to estimate the total power output fluctuation of clustered PV systems. In the proposed method, the smoothing effect is assumed to arise from two factors, i.e. the time difference as overhead clouds pass among PV systems and the random change in the size and/or shape of clouds. The first is formulated as a low-pass filter, assuming that the output fluctuation propagates in the wind direction at a constant speed. The second is taken into account by using Fourier transform surrogate data. The parameters in the proposed method were selected so that the estimated fluctuation matches the ensemble-average fluctuation of data observed at 5 points used as a training data set. Then, using the selected parameters, the fluctuation property was estimated for other data sets. The results show that the proposed method is useful for estimating the total power output fluctuation of clustered PV systems.
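    The Fourier transform surrogate component can be sketched as follows in Python (illustrative only; the study's filter parameters and wind-advection model are not reproduced): phases are randomized while the amplitude spectrum, and hence the power spectrum of the fluctuation, is preserved.

```python
import numpy as np

def fourier_surrogate(x, rng=None):
    """Phase-randomized (Fourier transform) surrogate of a fluctuation series:
    the amplitude spectrum is kept, the phases are scrambled, so the surrogate
    shares the power spectrum of the original series."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, float)
    spectrum = np.fft.rfft(x)
    factors = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, spectrum.size))
    factors[0] = 1.0                       # keep the mean (DC) component real
    if x.size % 2 == 0:
        factors[-1] = 1.0                  # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spectrum) * factors, n=x.size)
```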

  14. Comparison of pooled standard deviation and standardized-t bootstrap methods for estimating uncertainty about average methane emission from rice cultivation

    NASA Astrophysics Data System (ADS)

    Kang, Namgoo; Jung, Min-Ho; Jeong, Hyun-Cheol; Lee, Yung-Seop

    2015-06-01

    The general sample standard deviation and Monte Carlo methods are frequently used to construct confidence intervals for uncertainties in greenhouse gas emission estimates, based on the critical assumption that a given data set follows a normal (Gaussian) or otherwise statistically known probability distribution. However, uncertainties estimated using those methods are severely limited in practical applications where it is challenging to assume the probability distribution of a data set or where the real data distribution appears to deviate significantly from known probability distribution models. To address these issues, encountered especially in reasonable estimation of uncertainty about the average of greenhouse gas emissions, we present two statistical methods, the pooled standard deviation method (PSDM) and the standardized-t bootstrap method (STBM), based upon statistical theory. We also report results for the uncertainties about the average of a data set of methane (CH4) emission from rice cultivation under four different irrigation conditions in Korea, measured by gas sampling and subsequent gas analysis. Results from applying the PSDM and the STBM to these rice cultivation methane emission data sets clearly demonstrate that the uncertainties estimated by the PSDM were significantly smaller than those by the STBM. We found that the PSDM should be adopted in cases where the data appear to follow a normal distribution once both spatial and temporal variations are taken into account. The STBM, however, is the more appropriate method in practical situations where the data show a fairly asymmetric distribution that deviates severely from known probability distribution models, so that a distributional form cannot reasonably be assumed or determined.
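    For readers unfamiliar with the STBM, a minimal standardized-t (bootstrap-t) confidence interval for a mean looks roughly like the Python sketch below; the resampling settings are placeholders and the study's exact procedure may differ.

```python
import numpy as np

def bootstrap_t_ci(data, n_boot=2000, alpha=0.05, rng=None):
    """Standardized-t (bootstrap-t) confidence interval for the mean: resample,
    form studentized statistics t* = (mean* - mean) / se*, and invert their
    empirical quantiles. Useful when the data are markedly asymmetric."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data, float)
    n = data.size
    mean = data.mean()
    se = data.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        se_star = resample.std(ddof=1) / np.sqrt(n)
        t_star[b] = (resample.mean() - mean) / max(se_star, 1e-12)
    lo, hi = np.quantile(t_star, [alpha / 2.0, 1.0 - alpha / 2.0])
    return mean - hi * se, mean - lo * se
```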

  15. A Bayesian Model Averaging Approach for Estimating the Relative Risk of Mortality Associated with Heat Waves in 105 U.S. Cities

    PubMed Central

    Bobb, Jennifer F.; Dominici, Francesca; Peng, Roger D.

    2011-01-01

    Summary Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this paper we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987–2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat wave risk estimation is sensitive to model choice. While model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. PMID:21447046

  16. Estimation of the leucine and histidine requirements for piglets fed a low-protein diet.

    PubMed

    Wessels, A G; Kluge, H; Mielenz, N; Corrent, E; Bartelt, J; Stangl, G I

    2016-11-01

    Reduction of the CP content in the diets of piglets requires supplementation with crystalline essential amino acids (AA). Data on the leucine (Leu) and histidine (His) requirements of young pigs fed low-CP diets are limited and have primarily been obtained from nonlinear models. However, these models do not consider the possible decline in appetite and growth that can occur when pigs are fed excessive amounts of AA such as Leu. Therefore, two dose-response studies were conducted to estimate the standardised ileal digestible (SID) Leu : lysine (Lys) and His : Lys required to optimise the growth performance of young pigs. In both studies, the average daily gain (ADG), average daily feed intake (ADFI) and gain-to-feed ratio (G : F) were determined during a 6-week period. To ensure that the diets had sub-limiting Lys levels, a preliminary Lys dose-response study was conducted. In the Leu study, 60 35-day-old piglets of both sexes were randomly assigned to one of five treatments and fed a low-CP diet (15%) with SID Leu : Lys levels of 83%, 94%, 104%, 115% or 125%. The His study used 120 31-day-old piglets of both sexes, which were allotted to one of five treatments and fed a low-CP diet (14%) with SID His : Lys levels of 22%, 26%, 30%, 34% or 38%. Linear broken-line, curvilinear-plateau and quadratic-function models were used for estimations of SID Leu : Lys and SID His : Lys. The minimum SID Leu : Lys level needed to maximise ADG, ADFI and G : F was, on average, 101% based on the linear broken-line and curvilinear-plateau models. Using the quadratic-function model, the minimum SID Leu : Lys level needed to maximise ADG, ADFI and G : F was 108%. Data obtained from the quadratic-function analysis further showed that a ±10% deviation from the identified Leu requirement was accompanied by a small decline in the ADG (-3%). The minimum SID His : Lys level needed to maximise ADG, ADFI and G : F was 27% and 28% using the linear broken-line and curvilinear-plateau models
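    As an illustration of the linear broken-line approach used here, the Python sketch below fits an ascending-then-plateau model with scipy; the example response values in the comment are hypothetical, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, breakpoint):
    """Linear broken-line (ascending then plateau) response model: below the
    breakpoint the response rises linearly, above it the response stays at the
    plateau; the breakpoint is read as the estimated requirement."""
    return plateau + slope * np.minimum(x - breakpoint, 0.0)

# Hypothetical fit of ADG against the SID Leu:Lys ratio (values are illustrative):
# ratios = np.array([83.0, 94.0, 104.0, 115.0, 125.0])
# adg = np.array([410.0, 445.0, 460.0, 458.0, 459.0])
# params, _ = curve_fit(broken_line, ratios, adg, p0=[460.0, 2.0, 100.0])
# requirement = params[2]
```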

  17. Estimated water requirements for the conventional flotation of copper ores

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water used by a conventional copper flotation plant. Water is required for many activities at a mine-mill site, including ore production and beneficiation, dust and fire suppression, drinking and sanitation, and minesite reclamation. The water required to operate a flotation plant may outweigh all of the other uses of water at a mine site, however, and the need to maintain a water balance is critical for the plant to operate efficiently. Process water may be irretrievably lost or not immediately available for reuse in the beneficiation plant because it has been used in the production of backfill slurry from tailings to provide underground mine support; because it has been entrapped in the tailings stored in the TSF, evaporated from the TSF, or leaked from pipes and (or) the TSF; and because it has been retained as moisture in the concentrate. Water retained in the interstices of the tailings and the evaporation of water from the surface of the TSF are the two most significant contributors to water loss at a conventional flotation circuit facility.

  18. EURRECA-Estimating zinc requirements for deriving dietary reference values.

    PubMed

    Lowe, Nicola M; Dykes, Fiona C; Skinner, Anna-Louise; Patel, Sujata; Warthon-Medina, Marisol; Decsi, Tamás; Fekete, Katalin; Souverein, Olga W; Dullemeijer, Carla; Cavelaars, Adriënne E; Serra-Majem, Lluis; Nissensohn, Mariela; Bel, Silvia; Moreno, Luis A; Hermoso, Maria; Vollhardt, Christiane; Berti, Cristiana; Cetin, Irene; Gurinovic, Mirjana; Novakovic, Romana; Harvey, Linda J; Collings, Rachel; Hall-Moran, Victoria

    2013-01-01

    Zinc was selected as a priority micronutrient for EURRECA, because there is significant heterogeneity in the Dietary Reference Values (DRVs) across Europe. In addition, the prevalence of inadequate zinc intakes was thought to be high among all population groups worldwide, and the public health concern is considerable. In accordance with the EURRECA consortium principles and protocols, a series of literature reviews were undertaken in order to develop best practice guidelines for assessing dietary zinc intake and zinc status. These were incorporated into subsequent literature search strategies and protocols for studies investigating the relationships between zinc intake, status and health, as well as studies relating to the factorial approach (including bioavailability) for setting dietary recommendations. EMBASE (Ovid), Cochrane Library CENTRAL, and MEDLINE (Ovid) databases were searched for studies published up to February 2010 and collated into a series of Endnote databases that are available for the use of future DRV panels. Meta-analyses of data extracted from these publications were performed where possible in order to address specific questions relating to factors affecting dietary recommendations. This review has highlighted the need for more high quality studies to address gaps in current knowledge, in particular the continued search for a reliable biomarker of zinc status and the influence of genetic polymorphisms on individual dietary requirements. In addition, there is a need to further develop models of the effect of dietary inhibitors of zinc absorption and their impact on population dietary zinc requirements.

  19. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.

  20. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under the...

  1. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under the...

  2. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under the...

  3. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under the...

  4. Average slip rate at the transition zone on the plate interface in the Nankai subduction zone, Japan, estimated from short-term SSE catalog

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Kimura, T.

    2013-12-01

    Short-term slow slip events (S-SSEs) in the Nankai subduction zone, Japan, have been monitored mainly by borehole strainmeters and borehole accelerometers (tiltmeters). The scale of S-SSEs in this region is small (Mw 5-6), so there were two problems in S-SSE identification and fault-model estimation. (1) Few observatories could detect the crustal deformation associated with S-SSEs, so the reliability of the estimated fault model was low. (2) The signal associated with an S-SSE is relatively small, so it was difficult to detect S-SSEs from strainmeter and tiltmeter data alone. The former problem has become resolvable to some extent by integrating the borehole strainmeter, tiltmeter and groundwater (pore pressure) data of the National Institute of Advanced Industrial Science and Technology, the tiltmeter data of the National Research Institute for Earthquake Science and Disaster Prevention, and the borehole strainmeter data of the Japan Meteorological Agency. For the latter, by using the redundant horizontal components of a multi-component strainmeter, which generally consists of four horizontal extensometers, it has become possible to extract tectonic deformation efficiently and detect an S-SSE using strainmeter data alone. Using the integrated data and the newly developed technique, we started to compile a catalog of S-SSEs in the Nankai subduction zone. For example, in central Mie Prefecture, we detected and estimated fault models for eight S-SSEs from January 2010 to September 2012. According to our estimates, the average slip rate of S-SSEs is 2.7 cm/yr. Ishida et al. [2013] estimated the slip rate as 2.6-3.0 cm/yr from deep low-frequency tremors, and this value is consistent with our estimation. Furthermore, the slip deficit rate in this region evaluated from GPS data from 2001 to 2004 is 1.0 - 2.6 cm/yr [Kobayashi et al., 2006], and the convergence rate of the Philippine Sea plate in this region is estimated as 5.0 - 7.0 cm/yr. The difference

  5. Estimation of spatial soil moisture averages in a large gully of the Loess Plateau of China through statistical and modeling solutions

    NASA Astrophysics Data System (ADS)

    Gao, Xiaodong; Wu, Pute; Zhao, Xining; Wang, Jiawen; Shi, Yinguang; Zhang, Baoqing; Tian, Lei; Li, Hongbing

    2013-04-01

    Summary Characterizing root-zone soil moisture patterns in large gullies is challenging as relevant datasets are scarce and difficult to collect. Therefore, we explored several statistical and modeling approaches, mainly focusing on time stability analysis, for estimating spatial soil moisture averages from point observations and precipitation time series, using 3-year root-zone (0-20, 20-40, 40-60 and 60-80 cm) soil moisture datasets for a large gully in the Loess Plateau, China. We also developed a new metric, the root mean square error (RMSE) of estimated mean soil moisture, to identify time-stable locations. The time stability analysis revealed that different time-stable locations were identified at various depths. These locations were shown to be temporally robust, by cross-validation, and more likely to be located in ridges than in pipes or plane surfaces. However, we found that MRD (mean relative difference) operators, used to predict spatial soil moisture averages by applying a constant offset, could not be transferred across root zone layers for most time-stable locations. Random combination analysis revealed that at most four randomly selected locations were needed for accurate estimation of mean soil moisture time series. Finally, a simple empirical model was developed to predict root-zone soil moisture dynamics in large gullies from precipitation time series. The results showed that the model reproduced root-zone soil moisture well in dry seasons, whereas relatively large estimation error was observed during wet seasons. This implies that precipitation observations alone might not be enough to accurately predict root-zone soil moisture dynamics in large gullies, and a time series of the soil moisture loss coefficient should be modeled and included.

  6. Estimation of dietary arginine requirements for Longyan laying ducks.

    PubMed

    Xia, Weiguang; Fouad, Ahmed Mohamed; Chen, Wei; Ruan, Dong; Wang, Shuang; Fan, Qiuli; Wang, Ying; Cui, Yiyan; Zheng, Chuntian

    2017-01-01

    This study aimed to establish the arginine requirements of Longyan ducks from 17 to 31 wk of age based on egg production, egg quality, plasma, and ovarian indices, as well as the expression of vitellogenesis-related genes. In total, 660 Longyan ducks with similar body weight at 15 wk of age were assigned randomly to 5 treatments, each with 6 replicates of 22 birds, and fed a corn-corn gluten meal basal diet (0.66% arginine) supplemented with either 0, 0.20%, 0.40%, 0.60%, or 0.80% arginine. Dietary arginine did not affect egg production by laying ducks, but it increased (linear, P < 0.01) the egg weight at 22 to 31 and 17 to 31 wk of age. Dietary arginine increased the yolk color score (linearly, P < 0.05) and the yolk percentage (quadratic, P < 0.05), where the maximum values were obtained with 1.26% arginine. Dietary arginine affected the total shell percentage and shell thickness, with the highest values using 1.46% arginine (P < 0.01). The weight and number of small yellow follicles (SYFs) increased (quadratic, P < 0.05) with the dietary arginine level and there was a quadratic response (P < 0.05) in terms of the SYFs weight/ovarian weight; the highest values were obtained in ducks fed 1.26% arginine. The plasma arginine concentration exhibited a quadratic (P < 0.05) response to dietary arginine. The plasma progesterone concentration decreased (linear, P < 0.05) as dietary arginine increased. The mRNA abundance of the very low density lipoprotein receptor-b increased in the second large yellow follicle membranes (quadratic, P < 0.05) with the dietary arginine level, where the highest value occurred with 1.26% arginine. According to the regression model, the dietary arginine requirements for Longyan laying ducks aged 17 to 31 wk are 1.06%, 1.13%, 1.22%, and 1.11% to obtain the maximum yolk percentage, SYFs number, SYFs weight, and SYFs weight/ovarian weight, respectively. © 2016 Poultry Science Association Inc.

  7. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Dataset.

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Bell, Thomas L.; Zhang, Yu; Wood, Eric F.

    2003-11-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multiyear radar dataset covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100-, 200-, and 500-km space domains; 1-, 5-, and 30-day rainfall accumulations; and regular sampling time intervals of 1, 3, 6, 8, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and nonparametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  8. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  9. On the joint use of propensity and prognostic scores in estimation of the Average Treatment Effect on the Treated: A simulation study

    PubMed Central

    Leacy, Finbarr P.; Stuart, Elizabeth A.

    2013-01-01

    Summary Propensity and prognostic score methods seek to improve the quality of causal inference in non-randomized or observational studies by replicating the conditions found in a controlled experiment, at least with respect to observed characteristics. Propensity scores model receipt of the treatment of interest; prognostic scores model the potential outcome under a single treatment condition. While the popularity of propensity score methods continues to grow, prognostic score methods and methods combining propensity and prognostic scores have thus far received little attention. To this end, we performed a simulation study that compared subclassification and full matching on a single estimated propensity or prognostic score with three approaches combining estimated propensity and prognostic scores: full matching on a Mahalanobis distance combining the estimated propensity and prognostic scores (FULL-MAHAL); full matching on the estimated prognostic propensity score within propensity score calipers (FULL-PGPPTY); and subclassification on an estimated propensity and prognostic score grid with 5 × 5 subclasses (SUBCLASS(5*5)). We considered settings in which one, both or neither score model was misspecified. The data generating mechanisms varied in the degree of linearity and additivity in the true treatment assignment and outcome models. FULL-MAHAL and FULL-PGPPTY exhibited strong to superior performance in root mean square error terms across all simulation settings and scenarios. Methods combining propensity and prognostic scores were no less robust to model misspecification than single-score methods even when both score models were incorrectly specified. Our findings support the joint use of propensity and prognostic scores in estimation of the average treatment effect on the treated. PMID:24151187
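
    A minimal sketch of the combined-score idea described above (in the spirit of FULL-MAHAL): estimate a propensity score and a prognostic score, then match treated to control units on the Mahalanobis distance in that two-dimensional score space. The simulated data, the 1:1 nearest-neighbour matching (used here instead of full matching or subclassification), and all parameter values are illustrative assumptions, not the study's design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.normal(size=(n, p))                             # observed covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # treatment assignment
y = X @ np.array([1.0, 0.5, 0, 0, 0]) + 2 * treat + rng.normal(size=n)

# Propensity score: P(treated | X); prognostic score: E[Y | X, control].
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
pg = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)

# Mahalanobis distance in the 2-D (propensity, prognostic) space.
S_inv = np.linalg.inv(np.cov(np.column_stack([ps, pg]), rowvar=False))

def mahalanobis(a, b):
    d = a - b
    return float(d @ S_inv @ d)

# 1:1 nearest-neighbour matching of each treated unit to a control unit
# (a simplification of full matching, for illustration only).
scores = np.column_stack([ps, pg])
treated_idx = np.where(treat == 1)[0]
control_idx = np.where(treat == 0)[0]
matches = {t: min(control_idx, key=lambda c: mahalanobis(scores[t], scores[c]))
           for t in treated_idx}
att = np.mean([y[t] - y[c] for t, c in matches.items()])
print(f"ATT estimate from matched pairs: {att:.2f} (true effect = 2)")
```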

  10. Stone Attenuation Values Measured by Average Hounsfield Units and Stone Volume as Predictors of Total Laser Energy Required During Ureteroscopic Lithotripsy Using Holmium:Yttrium-Aluminum-Garnet Lasers.

    PubMed

    Ofude, Mitsuo; Shima, Takashi; Yotsuyanagi, Satoshi; Ikeda, Daisuke

    2017-04-01

    To evaluate the predictors of the total laser energy (TLE) required during ureteroscopic lithotripsy (URS) using the holmium:yttrium-aluminum-garnet (Ho:YAG) laser for a single ureteral stone. We retrospectively analyzed the data of 93 URS procedures performed for a single ureteral stone in our institution from November 2011 to September 2015. We evaluated the association between TLE and preoperative clinical data, such as age, sex, body mass index, and noncontrast computed tomographic findings, including stone laterality, location, maximum diameter, volume, stone attenuation values measured using average Hounsfield units (HUs), and presence of secondary signs (severe hydronephrosis, tissue rim sign, and perinephric stranding). The mean maximum stone diameter, volume, and average HUs were 9.2 ± 3.8 mm, 283.2 ± 341.4 mm(3), and 863 ± 297, respectively. The mean TLE and operative time were 2.93 ± 3.27 kJ and 59.1 ± 28.1 minutes, respectively. Maximum stone diameter, volume, average HUs, severe hydronephrosis, and tissue rim sign were significantly correlated with TLE (Spearman's rho analysis). Stepwise multiple linear regression analysis defining stone volume, average HUs, severe hydronephrosis, and tissue rim sign as explanatory variables showed that stone volume and average HUs were significant predictors of TLE (standardized coefficients of 0.565 and 0.320, respectively; adjusted R(2) = 0.55, F = 54.7, P <.001). Stone attenuation values measured by average HUs and stone volume were strong predictors of TLE during URS using Ho:YAG laser procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
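
    The final model in this record is a multiple linear regression of total laser energy on stone volume and mean Hounsfield units. A hedged sketch of that kind of fit is shown below; the data and the coefficients used to generate them are invented for illustration and are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 93
volume_mm3 = rng.lognormal(mean=5.0, sigma=0.8, size=n)   # stone volume (mm^3)
mean_hu = rng.normal(850, 300, size=n)                     # average Hounsfield units
tle_kj = 0.004 * volume_mm3 + 0.002 * mean_hu + rng.normal(0, 0.8, n)

# Ordinary least squares: TLE ~ intercept + volume + mean HU.
X = np.column_stack([np.ones(n), volume_mm3, mean_hu])
beta, *_ = np.linalg.lstsq(X, tle_kj, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((tle_kj - pred) ** 2) / np.sum((tle_kj - tle_kj.mean()) ** 2)

print("intercept, volume and HU coefficients:", np.round(beta, 4))
print(f"R^2 = {r2:.2f}")
```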

  11. Estimating soil phosphorus requirements and limits from oxalate extract data.

    PubMed

    D'Angelo, E M; Vandiviere, M V; Thom, W O; Sikora, F

    2003-01-01

    Excessive fertilizer and manure phosphorus (P) inputs to soils elevates P in soil solution and surface runoff, which can lead to freshwater eutrophication. Runoff P can be related to soil test P and P sorption saturation, but these approaches are restricted to a limited range of soil types or are difficult to determine on a routine basis. The purpose of this study was to determine whether easily measurable soil characteristics were related to the soil phosphorus requirements (P(req), the amount of P sorbed at a particular solution P level). The P(req) was determined for 18 chemically diverse soils from sorption isotherm data (corrected for native sorbed P) and was found to be highly correlated to the sum of oxalate-extractable Al and Fe (R2 > 0.90). Native sorbed P, also determined from oxalate extraction, was subtracted from the P(req) to determine soil phosphorus limits (PL, the amount of P that can be added to soil to reach P(req)). Using this approach, the PL to reach 0.2 mg P L(-1) in solution ranged between -92 and 253 mg P kg(-1). Negative values identified soils with surplus P, while positive values showed soils with P deficiency. The results showed that P, Al, and Fe in oxalate extracts of soils held promise for determining PL to reach up to 10 mg P L(-1) in solution (leading to potential runoff from many soils). The soil oxalate extraction test could be integrated into existing best management practices for improving soil fertility and protecting water quality.

  12. Ground-water pumpage and artificial recharge estimates for calendar year 2000 and average annual natural recharge and interbasin flow by hydrographic area, Nevada

    USGS Publications Warehouse

    Lopes, Thomas J.; Evetts, David M.

    2004-01-01

    Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth

  13. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... withdrawal for consumption in the following situations may be made without depositing the estimated Customs... or manufacturer may enter or withdraw for consumption cigars, cigarettes, and cigarette papers and... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of...

  14. Theoretical framework to estimate spatially averaged rainfalls conditional on river discharges and point rainfall measurements from a single location: an application to Western Greece

    NASA Astrophysics Data System (ADS)

    Langousis, A.; Kaleris, V.

    2012-11-01

    We focus on the special case of catchments covered by a single raingauge, and develop a theoretical framework to obtain estimates of spatial rainfall averages conditional on rainfall measurements from a single location, and the flow conditions at the catchment outlet. In doing so we use: (a) statistical tools to identify and correct inconsistencies between daily rainfall occurrence and amount and the flow conditions at the outlet of the basin, (b) concepts from multifractal theory to relate the fraction of wet intervals in point rainfall measurements and that in spatial rainfall averages, while accounting for the shape and size of the catchment, the size, lifetime and advection velocity of rainfall generating features and the location of the raingauge inside the basin, and (c) semi-theoretical arguments to assure consistency between rainfall and runoff volumes at an inter-annual level, implicitly accounting for spatial heterogeneities of rainfall caused by orographic influences. In an application study, using point rainfall records from Glafkos river basin in Western Greece, we find the suggested approach to demonstrate significant skill in resolving rainfall-runoff incompatibilities at a daily level, while reproducing the statistics of spatial rainfall averages at both monthly and annual time scales, independently of the location of the raingauge and the magnitude of the observed deviations between point rainfall measurements and spatial rainfall averages. The developed scheme should serve as an important tool for the effective calibration of rainfall-runoff models in basins covered by a single raingauge and, also, improve hydrologic impact assessment at a river basin level under changing climatic conditions.

  15. Theoretical framework to estimate spatial rainfall averages conditional on river discharges and point rainfall measurements from a single location: an application to western Greece

    NASA Astrophysics Data System (ADS)

    Langousis, A.; Kaleris, V.

    2013-03-01

    We focus on the special case of catchments covered by a single rain gauge and develop a theoretical framework to obtain estimates of spatial rainfall averages conditional on rainfall measurements from a single location, and the flow conditions at the catchment outlet. In doing so we use (a) statistical tools to identify and correct inconsistencies between daily rainfall occurrence and amount and the flow conditions at the outlet of the basin; (b) concepts from multifractal theory to relate the fraction of wet intervals in point rainfall measurements and that in spatial rainfall averages, while accounting for the shape and size of the catchment, the size, lifetime and advection velocity of rainfall-generating features and the location of the rain gauge inside the basin; and (c) semi-theoretical arguments to assure consistency between rainfall and runoff volumes at an inter-annual level, implicitly accounting for spatial heterogeneities of rainfall caused by orographic influences. In an application study, using point rainfall records from the Glafkos river basin in western Greece, we find the suggested approach to demonstrate significant skill in resolving rainfall-runoff incompatibilities at a daily level, while reproducing the statistics of spatial rainfall averages at both monthly and annual time scales, independent of the location of the rain gauge and the magnitude of the observed deviations between point rainfall measurements and spatial rainfall averages. The developed scheme should serve as an important tool for the effective calibration of rainfall-runoff models in basins covered by a single rain gauge and, also, improve hydrologic impact assessment at a river basin level under changing climatic conditions.

  16. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    PubMed

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C- (eGFRcystatin C) and a creatinine-based (eGFRcreatinine) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFRcystatin C and eGFRcreatinine plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFRcystatin C compared to eGFRcreatinine has been associated with higher mortality in adults. The present study was undertaken to elucidate if this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the averages of 9 pairs of an eGFRcystatin C and an eGFRcreatinine estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFRcystatin C and eGFRcreatinine may help identify pediatric patients with Shrunken Pore Syndrome.
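
    A minimal sketch of the averaging-and-comparison step described above: combine a cystatin C-based and a creatinine-based estimate by simple averaging, and flag a Shrunken-Pore-Syndrome-like discordance when eGFRcystatin C is much lower than eGFRcreatinine. The underlying single-marker equations are not reproduced here, and the 0.6 ratio cut-off is a hypothetical placeholder, not a validated clinical threshold.

```python
def combined_egfr(egfr_cystatin_c: float, egfr_creatinine: float,
                  sps_ratio_cutoff: float = 0.6):
    """Average two single-marker GFR estimates (mL/min/1.73 m^2) and flag
    a possible Shrunken-Pore-Syndrome-like discordance.

    The averaging step follows the idea in the record above; the ratio
    cut-off is an illustrative assumption only.
    """
    mean_egfr = (egfr_cystatin_c + egfr_creatinine) / 2.0
    possible_sps = egfr_cystatin_c < sps_ratio_cutoff * egfr_creatinine
    return mean_egfr, possible_sps

egfr, sps_flag = combined_egfr(egfr_cystatin_c=45.0, egfr_creatinine=88.0)
print(f"combined eGFR = {egfr:.0f} mL/min/1.73 m^2, possible SPS: {sps_flag}")
```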

  17. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
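
    The default estimator discussed here, the 5-year moving average, is simple to state in code. The sketch below computes a trailing moving average over annual panel estimates; the annual values are invented for illustration.

```python
def moving_average(values, window=5):
    """Trailing moving average: each estimate is the mean of the most
    recent `window` annual values (fewer at the start of the series)."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        out.append(sum(values[start:i + 1]) / (i + 1 - start))
    return out

annual_volume = [102, 98, 110, 105, 111, 120, 117]   # hypothetical annual estimates
print([round(v, 1) for v in moving_average(annual_volume)])
```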

  18. Methodology to Estimate the Longitudinal Average Attributable Fraction of Guideline-recommended Medications for Death in Older Adults With Multiple Chronic Conditions.

    PubMed

    Allore, Heather G; Zhan, Yilei; Cohen, Andrew B; Tinetti, Mary E; Trentalange, Mark; McAvay, Gail

    2016-08-01

    Persons with multiple chronic conditions receive multiple guideline-recommended medications to improve outcomes such as mortality. Our objective was to estimate the longitudinal average attributable fraction for 3-year survival of medications for cardiovascular conditions in persons with multiple chronic conditions and to determine whether heterogeneity occurred by age. Medicare Current Beneficiary Survey participants (N = 8,578) with two or more chronic conditions, enrolled from 2005 to 2009 with follow-up through 2011, were analyzed. We calculated the longitudinal extension of the average attributable fraction for oral medications (beta blockers, renin-angiotensin system blockers, and thiazide diuretics) indicated for cardiovascular conditions (atrial fibrillation, coronary artery disease, heart failure, and hypertension), on survival adjusted for 18 participant characteristics. Models stratified by age (≤80 and >80 years) were analyzed to determine heterogeneity of both cardiovascular conditions and medications. Heart failure had the greatest average attributable fraction (39%) for mortality. The fractional contributions of beta blockers, renin-angiotensin system blockers, and thiazides to improve survival were 10.4%, 9.3%, and 7.2% respectively. In age-stratified models, of these medications thiazides had a significant contribution to survival only for those aged 80 years or younger. The effects of the remaining medications were similar in both age strata. Most cardiovascular medications were attributed independently to survival. The two cardiovascular conditions contributing independently to death were heart failure and atrial fibrillation. The medication effects were similar by age except for thiazides that had a significant contribution to survival in persons younger than 80 years. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Methodology to Estimate the Longitudinal Average Attributable Fraction of Guideline-recommended Medications for Death in Older Adults With Multiple Chronic Conditions

    PubMed Central

    Zhan, Yilei; Cohen, Andrew B.; Tinetti, Mary E.; Trentalange, Mark; McAvay, Gail

    2016-01-01

    Background: Persons with multiple chronic conditions receive multiple guideline-recommended medications to improve outcomes such as mortality. Our objective was to estimate the longitudinal average attributable fraction for 3-year survival of medications for cardiovascular conditions in persons with multiple chronic conditions and to determine whether heterogeneity occurred by age. Methods: Medicare Current Beneficiary Survey participants (N = 8,578) with two or more chronic conditions, enrolled from 2005 to 2009 with follow-up through 2011, were analyzed. We calculated the longitudinal extension of the average attributable fraction for oral medications (beta blockers, renin–angiotensin system blockers, and thiazide diuretics) indicated for cardiovascular conditions (atrial fibrillation, coronary artery disease, heart failure, and hypertension), on survival adjusted for 18 participant characteristics. Models stratified by age (≤80 and >80 years) were analyzed to determine heterogeneity of both cardiovascular conditions and medications. Results: Heart failure had the greatest average attributable fraction (39%) for mortality. The fractional contributions of beta blockers, renin–angiotensin system blockers, and thiazides to improve survival were 10.4%, 9.3%, and 7.2% respectively. In age-stratified models, of these medications thiazides had a significant contribution to survival only for those aged 80 years or younger. The effects of the remaining medications were similar in both age strata. Conclusions: Most cardiovascular medications were attributed independently to survival. The two cardiovascular conditions contributing independently to death were heart failure and atrial fibrillation. The medication effects were similar by age except for thiazides that had a significant contribution to survival in persons younger than 80 years. PMID:26748093

  20. An ensemble average method to estimate absolute TEC using radio beacon-based differential phase measurements: Applicability to regions of large latitudinal gradients in plasma density

    NASA Astrophysics Data System (ADS)

    Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.

    2014-12-01

    A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. This system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble average method for converting the relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with large spatial gradients. This method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC during different local times using GRBR measurements are also presented, which demonstrate the potential of radio beacon measurements in capturing the large-scale plasma transport processes in the low-latitude ionosphere.

  1. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the ¹³⁴Cs/¹³⁷Cs ratio method

    SciTech Connect

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of the damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method applied to measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within 100 km of the Fukushima Dai-ichi nuclear power plants. The measured ¹³⁴Cs/¹³⁷Cs ratio in the contaminated soil is 0.996 ± 0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of the damaged fuels is approximately 17.2 ± 1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated ¹³⁴Cs/¹³⁷Cs ratio when the same evaluated nuclear data library (ENDF/B-VII.0) is used. The void fraction assumed in the depletion calculation has a larger impact on the ¹³⁴Cs/¹³⁷Cs ratio than the difference between JENDL-4.0 and ENDF/B-VII.0. (authors)
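
    The ratio method decay-corrects the measured ¹³⁴Cs and ¹³⁷Cs activities back to a reference date and reads burnup off a ratio-versus-burnup curve from depletion calculations. A hedged sketch follows: the half-lives are standard approximate values, but the ratio-burnup curve and the sample activities are made-up placeholders for what a depletion code and a soil measurement would provide.

```python
import numpy as np

T_HALF_CS134 = 2.065      # years (approximate)
T_HALF_CS137 = 30.08      # years (approximate)

def decay_correct(activity, years_elapsed, t_half):
    """Back-correct a measured activity to the reference (shutdown) date."""
    return activity * 2.0 ** (years_elapsed / t_half)

# Hypothetical ratio-vs-burnup curve that a depletion code would provide.
burnup_gwd_t = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
ratio_at_shutdown = np.array([0.30, 0.58, 0.87, 1.16, 1.45])

def burnup_from_ratio(a134, a137, years_since_shutdown):
    a134_0 = decay_correct(a134, years_since_shutdown, T_HALF_CS134)
    a137_0 = decay_correct(a137, years_since_shutdown, T_HALF_CS137)
    return float(np.interp(a134_0 / a137_0, ratio_at_shutdown, burnup_gwd_t))

# Soil sample measured 0.5 years after shutdown (illustrative activities, Bq/kg).
print(f"estimated burnup ~ {burnup_from_ratio(8.4e5, 1.0e6, 0.5):.1f} GWd/tHM")
```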

  2. Feasibility of non-invasive temperature estimation by the assessment of the average gray-level content of B-mode images.

    PubMed

    Teixeira, C A; Alvarenga, A V; Cortela, G; von Krüger, M A; Pereira, W C A

    2014-08-01

    This paper assesses the potential of the average gray-level (AVGL) from ultrasonographic (B-mode) images to estimate temperature changes in time and space in a non-invasive way. Experiments were conducted involving a homogeneous bovine muscle sample, and temperature variations were induced by an automatic temperature regulated water bath, and by therapeutic ultrasound. B-mode images and temperatures were recorded simultaneously. After data collection, regions of interest (ROIs) were defined, and the average gray-level variation computed. For the selected ROIs, the AVGL-Temperature relation were determined and studied. Based on uniformly distributed image partitions, two-dimensional temperature maps were developed for homogeneous regions. The color-coded temperature estimates were first obtained from an AVGL-Temperature relation extracted from a specific partition (where temperature was independently measured by a thermocouple), and then extended to the other partitions. This procedure aimed to analyze the AVGL sensitivity to changes not only in time but also in space. Linear and quadratic relations were obtained depending on the heating modality. We found that the AVGL-Temperature relation is reproducible over successive heating and cooling cycles. One important result was that the AVGL-Temperature relations extracted from one region might be used to estimate temperature in other regions (errors inferior to 0.5 °C) when therapeutic ultrasound was applied as a heating source. Based on this result, two-dimensional temperature maps were developed when the samples were heated in the water bath and also by therapeutic ultrasound. The maps were obtained based on a linear relation for the water bath heating, and based on a quadratic model for the therapeutic ultrasound heating. The maps for the water bath experiment reproduce an acceptable heating/cooling pattern, and for the therapeutic ultrasound heating experiment, the maps seem to reproduce temperature profiles

  3. Grid-point requirements for large eddy simulation: Chapman's estimates revisited

    NASA Astrophysics Data System (ADS)

    Choi, Haecheon; Moin, Parviz

    2012-01-01

    Resolution requirements for large eddy simulation (LES), estimated by Chapman [AIAA J. 17, 1293 (1979)], are modified using accurate formulae for high Reynolds number boundary layer flow. The new estimates indicate that the number of grid points (N) required for wall-modeled LES is proportional to Re_Lx, whereas a wall-resolving LES requires N ~ Re_Lx^(13/7), where Lx is the flat-plate length in the streamwise direction and Re_Lx the corresponding Reynolds number. On the other hand, direct numerical simulation, which resolves the Kolmogorov length scale, requires N ~ Re_Lx^(37/14).
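
    The quoted scalings can be turned into a quick comparison of how the grid-point count grows with Reynolds number for the three approaches. Only the exponents come from the record; the prefactors are set to 1, so the numbers below are relative growth factors, not absolute grid counts.

```python
# Relative growth of grid-point requirements with Reynolds number,
# using only the exponents quoted in the record (prefactors set to 1).
EXPONENTS = {
    "wall-modeled LES": 1.0,
    "wall-resolved LES": 13.0 / 7.0,
    "DNS": 37.0 / 14.0,
}

re_ref = 1.0e6
for re_lx in (1.0e6, 1.0e7, 1.0e8):
    scale = {name: (re_lx / re_ref) ** p for name, p in EXPONENTS.items()}
    print(f"Re_Lx = {re_lx:.0e}: " +
          ", ".join(f"{k} x{v:,.0f}" for k, v in scale.items()))
```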

  4. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing if it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with the average mean absolute error of 12.45 and 1.85%, respectively. The average mean absolute error obtained with the InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between animal (average CV = 7%) and overtime (average CV = 14%) variation

  5. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-06-01

    To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). Not applicable. Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary.

  6. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  7. Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users

    ERIC Educational Resources Information Center

    Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu

    2004-01-01

    Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…

  8. Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users

    ERIC Educational Resources Information Center

    Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu

    2004-01-01

    Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…

  9. An estimation of the protein requirements of Iberian x Duroc 50:50 crossbred growing pigs.

    PubMed

    Rojas-Cano, M L; Ruiz-Guerrero, V; Lara, L; Nieto, R; Aguilera, J F

    2014-04-01

    The effects of dietary protein content on the rates of gain and protein deposition were studied in Iberian (IB) × Duroc (DU) 50:50 barrows at 2 stages of growth [10.6 ± 0.2 (n = 28) and 60.0 ± 0.4 (n = 24) kg initial BW]. Two feeding, digestibility, and N-balance trials were performed. At each stage of growth, they were allocated in individual pens and given restrictedly (at 0.9 × ad libitum intake) one of 4 pelleted diets of similar energy concentration (13.8 to 14.5 MJ ME/kg DM), formulated to provide 4 different (ideal) CP contents (236, 223, 208, and 184 g CP/kg DM in the first trial, and 204, 180, 143, and 114 g CP/kg DM in the second trial). Feed allowance was offered in 2 daily equal meals. The average concentration of Lys was 6.59 ± 0.13 g /100 g CP for all diets. Whatever the stage of growth, average daily BW gain and gain to feed ratio were unchanged by increases in dietary CP content (477 ± 7 and 1,088 ± 20 g, and 0.475 ± 0.027 and 0.340 ± 0.113, respectively, in the first and second trial). In pigs growing from 10 to 27 kg BW, the average rate of N retention increased linearly (P < 0.01) on increasing the protein content in the diet up to a break point, so a linear-plateau dose response was observed. Pigs fed diets providing 208 to 236 g/kg DM did not differ in rate of protein deposition (PD). A maximum value of 87 (13.93 g N retained × 6.25) g PD/d was obtained when the diet supplied at least 208 g CP/kg DM. The broken-line regression analysis estimated dietary CP requirements at 211 g ideal CP (15.2 g total Lys)/kg DM. In the fattening pigs, there was a quadratic response (P < 0.01) in the rate of N retention as dietary CP content increased. Maximum N retention (18.7 g/d) was estimated from the first derivative of the function that relates the observed N retained (g/d) and dietary CP content (g/kg DM). This maximum value would be obtained by feeding a diet containing 185 g ideal CP (13.3 g total Lys)/kg DM and represents the maximum capacity
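
    The requirement estimate in this record comes from a broken-line (linear-plateau) fit of N retention against dietary CP content. A hedged sketch of such a fit is given below; the CP levels and retention values are invented to mimic the shape of the response, so the printed break point is not the study's estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, breakpoint, slope, plateau):
    """N retention rises linearly with dietary CP and levels off at the plateau."""
    return np.where(x < breakpoint,
                    plateau - slope * (breakpoint - x),
                    plateau)

# Hypothetical dietary CP levels (g/kg DM) and N retention rates (g/d).
cp = np.array([114, 143, 180, 184, 204, 208, 223, 236], dtype=float)
n_ret = np.array([6.1, 9.5, 12.4, 12.7, 13.6, 13.9, 13.9, 14.0])

params, _ = curve_fit(linear_plateau, cp, n_ret, p0=[200.0, 0.1, 14.0])
breakpoint, slope, plateau = params
print(f"estimated CP requirement (break point): {breakpoint:.0f} g CP/kg DM, "
      f"plateau N retention: {plateau:.1f} g/d")
```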

  10. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, it should be treated as a multi-source problem. This work aims to assess whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like dispersion model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28

  11. Estimating crop water requirements of a command area using multispectral video imagery and geographic information systems

    NASA Astrophysics Data System (ADS)

    Ahmed, Rashid Hassan

    This research focused on the potential use of multispectral video remote sensing for irrigation water management. Two methods for estimating crop evapotranspiration were investigated, the energy balance estimation from multispectral video imagery and use of reflectance-based crop coefficients from multitemporal multispectral video imagery. The energy balance method was based on estimating net radiation, and soil and sensible heat fluxes, using input from the multispectral video imagery. The latent heat flux was estimated as a residual. The results were compared to surface heat fluxes measured on the ground. The net radiation was estimated within 5% of the measured values. However, the estimates of sensible and soil heat fluxes were not consistent with the measured values. This discrepancy was attributed to the methods for estimating the two fluxes. The degree of uncertainty in the parameters used in the methods made their application too limited for extrapolation to large agricultural areas. The second method used reflectance-based crop coefficients developed from the multispectral video imagery using alfalfa as a reference crop. The daily evapotranspiration from alfalfa was estimated using a nearby weather station. With the crop coefficients known for a canal command area, irrigation scheduling was simulated using the soil moisture balance method. The estimated soil moisture matched the actual soil moisture measured using the neutron probe method. Also, the overall water requirement estimated by this method was found to be in close agreement with the canal water deliveries. The crop coefficient method has great potential for irrigation management of large agricultural areas.

  12. An allometric model to estimate fluid requirements in children following burn injury.

    PubMed

    Ansermino, J Mark; Vandebeek, Christine A; Myers, Dorothy

    2010-04-01

    To evaluate the ability of an allometric 3/4 Power Model combined with the Galveston Formula (Galveston-3/4 PM Formula) to predict fluid resuscitation requirements in children suffering burn injuries in comparison with the frequently used Parkland Formula and Galveston Formula using the Du Bois formula for surface area estimation (Galveston-DB Formula). To demonstrate that the Galveston-3/4 PM Formula is clinically equivalent to the Galveston-DB Formula for the estimation of fluid requirements in pediatric burn injury cases. Fluid resuscitation requirements differ in children suffering burn injuries when compared to adults. The Parkland Formula works well for normal weight adults but underestimates fluid requirements when indiscriminately applied to pediatric burn patients. The Galveston-DB Formula accounts for the change in body composition with age by using a body surface area (BSA) model but requires the measurement of height. The allometric model, using an exponent of 3/4, accounts for the dependence of a physiological variable on body mass without requiring height measurement and can be applied to estimate fluid requirements after burn injury in children. Comparisons were performed between the hourly calculated fluid requirements for the first 8 h following 20%, 40%, and 60% BSA burns using the Parkland Formula, the Galveston-DB Formula and Galveston-3/4 PM Formula for children 2-23 kg. In children less than 23 kg, the fluid requirements predicted by the Galveston-3/4 PM Formula are well correlated with those predicted by the Galveston-DB Formula (R2 = 0.997, P < 0.0001) and are much better than of the predictions made with the Parkland Formula, especially for children <10 kg. For the purposes of clinical estimation of fluid requirements, the Galveston-3/4 PM Formula is indistinguishable from the Galveston-DB Formula in children 23 kg or less.
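
    A hedged sketch of the three calculations named in this record for the first 24 h of resuscitation: Parkland (4 mL x kg x %TBSA), Galveston with Du Bois body surface area, and a Galveston variant in which BSA is replaced by an allometric 3/4-power term. The 3/4-power normalisation constant below is an illustrative assumption (anchored to a 70-kg adult with 1.73 m²), not the constant used in the study, and none of this is dosing guidance.

```python
def du_bois_bsa(weight_kg, height_cm):
    """Du Bois body surface area (m^2)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def parkland_24h(weight_kg, tbsa_burn_pct):
    """Parkland formula: 4 mL x kg x %TBSA over the first 24 h."""
    return 4.0 * weight_kg * tbsa_burn_pct

def galveston_24h(bsa_total_m2, tbsa_burn_pct):
    """Galveston formula: 5000 mL/m^2 burned BSA + 2000 mL/m^2 total BSA per 24 h."""
    bsa_burn = bsa_total_m2 * tbsa_burn_pct / 100.0
    return 5000.0 * bsa_burn + 2000.0 * bsa_total_m2

def allometric_bsa(weight_kg, k=0.0715):
    """3/4-power stand-in for BSA; k is a hypothetical constant chosen so that
    a 70-kg adult gets roughly 1.73 m^2, not the study's calibration."""
    return k * weight_kg ** 0.75

weight, height, burn = 15.0, 100.0, 40.0   # hypothetical 15-kg child, 40% TBSA burn
print(f"Parkland:          {parkland_24h(weight, burn):.0f} mL / 24 h")
print(f"Galveston-DuBois:  {galveston_24h(du_bois_bsa(weight, height), burn):.0f} mL / 24 h")
print(f"Galveston-3/4 PM:  {galveston_24h(allometric_bsa(weight), burn):.0f} mL / 24 h")
```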

  13. [Estimation model for water requirement of greenhouse tomato under drip irrigation].

    PubMed

    Liu, Hao; Sun, Jing-Sheng; Liang, Yuan-Yuan; Wang, Cong-Cong; Duan, Ai-Wang

    2011-05-01

    Based on the modified Penman-Monteith equation, and through the analysis of the relationships between crop coefficient and cumulative temperature, a new model for estimating the water requirement of greenhouse tomato under drip irrigation was built. The model was validated with the measured data of plant transpiration and soil evaporation in May 2-13 (flowering-fruit-developing stage) and June 9-20 (fruit-maturing stage) , 2009. This model was suitable for the estimation of reference evapotranspiration (ET(0)) in greenhouse. The crop coefficient of greenhouse tomato was correlated as a quadratic function of cumulative temperature. The mean relative error between measured and estimated values was less than 10%, being able to estimate the water requirement of greenhouse tomato under drip irrigation.
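
    The model structure described above is ETc = Kc x ET0, with ET0 from a (modified) Penman-Monteith calculation and Kc expressed as a quadratic function of cumulative temperature. The sketch below shows that structure only; the quadratic coefficients and the ET0 value are invented placeholders, not the fitted values from the paper.

```python
def crop_coefficient(cum_temp_c, a=-1.0e-6, b=2.0e-3, c=0.25):
    """Kc as a quadratic function of cumulative temperature (deg C days).
    The coefficients are hypothetical placeholders for the fitted ones."""
    return a * cum_temp_c ** 2 + b * cum_temp_c + c

def daily_water_requirement(et0_mm, cum_temp_c):
    """ETc (mm/day) = Kc(cumulative temperature) x ET0 (mm/day)."""
    return crop_coefficient(cum_temp_c) * et0_mm

# Example: a mid-season day with ET0 = 3.2 mm/day and 900 deg C days accumulated.
print(f"Kc  = {crop_coefficient(900):.2f}")
print(f"ETc = {daily_water_requirement(3.2, 900):.2f} mm/day")
```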

  14. Quaternary estimates of average slip-rates for active faults in the Mongolian Altay Mountains: the advantages and assumptions of multiple dating techniques

    NASA Astrophysics Data System (ADS)

    Gregory, L. C.; Walker, R. T.; Thomas, A. L.; Amgaa, T.; Bayasgalan, G.; Amgalan, B.; West, A.

    2010-12-01

    Active faults in the Altay Mountains, western Mongolia, produce surface expressions that are generally well-preserved due to the arid central-Asian climate. Motion along the right-lateral strike-slip and oblique-reverse faults has displaced major river systems by kilometres over millions of years and there are clear scarps and linear features in the landscape along the surface traces of active fault strands. With combined remote sensing and field work, we have identified sites with surface features that have been displaced by tens of metres as a result of cumulative motion along faults. In an effort to accurately quantify an average slip-rate for the faults, we used multiple dating techniques to provide an age constraint for the displaced landscapes. At one site on the Olgiy fault, we applied 10Be terrestrial cosmogenic nuclides (TCN) and uranium-series geochronology on boulder tops and in-situ formed carbonate rinds, respectively. Based on a displacement of approximately 17m, and geochronology results that range from 20-60ky, we resolve a slip-rate of less than 1 mm/yr. We have also applied optically stimulated luminescence (OSL), 10Be TCN, and U-series methods on the Ar Hotol fault. Each of these dating techniques provides unique constraints on the relationship between the ‘age’ of a displaced surface and the actual amount of displacement, and each has inherent assumptions. We will consider the advantages and assumptions made in utilising these techniques in western Mongolia- e.g. U-series dating of carbonate rinds can provide a minimum age for alluvial fan deposition, and inheritance must be considered when using TCN techniques on boulder tops. This will be put into the context of estimating accurate and geologically relevant slip-rates, and improving our understanding of the active deformation of the Mongolian Altay.

  15. Estimation of the hydraulic conductivity of a two-dimensional fracture network using effective medium theory and power-law averaging

    NASA Astrophysics Data System (ADS)

    Zimmerman, R. W.; Leung, C. T.

    2009-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures will provide the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality, or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances, and using various upscaling methods. Kirkpatrick’s effective medium approximation, which works well for pore networks on a core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick’s approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (arithmetic mean).
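
    A hedged sketch of the two upscaling rules mentioned: Desbarats-style power-law averaging of segment conductances, and Kirkpatrick's effective medium approximation solved by bisection for a network with coordination number z = 4. The lognormal conductance sample is synthetic; the exponent and parameters are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
g = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)   # segment conductances

def power_law_average(g, p):
    """Power-law average (<g^p>)^(1/p); p -> 0 recovers the geometric mean."""
    if abs(p) < 1e-12:
        return float(np.exp(np.mean(np.log(g))))
    return float(np.mean(g ** p) ** (1.0 / p))

def kirkpatrick_ema(g, z=4, tol=1e-10):
    """Effective conductance from Kirkpatrick's EMA with coordination number z,
    solved by bisection on  mean[(g_m - g_i)/(g_i + (z/2 - 1) g_m)] = 0."""
    c = z / 2.0 - 1.0
    f = lambda gm: np.mean((gm - g) / (g + c * gm))
    lo, hi = float(g.min()), float(g.max())
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"arithmetic mean (p=1):  {power_law_average(g, 1.0):.3f}")
print(f"geometric  mean (p->0): {power_law_average(g, 0.0):.3f}")
print(f"power-law  p=0.5:       {power_law_average(g, 0.5):.3f}")
print(f"Kirkpatrick EMA (z=4):  {kirkpatrick_ema(g):.3f}")
```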

  16. Estimating Manpower Requirements. A Background Paper Prepared for the Graduate Medical Education Advisory Committee. Report No. 76-114.

    ERIC Educational Resources Information Center

    Health Resources Administration (DHEW/PHS), Bethesda, MD. Bureau of Health Manpower.

    This report on estimating physician manpower requirements is intended as a history and summary of the state of the art in manpower requirements estimation and forecasting. It describes the various ways in which manpower requirements have been estimated in recent years and discusses the variety of concepts, methods, definitions, and approaches that…

  17. ESTIMATED DAILY AVERAGE PER CAPITA WATER INGESTION BY CHILD AND ADULT AGE CATEGORIES BASED ON USDA'S 1994-96 AND 1998 CONTINUING SURVEY OF FOOD INTAKES BY INDIVIDUALS (JOURNAL ARTICLE)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...

  18. ESTIMATED DAILY AVERAGE PER CAPITA WATER INGESTION BY CHILD AND ADULT AGE CATEGORIES BASED ON USDA'S 1994-96 AND 1998 CONTINUING SURVEY OF FOOD INTAKES BY INDIVIDUALS (JOURNAL ARTICLE)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...

  19. Estimated Daily Average Per Capita Water Ingestion by Child and Adult Age Categories Based on USDA's 1994-96 and 1998 Continuing Survey of Food Intakes by Individuals (Journal Article)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...

  20. Pedestrian headform testing: inferring performance at impact speeds and for headform masses not tested, and estimating average performance in a range of real-world conditions.

    PubMed

    Hutchinson, T Paul; Anderson, Robert W G; Searson, Daniel J

    2012-01-01

    Tests are routinely conducted where instrumented headforms are projected at the fronts of cars to assess pedestrian safety. Better information would be obtained by accounting for performance over the range of expected impact conditions in the field. Moreover, methods will be required to integrate the assessment of secondary safety performance with primary safety systems that reduce the speeds of impacts. Thus, we discuss how to estimate performance over a range of impact conditions from performance in one test and how this information can be combined with information on the probability of different impact speeds to provide a balanced assessment of pedestrian safety. Theoretical consideration is given to 2 distinct aspects to impact safety performance: the test impact severity (measured by the head injury criterion, HIC) at a speed at which a structure does not bottom out and the speed at which bottoming out occurs. Further considerations are given to an injury risk function, the distribution of impact speeds likely in the field, and the effect of primary safety systems on impact speeds. These are used to calculate curves that estimate injuriousness for combinations of test HIC, bottoming out speed, and alternative distributions of impact speeds. The injuriousness of a structure that may be struck by the head of a pedestrian depends not only on the result of the impact test but also the bottoming out speed and the distribution of impact speeds. Example calculations indicate that the relationship between the test HIC and injuriousness extends over a larger range than is presently used by the European New Car Assessment Programme (Euro NCAP), that bottoming out at speeds only slightly higher than the test speed can significantly increase the injuriousness of an impact location and that effective primary safety systems that reduce impact speeds significantly modify the relationship between the test HIC and injuriousness. Present testing regimes do not take fully into
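
    The argument above is that a structure's injuriousness should be averaged over the distribution of real-world impact speeds rather than read off a single test. The sketch below illustrates that weighting only: the tested HIC is scaled with impact speed by a power law, converted to an injury probability with a logistic risk curve, and averaged over an assumed speed distribution, with a crude penalty beyond a bottoming-out speed. The scaling exponent, risk-curve parameters, speed distribution and penalty factor are all illustrative assumptions, not values from the paper or from Euro NCAP.

```python
import numpy as np

def hic_at_speed(hic_test, v, v_test=11.1, exponent=2.5):
    """Scale the tested HIC to another impact speed (power-law assumption)."""
    return hic_test * (v / v_test) ** exponent

def injury_risk(hic, loc=7.0, scale=0.5):
    """Probability of serious head injury; logistic in ln(HIC), illustrative parameters."""
    return 1.0 / (1.0 + np.exp(-(np.log(hic) - loc) / scale))

def average_injuriousness(hic_test, speeds, weights, bottom_out_speed=None, penalty=3.0):
    """Speed-weighted mean injury risk; impacts beyond a bottoming-out speed
    get an inflated HIC as a crude stand-in for stiff contact."""
    risks = []
    for v in speeds:
        hic = hic_at_speed(hic_test, v)
        if bottom_out_speed is not None and v > bottom_out_speed:
            hic *= penalty
        risks.append(injury_risk(hic))
    return float(np.average(risks, weights=weights))

speeds = np.arange(5.0, 20.0, 1.0)                     # impact speeds, m/s
weights = np.exp(-0.5 * ((speeds - 10.0) / 3.0) ** 2)  # assumed speed distribution
print(f"no bottoming out:      {average_injuriousness(1000, speeds, weights):.2f}")
print(f"bottoms out at 13 m/s: {average_injuriousness(1000, speeds, weights, 13.0):.2f}")
```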

  1. [Estimating the impacts of future climate change on water requirement and water deficit of winter wheat in Henan Province, China].

    PubMed

    Ji, Xing-jie; Cheng, Lin; Fang, Wen-song

    2015-09-01

    Based on the analysis of water requirement and water deficit during the development stages of winter wheat over the past 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with the climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic factors affecting ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentage of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average value from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were heterogeneous because of differences in geographical and climatic environments. A possible tendency of water resource deficiency may exist in Henan Province in the future.

  2. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Special income averaging rules for...

  3. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    ERIC Educational Resources Information Center

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…

  4. Use Of Crop Canopy Size To Estimate Water Requirements Of Vegetable Crops

    USDA-ARS?s Scientific Manuscript database

    Planting time, plant density, variety, and cultural practices vary widely for horticultural crops. It is difficult to estimate crop water requirements for crops with these variations. Canopy size, or factional ground cover, as an indicator of intercepted sunlight, is related to crop water use. We...

  5. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  6. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    ERIC Educational Resources Information Center

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…

  7. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  8. Maintenance nitrogen requirement of the turkey breeder hen with an estimate of associated essential amino acid needs.

    PubMed

    Moran, E T; Ferket, P R; Blackman, J R

    1983-09-01

    Nonproducing, small-type breeder hens in excess of 65 weeks of age were used to represent the maintenance state. All birds had been in laying cages since 30 weeks and accustomed to 16 hr of 70 lx lighting at 16 °C. Nitrogen (N) balance was performed in metabolism cages under the same conditions. Ad libitum intake of a common breeder ration led to an intake of ca. 47 kcal metabolizable energy (ME)/kg body weight (BW)/day, which was considered to represent the maintenance energy requirement. Nitrogen retained while consuming this feed averaged 172 mg/kg BW/day. Force-feeding a N-free diet to satisfy the maintenance energy requirement resulted in an 85 mg N/kg BW/day endogenous loss. Total maintenance nitrogen requirement was considered to approximate 257 mg/kg BW/day. Nitrogen retention after force-feeding corn-soybean meal rations having a progressive protein content indicated that the associated amino acids were more efficient in satisfying the endogenous than the total N requirement. A model that estimated maintenance amino acid requirements was assembled by combining the relative concentrations found in muscle and feathers to represent endogenous and retained N, respectively. For the most part, model values agreed with published results for the rooster; however, verification in balance studies was less than successful and believed to be attributable to hen variation in feather cover and protein reserves.
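
    The total maintenance requirement quoted above is simply the N retained at maintenance energy intake plus the endogenous loss measured on a N-free diet; a one-line check of that arithmetic:

```python
retained_n = 172        # mg N/kg BW/day retained at maintenance energy intake
endogenous_loss = 85    # mg N/kg BW/day lost on a nitrogen-free diet
print(f"total maintenance N requirement ~ {retained_n + endogenous_loss} mg/kg BW/day")
```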

  9. Determining storm sampling requirements for improving precision of annual load estimates of nutrients from a small forested watershed.

    PubMed

    Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi

    2012-08-01

    This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals during 24 h in 20 events, as well as weekly in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solids (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variation (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The differences in the number of storm events required for precise load estimates between dissolved and particulate nutrients were attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling resulted in fewer storm events than the simulation based on random sampling and showed that only three storm events were required to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design considering discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
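
    To make the bootstrap procedure concrete, the sketch below fits the ∑L-∑Q relation as a simple linear regression on resampled storm events and computes the coefficient of variation of the resulting annual load estimates; the event values, annual discharge, and replicate count are invented for illustration and are not data from the study.

```python
# Hypothetical sketch of the bootstrap evaluation of annual load estimates
# from a cumulative load (sum_L) vs. cumulative discharge (sum_Q) regression.
# Event values below are illustrative, not data from the study.
import numpy as np

rng = np.random.default_rng(0)

# Cumulative discharge (mm) and cumulative load (kg/ha) per sampled storm event
sum_Q = np.array([12.0, 25.0, 40.0, 55.0, 80.0, 110.0, 150.0, 200.0])
sum_L = np.array([0.8, 1.6, 2.9, 3.8, 5.5, 7.9, 10.2, 14.1])

annual_Q = 900.0          # assumed annual stream discharge (mm)
n_events_per_set = 5      # number of storm events per bootstrap dataset
n_replicates = 100

estimates = []
for _ in range(n_replicates):
    idx = rng.choice(len(sum_Q), size=n_events_per_set, replace=True)
    slope, intercept = np.polyfit(sum_Q[idx], sum_L[idx], 1)
    estimates.append(slope * annual_Q + intercept)  # annual load from the sum_L-sum_Q line

estimates = np.asarray(estimates)
cv = estimates.std(ddof=1) / estimates.mean() * 100.0
print(f"Mean annual load estimate: {estimates.mean():.1f} kg/ha, CV = {cv:.1f}%")
```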

  10. Requirements for characterization of DWPF canister welds and labels, and estimates of service life

    SciTech Connect

    Plodinec, M.J.; Harbour, J.R.; Marra, S.L.

    1993-01-11

    The Department of Energy has established specifications for the DWPF product, which require that the DWPF provide estimates of the service life of the canister label, provide assurance that the DWPF canister will be leaktight when shipped, and demonstrate that the contents of the canistered waste form will not lead to internal corrosion of the canister. The DWPF has elected to meet these requirements, in part, by characterizing canisters produced in the facility during the Startup Test Program. This includes canisters filled on the pour turntable (normal conditions) and canisters filled on the drain turntable (credible upset conditions expected to be more severe due to higher temperatures). This document identifies the requirements for characterization of the canister fabrication welds and canister labels (characterization of canister closure welds is being performed by Equipment Engineering Section), and for estimation of their service life in DWPF's Glass Waste Storage Building.

  12. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  13. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    PubMed

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE). Three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, shear anisotropy ϕ = μ1/μ2 − 1, and tensile anisotropy ζ = E1/E2 − 1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions, must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Average Revisited in Context

    ERIC Educational Resources Information Center

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  15. An approach to estimating future manpower requirements in physical and occupational therapy.

    PubMed

    MacKinnon, J R; Stark, A J

    1984-01-01

    This paper reports the approach used in a physical therapy (PT) and occupational therapy (OT) manpower requirements study conducted in British Columbia, Canada. A total of 426 questionnaires were mailed to likely employers of PTs and OTs, and to PTs in private practice. After telephone reminders and a second mailing, the overall response rate was 83.3%. The results of the survey indicated that, by 1986, respective PT and OT department heads were anticipating a 60% increase in demand for PTs and a 102% increase for OTs, while agency administrators were suggesting a 76% increase for PTs and a 142% increase for OTs. Although a variety of factors--all largely beyond the control of both respondents and researchers--will determine the degree to which these estimates actually reflect the future demand for these manpower groups, it should be noted that, for both disciplines, the anticipated increase was substantially greater than the level experienced in the five years preceding the survey. The estimation approach used in this study considers nonrespondents; it is a procedure which permits investigators to offer a more accurate picture of the current manpower situation while using a more realistic base on which to estimate future requirements. The development of the requirement side in the supply/requirement equations of manpower studies may be well served in the future with this approach.

  16. Estimates of the energy deficit required to reverse the trend in childhood obesity in Australian schoolchildren

    PubMed Central

    Davey, Rachel; de Castella, F. Robert

    2015-01-01

    Abstract Objectives: To estimate: 1) daily energy deficit required to reduce the weight of overweight children to within normal range; 2) time required to reach normal weight for a proposed achievable (small) target energy deficit of 0.42 MJ/day; 3) impact that such an effect may have on prevalence of childhood overweight. Methods: Body mass index and fitness were measured in 31,424 Australian school children aged between 4.5 and 15 years. The daily energy deficit required to reduce weight to within normal range for the 7,747 (24.7%) overweight children was estimated. Further, for a proposed achievable target energy deficit of 0.42 MJ/day, the time required to reach normal weight was estimated. Results: About 18% of children were overweight and 6.6% obese; 69% were either sedentary or light active. If an energy deficit of 0.42 MJ/day could be achieved, 60% of overweight children would reach normal weight and the current prevalence of overweight of 24.7% (24.2%–25.1%) would be reduced to 9.2% (8.9%–9.6%) within about 15 months. Conclusions: The prevalence of overweight in Australian school children could be reduced significantly within one year if even a small daily energy deficit could be achieved by children currently classified as overweight or obese. PMID:26561382
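
    The arithmetic behind such projections can be illustrated with a short calculation; the energy density assumed for the tissue lost (~32.2 MJ/kg, largely fat) and the example excess weight are assumptions for illustration, not values from the study.

```python
# Illustrative arithmetic for translating a small daily energy deficit into a
# time to reach normal weight. The energy density of tissue lost (~32.2 MJ/kg,
# largely fat) and the example excess weight are assumptions, not study values.
ENERGY_PER_KG = 32.2      # MJ per kg of body tissue lost (assumed)
daily_deficit = 0.42      # MJ/day target deficit, as in the abstract

excess_weight_kg = 4.0    # hypothetical excess weight above the normal BMI range
days_needed = excess_weight_kg * ENERGY_PER_KG / daily_deficit
print(f"~{days_needed:.0f} days (~{days_needed / 30.4:.0f} months) to lose {excess_weight_kg} kg")
```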

  17. The effects of the variations in sea surface temperature and atmospheric stability in the estimation of average wind speed by SEASAT-SASS

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1984-01-01

    The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.

  19. The Balance Super Learner: A robust adaptation of the Super Learner to improve estimation of the average treatment effect in the treated based on propensity score matching.

    PubMed

    Pirracchio, Romain; Carone, Marco

    2016-01-01

    Consistency of propensity score estimators relies on correct specification of the propensity score model. The propensity score is frequently estimated using a main effect logistic regression. It has recently been shown that the use of ensemble machine learning algorithms, such as the Super Learner, could improve covariate balance and reduce bias in a meaningful manner in the case of serious model misspecification for treatment assignment. However, the loss functions normally used by the Super Learner may not be appropriate for propensity score estimation since the goal in this problem is not to optimize propensity score prediction but rather to achieve the best possible balance in the covariate distribution between treatment groups. In a simulation study, we evaluated the benefit of a modification of the Super Learner for propensity score estimation geared toward achieving covariate balance between the treated and untreated after matching on the propensity score. Our simulation study included six different scenarios characterized by various degrees of deviation from the usual main term logistic model for the true propensity score and outcome as well as the presence (or not) of instrumental variables. Our results suggest that the use of this adapted Super Learner to estimate the propensity score can further improve the robustness of propensity score matching estimators.

  20. Implications of a needs-based approach to estimating psychiatric workforce requirements.

    PubMed

    Faulkner, Larry R

    2003-01-01

    The author reviews a needs-based approach to estimating psychiatric workforce requirements that entails five determinations: (1) number of people with mental health problems, (2) number of people needing mental health treatment, (3) number of people needing psychiatric treatment, (4) amount of psychiatric time required to meet patient needs, and (5) amount of time psychiatrists have available to provide direct patient care. Questions, issues, and strategies raised by the needs-based approach are outlined. The author suggests that only a coordinated, carefully orchestrated effort among national psychiatric organizations will ensure that the future psychiatric workforce is adequate to meet the needs of the mentally ill.

  1. Technical note: A procedure to estimate glucose requirements of an activated immune system in steers.

    PubMed

    Kvidera, S K; Horst, E A; Abuajamieh, M; Mayorga, E J; Sanz Fernandez, M V; Baumgard, L H

    2016-11-01

    Infection and inflammation impede efficient animal productivity. The activated immune system ostensibly requires large amounts of energy and nutrients otherwise destined for synthesis of agriculturally relevant products. Accurately determining the immune system's in vivo energy needs is difficult, but a better understanding may facilitate developing nutritional strategies to maximize productivity. The study objective was to estimate immune system glucose requirements following an i.v. lipopolysaccharide (LPS) challenge. Holstein steers (148 ± 9 kg; n = 15) were jugular catheterized bilaterally and assigned to 1 of 3 i.v.

  2. An approach to estimating human resource requirements to achieve the Millennium Development Goals.

    PubMed

    Dreesch, Norbert; Dolea, Carmen; Dal Poz, Mario R; Goubarev, Alexandre; Adams, Orvill; Aregawi, Maru; Bergstrom, Karin; Fogstad, Helga; Sheratt, Della; Linkins, Jennifer; Scherpbier, Robert; Youssef-Fox, Mayada

    2005-09-01

    In the context of the Millennium Development Goals, human resources represent the most critical constraint in achieving the targets. Therefore, it is important for health planners and decision-makers to identify what are the human resources required to meet those targets. Planning the human resources for health is a complex process. It needs to consider both the technical aspects related to estimating the number, skills and distribution of health personnel for meeting population health needs, and the political implications, values and choices that health policy- and decision-makers need to make within given resources limitations. After presenting an overview of the various methods for planning human resources for health, with their advantages and limitations, this paper proposes a methodological approach to estimating the requirements of human resources to achieve the goals set forth by the Millennium Declaration. The method builds on the service-target approach and functional job analysis.

  3. Technical Note: On the Matt-Shuttleworth approach to estimate crop water requirements

    NASA Astrophysics Data System (ADS)

    Lhomme, J. P.; Boudhina, N.; Masmoudi, M. M.

    2014-11-01

    The Matt-Shuttleworth method provides a way to make a one-step estimate of crop water requirements with the Penman-Monteith equation by translating the crop coefficients, commonly available in United Nations Food and Agriculture Organization (FAO) publications, into equivalent surface resistances. The methodology is based upon the theoretical relationship linking crop surface resistance to a crop coefficient and involves the simplifying assumption that the reference crop evapotranspiration (ET0) is equal to the Priestley-Taylor estimate with a fixed coefficient of 1.26. This assumption, used to eliminate the dependence of surface resistance on certain weather variables, is questionable; numerical simulations show that it can lead to substantial differences between the true value of surface resistance and its estimate. Consequently, the basic relationship between surface resistance and crop coefficient, without any assumption, appears to be more appropriate for inferring crop surface resistance, despite the interference of weather variables.
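
    As a rough illustration of the surface-resistance/crop-coefficient relationship discussed here, the sketch below inverts a standard Penman-Monteith formulation to find the surface resistance that reproduces ETc = Kc · ET0 for a given set of weather inputs; all numerical values are hypothetical and the blending-height adjustment of the actual Matt-Shuttleworth procedure is omitted.

```python
# Simplified inversion of the Penman-Monteith equation to obtain a surface
# resistance (rs) that is "equivalent" to a given crop coefficient Kc, i.e.
# one that reproduces ETc = Kc * ET0 for the stated weather. All weather
# values and the aerodynamic resistance are hypothetical; the blending-height
# adjustment of the actual Matt-Shuttleworth procedure is not included.
LAMBDA = 2.45e6     # latent heat of vaporisation, J kg-1
RHO_A = 1.2         # air density, kg m-3
CP = 1013.0         # specific heat of air, J kg-1 K-1
GAMMA = 66.0        # psychrometric constant, Pa K-1

def surface_resistance(kc, et0_mm_day, delta, vpd, rn_minus_g, ra):
    """Invert Penman-Monteith for rs (s m-1).

    delta: slope of the saturation vapour pressure curve (Pa K-1)
    vpd: vapour pressure deficit (Pa); rn_minus_g: available energy (W m-2)
    ra: aerodynamic resistance (s m-1)
    """
    etc = kc * et0_mm_day                        # crop ET, mm/day
    lam_e = LAMBDA * etc / 86400.0               # latent heat flux, W m-2
    numerator = delta * rn_minus_g + RHO_A * CP * vpd / ra
    return ra * (numerator / lam_e - delta - GAMMA) / GAMMA

# Example with made-up mid-season values
print(surface_resistance(kc=1.15, et0_mm_day=6.0, delta=145.0,
                         vpd=1500.0, rn_minus_g=150.0, ra=50.0))
```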

  4. Spent fuel disassembly hardware and other non-fuel bearing components: characterization, disposal cost estimates, and proposed repository acceptance requirements

    SciTech Connect

    Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.

    1986-10-01

    There are two categories of waste considered in this report. The first is the spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly. This includes end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per average metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, with most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste. The relatively large contribution of waste hardware from BWR fuel will be non-fuel-bearing components, primarily consisting of the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.

  5. Bioenergetics model for estimating food requirements of female Pacific walruses (Odobenus rosmarus divergens)

    USGS Publications Warehouse

    Noren, S.R.; Udevitz, M.S.; Jay, C.V.

    2012-01-01

    Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–19% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.
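
    A back-of-envelope version of the demand-to-prey conversion is sketched below; the per-clam energy content and the example body mass are illustrative assumptions, not values from the model.

```python
# Back-of-envelope conversion from the modelled caloric demand of a female
# walrus to a number of clams per day. The per-clam energy content is an
# assumed illustrative value, not taken from the study.
KCAL_PER_CLAM = 12.0          # assumed energy content of one clam (kcal)

body_mass_kg = 600.0          # hypothetical adult female
demand_kcal_per_kg = 90.0     # within the 74-257 kcal/kg/day range reported
daily_demand = body_mass_kg * demand_kcal_per_kg       # kcal/day
clams_per_day = daily_demand / KCAL_PER_CLAM
print(f"Daily demand ~{daily_demand:.0f} kcal, ~{clams_per_day:.0f} clams per day")
```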

  6. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  7. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  8. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  9. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  10. Model requirements for estimating and reporting soil C stock changes in national greenhouse gas inventories

    NASA Astrophysics Data System (ADS)

    Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent

    2016-04-01

    Globally, soils are the largest terrestrial store of carbon (C) and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at the large-scale are important as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC and also under the Kyoto Protocol it is required to report C balances annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representative at national scales. The use of models to obtain relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed but few models are suitable for a consistent application across larger scales. Consistency is often limited by the lack of input data for models, which can result in biased estimates and, thus, the reporting criterion of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet criteria established for GHG reporting under the UNFCCC including accuracy, consistency, comparability, completeness, and transparency, we identified the suitability of commonly used simulation models for estimating annual C stock changes in mineral soil in European forests. Among six discussed simulation models we found a clear trend toward models that provide quantitatively precise site-specific estimates but may lead to biased estimates across space. To meet reporting needs for national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests

  11. Estimating blood transfusion requirements in preparation for a major earthquake: the Tehran, Iran study.

    PubMed

    Tabatabaie, Morteza; Ardalan, Ali; Abolghasemi, Hassan; Holakouie Naieni, Kourosh; Pourmalek, Farshad; Ahmadi, Batool; Shokouhi, Mostafa

    2010-01-01

    Tehran, Iran, with a population of approximately seven million people, is at a very high risk for a devastating earthquake. This study aims to estimate the number of units of blood required at the time of such an earthquake. To approximate the damage from an earthquake in Tehran, the researchers applied the Centre for Earthquake and Environmental Studies of Tehran/Japan International Cooperation Agency (CEST/JICA) fault-activation scenarios, and accordingly estimated the injury-to-death ratio (IDR), hospital admission rate (HAR), and blood transfusion rate (BTR). The data were based on Iran's major earthquakes during the last two decades. The following values were considered for the analysis: (1) IDR = 1, 2, and 3; (2) HAR = 0.25 and 0.35; and (3) BTR = 0.05, 0.07, and 0.10. The American Association of Blood Banks' formula was adapted to calculate the total required number of Type-O red blood cell (RBC) units. Calculations relied on the following assumptions: (1) no change in Tehran's vulnerability from the CEST/JICA study time; (2) no functional damage to the Tehran Blood Transfusion Post; and (3) standards of blood safety are secure during the disaster responses. Surge capacity was estimated based on the Bam earthquake experience. The maximum, optimum, and minimum blood deficits were calculated accordingly. No deficit was estimated in case of the Mosha fault activation and the optimum scenario of the North Tehran fault. The maximum blood deficit was estimated from the activation of the Ray fault, requiring up to 107,293 and 95,127 units for the 0-24 hour and the 24-72 hour periods after the earthquake, respectively. The optimum deficit was estimated at up to 46,824 and 16,528 units for the 0-24 hour and 24-72 hour periods after the earthquake, respectively. In most Tehran earthquake scenarios, the estimated blood shortage would exceed the surge capacity of all blood transfusion posts around the country within the first three days, as it might require 2-8 times more than what the system had produced
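
    The parameter chain described above (deaths → injuries → admissions → transfused patients → RBC units) can be sketched as follows; the death toll and the units-per-patient figure are illustrative assumptions, not the study's inputs.

```python
# Sketch of the parameter chain described above: deaths -> injuries (IDR) ->
# hospital admissions (HAR) -> patients needing transfusion (BTR) -> RBC units.
# The death toll and the units-per-transfused-patient figure are assumptions
# for illustration only.
def rbc_units(deaths, idr, har, btr, units_per_patient=3.0):
    injuries = deaths * idr
    admissions = injuries * har
    transfused = admissions * btr
    return transfused * units_per_patient

# One hypothetical scenario using mid-range values from the study's analysis grid
print(f"{rbc_units(deaths=100_000, idr=2, har=0.25, btr=0.07):,.0f} type-O RBC units")
```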

  12. An examination of population exposure to traffic related air pollution: Comparing spatially and temporally resolved estimates against long-term average exposures at the home location.

    PubMed

    Shekarrizfard, Maryam; Faghih-Imani, Ahmadreza; Hatzopoulou, Marianne

    2016-05-01

    Air pollution in metropolitan areas is mainly caused by traffic emissions. This study presents the development of a model chain consisting of a transportation model, an emissions model, and an atmospheric dispersion model, applied to dynamically evaluate individuals' exposure to air pollution by intersecting daily trajectories of individuals and hourly spatial variations of air pollution across the study domain. This dynamic approach is implemented in Montreal, Canada to highlight the advantages of the method for exposure analysis. The results for nitrogen dioxide (NO2), a marker of traffic related air pollution, reveal significant differences when relying on spatially and temporally resolved concentrations combined with individuals' daily trajectories compared to a long-term average NO2 concentration at the home location. We observe that NO2 exposures based on trips and activity locations visited throughout the day were often more elevated than daily NO2 concentrations at the home location. The percentage of all individuals with a lower 24-hour daily average at home compared to their 24-hour mobility exposure is 89.6%, of which 31% of individuals increase their exposure by more than 10% by leaving the home. On average, individuals increased their exposure by 23-44% while commuting and conducting activities out of home (compared to the daily concentration at home), regardless of air quality at their home location. We conclude that our proposed dynamic modelling approach significantly improves the results of traditional methods that rely on a long-term average concentration at the home location and we shed light on the importance of using individual daily trajectories to understand exposure.
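
    The core of the dynamic exposure calculation is a time-weighted average over the locations visited during the day, as in the minimal sketch below; the schedule and NO2 concentrations are invented for illustration.

```python
# Minimal illustration of the "dynamic" exposure calculation: a 24-h average
# built from the hours spent at each location, weighted by the NO2
# concentration there, compared against the concentration at home alone.
# Locations, hours, and concentrations are invented for the example.
daily_schedule = [
    # (hours spent, NO2 at that location in ppb)
    (14.0, 11.0),   # home
    (1.5, 35.0),    # commuting in traffic
    (8.0, 18.0),    # workplace
    (0.5, 28.0),    # errands near an arterial road
]

total_hours = sum(h for h, _ in daily_schedule)
mobility_exposure = sum(h * c for h, c in daily_schedule) / total_hours
home_only = 11.0
print(f"Mobility-based exposure: {mobility_exposure:.1f} ppb "
      f"({(mobility_exposure / home_only - 1) * 100:.0f}% above home-only estimate)")
```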

  13. Studies on the tryptophan requirement of lactating sows. Part 2: Estimation of the tryptophan requirement by physiological criteria.

    PubMed

    Pampuch, F G; Paulicks, B R; Roth-Maier, D A

    2006-12-01

    Mature sows were fed for a total of 72 lactations with diets which provided an adequate supply of energy and nutrients except for tryptophan (Trp). By supplementing a basal diet [native 1.2 g Trp/kg, equivalent to 0.8 g apparent ileal digestible (AID) Trp or 0.9 g true ileal digestible (TID) Trp] with L-Trp, five further diets (2-6) containing 1.5-4.2 g Trp/kg were formulated. The dietary Trp content had no effect on amino acid contents in milk on days 20 and 21 of lactation, but Trp in blood plasma on day 28 of lactation reflected the alimentary Trp supply with an increase from 2.74 +/- 1.14 mg/l (diet 1) to 23.91 +/- 7.53 mg/l (diet 6; p < 0.001). There were no directional differences between the diets with regard to the other amino acids. Concentrations of urea in milk and blood were higher with diet 1 (211 and 272 mg/l, respectively) than with diets 3-6 (183 and 227 mg/l, respectively). Serotonin levels in the blood serum were lower with diet 1 (304 ng/ml) than the average of diets 4-6 (540 ng/ml). This study confirms previously given recommendations for the Trp content in the diet of lactating sows, estimated by means of performance, of 1.9 g AID Trp (equivalent to 2.0 g TID Trp; approximately 2.6 g gross Trp) per kg diet.

  14. Estimated maintenance and repair requirements for coal-fired propulsion systems. Final report

    SciTech Connect

    Little, D.E.; Murtagh, M.M.

    1982-06-01

    This study was directed toward identifying unique maintenance and repair requirements in terms of manpower and materials for coal-fired steam turbine propulsion plant ships. The method of approach included surveys of industrial and marine coal-fired plant operators and coal-firing equipment manufacturers to obtain a data base of manpower and material requirements for a range of plant sizes and operating scenarios. A notional coal-fired plant was then developed and the maintenance data base adapted to the marine coal-fired system. From this information, the required crew manpower was determined and compared to that of typical oil-fired systems, and the associated manpower availability was evaluated. Material and contract manpower costs were assessed and parametric data developed to allow potential operators to estimate daily maintenance and repair costs.

  15. Estimation of the energy expenditure of grazing ruminants by incorporating dynamic body acceleration into a conventional energy requirement system.

    PubMed

    Miwa, M; Oishi, K; Anzai, H; Kumagai, H; Ieiri, S; Hirooka, H

    2017-02-01

    The estimation of energy expenditure (EE) of grazing animals is of great importance for efficient animal management on pasture. In the present study, a method is proposed to estimate EE in grazing animals based on measurements of body acceleration of animals in combination with the conventional Agricultural and Food Research Council (AFRC) energy requirement system. Three-dimensional body acceleration and heart rate were recorded for tested animals under both grazing and housing management. An acceleration index, vectorial dynamic body acceleration (VeDBA), was used to calculate an activity allowance (AC) during grazing and then incorporate it into the AFRC system to estimate the EE of the grazing animals (EE derived from VeDBA [EEVeDBA]). The method was applied to 3 farm ruminant species (7 cattle, 6 goats, and 4 sheep). Energy expenditure based on heart rate (EEHR) was also estimated as a reference. The results showed that larger VeDBA and heart rate values were obtained under grazing management, resulting in greater EEVeDBA and EEHR under grazing management than under housing management. There were large differences between the EE estimated from the 2 methods: EEHR values were greater than EEVeDBA (averages of 163.4 and 142.5% for housing and grazing management, respectively), whereas the increase in EEVeDBA under grazing in comparison with housing conditions was larger than that in EEHR. These differences may have been due to the use of an equation for estimating EE derived under laboratory conditions and due to the effects of physiological, psychological, and environmental factors, in addition to physical activity, being included in measurements for the heart rate method. The present method allowed us to separate activity-specific EE (i.e., AC) from overall EE, and, in fact, AC under grazing management was about twice as large as that under housing management for farm ruminant animals. There is evidence that the conventional energy
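
    A VeDBA-type index can be sketched from raw triaxial acceleration as below: the static (gravitational) component is approximated by a running mean on each axis, and VeDBA is the vector magnitude of the remaining dynamic components. The window length and the synthetic signal are illustrative assumptions, not the study's processing settings.

```python
# Sketch of how a VeDBA-type index can be computed from raw triaxial
# acceleration: the static (gravitational) component is approximated by a
# running mean on each axis, the remainder is the dynamic acceleration, and
# VeDBA is the vector magnitude of the three dynamic components. Window
# length and the synthetic signal are illustrative only.
import numpy as np

def vedba(ax, ay, az, window=25):
    """Return the per-sample vectorial dynamic body acceleration."""
    kernel = np.ones(window) / window
    dyn = []
    for axis in (ax, ay, az):
        static = np.convolve(axis, kernel, mode="same")   # running-mean estimate of gravity
        dyn.append(axis - static)
    dx, dy, dz = dyn
    return np.sqrt(dx**2 + dy**2 + dz**2)

# Synthetic 10 s record at 25 Hz: gravity on z plus some movement noise
rng = np.random.default_rng(1)
n = 250
ax = 0.1 * rng.standard_normal(n)
ay = 0.1 * rng.standard_normal(n)
az = 1.0 + 0.2 * rng.standard_normal(n)
print(f"Mean VeDBA: {vedba(ax, ay, az).mean():.3f} g")
```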

  16. Using Complier Average Causal Effect Estimation to Determine the Impacts of the Good Behavior Game Preventive Intervention on Teacher Implementers.

    PubMed

    Berg, Juliette K; Bradshaw, Catherine P; Jo, Booil; Ialongo, Nicholas S

    2017-07-01

    Complier average causal effect (CACE) analysis is a causal inference approach that accounts for levels of teacher implementation compliance. In the current study, CACE was used to examine one-year impacts of PAX good behavior game (PAX GBG) and promoting alternative thinking strategies (PATHS) on teacher efficacy and burnout. Teachers in 27 elementary schools were randomized to PAX GBG, an integration of PAX GBG and PATHS, or a control condition. There were positive overall effects on teachers' efficacy beliefs, but high implementing teachers also reported increases in burnout across the school year. The CACE approach may offer new information not captured using a traditional intent-to-treat approach.

  17. The average ethanol content of beer in the U.S. and individual states: estimates for use in aggregate consumption statistics.

    PubMed

    Kerr, William C; Greenfield, Thomas K

    2003-01-01

    The purpose of this study is to describe the variation in the ethanol content of beer and specific categories of beer, and to illustrate the importance of accurate assessment of ethanol conversion factors for the calculation of apparent ethanol consumption from beer at the state and national levels in the U.S. Published sources of brand-level ethanol content, national brand share of beer categories and state beer category market shares are utilized to (1) estimate the mean ethanol content of each beer category in the U.S. for 1995 and 2000, (2) calculate per capita apparent consumption of ethanol from beer for 1995 and 2000 and for each state in 2000 and (3) describe trends in ethanol content for specific beer brands during the 1990s. The mean ethanol content of beer in the U.S. is found to increase from 4.33% (by volume) in 1995 to 4.66% in 2000. Using these estimates, per capita ethanol from beer is found to increase from 1.386 gallons in 1995 to 1.468 gallons in 2000. Use of a constant ethanol conversion, however, indicates a decrease. Application of ethanol content estimates to state-level per capita consumption for 2000 changes the relative rankings of 28 states, compared to the use of a constant 4.5% ethanol conversion. Improved ethanol conversion factor estimates are found to influence both time trends and the cross-sectional ranking of states, suggesting that analyses of both cross-sectional and time series aggregate ethanol consumption data that fail to consider variation in the ethanol content of beer may be biased.
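
    The weighting scheme can be illustrated with a toy calculation of a market-share-weighted mean ethanol content and the resulting apparent per capita ethanol from beer; the categories, shares, and per capita beer volume below are invented, not the published market data.

```python
# Toy calculation of a market-share-weighted mean ethanol content for beer and
# the resulting apparent per capita ethanol consumption. Categories, shares,
# and volumes are invented; the method mirrors the weighting described above.
categories = {
    # category: (market share of beer volume, mean % alcohol by volume)
    "regular":   (0.60, 5.0),
    "light":     (0.30, 4.2),
    "malt/high": (0.10, 5.9),
}

mean_abv = sum(share * abv for share, abv in categories.values())   # % by volume
per_capita_beer_gal = 31.5                                           # gallons of beer per capita (assumed)
per_capita_ethanol_gal = per_capita_beer_gal * mean_abv / 100.0
print(f"Weighted mean ABV: {mean_abv:.2f}%  ->  {per_capita_ethanol_gal:.2f} gal ethanol per capita")
```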

  18. Irrigation Requirement Estimation using MODIS Vegetation Indices and Inverse Biophysical Modeling; A Case Study for Oran, Algeria

    NASA Technical Reports Server (NTRS)

    Bounoua, L.; Imhoff, M.L.; Franks, S.

    2008-01-01

    the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence with an average frequency of occurrence of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, the drip irrigation resulted in less frequent irrigation events with an average water requirement about 57% less than that simulated during the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of the canopy interception loss compared to the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country level estimates downward to 17% or less

  19. Dietary energy requirements in relatively healthy maintenance hemodialysis patients estimated from long-term metabolic studies

    PubMed Central

    Shah, Anuja; Bross, Rachelle; Shapiro, Bryan B; Morrison, Gillian; Kopple, Joel D

    2016-01-01

    Background: Studies that examined dietary energy requirements (DERs) of patients undergoing maintenance hemodialysis (MHD) have shown mixed results. Many studies reported normal DERs, but some described increased energy needs. DERs in MHD patients have been estimated primarily from indirect calorimetry and from nitrogen balance studies. The present study measured DERs in MHD patients on the basis of their dietary energy intake and changes in body composition. Objective: This study assessed DERs in MHD patients who received a constant energy intake while changes in their body composition were measured. Design: Seven male and 6 female sedentary, clinically stable MHD patients received a constant mean (±SD) energy intake for 92.2 ± 7.9 d while residing in a metabolic research ward. Changes in fat and fat-free mass, measured by dual-energy X-ray absorptiometry, were converted to calorie equivalents and added to energy intake to calculate energy requirements. Results: The average DER was 31 ± 3 kcal · kg−1 · d−1 calculated from energy intake and change in fat and fat-free calories, which was 28 ± 197 kcal/d over the 92 d of the study. DERs of MHD patients correlated strongly with their body weight (r = 0.81, P = 0.002) and less closely with their measured resting energy expenditure expressed as kcal/d (r = 0.69, P = 0.01). Although the average observed DER in MHD patients was similar to published estimated values for normal sedentary individuals of similar age and sex, there was wide variability in DER among individual patients (range: 26–36 kcal · kg−1 · d−1). Conclusions: Average DERs of sedentary, clinically stable patients receiving MHD are similar to those of sedentary normal individuals. Our data do not support the theory that MHD patients have increased DERs. Due to the high variability in DERs, careful monitoring of the nutritional status of individual MHD patients is essential. This trial was registered at clinicaltrials.gov as NCT02194114
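
    The underlying calculation, energy intake plus the energy equivalent of measured body-composition change expressed per kilogram per day, can be sketched as follows; the calorie-equivalence factors and example numbers are assumptions for illustration, not the study's data.

```python
# Sketch of the dietary energy requirement (DER) calculation used above:
# the energy equivalents of measured changes in fat and fat-free mass are
# combined with the constant energy intake and expressed per kg of body
# weight per day. The calorie-equivalence factors and the example numbers
# are assumptions, not the study's data.
KCAL_PER_KG_FAT = 9400.0        # assumed energy equivalent of fat mass change
KCAL_PER_KG_FFM = 1800.0        # assumed energy equivalent of fat-free mass change

def der_kcal_per_kg_day(intake_kcal_day, delta_fat_kg, delta_ffm_kg,
                        body_weight_kg, study_days):
    # Tissue gained represents stored (surplus) energy, so it is subtracted
    # from intake; tissue lost adds to the energy that must have been expended.
    stored = (delta_fat_kg * KCAL_PER_KG_FAT + delta_ffm_kg * KCAL_PER_KG_FFM) / study_days
    return (intake_kcal_day - stored) / body_weight_kg

example = der_kcal_per_kg_day(intake_kcal_day=2200, delta_fat_kg=-0.4,
                              delta_ffm_kg=0.1, body_weight_kg=70, study_days=92)
print(f"Estimated DER: {example:.1f} kcal/kg/day")
```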

  20. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
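
    The quoted probabilities follow from the Poisson result P(at least one event) = 1 − e^(−λ); the λ values in the sketch below are illustrative choices consistent with the reported percentages, not figures taken from the paper.

```python
# The Poisson probabilities quoted above follow directly from
# P(at least one event) = 1 - exp(-lambda) for an expected number of events
# lambda over the decade. The lambda values below are illustrative choices
# consistent with the reported probabilities, not figures from the paper.
import math

def p_at_least_one(lam):
    return 1.0 - math.exp(-lam)

for vei, lam in [(">=4", 7.0), (">=5", 0.67), (">=6", 0.20)]:
    print(f"VEI {vei}: expected {lam:.2f} events -> "
          f"P(>=1 in decade) = {p_at_least_one(lam):.0%}")
```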

  1. Estimating sugarcane water requirements for biofuel feedstock production in Maui, Hawaii using satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Anderson, R. G.; Wang, D.

    2011-12-01

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop evapotranspiration (ETc). Generic Kc values are available for many crop types but not for sugarcane in Maui, Hawaii, which grows on a relatively unstudied biennial cycle. In this study, an algorithm is being developed to estimate sugarcane Kc using normalized difference vegetation index (NDVI) derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A series of ASTER NDVI maps were used to depict canopy development over time or fractional canopy cover (fc) which was measured with a handheld multispectral camera in the fields during satellite overpass days. Canopy cover was correlated with NDVI values. Then the NDVI based canopy cover was used to estimate Kc curves for sugarcane plants. The remotely estimated Kc and ETc values were compared and validated with ground-truth ETc measurements. The approach is a promising tool for large scale estimation of evapotranspiration of sugarcane or other biofuel crops.
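
    The NDVI → Kc → ETc chain can be sketched as below; the linear NDVI-to-Kc mapping is a placeholder that would, in practice, be calibrated against the field measurements of fractional canopy cover described in the abstract.

```python
# Minimal sketch of the NDVI -> Kc -> ETc chain described above. The linear
# NDVI-to-Kc relation (slope, intercept) is a placeholder; in practice it
# would be calibrated against field measurements of fractional canopy cover.
def kc_from_ndvi(ndvi, slope=1.25, intercept=0.1):
    """Hypothetical linear mapping from NDVI to a crop coefficient."""
    return max(0.0, min(slope * ndvi + intercept, 1.3))

def crop_et(ndvi, eto_mm_day):
    """Actual crop evapotranspiration ETc = Kc * ETo (mm/day)."""
    return kc_from_ndvi(ndvi) * eto_mm_day

# Example: mid-season sugarcane pixel with NDVI 0.8 and reference ET of 6 mm/day
print(f"ETc = {crop_et(0.8, 6.0):.2f} mm/day")
```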

  2. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
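
    A worked example of the distinction: for two legs of equal distance, the true average fuel economy is the harmonic mean of the leg values, not their arithmetic mean.

```python
# Worked example of the distinction the article draws: averaging two fuel
# economy figures directly (arithmetic mean) versus the true average rate for
# equal distances, which is the harmonic mean. Numbers are illustrative.
mpg_leg1, mpg_leg2 = 20.0, 40.0

arithmetic_mean = (mpg_leg1 + mpg_leg2) / 2
harmonic_mean = 2 / (1 / mpg_leg1 + 1 / mpg_leg2)

print(f"Arithmetic mean: {arithmetic_mean:.1f} mpg")   # 30.0, overstates economy
print(f"Harmonic mean:   {harmonic_mean:.1f} mpg")     # 26.7, the actual average rate
```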

  4. Estimation of fan pressure ratio requirements and operating performance for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Gloss, B. B.; Nystrom, D.

    1981-01-01

    The National Transonic Facility (NTF), a fan-driven, transonic, pressurized, cryogenic wind tunnel, will operate over the Mach number range of 0.10 to 1.20 with stagnation pressures varying from 1.00 to about 8.8 atm and stagnation temperatures varying from 77 to 340 K. The NTF is cooled to cryogenic temperatures by the injection of liquid nitrogen into the tunnel stream with gaseous nitrogen as the test gas. The NTF can also operate at ambient temperatures using a conventional chilled water heat exchanger with air or nitrogen as the test gas. The methods used in estimating the fan pressure ratio requirements are described. The estimated NTF operating envelopes at Mach numbers from 0.10 to 1.20 are presented.

  5. Refugee issues. Summary of WFP / UNHCR guidelines for estimating food and nutritional requirements.

    PubMed

    1997-12-01

    In line with recent recommendations by WHO and the Committee on International Nutrition, WFP and UNHCR will now use 2100 kcal/person/day as the initial energy requirement for designing food aid rations in emergencies. In an emergency situation, it is essential to establish such a value to allow for rapid planning and response to the food and nutrition requirements of an affected population. An in-depth assessment is often not possible in the early days of an emergency, and an estimated value is needed to make decisions about the immediate procurement and shipment of food. The initial level is applicable only in the early stages of an emergency. As soon as demographic, health, nutritional and food security information is available, the estimated per capita energy requirements should be adjusted accordingly. Food rations should complement any food that the affected population is able to obtain on its own through activities such as agricultural production, trade, labor, and small business. An understanding of the various mechanisms used by the population to gain access to food is essential to give an accurate estimate of food needs. Therefore, a prerequisite for the design of a longer-term ration is a thorough assessment of the degree of self-reliance and level of household food security. Frequent assessments are necessary to adequately determine food aid needs on an ongoing basis. The importance of ensuring a culturally acceptable, adequate basic ration for the affected population at the onset of an emergency is considered to be one of the basic principles in ration design. The quality of the ration provided, particularly in terms of micronutrients, is stressed in the guidelines, and levels provided will aim to conform with standards set by other technical agencies.

  6. Continuous series of catchment-averaged sensible heat flux from a Large Aperture Scintillometer: efficient estimation of stability conditions and importance of fluxes under stable conditions

    NASA Astrophysics Data System (ADS)

    De Lathauwer, E.; Samain, B.; Defloor, W.; Pauwels, V. R.

    2011-12-01

    A Large Aperture Scintillometer (LAS) observes the intensity of the atmospheric turbulence across large distances, which is related to the path averaged sensible heat flux, H. This sensible heat flux can then easily be inverted into evapotranspiration rates using the surface energy balance. In this presentation, two problems in the derivation of continuous series of H from LAS data are investigated and the importance of nighttime H-fluxes is assessed. Firstly, as a LAS is unable to determine the sign of H, the transition from unstable to stable conditions is evaluated in order to make continuous H-series. Therefore, different algorithms to judge the atmospheric stability for a LAS installed over a distance of 9.5 km have been tested. The algorithm based on the diurnal cycle of the refractive index structure parameter, CN2, has been found to be very suitable and operationally the most appropriate. A second issue is the humidity correction for LAS data, which is performed by using the Bowen ratio (β). As β is taken from ground-based measurements with data gaps, the number of resulting H-values is reduced. Not including this humidity correction results in a marginal error in H, but increases the completeness of the resulting H-series. Applying these conclusions to the two-year time series of the LAS results in an almost continuous H-time series. As the majority of the time steps has been found to be under stable conditions, there is a clear impact of Hstable on H24h, the 24-h average of H. For stable conditions, Hstable values are mostly negative, and hence lower than the H = 0 assumption as is mostly adopted. For months where stable conditions prevail (winter), H24h is overestimated using this assumption, and calculation of Hstable is recommended.

  7. Continuous series of catchment-averaged sensible heat flux from a Large Aperture Scintillometer: efficient estimation of stability conditions and importance of fluxes under stable conditions.

    NASA Astrophysics Data System (ADS)

    Samain, B.; Defloor, W.; Pauwels, V. R. N.

    2012-04-01

    A Large Aperture Scintillometer (LAS) observes the intensity of the atmospheric turbulence across large distances, which is related to the path averaged sensible heat flux, H. Two problems in the derivation of continuous series of H from LAS data are investigated and the importance of nighttime H-fluxes is assessed. Firstly, as a LAS is unable to determine the sign of H, the transition from unstable to stable conditions is evaluated in order to make continuous H-series. Therefore, different algorithms to judge the atmospheric stability for a LAS installed over a distance of 9.5 km have been tested. The algorithm based on the diurnal cycle of the refractive index structure parameter, CN2, proves to be the most suitable operational algorithm. A second issue is the humidity correction for LAS data, which is performed by using the Bowen ratio (β). As β is taken from ground-based measurements with data gaps, the number of resulting H-values is reduced. Not including this humidity correction results in a marginal error in H, but increases the completeness of the resulting H-series. Applying these conclusions to the two-year time series of the LAS results in an almost continuous H-time series. As the majority of the time steps has been found to be under stable conditions, there is a clear impact of Hstable on H24h, the 24-h average of H. For stable conditions, Hstable values are mostly negative, and hence lower than the H = 0 W/m2 assumption as is mostly adopted. For months where stable conditions prevail (winter), H24h is overestimated using this assumption, and calculation of Hstable is recommended.

  8. Conservative estimation of whole-body-averaged SARs in infants with a homogeneous and simple-shaped phantom in the GHz region

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Ito, Naoki; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi

    2008-12-01

    We calculated the whole-body-averaged specific absorption rates (WBSARs) in a Japanese 9-month-old infant model and its corresponding homogeneous spheroidal and ellipsoidal models with 2/3 muscle tissue for 1-6 GHz far-field exposure. As a result, we found that in comparison with the WBSAR in the infant model, the ellipsoidal model with the same frontally projected area as that of the infant model provides an underestimate, whereas the ellipsoidal model with the same surface area yields an overestimate. In addition, the WBSARs in the homogenous infant models were found to be strongly affected by the electrical constant of tissue, and to be larger in the order of 2/3 muscle, skin and muscle tissues, regardless of the model shapes or polarization of incident waves. These findings suggest that the ellipsoidal model having the same surface area as that of the infant model and electrical constants of muscle tissue provides a conservative WBSAR over wide frequency bands. To confirm this idea, based on the Kaup index for Japanese 9-month-old infants, which is often used to represent the obesity of infants, we developed linearly reduced 9-month-old infant models and the corresponding muscle ellipsoidals and re-calculated their whole-body-averaged SARs with respect to body shapes. Our results reveal that the ellipsoidal model with the same surface area as that of a 9-month-old infant model gives a conservative WBSAR for different infant models, whose variability due to the model shape reaches 15%.

  9. A concise way to estimate the average density of interface states in an ITO-SiOx/n-Si heterojunction solar cell

    NASA Astrophysics Data System (ADS)

    Li, Y.; Han, B. C.; Gao, M.; Wan, Y. Z.; Yang, J.; Du, H. W.; Ma, Z. Q.

    2017-09-01

    On the basis of a photon-assisted high frequency capacitance-voltage (C-V) method (1 MHz C-V), an effective approach is developed to evaluate the average interface state density (Dit) of an ITO-SiOx/n-Si heterojunction structure. Tin-doped indium oxide (ITO) films with different thicknesses were directly deposited on (100) n-type crystalline silicon by magnetron sputtering to fabricate semiconductor-insulator-semiconductor (SIS) hetero-interface regions where an ultra-thin SiOx passivation layer was naturally created. The morphology of the SiOx layer was confirmed by X-ray photoelectron spectroscopy depth profiling and transmission electron microscope analysis. The thinness of this SiOx layer was the main reason for the SIS interface state density being more difficult to detect than that of a typical metal-oxide-semiconductor structure. A light was used for photon injection while measuring the C-V of the device, thus enabling the photon-assisted C-V measurement of the Dit. By quantifying decreases of the light-induced voltage as a variation of the capacitance caused by parasitic charge at interface states, the passivation quality within the interface of ITO-SiOx/n-Si could be reasonably evaluated. The average interface state density of these SIS devices was measured as 1.2-1.7 × 10^11 eV^-1 cm^-2 and declined as the passivation layer was made thicker. The lifetime of the minority carriers, dark leakage current, and the other photovoltaic parameters of the devices were also used to assess the passivation quality.

  10. Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.

  11. An estimate of minimum number of brain stem neurons required for inhibition of a flexion reflex.

    PubMed

    Hentall, I D; Zorman, G; Kansky, S; Fields, H L

    1984-05-01

    The tail-flick reflex elicited by noxious heat in lightly anesthetized rats is known to be prevented by trains of low-amplitude current pulses passed through a monopolar microelectrode in the rostromedial medulla (RMM). The effect of the distance from such an electrode on the threshold of cell bodies was described in the preceding paper (11). This paper estimates the density of cell bodies in the RMM and, subsequently, estimates the number of cell bodies excited by the aforementioned pulses, a figure whose upper bound is between 30 and 75. The mean chronaxy for suppression of tail flick was found to be 162 microseconds. Correspondingly, for activation of spikes in somata of the RMM, it was found to be 170 microseconds. The axons belonging to these somata, located in the spinal lateral columns, had mean chronaxies of 360 microseconds. These comparisons favor the idea that cell bodies in the RMM, not axons, mediate the suppression of tail flick. Other evidence for this conclusion is given in the text. Resting activity in the RMM was found to average 6.33 Hz. Thus if the inhibitory process depends only on the instantaneous sum of activity in the many thousands of RMM neurons, all nocifensive reflexes should be continuously suppressed. But since this is not so, the relative timing of spikes in the population may also be critical. The synchronizing effect of electrical stimulation then explains the low number of cells needed to prevent the reflex.

  12. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  13. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  14. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  15. Updated estimates of long-term average dissolved-solids loading in streams and rivers of the Upper Colorado River Basin

    USGS Publications Warehouse

    Tillman, Fred D; Anning, David W.

    2014-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigate over 4.5 million acres of farmland, and annually generate about 12 billion kilowatt-hours of hydroelectric power. The Upper Colorado River Basin, part of the Colorado River Basin, encompasses more than 110,000 mi2 and is the source of much of the more than 9 million tons of dissolved solids that annually flow past Hoover Dam. High dissolved-solids concentrations in the river are the cause of substantial economic damages to users, primarily in reduced agricultural crop yields and corrosion, with damages estimated to be greater than 300 million dollars annually. In 1974, the Colorado River Basin Salinity Control Act created the Colorado River Basin Salinity Control Program to investigate and implement a broad range of salinity control measures. A 2009 study by the U.S. Geological Survey, supported by the Salinity Control Program, used the Spatially Referenced Regressions on Watershed Attributes surface-water quality model to examine dissolved-solids supply and transport within the Upper Colorado River Basin. Dissolved-solids loads developed for 218 monitoring sites were used to calibrate the 2009 Upper Colorado River Basin Spatially Referenced Regressions on Watershed Attributes dissolved-solids model. This study updates and develops new dissolved-solids loading estimates for 323 Upper Colorado River Basin monitoring sites using streamflow and dissolved-solids concentration data through 2012, to support a planned Spatially Referenced Regressions on Watershed Attributes modeling effort that will investigate the contributions to dissolved-solids loads from irrigation and rangeland practices.

  16. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions

    PubMed Central

    Trites, Andrew W.; Rosen, David A. S.; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion—an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric—Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal—and they should

  17. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions.

    PubMed

    Ware, Colin; Trites, Andrew W; Rosen, David A S; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion-an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric-Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal-and they should also

  18. Preliminary estimates of galactic cosmic ray shielding requirements for manned interplanetary missions

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Nealy, John E.

    1988-01-01

    Estimates of radiation risk to the blood forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shield is required to reduce annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters, and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.

  19. Preliminary estimates of galactic cosmic ray shielding requirements for manned interplanetary missions

    SciTech Connect

    Townsend, L.W.; Wilson, J.W.; Nealy, J.E.

    1988-10-01

    Estimates of radiation risk to the blood forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shield is required to reduce annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters, and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.

  20. Estimation of nitrogen maintenance requirements and potential for nitrogen deposition in fast-growing chickens depending on age and sex.

    PubMed

    Samadi, F; Liebert, F

    2006-08-01

    Experiments were conducted to estimate daily N maintenance requirements (NMR) and the genetic potential for daily N deposition (NDmaxT) in fast-growing chickens depending on age and sex. In N-balance studies, 144 male and 144 female chickens (Cobb 500) were utilized in 4 consecutive age periods (I: 10 to 25 d; II: 30 to 45 d; III: 50 to 65 d; and IV: 70 to 85 d). The experimental diets contained high-protein soybean meal and crystalline amino acids as protein sources and 6 graded levels of protein supply (N1 = 6.6%; N2 = 13.0%; N3 = 19.6%; N4 = 25.1%; N5 = 31.8%; and N6 = 37.6% CP in DM). The connection between N intake and total N excretion was fitted for NMR determination by an exponential function. The average NMR value (252 mg of N per kg BW^0.67 per d) was applied for further calculation of NDmaxT as the threshold value of the function between N intake and daily N balance. For estimating the threshold value, the principle of the Levenberg-Marquardt algorithm within the SPSS program (Version 11.5) was applied. As a theoretical maximum for NDmaxT, 3,592, 2,723, 1,702, and 1,386 mg of N per kg BW^0.67 per d for male and 3,452, 2,604, 1,501, and 1,286 mg of N per kg BW^0.67 per d for female fast-growing chickens (corresponding to age periods I to IV) were obtained. The determined model parameters were the precondition for modeling of the amino acid requirement based on an exponential N-utilization model and depended on performance and dietary amino acid efficiency. This procedure will be further developed and applied in the subsequent paper.

  1. A synergistic approach using optical and SAR data to estimate crop's irrigation requirements

    NASA Astrophysics Data System (ADS)

    Rolim, João.; Navarro Ferreira, Ana; Saraiva, Cátia; Catalão, João.

    2016-10-01

    A study conducted in the scope of the Alcantara initiative in Angola showed that optical and SAR images allow the estimation of crop irrigation requirements (CIR) based on a soil water balance model (IrrigRotation). The methodology was applied to east-central Portugal to evaluate its transferability under different climatic conditions and crop types. SPOT-5 Take-5 and Sentinel-1A data from April to September 2015 are used to generate NDVI and backscattering maize crop time series. Both time series are then correlated, and a linear regression equation is computed for some maize parcels identified in the test area. Next, basal crop coefficients (Kcb) are determined empirically from the Kcb-NDVI relationships applied within the PLEIADeS project and also from the Kcb-SAR relationships retrieved from the linear fit of both EO data for other maize parcels. These Kcb make it possible to overcome a major drawback of the tabulated FAO Kcb, which are only available for the initial, mid and late season of a given crop type. More frequent Kcb values also allow a better identification of the lengths of the crop's phenological stages. CIR estimated from EO data are comparable to those obtained with tabulated FAO 56 Kcb values for crops produced under standard conditions, while for crops produced under suboptimal conditions, EO data improve the estimation of the CIR. Although the CIR results are promising, further research is required to improve the initial and end Kcb values and avoid overestimation of the CIR.

  2. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    PubMed Central

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future.

  3. An estimation of the average residence times and onshore-offshore diffusivities of beached microplastics based on the population decay of tagged meso- and macrolitter.

    PubMed

    Hinata, Hirofumi; Mori, Keita; Ohno, Kazuki; Miyao, Yasuyuki; Kataoka, Tomoya

    2017-09-15

    Residence times of microplastics were estimated based on the dependence of meso- and macrolitter residence times on their upward terminal velocities (UTVs) in the ocean obtained by one- and two-year mark-recapture experiments conducted on Wadahama Beach, Nii-jima Island, Japan. A significant linear relationship between the residence time and UTV was found in the velocity range of about 0.3-0.9 m s^-1, while there was no significant difference between the residence times obtained in the velocity range of about 0.9-1.4 m s^-1. This dependence on the UTV would reflect the uprush-backwash response of the target items to swash waves on the beach. By extrapolating the linear relationship down to the velocity range of microplastics, the residence times of microplastics and the 1D onshore-offshore diffusion coefficients were inferred, and are one to two orders of magnitude greater than the coefficients of the macroplastics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. An iterative procedure for estimating areally averaged heat flux using planetary boundary layer mixed layer height and locally measured heat flux

    SciTech Connect

    Coulter, R. L.; Gao, W.; Lesht, B. M.

    2000-04-04

    Measurements at the central facility of the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) are intended to verify, improve, and develop parameterizations in radiative flux models that are subsequently used in General Circulation Models (GCMs). The reliability of this approach depends upon the representativeness of the local measurements at the central facility for the site as a whole or on how these measurements can be interpreted so as to accurately represent increasingly large scales. The variation of surface energy budget terms over the SGP CART site is extremely large. Surface layer measurements of the sensible heat flux (H) often vary by a factor of 2 or more at the CART site (Coulter et al. 1996). The Planetary Boundary Layer (PBL) effectively integrates the local inputs across large scales; because the mixed layer height (h) is principally driven by H, it can, in principle, be used for estimates of surface heat flux over scales on the order of tens of kilometers. By combining measurements of h from radiosondes or radar wind profiles with a one-dimensional model of mixed layer height, they are investigating the ability to diagnose large-scale heat fluxes. The authors have developed a procedure using the model described by Boers et al. (1984) to investigate the effect of changes in surface sensible heat flux on the mixed layer height. The objective of the study is to invert the sense of the model.
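
    For orientation, a minimal sketch of the kind of inversion described here, reduced to a textbook slab (zero-order-jump) mixed-layer model rather than the Boers et al. (1984) model actually used by the authors: the growth relation dh/dt ≈ (1 + 2A) H / (rho cp gamma h) is inverted for the surface sensible heat flux H. The entrainment ratio A, lapse rate gamma, air density, and the example mixed-layer heights are illustrative assumptions, not values from the study.

      # Python sketch with assumed parameters; not the Boers et al. (1984) model itself.
      RHO = 1.1      # air density (kg m-3), assumed
      CP = 1005.0    # specific heat of dry air (J kg-1 K-1)
      A = 0.2        # entrainment flux ratio, assumed
      GAMMA = 0.005  # potential-temperature lapse rate above the mixed layer (K m-1), assumed

      def heat_flux_from_growth(h_mid, dh_dt):
          """Area-averaged sensible heat flux (W m-2) implied by mixed-layer growth."""
          return RHO * CP * GAMMA * h_mid * dh_dt / (1.0 + 2.0 * A)

      # Example: mixed layer deepening from 800 m to 1200 m over two hours
      # (heights as would be diagnosed from radiosonde or wind-profiler data).
      h1, h2, dt = 800.0, 1200.0, 2.0 * 3600.0
      print(heat_flux_from_growth(0.5 * (h1 + h2), (h2 - h1) / dt))  # roughly 220 W m-2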

  5. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way. Some authors have defined the crop-water production function between yield and the total amount of water applied, whereas others have defined it as a relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change in soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. In the other case, when irrigation water is used with low efficiency, the volume of water applied exceeds PET, and the excess over PET must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All operations, from importing the data to calculating the output data, were carried out by the eLEAF company and uploaded in Fieldlook web

  6. An Estimate of Diesel High-Efficiency Clean Combustion Impacts on FTP-75 Aftertreatment Requirements (SAE Paper Number 2006-01-3311)

    SciTech Connect

    Sluder, Scott; Wagner, Robert M

    2006-01-01

    A modified Mercedes 1.7-liter, direct-injection diesel engine was operated in both normal and high-efficiency clean combustion (HECC) combustion modes. Four steady-state engine operating points that were previously identified by the Ad-hoc fuels working group were used as test points to allow estimation of the hot-start FTP-75 emissions levels in both normal and HECC combustion modes. The results indicate that operation in HECC modes generally produces reductions in NOX and PM emissions at the expense of CO, NMHC, and H2CO emissions. The FTP emissions estimates indicate that aftertreatment requirements for NOX are reduced, while those for PM may not be impacted. Cycle-average aftertreatment requirements for CO, NMHC, and H2CO may be challenging, especially at the lowest temperature conditions.

  7. Laboratory estimation of degree-day developmental requirements of Phlebotomus papatasi (Diptera: Psychodidae).

    PubMed

    Kasap, Ozge Erisoz; Alten, Bulent

    2005-12-01

    Cutaneous leishmaniasis is one of the most important vector-borne endemic diseases in Turkey. The main objective of this study was to evaluate the influence of temperature on the developmental rates of one important vector of leishmaniasis, Phlebotomus papatasi (Scopoli, 1786) (Diptera: Psychodidae). Eggs from laboratory-reared colonies of Phlebotomus papatasi were exposed to six constant temperature regimes from 15 to 32 degrees C with a daylength of 14 h and relative humidity of 65-75%. No adult emergence was observed at 15 degrees C. Complete egg to adult development ranged from 27.89 +/- 1.88 days at 32 degrees C to 246.43 +/- 13.83 days at 18 degrees C. The developmental zero values were estimated to vary from 11.6 degrees C to 20.25 degrees C depending on life stages, and egg to adult development required 440.55 DD above 20.25 degrees C.
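
    As a worked illustration of the degree-day arithmetic reported above (a thermal constant of about 440.55 degree-days above a developmental zero of about 20.25 degrees C for egg-to-adult development), a linear degree-day model predicts development time as the thermal constant divided by the temperature excess over the developmental zero. The sketch below uses only those two reported constants; the rearing temperatures are chosen for illustration, and because the developmental zero actually varies by life stage the predictions will not exactly reproduce the observed durations.

      # Linear degree-day sketch using the egg-to-adult constants from the abstract.
      DEV_ZERO_C = 20.25      # developmental zero (deg C), egg-to-adult
      DEGREE_DAYS = 440.55    # thermal constant (degree-days) for egg-to-adult development

      def days_to_adult(temp_c):
          """Predicted development time (days) at a constant rearing temperature."""
          if temp_c <= DEV_ZERO_C:
              return float("inf")  # no complete development below the developmental zero
          return DEGREE_DAYS / (temp_c - DEV_ZERO_C)

      for t in (25.0, 28.0, 32.0):   # illustrative temperatures (deg C)
          print(f"{t} C: {days_to_adult(t):.1f} days")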

  8. Estimating the residency expansion required to avoid projected primary care physician shortages by 2035.

    PubMed

    Petterson, Stephen M; Liaw, Winston R; Tran, Carol; Bazemore, Andrew W

    2015-03-01

    The purpose of this study was to calculate the projected primary care physician shortage, determine the amount and composition of residency growth needed, and estimate the impact of retirement age and panel size changes. We used the 2010 National Ambulatory Medical Care Survey to calculate utilization of ambulatory primary care services and the US Census Bureau to project demographic changes. To determine the baseline number of primary care physicians and the number retiring at 66 years, we used the 2014 American Medical Association Masterfile. Using specialty board and American Osteopathic Association figures, we estimated the annual production of primary care residents. To calculate shortages, we subtracted the accumulated primary care physician production from the accumulated number of primary care physicians needed for each year from 2015 to 2035. More than 44,000 primary care physicians will be needed by 2035. Current primary care production rates will be unable to meet demand, resulting in a shortage in excess of 33,000 primary care physicians. Given current production, an additional 1,700 primary care residency slots will be necessary by 2035. A 10% reduction in the ratio of population per primary care physician would require more than 3,000 additional slots by 2035, whereas changing the expected retirement age from 66 years to 64 years would require more than 2,400 additional slots. To eliminate projected shortages in 2035, primary care residency production must increase by 21% compared with current production. Delivery models that shift toward smaller ratios of population to primary care physicians may substantially increase the shortage. © 2015 Annals of Family Medicine, Inc.

  9. Estimating the Residency Expansion Required to Avoid Projected Primary Care Physician Shortages by 2035

    PubMed Central

    Petterson, Stephen M.; Liaw, Winston R.; Tran, Carol; Bazemore, Andrew W.

    2015-01-01

    PURPOSE The purpose of this study was to calculate the projected primary care physician shortage, determine the amount and composition of residency growth needed, and estimate the impact of retirement age and panel size changes. METHODS We used the 2010 National Ambulatory Medical Care Survey to calculate utilization of ambulatory primary care services and the US Census Bureau to project demographic changes. To determine the baseline number of primary care physicians and the number retiring at 66 years, we used the 2014 American Medical Association Masterfile. Using specialty board and American Osteopathic Association figures, we estimated the annual production of primary care residents. To calculate shortages, we subtracted the accumulated primary care physician production from the accumulated number of primary care physicians needed for each year from 2015 to 2035. RESULTS More than 44,000 primary care physicians will be needed by 2035. Current primary care production rates will be unable to meet demand, resulting in a shortage in excess of 33,000 primary care physicians. Given current production, an additional 1,700 primary care residency slots will be necessary by 2035. A 10% reduction in the ratio of population per primary care physician would require more than 3,000 additional slots by 2035, whereas changing the expected retirement age from 66 years to 64 years would require more than 2,400 additional slots. CONCLUSIONS To eliminate projected shortages in 2035, primary care residency production must increase by 21% compared with current production. Delivery models that shift toward smaller ratios of population to primary care physicians may substantially increase the shortage. PMID:25755031

  10. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S. Mexico border and Central Asia and compared to estimates of irrigation water used.
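
    A minimal sketch of the "top-up to field capacity" bookkeeping described above, reduced to a single-layer daily soil-water bucket: each day, just enough water is added to return the store to field capacity, and the sum of those additions is taken as the minimum supplemental (irrigation) water. The real study performed this inside the SiB2 photosynthesis-conductance model driven by MODIS vegetation indices; the field capacity and forcing series below are illustrative assumptions only.

      # Hypothetical bucket model; not the SiB2-based procedure itself.
      FIELD_CAPACITY_MM = 120.0   # plant-available water at field capacity (mm), assumed

      def minimum_supplemental_water(precip_mm, et_mm, start_mm=FIELD_CAPACITY_MM):
          """Sum of daily top-ups (mm) needed to keep the root zone at field capacity."""
          store, irrigation_total = start_mm, 0.0
          for p, et in zip(precip_mm, et_mm):
              store = min(store + p - et, FIELD_CAPACITY_MM)      # excess drains as runoff/percolation
              if store < FIELD_CAPACITY_MM:
                  irrigation_total += FIELD_CAPACITY_MM - store   # add water to remove water stress
                  store = FIELD_CAPACITY_MM
          return irrigation_total

      # Ten hypothetical rain-free days with 6 mm/day of evapotranspiration.
      print(minimum_supplemental_water([0.0] * 10, [6.0] * 10))   # -> 60.0 mm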

  11. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S. Mexico border and Central Asia and compared to estimates of irrigation water used.

  12. Estimates of the wind speeds required for particle motion on Mars

    NASA Technical Reports Server (NTRS)

    Pollack, J. B.; Haberle, R.; Greeley, R.; Iversen, J.

    1976-01-01

    Threshold wind speeds for setting particles into motion on Mars are estimated by evaluating experimentally observed threshold friction velocities and determining the ratio of this velocity to the threshold wind speed at the top of earth's atmospheric boundary layer (ABL). Turning angles between the direction of the wind at the top of the ABL and the wind stress at the surface are also estimated. Detailed consideration is given to the dependence of the threshold wind speed at the top of the ABL on particle diameter, surface pressure, air temperature, atmospheric stability and composition, surface roughness, and interparticle cohesion. The results are applied to interpret a number of phenomena that have been observed on Mars and are attributable to aeolian processes. It is shown that: (1) minimum threshold wind speeds of about 50 to 100 m/sec are required to cause particle motion on Mars under 'favorable' conditions; (2) particle motion should be infrequent and strongly correlated with proximity to small topographical features; (3) in general, particle motion occurs more readily at night than during the day, in winter polar areas than equatorial areas around noon, and for H2O or CO2 ice particles than for silicate particles; and (4) the boundary between saltating and suspendible particles is located at a particle diameter of about 100 microns.

  13. A new remote sensing procedure for the estimation of crop water requirements

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, M.; Loukas, A.; Mylopoulos, N.

    2015-06-01

    The objective of this work is the development of a new approach for the estimation of water requirements for the most important crops located at Karla Watershed, central Greece. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used as a basis for the derivation of actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat ETM+ imagery. MODIS imagery has also been used, and a spatial downscaling procedure is followed between the two sensors for the derivation of a new NDVI product with a spatial resolution of 30 m x 30 m. GER 1500 spectro-radiometric measurements are additionally conducted during the 2012 growing season. Cotton, alfalfa, corn and sugar beet fields are utilized, based on land use maps derived from previous Landsat 7 ETM+ images. A filtering process is then applied to derive NDVI values after acquiring Landsat ETM+ based reflectance values from the GER 1500 device. ETrF vs NDVI relationships are produced and then applied to the previous satellite-based downscaled product in order to finally derive a 30 m x 30 m daily ETrF map for the study area. The CropWat model (FAO) is then applied, taking as input the new crop coefficient values, available at a spatial resolution of 30 m x 30 m for every crop. CropWat finally returns daily crop water requirements (mm) for every crop, and the results are analyzed and discussed.
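
    To make the Kcb-from-NDVI step concrete, the sketch below applies a linear Kcb = a*NDVI + b relation per day and accumulates a FAO-56-style net irrigation requirement as the basal crop evapotranspiration in excess of effective rainfall. The regression coefficients, ET0 values and rainfall series are placeholders, not the relationships fitted in this study from METRIC ETrF and GER 1500 measurements, and soil evaporation is ignored.

      # Hypothetical coefficients of a linear Kcb-NDVI fit (placeholders).
      A_COEF, B_COEF = 1.15, -0.15

      def kcb_from_ndvi(ndvi):
          """Basal crop coefficient from NDVI via a linear relation, clipped at zero."""
          return max(A_COEF * ndvi + B_COEF, 0.0)

      def crop_irrigation_requirement(ndvi_series, et0_series, eff_rain_series):
          """Net irrigation requirement (mm): sum of daily Kcb*ET0 minus effective rain."""
          cir = 0.0
          for ndvi, et0, rain in zip(ndvi_series, et0_series, eff_rain_series):
              etc = kcb_from_ndvi(ndvi) * et0        # basal crop ET for the day
              cir += max(etc - rain, 0.0)
          return cir

      # Three illustrative days of NDVI, reference ET (mm) and effective rain (mm).
      print(crop_irrigation_requirement([0.6, 0.7, 0.8], [6.0, 6.5, 7.0], [0.0, 2.0, 0.0]))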

  14. Estimation of the minimum food requirement using the respiration rate of medusa of Aurelia aurita in Sihwa Lake

    NASA Astrophysics Data System (ADS)

    Han, Chang-hoon; Chae, Jinho; Jin, Jonghyeok; Yoon, Wonduk

    2012-06-01

    We examined the respiration rate of Aurelia aurita medusae at 20 °C and 28 °C to evaluate the minimum metabolic demands of the medusa population in Sihwa Lake, Korea during summer. While weight-specific respiration rates of medusae were constant regardless of wet weight (8-220 g), they varied significantly with temperature (p<0.001; 0.11±0.03 mg C g^-1 of medusa d^-1 at 20 °C and 0.28±0.11 mg C g^-1 of medusa d^-1 at 28 °C on average, with a Q10 value of 2.62). The respiration rate of medusae was defined as a function of temperature (T, °C) and body weight (W, g) according to the equation R = 0.13 × 2.62^((T-20)/10) × W^0.93. The population minimum food requirement (PMFR) was estimated from the respiration rate as 15.06 and 4.86 mg C m^-3 d^-1 in June and July, respectively. During this period, increases in bell diameter and wet weight were not significant (p=1 in both), suggesting that the estimated PMFR closely represented the actual food consumption in the field. From July to August, medusae grew significantly at 0.052 d^-1, so the amount of food ingested by the medusa population in situ likely exceeded the PMFR (1.27 mg C m^-3 d^-1) during that period. In conclusion, the higher-density medusa population during June and July had a limited amount of food, while the lower-density population in July and August ingested enough food for growth.
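
    The fitted respiration function above can be turned directly into a population minimum food requirement by multiplying individual respiration by medusa density; the sketch below does exactly that. The respiration equation and its constants come from the abstract, while the medusa weights, density and the simple "mean respiration times density" scaling are illustrative assumptions.

      def respiration_mg_c_per_day(wet_weight_g, temp_c):
          """Individual respiration (mg C per medusa per day): R = 0.13 * 2.62**((T-20)/10) * W**0.93."""
          return 0.13 * 2.62 ** ((temp_c - 20.0) / 10.0) * wet_weight_g ** 0.93

      def pmfr_mg_c_m3_day(weights_g, density_ind_m3, temp_c):
          """Population minimum food requirement (mg C m-3 d-1): mean individual respiration x density."""
          mean_r = sum(respiration_mg_c_per_day(w, temp_c) for w in weights_g) / len(weights_g)
          return mean_r * density_ind_m3

      weights = [50.0, 100.0, 150.0]   # hypothetical wet weights (g), within the 8-220 g range studied
      print(pmfr_mg_c_m3_day(weights, density_ind_m3=1.5, temp_c=28.0))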

  15. The Weighted Average Method 'WAM' for dental age estimation: a simpler method for children at the 10 year threshold: "it is vain to do with more when less will suffice" (William of Ockham, 1288-1358).

    PubMed

    Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S

    2014-08-01

    The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies. It has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-td, x-tds, sd-tds, se-tds, 1/sd-tds, 1/se-tds was prepared in an Excel worksheet, and the different weighted mean values were reported. In addition, the Fixed Effects and Random Effects models for Meta-Analysis were used and applied to the same data sets. In conclusion, it has been shown that the most accurate age estimation method is to use the Random Effects Model for the mathematical procedures.
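
    One of the weightings compared above, the fixed-effects (inverse-variance) weighted mean, is easy to state explicitly: each tooth-based age estimate is weighted by 1/se^2, and the pooled standard error is the square root of the reciprocal of the summed weights. The sketch below shows that calculation; the per-tooth ages and standard errors are invented for illustration and are not the study's reference data.

      from math import sqrt

      def fixed_effects_weighted_age(estimates, standard_errors):
          """Pooled age estimate and its standard error using inverse-variance weights (1/se**2)."""
          weights = [1.0 / se ** 2 for se in standard_errors]
          pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
          pooled_se = sqrt(1.0 / sum(weights))
          return pooled, pooled_se

      ages = [9.4, 10.1, 9.8, 10.4]   # hypothetical ages (years) indicated by four developing teeth
      ses = [0.8, 0.5, 0.6, 0.9]      # hypothetical standard errors of those tooth-specific estimates
      print(fixed_effects_weighted_age(ages, ses))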

  16. Constraints on LISA Pathfinder’s self-gravity: design requirements, estimates and testing procedures

    NASA Astrophysics Data System (ADS)

    Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Bursi, A.; Caleno, M.; Cavalleri, A.; Cesarini, A.; Cruise, M.; Danzmann, K.; de Deus Silva, M.; Desiderio, D.; Piersanti, E.; Diepholz, I.; Dolesi, R.; Dunbar, N.; Ferraioli, L.; Ferroni, V.; Fitzsimons, E.; Flatscher, R.; Freschi, M.; Gallegos, J.; García Marirrodriga, C.; Gerndt, R.; Gesa, L.; Gibert, F.; Giardini, D.; Giusteri, R.; Grimani, C.; Grzymisch, J.; Harrison, I.; Heinzel, G.; Hewitson, M.; Hollington, D.; Hueller, M.; Huesler, J.; Inchauspé, H.; Jennrich, O.; Jetzer, P.; Johlander, B.; Karnesis, N.; Kaune, B.; Korsakova, N.; Killow, C.; Lloro, I.; Liu, L.; López-Zaragoza, J. P.; Maarschalkerweerd, R.; Madden, S.; Mance, D.; Martín, V.; Martin-Polo, L.; Martino, J.; Martin-Porqueras, F.; Mateos, I.; McNamara, P. W.; Mendes, J.; Mendes, L.; Moroni, A.; Nofrarias, M.; Paczkowski, S.; Perreur-Lloyd, M.; Petiteau, A.; Pivato, P.; Plagnol, E.; Prat, P.; Ragnit, U.; Ramos-Castro, J.; Reiche, J.; Romera Perez, J. A.; Robertson, D.; Rozemeijer, H.; Rivas, F.; Russano, G.; Sarra, P.; Schleicher, A.; Slutsky, J.; Sopuerta, C. F.; Sumner, T.; Texier, D.; Thorpe, J. I.; Tomlinson, R.; Trenkel, C.; Vetrugno, D.; Vitale, S.; Wanner, G.; Ward, H.; Warren, C.; Wass, P. J.; Wealthy, D.; Weber, W. J.; Wittchen, A.; Zanoni, C.; Ziegler, T.; Zweifel, P.

    2016-12-01

    LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1) where the LISA Technology Package (LTP), which is the main science payload, will be tested. LTP achieves measurements of differential acceleration of free-falling test masses (TMs) with sensitivity below 3 × 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite element calculation tool estimates. In this paper, we will introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.

  17. Electrofishing effort required to estimate biotic condition in southern Idaho Rivers

    USGS Publications Warehouse

    Maret, Terry R.; Ott, Douglas S.; Herlihy, Alan T.

    2007-01-01

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions.

  18. Assessment of radar resolution requirements for soil moisture estimation from simulated satellite imagery. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.

    1982-01-01

    Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.

  19. Constraints on LISA Pathfinder's Self-Gravity: Design Requirements, Estimates and Testing Procedures

    NASA Technical Reports Server (NTRS)

    Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, M.; Brandt, Nico; Bursi, Alessandro; Slutsky, J.; et al.

    2016-01-01

    LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1) where the LISA Technology Package (LTP), which is the main science payload, will be tested. LTP achieves measurements of differential acceleration of free-falling test masses (TMs) with sensitivity below 3 x 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite element calculation tool estimates. In this paper, we will introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.

  20. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and to assess the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be

  1. Self-similarity of higher-order moving averages

    NASA Astrophysics Data System (ADS)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus, trends at different time scales can be obtained from data sets of the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
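
    A minimal sketch of the underlying detrending moving average (DMA) estimator, in its standard (lowest-order) form that the paper generalizes to higher-order polynomials: the rms deviation of the series from its n-point moving average is computed for several window sizes n, and the Hurst exponent H is the slope of log sigma_DMA(n) versus log n. The random-walk test series below is only for illustration (its expected H is about 0.5).

      import numpy as np

      def dma_hurst(y, windows):
          """Hurst exponent from the scaling sigma_DMA(n) ~ n**H of the moving-average residual."""
          y = np.asarray(y, dtype=float)
          sigmas = []
          for n in windows:
              ma = np.convolve(y, np.ones(n) / n, mode="valid")   # simple n-point moving average
              residual = y[n - 1:] - ma                           # deviation of the series from it
              sigmas.append(np.sqrt(np.mean(residual ** 2)))
          slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
          return slope

      rng = np.random.default_rng(0)
      walk = np.cumsum(rng.standard_normal(20000))                # Brownian-like test series
      print(dma_hurst(walk, [8, 16, 32, 64, 128, 256]))           # expect roughly 0.5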

  2. Deterministic estimate of hypocentral pore fluid pressure of the M5.8 Pawnee, Oklahoma earthquake: Lower pre-injection pressure requires lower resultant pressure for slip

    NASA Astrophysics Data System (ADS)

    Levandowski, W. B.; Walsh, F. R. R.; Yeck, W.

    2016-12-01

    Quantifying the increase in pore-fluid pressure necessary to cause slip on specific fault planes can provide actionable information for stakeholders to potentially mitigate hazard. Although the M5.8 Pawnee earthquake occurred on a previously unmapped fault, we can retrospectively estimate the pore-pressure perturbation responsible for this event. We first estimate the normalized local stress tensor by inverting focal mechanisms surrounding the Pawnee Fault. Faults are generally well oriented for slip, with instabilities averaging 96% of maximum. Next, with an estimate of the weight of local overburden we solve for the pore pressure needed at the hypocenters. Specific to the Pawnee fault, we find that hypocentral pressure 43-104% of hydrostatic (accounting for uncertainties in all relevant parameters) would have been sufficient to cause slip. The dominant source of uncertainty is the pressure on the fault prior to fluid injection. Importantly, we find that lower pre-injection pressure requires lower resultant pressure to cause slip, decreasing from a regional average of 30% above hydrostatic pressure if the hypocenters begin at hydrostatic pressure to 6% above hydrostatic pressure with no pre-injection fluid. This finding suggests that underpressured regions such as northern Oklahoma are predisposed to injection-induced earthquakes. Although retrospective and forensic, similar analyses of other potentially induced events and comparisons to natural earthquakes will provide insight into the relative importance of fault orientation, the magnitude of the local stress field, and fluid-pressure migration in intraplate seismicity.
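
    The "pore pressure needed at the hypocenters" calculation can be illustrated with the Coulomb failure criterion: for resolved shear and normal tractions tau and sigma_n on the fault and a friction coefficient mu, slip on a cohesionless fault requires tau >= mu (sigma_n - P), so the required pore pressure is P = sigma_n - tau / mu, which can then be expressed as a fraction of hydrostatic pressure at the hypocentral depth. The tractions, friction coefficient and depth in the sketch below are illustrative assumptions, not the values the authors obtained from their focal-mechanism stress inversion.

      RHO_W, G = 1000.0, 9.81   # water density (kg m-3) and gravity (m s-2)

      def required_pore_pressure(sigma_n_mpa, tau_mpa, mu=0.6):
          """Pore pressure (MPa) at which a cohesionless fault reaches Coulomb failure."""
          return sigma_n_mpa - tau_mpa / mu

      def fraction_of_hydrostatic(p_mpa, depth_km):
          """Required pressure relative to hydrostatic pressure at that depth."""
          hydrostatic_mpa = RHO_W * G * depth_km * 1e3 / 1e6
          return p_mpa / hydrostatic_mpa

      p_req = required_pore_pressure(sigma_n_mpa=90.0, tau_mpa=40.0, mu=0.6)   # hypothetical tractions
      print(p_req, fraction_of_hydrostatic(p_req, depth_km=5.5))               # hypothetical depth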

  3. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging applied to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  4. PCK and Average

    ERIC Educational Resources Information Center

    Watson, Jane; Callingham, Rosemary

    2013-01-01

    This paper considers the responses of 26 teachers to items exploring their pedagogical content knowledge (PCK) about the concept of average. The items explored teachers' knowledge of average, their planning of a unit on average, and their understanding of students as learners in devising remediation for two student responses to a problem. Results…

  5. Estimating the Reliability of Dynamic Variables Requiring Rater Judgment: A Generalizability Paradigm.

    ERIC Educational Resources Information Center

    Webber, Larry; And Others

    Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool.…

  6. Estimation of the left ventricular relaxation time constant tau requires consideration of the pressure asymptote.

    PubMed

    Langer, S F J; Habazettl, H; Kuebler, W M; Pries, A R

    2005-01-01

    The left ventricular isovolumic pressure decay, obtained by cardiac catheterization, is widely characterized by the time constant tau of the exponential regression p(t) = Pomega + (P0 - Pomega) exp(-t/tau). However, several authors prefer to preset Pomega = 0 instead of co-estimating the pressure asymptote empirically; others present tau values estimated by both methods, which often leads to discordant results and interpretation of lusitropic changes. The present study aims to clarify the relations between the tau estimates from both methods and to decide which is the more reliable estimate. The effect of presetting a zero asymptote on the tau estimate was investigated mathematically and empirically, based on left ventricular pressure decay data from isolated ejecting rat and guinea pig hearts at different preload and during spontaneous decrease of cardiac function. Estimating tau with a preset Pomega = 0 always yields smaller values than the regression with an empirically estimated asymptote if the latter is negative, and vice versa. The sequences of tau estimates from both methods can therefore run in opposite directions if tau and Pomega change in opposite directions between the measurements. This is exemplified by data obtained during an increasing preload in spontaneously depressed isolated hearts. The estimation of the time constant of isovolumic pressure fall with a preset zero asymptote is heavily biased and cannot be used for comparing the lusitropic state of the heart in hemodynamic conditions with considerably altered pressure asymptotes.
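
    The bias discussed above is easy to reproduce numerically: fit the same isovolumic pressure decay once with the asymptote estimated freely and once with it forced to zero, and compare the two tau values. In the sketch below the synthetic decay (tau = 40 ms, asymptote = -5 mmHg) and the noise level are illustrative assumptions; with a negative true asymptote the zero-asymptote fit returns the smaller tau, as stated in the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, p_inf, p0, tau):
          """p(t) = P_inf + (P0 - P_inf) * exp(-t / tau)."""
          return p_inf + (p0 - p_inf) * np.exp(-t / tau)

      t = np.linspace(0.0, 0.12, 61)                                  # time (s)
      p_true = decay(t, p_inf=-5.0, p0=80.0, tau=0.040)               # synthetic decay (mmHg)
      p_obs = p_true + np.random.default_rng(1).normal(0.0, 0.3, t.size)

      popt_free, _ = curve_fit(decay, t, p_obs, p0=[0.0, 80.0, 0.03])             # asymptote estimated
      popt_zero, _ = curve_fit(lambda tt, p0, tau: decay(tt, 0.0, p0, tau),
                               t, p_obs, p0=[80.0, 0.03])                         # asymptote preset to 0

      print("tau with estimated asymptote:", popt_free[2])
      print("tau with preset zero asymptote:", popt_zero[1])   # smaller here, since the true asymptote is negative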

  7. Number of trials required to estimate a free-energy difference, using fluctuation relations

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations.
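
    The estimator at issue follows from the Jarzynski equality, exp(-ΔF/kT) = <exp(-W/kT)>, averaged over repeated nonequilibrium trials. The Python sketch below applies that standard estimator to synthetic Gaussian work samples to show how small samples are dominated by rare low-work trials; it does not reproduce the paper's Rényi-entropy bound, and all parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      kT = 1.0          # thermal energy (arbitrary units)
      dF_true = 2.0     # true free-energy difference
      sigma_W = 1.5     # spread of the work distribution (assumed Gaussian)

      # For Gaussian work, the Jarzynski equality fixes mean(W) = dF + sigma_W**2 / (2*kT).
      def sample_work(n):
          return rng.normal(dF_true + sigma_W**2 / (2 * kT), sigma_W, n)

      # Jarzynski estimator: dF_hat = -kT * log( mean( exp(-W/kT) ) ).
      for n_trials in (10, 100, 1000, 100000):
          W = sample_work(n_trials)
          dF_hat = -kT * np.log(np.mean(np.exp(-W / kT)))
          print(f"{n_trials:6d} trials -> dF estimate {dF_hat:+.3f} (true {dF_true:+.3f})")
      # Small samples are biased upward because the rare low-work trials that dominate the
      # exponential average are seldom observed, which is why the required number of trials matters.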

  8. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  9. Proof of age required--estimating age in adults without birth records.

    PubMed

    Phillips, Christine; Narayanasamy, Shanti

    2010-07-01

    Many adults from refugee source countries do not have documents of birth, either because they have been lost in flight, or because the civil infrastructure is too fragile to support routine recording of birth. In Western countries, date of birth is used as a basic identifier, and access to services and support tends to be age regulated. Doctors are not infrequently asked to write formal reports estimating the true age of adult refugees; however, there are no existing guidelines to assist in this task. This article provides an overview of methods to estimate age in living adults and outlines recommendations for best practice. Age should be estimated through physical examination; life history, matching local or national events with personal milestones; and existing nonformal documents. Accuracy of age estimation should be subject to three tests: biological plausibility, historical plausibility, and corroboration from reputable sources.

  10. Physically-based Methods for the Estimation of Crop Water Requirements from E.O. Optical Data

    USDA-ARS?s Scientific Manuscript database

    The estimation of evapotranspiration (ET) represents the basic information for the evaluation of crop water requirements. A widely used method to compute ET is based on the so-called "crop coefficient" (Kc), defined as the ratio of total evapotranspiration to reference evapotranspiration (ET0). The val...

  11. Utility of multi temporal satellite images for crop water requirements estimation and irrigation management in the Jordan Valley

    USDA-ARS?s Scientific Manuscript database

    Identifying the spatial and temporal distribution of crop water requirements is a key for successful management of water resources in the dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ETO) in the Jordan Valley according to the...

  12. Estimated quantitative amino acid requirements for Florida pompano reared in low-salinity

    USDA-ARS?s Scientific Manuscript database

    As with most marine carnivores, Florida pompano require relatively high crude protein diets to obtain optimal growth. Precision formulations to match the dietary indispensable amino acid (IAA) pattern to a species’ requirements can be used to lower the overall dietary protein. However IAA requirem...

  13. A Decision Tool to Evaluate Budgeting Methodologies for Estimating Facility Recapitalization Requirements

    DTIC Science & Technology

    2008-03-01

    Thesis presented to the Faculty of the Department of Systems and Engineering Management, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in partial fulfillment of the requirements for the degree of Master of Science in Engineering Management. Krista M. Hickman, BS, Captain, USAF, March 2008. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION

  14. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  15. Development of Procedures for Generating Alternative Allied Health Manpower Requirements and Supply Estimates.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    This report presents results of a project to assess the adequacy of existing data sources on the supply of 21 allied health occupations in order to develop improved data collection strategies and improved procedures for estimation of manpower needs. Following an introduction, chapter 2 provides a discussion of the general phases of the project and…

  16. Minimizing instrumentation requirement for estimating crop water stress index and transpiration of maize

    USDA-ARS?s Scientific Manuscript database

    Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...

  17. Accounting for reporting fatigue is required to accurately estimate incidence in voluntary reporting health schemes.

    PubMed

    Gittins, Matthew; McNamee, Roseanne; Holland, Fiona; Carter, Lesley-Anne

    2017-01-01

    Accurate estimation of the true incidence of ill-health is a goal of many surveillance systems. In surveillance schemes including zero reporting to remove ambiguity with nonresponse, reporter fatigue might increase the likelihood of a false zero case report, in turn underestimating the true incidence rate and creating a biased downward trend over time. Multilevel zero-inflated negative binomial models were fitted to incidence case reports of three surveillance schemes running between 1996 and 2012 in the United Kingdom. Estimates of the true annual incidence rates were produced by weighting the reported number of cases by the predicted excess zero rate in addition to the within-scheme standard adjustment for the response rate and the participation rate. Time since joining the scheme was associated with the odds of excess zero case reports for most schemes, resulting in weaker calendar trends. Estimated incidence rates (95% confidence intervals) per 100,000 person-years approximately doubled to 30 (21-39), 137 (116-157), and 33 (27-39) when the excess-zero-rate adjustment was applied. If we accept that excess zeros are in reality nonresponse by busy reporters, then usual estimates of incidence are likely to be significantly underestimated and downward trends previously thought to be strong overestimated.
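
    One plausible reading of the weighting described above (an illustration, not the authors' exact model) is that the reported count is divided by the response rate, the participation rate, and the predicted probability that a returned zero report is genuine. A minimal Python sketch with made-up figures:

      # Hypothetical reporting-scheme figures (illustrative only).
      reported_cases = 120        # cases reported in a year
      response_rate = 0.80        # fraction of report cards returned
      participation_rate = 0.60   # fraction of eligible reporters enrolled
      excess_zero_rate = 0.25     # predicted probability a returned "zero" reflects fatigue

      # Weight the reported count by each source of under-ascertainment.
      genuine_zero_rate = 1.0 - excess_zero_rate
      adjusted_cases = reported_cases / (response_rate * participation_rate * genuine_zero_rate)

      person_years = 400_000
      incidence_per_100k = 100_000 * adjusted_cases / person_years
      print(f"adjusted cases: {adjusted_cases:.0f}")
      print(f"estimated incidence: {incidence_per_100k:.1f} per 100,000 person-years")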

  18. Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho

    USGS Publications Warehouse

    Watkins, Carson J.; Quist, Michael; Shepard, Bradley B.; Ireland, Susan C.

    2016-01-01

    This study was conducted on the Kootenai River, Idaho to provide insight on sampling requirements to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Sidechannel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring effort in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.

  19. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    SciTech Connect

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    2016-11-23

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  20. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    DOE PAGES

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    2016-11-23

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  1. Shadow Radiation Shield Required Thickness Estimation for Space Nuclear Power Units

    NASA Astrophysics Data System (ADS)

    Voevodina, E. V.; Martishin, V. M.; Ivanovsky, V. A.; Prasolova, N. O.

    The paper concerns the theoretical possibility, from the perspective of radiation safety, of astronauts visiting orbital transport vehicles based on a nuclear power unit and an electric propulsion system in Earth orbit in order to work with the payload. The possible time the crew could stay in the payload area of such orbital transport vehicles has been estimated for different powers of the reactor, which is an integral part of the nuclear power unit.

  2. Where can pixel counting area estimates meet user-defined accuracy requirements?

    NASA Astrophysics Data System (ADS)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.

  3. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
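
    The role of the regularization parameter can be seen from the standard Tikhonov-regularized minimum norm estimate, x_hat = L'(LL' + lambda*I)^(-1) y, for a lead field L and sensor data y. The toy Python sketch below (random lead field, two simulated sources; not the paper's Monte-Carlo setup) shows how one and the same lambda affects source power and source-to-source coupling differently:

      import numpy as np

      rng = np.random.default_rng(2)
      n_sensors, n_sources, n_times = 32, 200, 500

      # Toy lead field and two active sources with partially shared time courses (illustrative).
      L = rng.normal(size=(n_sensors, n_sources))
      x_true = np.zeros((n_sources, n_times))
      s = np.sin(2 * np.pi * 10 * np.arange(n_times) / 250.0)
      x_true[10] = s
      x_true[150] = 0.8 * s + 0.2 * rng.normal(size=n_times)
      y = L @ x_true + 0.5 * rng.normal(size=(n_sensors, n_times))

      def mne(y, L, lam):
          # Tikhonov-regularized minimum norm estimate of the source time series.
          G = L @ L.T + lam * np.eye(L.shape[0])
          return L.T @ np.linalg.solve(G, y)

      for lam in (1e-2, 1e0, 1e2):
          x_hat = mne(y, L, lam)
          power = np.var(x_hat[10])
          coupling = np.corrcoef(x_hat[10], x_hat[150])[0, 1]
          print(f"lambda={lam:7.2f}  source power={power:8.3f}  source correlation={coupling:+.2f}")
      # The paper's point is that the lambda that best recovers power need not be the
      # lambda that best recovers coupling between sources.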

  4. Aggregation and Averaging.

    ERIC Educational Resources Information Center

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  5. Averaging Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Tegai, S. Ph.; Drobov, I. V.

    2017-07-01

    We tried to average the Schwarzschild solution for the gravitational point source by analogy with the same problem in Newtonian gravity or electrostatics. We expected to get a similar result, consisting of two parts: the smoothed interior part being a sphere filled with some matter content and an empty exterior part described by the original solution. We considered several variants of generally covariant averaging schemes. The averaging of the connection in the spirit of Zalaletdinov's macroscopic gravity gave unsatisfactory results. With the transport operators proposed in the literature it did not give the expected Schwarzschild solution in the exterior part of the averaged spacetime. We were able to construct a transport operator that preserves the Newtonian analogy for the outward region but such an operator does not have a clear geometrical meaning. In contrast, using the curvature as the primary averaged object instead of the connection does give the desired result for the exterior part of the problem in a fine way. However, for the interior part, this curvature averaging does not work because the Schwarzschild curvature components diverge as 1/r³ near the center and therefore are not integrable.

  6. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  7. Estimating Sugarcane Water Requirements for Biofuel Feedstock Production in Maui, Hawaii Using Satellite Imagery

    USDA-ARS?s Scientific Manuscript database

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop eva...

  8. Estimation of F-15 Peacetime Maintenance Manpower Requirements Using the Logistics Composite Model.

    DTIC Science & Technology

    1976-12-01

    ...constraints to simulate a sequence of maintenance activities. When the flying schedule calls for aircraft to start mission preparation, LCOM designates

  9. Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers

    EPA Science Inventory

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...

  10. Estimation of waste package performance requirements for a nuclear waste repository in basalt

    SciTech Connect

    Wood, B J

    1980-07-01

    A method of developing waste package performance requirements for specific nuclides is described; it is based on federal regulations concerning permissible concentrations in solution at the point of discharge to the accessible environment, a simple and conservative transport model, and baseline and potential worst-case release scenarios.

  12. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    NASA Astrophysics Data System (ADS)

    Valleriani, Angelo

    2016-08-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct.

  13. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    PubMed Central

    Valleriani, Angelo

    2016-01-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient even when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable even when it is not correct. PMID:27527811

  14. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  15. [In vitro estimation using radioactive phosphorus of the phosphorus requirements of rumen microorganisms].

    PubMed

    Durand, M; Beaumatin, P; Dumay, C

    1983-01-01

    Microbial requirements for P were assumed to be a function of the amount of microbial protein synthesis (microbial growth) and of the quantity of organic matter (OM) fermented in the rumen. The relationships among P incorporation into microbial matter and protein synthesis, ammonia utilization, volatile fatty acid (VFA) production and organic matter fermented (OMF) were studied in short-term incubations (3 h) using 32P-labelled phosphate. The amount of P incorporated was calculated from extracellular phosphate pool specific activity and the radioactivity incorporated into the microbial sediment during incubation (table 1). The inocula came from sheep fed a protein-free purified diet. In order to vary the intensity of fermentation, carbohydrates with a wide range of degrees of enzymatic susceptibility were used as substrates and the medium was either provided or was deficient in S and trace elements (table 4). Nitrogen was supplied as ammonium salts. Linear regression analyses showed that P incorporation was positively correlated with the criteria of protein synthesis and OM fermentation (figs. 1, 2, 3, 4). However, there was significant phosphorus incorporation when the value for nitrogen incorporation was zero (equation A: (Pi (mg) = 0.162 NH3-N + 0.376; r = 0.9). This was assumed to result either from energetic uncoupling (fermentation without concomitant bacterial growth) or from the lysis of cold microbial cells only. Equation A would reflect total P incorporation and equation A' Pi (mg) = 0.162 NH3-N (mg), net P incorporation. It was assumed that in vitro microbial requirements for P were in the range of 30-70 mg of P/liter of medium for 3-hour incubation, depending on the intensity of fermentation. From a mean value of microbial N yield of 30 g/kg of DOMR (organic matter apparently digested in the rumen), it was calculated that the total and net P requirements in vivo were 6 and 4.9 g/kg of DOMR, respectively, corresponding to 3.9 and 3.2 g/kg of DOM

  16. The average enzyme principle

    PubMed Central

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-01-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
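
    The principle is easy to check numerically: integrate irreversible Michaelis-Menten kinetics with a time-varying enzyme level and compare the substrate depletion with a run in which the enzyme is fixed at its time average. A minimal Python sketch with arbitrary parameter values:

      import numpy as np
      from scipy.integrate import solve_ivp

      kcat, Km, S0, T = 5.0, 0.5, 10.0, 4.0   # illustrative parameters

      def E_var(t):
          # Oscillating enzyme concentration (an arbitrary regulation pattern).
          return 1.0 + 0.8 * np.sin(2 * np.pi * t)

      E_avg = 1.0   # time average of E_var over whole periods

      def make_rhs(E_of_t):
          def rhs(t, S):
              # Irreversible Michaelis-Menten kinetics, linear in the enzyme level.
              return [-kcat * E_of_t(t) * S[0] / (Km + S[0])]
          return rhs

      sol_var = solve_ivp(make_rhs(E_var), (0.0, T), [S0], rtol=1e-8, atol=1e-10)
      sol_avg = solve_ivp(make_rhs(lambda t: E_avg), (0.0, T), [S0], rtol=1e-8, atol=1e-10)

      print(f"substrate left, time-varying enzyme: {sol_var.y[0, -1]:.4f}")
      print(f"substrate left, average enzyme:      {sol_avg.y[0, -1]:.4f}")
      # The two depletions agree (up to integration error): the dynamics depend on the
      # enzyme only through its time integral, which is the average enzyme principle.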

  17. CNS tumor induction by radiotherapy: A report of four new cases and estimate of dose required

    SciTech Connect

    Cavin, L.W.; Dalrymple, G.V.; McGuire, E.L.; Maners, A.W.; Broadwater, J.R. )

    1990-02-01

    We have analyzed 60 cases of intra-axial brain tumors associated with antecedent radiation therapy. These include four new cases. The patients had originally received radiation therapy for three reasons: (a) cranial irradiation for acute lymphoblastic leukemia (ALL), (b) definitive treatment of CNS neoplasia, and (c) treatment of benign disease (mostly cutaneous infections). The number of cases reported during the past decade has greatly increased as compared to previous years. Forty-six of the 60 intra-axial tumors have been reported since 1978. The relative risk of induction of an intra-axial brain tumor by radiation therapy is estimated to be more than 100, as compared to individuals who have not had head irradiation.

  18. Estimation of the lead thickness required to shield scattered radiation from synchrotron radiation experiments

    NASA Astrophysics Data System (ADS)

    Wroblewski, Thomas

    2015-03-01

    In the enclosure of synchrotron radiation experiments using a monochromatic beam, secondary radiation arises from two effects, namely fluorescence and scattering. While fluorescence can be regarded as isotropic, the angular dependence of Compton scattering has to be taken into account if the shielding shall not become unreasonably thick. The scope of this paper is to clarify how the different factors starting from the spectral properties of the source and the attenuation coefficient of the shielding, over the spectral and angular distribution of the scattered radiation and the geometry of the experiment influence the thickness of lead required to keep the dose rate outside the enclosure below the desired threshold.
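
    For the simplest narrow-beam case, before any buildup or angular corrections of the kind discussed in the paper, the required thickness follows from exponential attenuation, d = ln(D_unshielded/D_limit)/mu. A minimal Python sketch with placeholder numbers (not values from the paper):

      import math

      # Illustrative inputs (placeholders, not values from the paper).
      mu_lead = 5.0            # linear attenuation coefficient of lead at the scattered energy, 1/cm
      dose_unshielded = 200.0  # dose rate at the enclosure wall without shielding, uSv/h
      dose_limit = 0.5         # target dose rate outside the enclosure, uSv/h

      # Narrow-beam exponential attenuation: D(d) = D0 * exp(-mu * d).
      thickness_cm = math.log(dose_unshielded / dose_limit) / mu_lead
      print(f"required lead thickness (no buildup correction): {thickness_cm * 10:.1f} mm")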

  19. Calculation of the number of bits required for the estimation of the bit error ratio

    NASA Astrophysics Data System (ADS)

    Almeida, Álvaro J.; Silva, Nuno A.; Muga, Nelson J.; André, Paulo S.; Pinto, Armando N.

    2014-08-01

    We present a calculation of the required number of bits to be received in a communication system in order to achieve a given level of confidence. The calculation assumes a binomial distribution function for the errors. The function is numerically evaluated, and the results are compared with those obtained from Poissonian and Gaussian approximations. The performance in terms of the signal-to-noise ratio is also studied. We conclude that for higher numbers of detected errors the use of approximations allows faster and more efficient calculations, without loss of accuracy.
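
    A common form of this calculation: if n bits are received and no errors are observed, the claim that the bit error ratio is at most p holds at confidence level CL once (1 - p)**n <= 1 - CL. A minimal Python sketch of that zero-error case and its Poisson approximation (an illustration of the type of calculation described, not the authors' code):

      import math

      ber_target = 1e-9    # bit error ratio to be demonstrated
      confidence = 0.95    # required confidence level

      # Binomial, zero observed errors: need (1 - p)**n <= 1 - CL,
      # i.e. n >= ln(1 - CL) / ln(1 - p).
      n_binomial = math.log(1.0 - confidence) / math.log(1.0 - ber_target)

      # Poisson approximation commonly used for small BER: n >= -ln(1 - CL) / p.
      n_poisson = -math.log(1.0 - confidence) / ber_target

      print(f"bits required (binomial, zero errors): {n_binomial:.3e}")
      print(f"bits required (Poisson approximation): {n_poisson:.3e}")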

  20. Averaging of TNTC counts.

    PubMed Central

    Haas, C N; Heller, B

    1988-01-01

    When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211

  1. Determining average yarding distance.

    Treesearch

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  2. Estimation of the dietary riboflavin required to maximize tissue riboflavin concentration in juvenile shrimp (Penaeus monodon).

    PubMed

    Chen, H Y; Hwang, G

    1992-12-01

    The riboflavin requirements of marine shrimp (Penaeus monodon) were evaluated in a 15-wk feeding trial. Juvenile shrimp (initial mean weight, 0.13 +/- 0.05 g) were fed purified diets containing seven levels (0, 8, 12, 16, 20, 40 and 80 mg/kg diet) of supplemental riboflavin. There were no significant differences in weight gains, feed efficiency ratios and survival of shrimp over the dietary riboflavin range. The riboflavin concentrations in shrimp bodies increased with the increasing vitamin supplementation. Hemolymph (blood) glutathione reductase activity coefficient was not a sensitive and specific indicator of riboflavin status of the shrimp. The dietary riboflavin level required for P. monodon was found to be 22.3 mg/kg diet, based on the broken-line model analysis of body riboflavin concentrations. Shrimp fed unsupplemented diet (riboflavin concentration of 0.48 mg/kg diet) for 15 wk showed signs of deficiency: light coloration, irritability, protuberant cuticle at intersomites and short-head dwarfism.
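
    The broken-line (breakpoint) analysis mentioned here fits a two-segment model in which the response rises linearly with the dietary level up to a plateau, and the breakpoint is read off as the estimated requirement. A Python sketch on invented dose-response data (not the study's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      # Invented dose-response data: body riboflavin vs. dietary riboflavin.
      dose = np.array([0.5, 8.0, 12.0, 16.0, 20.0, 40.0, 80.0])     # mg/kg diet
      tissue = np.array([5.0, 11.0, 14.5, 17.0, 19.5, 20.2, 20.0])  # arbitrary units

      def broken_line(x, breakpoint, plateau, slope):
          # Linear increase below the breakpoint, flat plateau above it.
          return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

      (breakpoint, plateau, slope), _ = curve_fit(broken_line, dose, tissue, p0=[20.0, 20.0, 0.5])
      print(f"estimated requirement (breakpoint): {breakpoint:.1f} mg/kg diet")
      print(f"plateau response: {plateau:.1f}; slope below breakpoint: {slope:.2f}")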

  3. Impact of microbial efficiency to predict MP supply when estimating protein requirements of growing beef cattle from performance.

    PubMed

    Watson, A K; Klopfenstein, T J; Erickson, G E; MacDonald, J C; Wilkerson, V A

    2017-07-01

    Data from 16 trials were compiled to calculate microbial CP (MCP) production and MP requirements of growing cattle on high-forage diets. All cattle were individually fed diets with 28% to 72% corn cobs in addition to either alfalfa, corn silage, or sorghum silage at 18% to 60% of the diet (DM basis). The remainder of the diet consisted of protein supplement. Source of protein within the supplement varied and included urea, blood meal, corn gluten meal, dry distillers grains, feather meal, meat and bone meal, poultry by-product meal, soybean meal, and wet distillers grains. All trials included a urea-only treatment. Intake of all cattle within an experiment was held constant, as a percentage of BW, established by the urea-supplemented group. In each trial the base diet (forage and urea supplement) was MP deficient. Treatments consisted of increasing amounts of test protein replacing the urea supplement. As protein in the diet increased, ADG plateaued. Among experiments, ADG ranged from 0.11 to 0.73 kg. Three methods of calculating microbial efficiency were used to determine MP supply. Gain was then regressed against calculated MP supply to determine MP requirement for maintenance and gain. Method 1 (based on a constant 13% microbial efficiency as used by the beef NRC model) predicted an MP maintenance requirement of 3.8 g/kg BW and 385 g MP/kg gain. Method 2 calculated microbial efficiency using low-quality forage diets and predicted MP requirements of 3.2 g/kg BW for maintenance and 448 g/kg for gain. Method 3 (based on an equation predicting MCP yield from TDN intake, proposed by the Beef Cattle Nutrient Requirements Model [BCNRM]) predicted MP requirements of 3.1 g/kg BW for maintenance and 342 g/kg for gain. The factorial method of calculating MP maintenance requirements accounts for scurf, endogenous urinary, and metabolic fecal protein losses and averaged 4.2 g/kg BW. Cattle performance data demonstrate formulating diets to meet the beef NRC model recommended

  4. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  5. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... family's deficit by the due date for the final report required in § 1037.730. The emission credits used... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... Averaging. (a) Averaging is the exchange of emission credits among your vehicle families. You may average...

  6. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... family's deficit by the due date for the final report required in § 1037.730. The emission credits used... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... Averaging. (a) Averaging is the exchange of emission credits among your vehicle families. You may average...

  7. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    PubMed

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source which is called the distance error. An attempt has been made to apply the fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying this fuzzy set theory has been proposed in the quantification of uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distance li estimated by analytical method and fuzzy set theoretic approach are consistent with each other. The crisp values of li estimated using analytical method lie within the bounds computed using fuzzy set theory. This indicates that li values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget, while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.

  8. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
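
    The idea behind such bias-free approximation averaging can be illustrated outside lattice QCD: combine one expensive, unbiased evaluation with many cheap, correlated ones so that the systematic offset of the cheap evaluations cancels in expectation, O_imp = O_exact(x0) - O_approx(x0) + mean over sources of O_approx(x_g). A toy Python sketch with a scalar function standing in for a correlator (everything here is illustrative):

      import numpy as np

      rng = np.random.default_rng(3)

      def expensive(x):
          # Stand-in for the exact (tightly converged) observable.
          return np.sin(x) + 0.1 * x

      def cheap(x):
          # Stand-in for the relaxed approximation: correlated with the exact observable
          # but carrying a systematic offset that must not bias the final result.
          return expensive(x) + 0.05 * np.cos(3 * x)

      n_config, n_sources = 200, 32
      plain, ama = [], []
      for _ in range(n_config):
          xs = rng.uniform(0.0, 2 * np.pi, n_sources)   # "source positions" on one configuration
          x0 = xs[0]
          plain.append(expensive(x0))                                 # one exact evaluation
          ama.append(expensive(x0) - cheap(x0) + np.mean(cheap(xs)))  # bias-corrected average

      for name, s in (("plain", np.array(plain)), ("AMA", np.array(ama))):
          print(f"{name:5s} mean {s.mean():+.4f}  std. error {s.std(ddof=1)/np.sqrt(len(s)):.4f}")
      # Both estimators target the same ensemble average; the AMA-style estimator has the
      # smaller statistical error because the cheap evaluations average over many sources.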

  9. Using a generalized version of the Titius-Bode relation to extrapolate the patterns seen in Kepler multi-exoplanet systems, and estimate the average number of planets in circumstellar habitable zones

    NASA Astrophysics Data System (ADS)

    Lineweaver, Charles H.

    2015-08-01

    The Titius-Bode (TB) relation’s successful prediction of the period of Uranus was the main motivation that led to the search for another planet between Mars and Jupiter. This search led to the discovery of the asteroid Ceres and the rest of the asteroid belt. The TB relation can also provide useful hints about the periods of as-yet-undetected planets around other stars. In Bovaird & Lineweaver (2013) [1], we used a generalized TB relation to analyze 68 multi-planet systems with four or more detected exoplanets. We found that the majority of exoplanet systems in our sample adhered to the TB relation to a greater extent than the Solar System does. Thus, the TB relation can make useful predictions about the existence of as-yet-undetected planets in Kepler multi-planet systems. These predictions are one way to correct for the main obstacle preventing us from estimating the number of Earth-like planets in the universe. That obstacle is the incomplete sampling of planets of Earth-mass and smaller [2-5]. In [6], we use a generalized Titius-Bode relation to predict the periods of 228 additional planets in 151 of these Kepler multiples. These Titius-Bode-based predictions suggest that there are, on average, 2±1 planets in the habitable zone of each star. We also estimate the inclination of the invariable plane for each system and prioritize our planet predictions by their geometric probability to transit. We highlight a short list of 77 predicted planets in 40 systems with a high geometric probability to transit, resulting in an expected detection rate of ~15 per cent, ~3 times higher than the detection rate of our previous Titius-Bode-based predictions.References: [1] Bovaird, T. & Lineweaver, C.H (2013) MNRAS, 435, 1126-1138. [2] Dong S. & Zhu Z. (2013) ApJ, 778, 53 [3] Fressin F. et al. (2013) ApJ, 766, 81 [4] Petigura E. A. et al. (2013) PNAS, 110, 19273 [5] Silburt A. et al. (2014), ApJ (arXiv:1406.6048v2) [6] Bovaird, T., Lineweaver, C.H. & Jacobsen, S.K. (2015, in
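
    A generalized Titius-Bode relation of the kind used in these papers treats the planet periods as a geometric progression, P_n = P_0 * C**n, so that log P is linear in the planet index; fitting it and extrapolating suggests periods of possible undetected planets. A Python sketch on an invented four-planet system (simplified: it does not insert planets into anomalously wide gaps as the cited analysis does):

      import numpy as np

      # Invented periods (days) of four detected planets in a multi-planet system.
      periods = np.array([5.2, 11.0, 22.8, 47.1])
      n = np.arange(periods.size)

      # Generalized Titius-Bode relation: P_n = P0 * C**n, i.e. log(P_n) = log(P0) + n*log(C).
      slope, intercept = np.polyfit(n, np.log(periods), 1)
      P0, C = np.exp(intercept), np.exp(slope)
      print(f"fitted relation: P_n = {P0:.2f} d * {C:.3f}**n")

      # Predict the period of the next planet outward and check the fit residuals.
      print(f"predicted next period: {P0 * C**periods.size:.1f} d")
      print("relative residuals:", np.round(periods / (P0 * C**n) - 1.0, 3))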

  10. Estimates of power requirements for a manned Mars rover powered by a nuclear reactor

    SciTech Connect

    Morley, N.J.; El-Genk, M.S.; Cataldo, R.; Bloomfield, H.

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW{sub e} net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine (FPSE) yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle (CBC) using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150-kg/kW{sub e} to 190-kg/kW{sub e} and the total mass of the Rover vehicle varied depending upon the cruising speed.

  11. Estimating the quantity of wind and solar required to displace storage-induced emissions.

    PubMed

    Hittinger, Eric; Azevedo, Ines M L

    2017-10-10

    The variable and non-dispatchable nature of wind and solar generation has been driving interest in energy storage as an enabling low-carbon technology that can help spur large-scale adoption of renewables. However, prior work has shown that adding energy storage alone for energy arbitrage in electricity systems across the U.S. routinely increases system emissions. While adding wind or solar reduces electricity system emissions, the emissions effect of both renewable generation and energy storage varies by location. In this work, we apply a marginal emissions approach to determine the net system CO2 emissions of co-located or electrically proximate wind/storage and solar/storage facilities across the U.S. and determine the amount of renewable energy required to offset the CO2 emissions resulting from operation of new energy storage. We find that it takes between 0.03 MW (Montana) and 4 MW (Michigan) of wind and between 0.25 MW (Alabama) and 17 MW (Michigan) of solar to offset the emissions from a 25 MW / 100 MWh storage device, depending on location and operational mode. Systems with a realistic combination of renewables and storage will result in net emissions reductions when compared to a grid without those systems, but the anticipated reductions are lower than for a renewable-only addition.

  12. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150-kg/kW(e) to 190-kg/kW(e) and the total mass of the Rover vehicle varied depending upon the cruising speed.

  13. Estimates of power requirements for a manned Mars rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kWe net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine (FPSE) yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle (CBC) using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150-kg/kWe to 190-kg/kWe and the total mass of the Rover vehicle varied depending upon the cruising speed.

  14. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Technical Reports Server (NTRS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150-kg/kW(e) to 190-kg/kW(e) and the total mass of the Rover vehicle varied depending upon the cruising speed.

  16. Estimating mineral requirements of Nellore beef bulls fed with or without inorganic mineral supplementation and the influence on mineral balance.

    PubMed

    Zanetti, D; Godoi, L A; Estrada, M M; Engle, T E; Silva, B C; Alhadas, H M; Chizzotti, M L; Prados, L F; Rennó, L N; Valadares Filho, S C

    2017-04-01

    The objectives of this study were to quantify the mineral balance of Nellore cattle fed with and without Ca, P, and micromineral (MM) supplementation and to estimate the net and dietary mineral requirement for cattle. Nellore cattle (n = 51; 270.4 ± 36.6 kg initial BW and 8 mo age) were assigned to 1 of 3 groups: reference (n = 5), maintenance (n = 4), and performance (n = 42). The reference group was slaughtered prior to the experiment to estimate initial body composition. The maintenance group was used to collect values of animals at low gain and reduced mineral intake. The performance group was assigned to 1 of 6 treatments: sugarcane as the roughage source with a concentrate supplement composed of soybean meal and soybean hulls with and without Ca, P, and MM supplementation; sugarcane as the roughage source with a concentrate supplement composed of soybean meal and ground corn with and without Ca, P, and MM supplementation; and corn silage as the roughage source with a concentrate supplement composed of soybean meal and ground corn with and without Ca, P, and MM supplementation. Orthogonal contrasts were adopted to compare mineral intake, fecal and urinary excretion, and apparent retention among treatments. Maintenance requirements and true retention coefficients were generated with the aid of linear regression between mineral intake and mineral retention. Mineral composition of the body and gain requirements was assessed using nonlinear regression between body mineral content and mineral intake. Mineral intake and fecal and urinary excretion were measured. Intakes of Ca, P, S, Cu, Zn, Mn, Co, and Fe were reduced in the absence of Ca, P, and MM supplementation (P < 0.05). Fecal excretion of Ca, Cu, Zn, Mn, and Co was also reduced in treatments without supplementation (P < 0.01). Overall, excretion and apparent absorption and retention coefficients were reduced when minerals were not supplied (P < 0.05). The use of the true retention coefficient instead of the true

  17. Estimation of irrigation requirement for wheat in the southern Spain by using a soil water balance remote sensing driven

    NASA Astrophysics Data System (ADS)

    González, Laura; Bodas, Vicente; Espósito, Gabriel; Campos, Isidro; Aliaga, Jerónimo; Calera, Alfonso

    2013-04-01

    This paper aims to evaluate the use of a remote sensing-driven soil water balance to estimate irrigation water requirements of wheat. The applied methodology is based on the approach of the dual crop coefficient proposed in the FAO-56 manual (Allen et al., 1998), where the basal crop coefficient is derived from a time series of remote sensing multispectral imagery that describes the growing cycle of wheat. This approach allows the estimation of the evapotranspiration (ET) and irrigation water requirements by means of a soil water balance in the root layer. The assimilation of satellite data into the FAO-56 soil water balance is based on the relationship between spectral vegetation indices (VI) and the transpiration coefficient (Campos et al., 2010; Sánchez et al., 2010). Two approaches to plant transpiration estimation were analyzed: the basal crop coefficient methodology and the transpiration coefficient approach, described in the FAO-56 (Allen et al., 1998) and FAO-66 (Steduto et al., 2012) manuals, respectively. The model is computed at a daily time step, and the results analyzed in this work are the net irrigation water requirements and water stress estimates. Results were analyzed by comparison with irrigation data (irrigation dates and volume applied) provided by farmers in 28 plots of wheat for the period 2004-2012 in the Spanish region of La Mancha, southern Spain, under different meteorological conditions. Total irrigation dose during the growing season varies from 200 mm to 700 mm. In some of the plots, soil moisture sensor data are available, allowing comparison with modeled soil moisture. Net irrigation water requirements estimated by the proposed model show good agreement with these data, taking into account the efficiency of the different irrigation systems. Although the irrigation doses are generally greater than the irrigation water requirements, the crops could suffer water stress periods during the campaign, because real irrigation timing and
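
    In the FAO-56 dual-coefficient formulation, daily crop evapotranspiration is ETc = (Ks*Kcb + Ke)*ETo, and the root-zone depletion is updated each day from rainfall, irrigation, and ETc; irrigation is triggered when depletion exceeds the readily available water. The Python sketch below implements that bookkeeping in a stripped-down form with invented forcing data and a Kcb ramp standing in for a vegetation-index-derived curve; it is a schematic of the approach, not the study's model:

      import numpy as np

      # Invented daily forcing for a short window (illustrative only).
      days = 10
      ETo = np.full(days, 6.0)                  # reference ET, mm/day
      rain = np.zeros(days); rain[3] = 8.0      # rainfall, mm/day
      Kcb = np.linspace(0.3, 1.1, days)         # basal crop coefficient, e.g. from a VI time series
      Ke = 0.15                                 # soil evaporation coefficient (held constant here)

      TAW, p_frac = 120.0, 0.55                 # total available water (mm) and depletion fraction
      RAW = p_frac * TAW                        # readily available water, mm

      depletion, irrigation_total = 40.0, 0.0
      for d in range(days):
          # Water stress coefficient: transpiration is reduced once depletion exceeds RAW.
          Ks = 1.0 if depletion <= RAW else max((TAW - depletion) / (TAW - RAW), 0.0)
          ETc = (Ks * Kcb[d] + Ke) * ETo[d]
          depletion = min(max(depletion - rain[d] + ETc, 0.0), TAW)
          irrigation = 0.0
          if depletion > RAW:                   # refill the root zone once RAW is exhausted
              irrigation = depletion
              irrigation_total += irrigation
              depletion = 0.0
          print(f"day {d + 1:2d}: ETc={ETc:4.1f} mm  depletion={depletion:5.1f} mm  irrigation={irrigation:4.1f} mm")

      print(f"net irrigation requirement over the window: {irrigation_total:.1f} mm")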

  18. The balanced survivor average causal effect.

    PubMed

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
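
    A highly simplified version of the comparison described above (ignoring truncation of the outcome by death, the bias expressions, and the bootstrap inference treated in the paper) compares mean outcomes between equal fractions of the longest survivors in each arm. A Python sketch on simulated data with invented parameters:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 500

      # Simulated two-arm trial: survival times and a longitudinal outcome at follow-up.
      surv_treat = rng.exponential(12.0, n)
      surv_ctrl = rng.exponential(10.0, n)
      out_treat = 50.0 + 0.5 * surv_treat + rng.normal(0.0, 5.0, n)
      out_ctrl = 48.0 + 0.5 * surv_ctrl + rng.normal(0.0, 5.0, n)

      def balanced_sace_style(outcome_t, surv_t, outcome_c, surv_c, frac=0.5):
          # Compare mean outcomes between the top `frac` longest survivors in each arm.
          top_t = np.argsort(surv_t)[-int(frac * len(surv_t)):]
          top_c = np.argsort(surv_c)[-int(frac * len(surv_c)):]
          return outcome_t[top_t].mean() - outcome_c[top_c].mean()

      estimate = balanced_sace_style(out_treat, surv_treat, out_ctrl, surv_ctrl)
      print(f"balanced-SACE-style estimate (top 50% survivors in each arm): {estimate:+.2f}")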

  19. An applied simulation model for estimating the supply of and requirements for registered nurses based on population health needs.

    PubMed

    Tomblin Murphy, Gail; MacKenzie, Adrian; Alder, Robert; Birch, Stephen; Kephart, George; O'Brien-Pallas, Linda

    2009-11-01

    Aging populations, limited budgets, changing public expectations, new technologies, and the emergence of new diseases create challenges for health care systems as ways to meet needs and protect, promote, and restore health are considered. Traditional planning methods for the professionals required to provide these services have given little consideration to changes in the needs of the populations they serve or to changes in the amount/types of services offered and the way they are delivered. In the absence of dynamic planning models that simulate alternative policies and test policy mixes for their relative effectiveness, planners have tended to rely on projecting prevailing or arbitrarily determined target provider-population ratios. A simulation model has been developed that addresses each of these shortcomings by simultaneously estimating the supply of and requirements for registered nurses based on the identification and interaction of the determinants. The model's use is illustrated using data for Nova Scotia, Canada.

  20. Genomic instability related to zinc deficiency and excess in an in vitro model: is the upper estimate of the physiological requirements recommended for children safe?

    PubMed

    Padula, Gisel; Ponzinibbio, María Virginia; Gambaro, Rocío Celeste; Seoane, Analía Isabel

    2017-08-01

    Micronutrients are important for the prevention of degenerative diseases due to their role in maintaining genomic stability. Therefore, there is international concern about the need to redefine the optimal mineral and vitamin requirements to prevent DNA damage. We analyzed the cytostatic, cytotoxic, and genotoxic effect of in vitro zinc supplementation to determine the effects of zinc deficiency and excess and whether the upper estimate of the physiological requirement recommended for children is safe. To achieve zinc deficiency, DMEM/Ham's F12 medium (HF12) was chelated (HF12Q). Lymphocytes were isolated from healthy female donors (age range, 5-10 yr) and cultured for 7 d as follows: negative control (HF12, 60 μg/dl ZnSO4); deficient (HF12Q, 12 μg/dl ZnSO4); lower level (HF12Q + 80 μg/dl ZnSO4); average level (HF12Q + 180 μg/dl ZnSO4); upper limit (HF12Q + 280 μg/dl ZnSO4); and excess (HF12Q + 380 μg/dl ZnSO4). The comet (quantitative analysis) and cytokinesis-block micronucleus cytome assays were used. Differences were evaluated with Kruskal-Wallis and ANOVA (p < 0.05). Olive tail moment, tail length, micronuclei frequency, and apoptotic and necrotic percentages were significantly higher in the deficient, upper limit, and excess cultures compared with the negative control, lower, and average limit ones. In vitro zinc supplementation at the lower and average limit (80 and 180 μg/dl ZnSO4) of the physiological requirement recommended for children proved to be the most beneficial in avoiding genomic instability, whereas the deficient, upper limit, and excess (12, 280, and 380 μg/dl) cultures increased DNA and chromosomal damage and apoptotic and necrotic frequencies.

  1. Estimated Budget Impact of Adopting the Affordable Care Act's Required Smoking Cessation Coverage on United States Healthcare Payers.

    PubMed

    Baker, Christine L; Ferrufino, Cheryl P; Bruno, Marianna; Kowal, Stacey

    2017-01-01

    Despite abundant information on the negative impacts of smoking, more than 40 million adult Americans continue to smoke. The Affordable Care Act (ACA) requires tobacco cessation as a preventive service with no patient cost share for all FDA-approved cessation medications. Health plans have a vital role in supporting smoking cessation by managing medication access, but uncertainty remains on the gaps between smoking cessation requirements and what is actually occurring in practice. This study presents current cessation patterns, real-world drug costs and plan benefit design data, and estimates the 1- to 5-year pharmacy budget impact of providing ACA-required coverage for smoking cessation products to understand the fiscal impact to a US healthcare plan. A closed cohort budget impact model was developed in Microsoft Excel(®) to estimate current and projected costs for US payers (commercial, Medicare, Medicaid) covering smoking cessation medicines, with assumptions for coverage and smoking cessation product utilization based on current, real-world national and state-level trends for hypothetical commercial, Medicare, and Medicaid plans with 1 million covered lives. A Markov methodology with five health states captures quit attempt and relapse patterns. Results include the number of smokers attempting to quit, number of successful quitters, annual costs, and cost per-member per-month (PMPM). The projected PMPM cost of providing coverage for smoking cessation medications is $0.10 for commercial, $0.06 for Medicare, and $0.07 for Medicaid plans, reflecting a low incremental PMPM impact of covering two attempts ranging from $0.01 for Medicaid to $0.02 for commercial and Medicare payers. The projected PMPM impact of covering two quit attempts with access to all seven cessation medications at no patient cost share remains low. Results of this study reinforce that the impact of adopting the ACA requirements for smoking cessation coverage will have a limited near

  2. Dietary Protein Intake in Young Children in Selected Low-Income Countries Is Generally Adequate in Relation to Estimated Requirements for Healthy Children, Except When Complementary Food Intake Is Low

    PubMed Central

    Arsenault, Joanne E; Brown, Kenneth H

    2017-01-01

    Background: Previous research indicates that young children in low-income countries (LICs) generally consume greater amounts of protein than published estimates of protein requirements, but this research did not account for protein quality based on the mix of amino acids and the digestibility of ingested protein. Objective: Our objective was to estimate the prevalence of inadequate protein and amino acid intake by young children in LICs, accounting for protein quality. Methods: Seven data sets with information on dietary intake for children (6–35 mo of age) from 6 LICs (Peru, Guatemala, Ecuador, Bangladesh, Uganda, and Zambia) were reanalyzed to estimate protein and amino acid intake and assess adequacy. The protein digestibility–corrected amino acid score of each child’s diet was calculated and multiplied by the original (crude) protein intake to obtain an estimate of available protein intake. Distributions of usual intake were obtained to estimate the prevalence of inadequate protein and amino acid intake for each cohort according to Estimated Average Requirements. Results: The prevalence of inadequate protein intake was highest in breastfeeding children aged 6–8 mo: 24% of Bangladeshi and 16% of Peruvian children. With the exception of Bangladesh, the prevalence of inadequate available protein intake decreased by age 9–12 mo and was very low in all sites (0–2%) after 12 mo of age. Inadequate protein intake in children <12 mo of age was due primarily to low energy intake from complementary foods, not inadequate protein density. Conclusions: Overall, most children consumed protein amounts greater than requirements, except for the younger breastfeeding children, who were consuming low amounts of complementary foods. These findings reinforce previous evidence that dietary protein is not generally limiting for children in LICs compared with estimated requirements for healthy children, even after accounting for protein quality. However, unmeasured effects
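
    A hedged sketch of the core calculation: available protein is crude intake scaled by the diet's protein digestibility-corrected amino acid score, and the prevalence of inadequacy is the share of children falling below the Estimated Average Requirement. The PDCAAS of 0.80, the EAR of 0.87 g/kg/d and the simulated intakes are assumptions for illustration only.

      import numpy as np

      def prevalence_inadequate(crude_intake_g_kg, pdcaas, ear_g_kg):
          """Share of children whose available protein intake (crude intake x PDCAAS)
          falls below the EAR -- a simplified cut-point style calculation."""
          available = np.asarray(crude_intake_g_kg, float) * pdcaas
          return float(np.mean(available < ear_g_kg))

      # Hypothetical cohort: crude intakes (g/kg/d), one diet-wide PDCAAS, assumed EAR
      crude = np.random.default_rng(1).normal(1.6, 0.5, 1000).clip(min=0.2)
      print(prevalence_inadequate(crude, pdcaas=0.80, ear_g_kg=0.87))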

  3. Dietary Protein Intake in Young Children in Selected Low-Income Countries Is Generally Adequate in Relation to Estimated Requirements for Healthy Children, Except When Complementary Food Intake Is Low.

    PubMed

    Arsenault, Joanne E; Brown, Kenneth H

    2017-02-15

    Background: Previous research indicates that young children in low-income countries (LICs) generally consume greater amounts of protein than published estimates of protein requirements, but this research did not account for protein quality based on the mix of amino acids and the digestibility of ingested protein. Objective: Our objective was to estimate the prevalence of inadequate protein and amino acid intake by young children in LICs, accounting for protein quality. Methods: Seven data sets with information on dietary intake for children (6-35 mo of age) from 6 LICs (Peru, Guatemala, Ecuador, Bangladesh, Uganda, and Zambia) were reanalyzed to estimate protein and amino acid intake and assess adequacy. The protein digestibility-corrected amino acid score of each child's diet was calculated and multiplied by the original (crude) protein intake to obtain an estimate of available protein intake. Distributions of usual intake were obtained to estimate the prevalence of inadequate protein and amino acid intake for each cohort according to Estimated Average Requirements. Results: The prevalence of inadequate protein intake was highest in breastfeeding children aged 6-8 mo: 24% of Bangladeshi and 16% of Peruvian children. With the exception of Bangladesh, the prevalence of inadequate available protein intake decreased by age 9-12 mo and was very low in all sites (0-2%) after 12 mo of age. Inadequate protein intake in children <12 mo of age was due primarily to low energy intake from complementary foods, not inadequate protein density. Conclusions: Overall, most children consumed protein amounts greater than requirements, except for the younger breastfeeding children, who were consuming low amounts of complementary foods. These findings reinforce previous evidence that dietary protein is not generally limiting for children in LICs compared with estimated requirements for healthy children, even after accounting for protein quality. However, unmeasured effects of infection and

  4. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  5. Temperature averaging thermal probe

    NASA Technical Reports Server (NTRS)

    Kalil, L. F.; Reinhardt, V. (Inventor)

    1985-01-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  6. Temperature averaging thermal probe

    NASA Astrophysics Data System (ADS)

    Kalil, L. F.; Reinhardt, V.

    1985-12-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  7. Barriers against required nurse estimation models applying in Iran hospitals from health system experts’ point of view

    PubMed Central

    Tabatabaee, Seyed Saeed; Nekoie-Moghadam, Mahmood; Vafaee-Najar, Ali; Amiresmaili, Mohammad Reza

    2016-01-01

    Introduction One of the strategies for accessing effective nursing care is to design and implement a nursing estimation model. The purpose of this research was to determine barriers in applying models or norms for estimating the size of a hospital’s nursing team. Methods This study was conducted from November 2015 to March 2016 among three levels of managers at the Ministry of Health, medical universities, and hospitals in Iran. We carried out a qualitative study using the Colaizzi method. We used semistructured and in-depth interviews by purposive, quota, and snowball sampling of 32 participants (10 informed experts in the area of policymaking in human resources in the Ministry of Health, 10 decision makers in employment and distribution of human resources in treatment and administrative chancellors of Medical Universities, and 12 nursing managers in hospitals). The data were analyzed by Atlas.ti software version 6.0.15. Results The following 14 subthemes emerged from the data analysis: lack of a specific steward, weakness in attracting stakeholder contributions, lack of authorities' trust in the models, lack of mutual interests between stakeholders, shortage of nurses, financial deficit, non-native models, design of models by people unfamiliar with the nursing process, lack of attention to the nature of work in each ward, lack of attention to hospital classification, lack of transparency in defining models, reduced nurses' available time, increased indirect activity of nurses, and outdated norms. The main themes were inappropriate planning and policymaking at high levels, resource constraints, and poor design of models and lack of updating of the model. Conclusion The results of the present study indicate that many barriers exist in applying models for estimating the size of a hospital’s nursing team. Therefore, for designing an appropriate nursing staff estimation model and implementing it, in addition to considering the present barriers, identifying the norm required features

  8. Estimating shallow groundwater availability in small catchments using streamflow recession and instream flow requirements of rivers in South Africa

    NASA Astrophysics Data System (ADS)

    Ebrahim, Girma Y.; Villholth, Karen G.

    2016-10-01

    Groundwater is an important resource for multiple uses in South Africa. Hence, setting limits to its sustainable abstraction while assuring basic human needs is required. Due to prevalent data scarcity related to groundwater replenishment, which is the traditional basis for estimating groundwater availability, the present article presents a novel method for determining allocatable groundwater in quaternary (fourth-order) catchments through information on streamflow. Using established methods for assessing baseflow, recession flow, and the instream ecological flow requirement, a combined stepwise methodology is developed to determine the annual available groundwater storage volume using linear reservoir theory, essentially linking low flows proportionally to upstream groundwater storages. The approach was trialled for twenty-one perennial and relatively undisturbed catchments with long-term and reliable streamflow records. Using the Desktop Reserve Model, instream flow requirements necessary to meet the present ecological state of the streams were determined, and baseflows in excess of these flows were converted into conservative estimates of allocatable groundwater storage on an annual basis. Results show that groundwater development potential exists in fourteen of the catchments, with upper limits to allocatable groundwater volumes (including present uses) ranging from 0.02 to 3.54 × 10^6 m^3 a^-1 (0.10-11.83 mm a^-1) per catchment. With these volumes secured in 75% of years, the variability between years is assumed to be manageable. A significant (R2 = 0.88) correlation between baseflow index and the drainage time scale for the catchments underscores the physical basis of the methodology and also enables the reduction of the procedure by one step, omitting recession flow analysis. The method serves as an important complementary tool for the assessment of the groundwater part of the Reserve and the Groundwater Resource Directed Measures in
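
    A simplified sketch of the two quantities at the heart of the stepwise approach, under linear reservoir theory (storage proportional to baseflow) and a fixed instream flow requirement; the drainage timescale, flow series and threshold below are made up for illustration.

      import numpy as np

      def storage_from_baseflow(q_m3s, k_days):
          """Linear-reservoir storage behind a given baseflow, S = k * Q (in m3)."""
          return q_m3s * k_days * 86400.0

      def allocatable_volume(daily_baseflow_m3s, instream_req_m3s):
          """Annual volume (m3) of baseflow in excess of the instream ecological
          flow requirement -- the part treated as allocatable in a single-year
          simplification of the abstract's stepwise approach."""
          q = np.asarray(daily_baseflow_m3s, float)
          return float(np.clip(q - instream_req_m3s, 0.0, None).sum() * 86400.0)

      # Toy recession-dominated year: Q(t) = Q0 * exp(-t/k) + low-flow floor
      t = np.arange(365)
      baseflow = 2.0 * np.exp(-t / 60.0) + 0.3
      print(f"storage at start of recession: {storage_from_baseflow(2.3, 60):.2e} m3")
      print(f"allocatable volume: {allocatable_volume(baseflow, 0.4):.2e} m3/yr")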

  9. Estimating the resources required in the roll-out of universal access to antiretroviral treatment in Zimbabwe

    PubMed Central

    Gregson, S; Dube, S; Mapfeka, E S; Mugurungi, O; Garnett, G P

    2011-01-01

    Objectives To develop projections of the resources required (person-years of drug supply and healthcare worker time) for universal access to antiretroviral treatment (ART) in Zimbabwe. Methods A stochastic mathematical model of disease progression, diagnosis, clinical monitoring and survival in HIV infected individuals. Findings The number of patients receiving ART is determined by many factors, including the strategy of the ART programme (method of initiation, frequency of patient monitoring, ability to include patients diagnosed before ART became available), other healthcare services (referral rates from antenatal clinics, uptake of HIV testing), demographic and epidemiological conditions (past and future trends in incidence rates and population growth) as well as the medical impact of ART (average survival and the relationship with CD4 count when initiated). The variations in these factors lead to substantial differences in long-term projections; with universal access by 2010 and no further prevention interventions, between 370 000 and almost 2 million patients could be receiving treatment in 2030—a fivefold difference. Under universal access, by 2010 each doctor will initiate ART for up to two patients every day and the case-load for nurses will at least triple as more patients enter care and start treatment. Conclusions The resources required by ART programmes are great and depend on the healthcare systems and the demographic/epidemiological context. This leads to considerable uncertainty in long-term projections and large variation in the resources required in different countries and over time. Understanding how current practices relate to future resource requirements can help optimise ART programmes and inform long-term public health planning. PMID:21636615

  10. Estimating the resources required in the roll-out of universal access to antiretroviral treatment in Zimbabwe.

    PubMed

    Hallett, T B; Gregson, S; Dube, S; Mapfeka, E S; Mugurungi, O; Garnett, G P

    2011-12-01

    To develop projections of the resources required (person-years of drug supply and healthcare worker time) for universal access to antiretroviral treatment (ART) in Zimbabwe. A stochastic mathematical model of disease progression, diagnosis, clinical monitoring and survival in HIV infected individuals. The number of patients receiving ART is determined by many factors, including the strategy of the ART programme (method of initiation, frequency of patient monitoring, ability to include patients diagnosed before ART became available), other healthcare services (referral rates from antenatal clinics, uptake of HIV testing), demographic and epidemiological conditions (past and future trends in incidence rates and population growth) as well as the medical impact of ART (average survival and the relationship with CD4 count when initiated). The variations in these factors lead to substantial differences in long-term projections; with universal access by 2010 and no further prevention interventions, between 370 000 and almost 2 million patients could be receiving treatment in 2030-a fivefold difference. Under universal access, by 2010 each doctor will initiate ART for up to two patients every day and the case-load for nurses will at least triple as more patients enter care and start treatment. The resources required by ART programmes are great and depend on the healthcare systems and the demographic/epidemiological context. This leads to considerable uncertainty in long-term projections and large variation in the resources required in different countries and over time. Understanding how current practices relate to future resource requirements can help optimise ART programmes and inform long-term public health planning.

  11. 36 CFR 228.116 - Information collection requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... information requirements as defined in 5 CFR part 1320 and have been approved for use by the Office of... Control No. 0596-0101. (c) Average estimated burden hours. (1) The average burden hours per response are...

  12. Achronal averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2007-09-15

    The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
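
    In the notation commonly used for this condition (a standard formulation, not quoted from the abstract), the ANEC requires that along a complete null geodesic with tangent vector k^mu and affine parameter lambda,

      \int_{-\infty}^{\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\, \mathrm{d}\lambda \;\ge\; 0 ,

    and the achronal version discussed above imposes this only on complete null geodesics no two points of which are connected by a timelike curve.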

  13. Use of molecular biomarkers to estimate manganese requirements for broiler chickens from 22 to 42 d of age.

    PubMed

    Lu, Lin; Chang, Bin; Liao, Xiudong; Wang, Runlian; Zhang, Liyang; Luo, Xugang

    2016-11-01

    The present study was carried out to evaluate dietary Mn requirements of broilers from 22 to 42 d of age using molecular biomarkers. Chickens were fed a conventional basal maize-soyabean meal diet supplemented with Mn as Mn sulphate in graded concentrations of 20 mg Mn/kg from 0 to 140 mg Mn/kg of diet for 21 d (from 22 to 42 d of age). The Mn response curves were fitted for ten parameters including heart Mn-containing superoxide dismutase (MnSOD) mRNA and its protein expression levels and the DNA-binding activities of specificity protein 1 (Sp1) and activating protein-2 (AP-2). Heart MnSOD mRNA and protein expression levels showed significant quadratic responses (P<0·01), and heart MnSOD activity showed a broken-line response (P<0·01), whereas Mn content and DNA-binding activities of Sp1 and AP-2 in the heart displayed linear responses (P<0·01) to dietary Mn concentrations, respectively. The estimates of dietary Mn requirements were 101, 104 and 94 mg/kg for full expressions of MnSOD mRNA level, MnSOD protein level and MnSOD activity in the heart, respectively. Our findings indicate that heart MnSOD mRNA expression level is a more reliable indicator than heart MnSOD protein expression level and its activity for the evaluation of Mn requirement of broilers, and about 100 mg Mn/kg of diet is required for the full expression of heart MnSOD in broilers fed the conventional basal maize-soyabean meal diet from 22 to 42 d of age.
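
    The requirement estimates quoted above come from fitting dose-response curves; the sketch below shows a generic one-breakpoint ("broken-line") fit by grid search, a simplified stand-in for the regression models used in the study. The toy data and the 100 mg/kg plateau are illustrative.

      import numpy as np

      def broken_line_breakpoint(dose, response, n_grid=200):
          """Fit a one-breakpoint model (linear rise, then plateau) by grid search
          over the breakpoint; the breakpoint is taken as the requirement estimate."""
          dose, response = np.asarray(dose, float), np.asarray(response, float)
          best_rss, best_bp = np.inf, None
          for bp in np.linspace(dose.min(), dose.max(), n_grid)[1:-1]:
              x = np.column_stack([np.minimum(dose, bp), np.ones_like(dose)])
              _, rss, *_ = np.linalg.lstsq(x, response, rcond=None)
              if rss.size and rss[0] < best_rss:
                  best_rss, best_bp = rss[0], bp
          return best_bp

      # Toy dose-response plateauing near 100 mg/kg
      dose = np.repeat(np.arange(0.0, 160.0, 20.0), 6)
      resp = np.minimum(dose, 100.0) * 0.02 + 1.0 \
                 + np.random.default_rng(2).normal(0, 0.05, dose.size)
      print(round(broken_line_breakpoint(dose, resp), 1))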

  14. The Future of the Army’s Civilian Workforce: Comparing Projected Inventory with Anticipated Requirements and Estimating Cost Under Different Personnel Policies

    DTIC Science & Technology

    2014-01-01

    in percentage terms) include Medical and Veterinary, Transport Equipment Operations, and Human Resources Management. Similar to commands, under... YORE –5 to –1, YORE 0 to 4, and YORE 5 and above. We then applied OPM’s formula to estimate the average, per-employee cost of a RIF for an... Multiplying the number of employees involved in a RIF by the per-employee cost of a RIF produced a total estimated RIF cost. The basic formula

  15. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
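
    Equivalent noise analysis is usually summarized by a standard two-parameter model relating discrimination thresholds to internal noise and the effective number of samples pooled; the sketch below fits that generic model to made-up thresholds and is not taken from the paper's methods.

      import numpy as np
      from scipy.optimize import curve_fit

      def equivalent_noise(sigma_ext, sigma_int, n_samples):
          """Standard equivalent-noise model: averaging threshold as a function of
          external noise, internal noise and the effective number of samples pooled."""
          return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

      # Hypothetical thresholds at increasing external-noise levels
      sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])
      thresholds = equivalent_noise(sigma_ext, 5.0, 4.0) \
                       * np.random.default_rng(3).normal(1.0, 0.03, sigma_ext.size)
      (sigma_int_hat, n_hat), _ = curve_fit(equivalent_noise, sigma_ext, thresholds,
                                            p0=[1.0, 1.0], bounds=(0.0, np.inf))
      print(round(sigma_int_hat, 2), round(n_hat, 2))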

  16. A RAPID NON-DESTRUCTIVE METHOD FOR ESTIMATING ABOVEGROUND BIOMASS OF SALT MARSH GRASSES

    EPA Science Inventory

    Understanding the primary productivity of salt marshes requires accurate estimates of biomass. Unfortunately, these estimates vary enough within and among salt marshes to require large numbers of replicates if the averages are to be statistically meaningful. Large numbers of repl...

  17. A RAPID NON-DESTRUCTIVE METHOD FOR ESTIMATING ABOVEGROUND BIOMASS OF SALT MARSH GRASSES

    EPA Science Inventory

    Understanding the primary productivity of salt marshes requires accurate estimates of biomass. Unfortunately, these estimates vary enough within and among salt marshes to require large numbers of replicates if the averages are to be statistically meaningful. Large numbers of repl...

  18. The Spectral Form Factor Is Not Self-Averaging

    SciTech Connect

    Prange, R.

    1997-03-01

    The form factor, k(t), is the spectral statistic which best displays nonuniversal quasiclassical deviations from random matrix theory. Recent estimations of k(t) for a single spectrum found interesting new effects of this type. It was supposed that k(t) is self-averaging and thus did not require an ensemble average. We here argue that this supposition sometimes fails and that for many important systems an ensemble average is essential to see detailed properties of k(t). In other systems, notably the nontrivial zeros of Riemann zeta function, it will be possible to see the nonuniversal properties by an analysis of a single spectrum. © 1997 The American Physical Society

  19. 40 CFR 1036.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...'s deficit by the due date for the final report required in § 1036.730. The emission credits used to... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... § 1036.710 Averaging. (a) Averaging is the exchange of emission credits among your engine families. You...

  20. 40 CFR 1036.710 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...'s deficit by the due date for the final report required in § 1036.730. The emission credits used to... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF... § 1036.710 Averaging. (a) Averaging is the exchange of emission credits among your engine families. You...

  1. Thermal requirements and estimate number of generations of Palmistichus elaeisis (Hymenoptera: Eulophidae) in different Eucalyptus plantations regions.

    PubMed

    Pereira, F F; Zanuncio, J C; Oliveira, H N; Grance, E L V; Pastori, P L; Gava-Oliveira, M D

    2011-05-01

    To use Palmistichus elaeisis Delvare and LaSalle, 1993 (Hymenoptera: Eulophidae) in a biological control programme against Thyrinteina arnobia (Stoll, 1782) (Lepidoptera: Geometridae), it is necessary to study its thermal requirements, because temperature can affect its metabolism and bioecological aspects. The objective was to determine the thermal requirements and estimate the number of generations of P. elaeisis in different Eucalyptus plantation regions. After 24 hours in contact with the parasitoid, the pupae were placed at 16, 19, 22, 25, 28 and 31 °C, 70 ± 10% relative humidity and a 14-hour photophase. The duration of the life cycle of P. elaeisis decreased as temperature increased. At 31 °C the parasitoid could not complete its cycle in T. arnobia pupae. The emergence of P. elaeisis was not affected by temperature, except at 31 °C. The number of individuals ranged from six to 1238 per pupa, being highest at 16 °C. The thermal threshold of development (Tb) and the thermal constant (K) of this parasitoid were 3.92 °C and 478.85 degree-days (GD), respectively, allowing for the completion of 14.98 generations per year in Linhares, Espírito Santo State, 13.87 in Pompéu and 11.75 in Viçosa, Minas Gerais State, and 14.10 in Dourados, Mato Grosso do Sul State.
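
    The generation estimates follow directly from degree-day arithmetic, using the Tb and K values reported above; the constant 23 °C temperature series below is an illustrative stand-in for real site data.

      def generations_per_year(daily_mean_temps_c, tb=3.92, k_dd=478.85):
          """Generations per year from accumulated degree-days above the threshold Tb,
          using the Tb and K values reported in the abstract."""
          degree_days = sum(max(0.0, t - tb) for t in daily_mean_temps_c)
          return degree_days / k_dd

      # A site with a constant 23 C daily mean gives roughly 14.5 generations per year
      print(round(generations_per_year([23.0] * 365), 2))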

  2. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  3. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  5. Phytoplankton Productivity in an Arctic Fjord (West Greenland): Estimating Electron Requirements for Carbon Fixation and Oxygen Production

    PubMed Central

    Hancke, Kasper; Dalsgaard, Tage; Sejr, Mikael Kristian; Markager, Stiig; Glud, Ronnie Nøhr

    2015-01-01

    Accurate quantification of pelagic primary production is essential for quantifying the marine carbon turnover and the energy supply to the food web. Knowing the electron requirement (Κ) for carbon (C) fixation (ΚC) and oxygen (O2) production (ΚO2), variable fluorescence has the potential to quantify primary production in microalgae, and hereby increasing spatial and temporal resolution of measurements compared to traditional methods. Here we quantify ΚC and ΚO2 through measures of Pulse Amplitude Modulated (PAM) fluorometry, C fixation and O2 production in an Arctic fjord (Godthåbsfjorden, W Greenland). Through short- (2h) and long-term (24h) experiments, rates of electron transfer (ETRPSII), C fixation and/or O2 production were quantified and compared. Absolute rates of ETR were derived by accounting for Photosystem II light absorption and spectral light composition. Two-hour incubations revealed a linear relationship between ETRPSII and gross 14C fixation (R2 = 0.81) during light-limited photosynthesis, giving a ΚC of 7.6 ± 0.6 (mean ± S.E.) mol é (mol C)−1. Diel net rates also demonstrated a linear relationship between ETRPSII and C fixation giving a ΚC of 11.2 ± 1.3 mol é (mol C)−1 (R2 = 0.86). For net O2 production the electron requirement was lower than for net C fixation giving 6.5 ± 0.9 mol é (mol O2)−1 (R2 = 0.94). This, however, still is an electron requirement 1.6 times higher than the theoretical minimum for O2 production [i.e. 4 mol é (mol O2)−1]. The discrepancy is explained by respiratory activity and non-photochemical electron requirements and the variability is discussed. In conclusion, the bio-optical method and derived electron requirement support conversion of ETR to units of C or O2, paving the road for improved spatial and temporal resolution of primary production estimates. PMID:26218096
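
    A minimal sketch of how an electron requirement can be read off as the slope of ETR against C fixation; the zero-intercept regression and the toy measurements below are illustrative simplifications, not the paper's exact fitting procedure.

      import numpy as np

      def electron_requirement(etr, c_fixation):
          """Electron requirement (mol e- per mol C) as the slope of a zero-intercept
          regression of ETR on C fixation (simplified version of the comparison above)."""
          etr, c_fix = np.asarray(etr, float), np.asarray(c_fixation, float)
          return float((etr * c_fix).sum() / (c_fix ** 2).sum())

      # Hypothetical paired measurements in consistent units
      c_fix = np.linspace(0.1, 2.0, 12)
      etr = 7.6 * c_fix + np.random.default_rng(4).normal(0, 0.3, c_fix.size)
      print(round(electron_requirement(etr, c_fix), 2))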

  6. Phytoplankton Productivity in an Arctic Fjord (West Greenland): Estimating Electron Requirements for Carbon Fixation and Oxygen Production.

    PubMed

    Hancke, Kasper; Dalsgaard, Tage; Sejr, Mikael Kristian; Markager, Stiig; Glud, Ronnie Nøhr

    2015-01-01

    Accurate quantification of pelagic primary production is essential for quantifying the marine carbon turnover and the energy supply to the food web. Knowing the electron requirement (Κ) for carbon (C) fixation (ΚC) and oxygen (O2) production (ΚO2), variable fluorescence has the potential to quantify primary production in microalgae, and hereby increasing spatial and temporal resolution of measurements compared to traditional methods. Here we quantify ΚC and ΚO2 through measures of Pulse Amplitude Modulated (PAM) fluorometry, C fixation and O2 production in an Arctic fjord (Godthåbsfjorden, W Greenland). Through short- (2h) and long-term (24h) experiments, rates of electron transfer (ETRPSII), C fixation and/or O2 production were quantified and compared. Absolute rates of ETR were derived by accounting for Photosystem II light absorption and spectral light composition. Two-hour incubations revealed a linear relationship between ETRPSII and gross 14C fixation (R2 = 0.81) during light-limited photosynthesis, giving a ΚC of 7.6 ± 0.6 (mean ± S.E.) mol é (mol C)-1. Diel net rates also demonstrated a linear relationship between ETRPSII and C fixation giving a ΚC of 11.2 ± 1.3 mol é (mol C)-1 (R2 = 0.86). For net O2 production the electron requirement was lower than for net C fixation giving 6.5 ± 0.9 mol é (mol O2)-1 (R2 = 0.94). This, however, still is an electron requirement 1.6 times higher than the theoretical minimum for O2 production [i.e. 4 mol é (mol O2)-1]. The discrepancy is explained by respiratory activity and non-photochemical electron requirements and the variability is discussed. In conclusion, the bio-optical method and derived electron requirement support conversion of ETR to units of C or O2, paving the road for improved spatial and temporal resolution of primary production estimates.

  7. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  8. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    Summary The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  9. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
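
    A minimal sketch of one member of the class described above (exponential tilting): calibration weights for the control group are chosen so that its weighted covariate means exactly match a set of target moments, here the treated-group means. This one-sided version is a simplification of the exact three-way balance in the paper; the function names, optimizer and toy data are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def calibration_weights(x_control, target_means):
          """Exponential-tilting calibration weights: minimise the convex log-sum-exp
          dual; at the optimum the weighted covariate means of the control group
          equal `target_means` (e.g. the treated-group means)."""
          c = np.asarray(x_control, float) - np.asarray(target_means, float)

          def dual(lam):
              return np.log(np.exp(c @ lam).sum())

          lam = minimize(dual, np.zeros(c.shape[1]), method="BFGS").x
          w = np.exp(c @ lam)
          return w / w.sum()

      rng = np.random.default_rng(5)
      x_ctrl = rng.normal(0.0, 1.0, size=(500, 2))          # control covariates
      target = np.array([0.3, -0.2])                        # e.g. treated-group means
      w = calibration_weights(x_ctrl, target)
      print(np.round(w @ x_ctrl, 3))                        # ~ target after weighting

    The weighted control outcome mean would then be contrasted with the treated outcome mean to estimate the effect on the treated; the paper's construction additionally balances the combined group to target the average treatment effect.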

  10. VacciCost - A tool to estimate the resource requirements for implementing livestock vaccination campaigns. Application to peste des petits ruminants (PPR) vaccination in Senegal.

    PubMed

    Tago, Damian; Sall, Baba; Lancelot, Renaud; Pradel, Jennifer

    2017-09-01

    Vaccination is one of the main tools currently available to control animal diseases. In eradication campaigns, vaccination plays a crucial role by reducing the number of susceptible hosts with the ultimate goal of interrupting disease transmission. Nevertheless, mass vaccination campaigns may be very expensive and in some cases unprofitable. VacciCost is a tool designed to help decision-makers in the estimation of the resources required to implement mass livestock vaccination campaigns against regulated diseases. The tool focuses on the operational or running costs of the campaign, so acquisition of new equipment or vehicles is not considered. It takes into account different types of production systems to differentiate the vaccination productivity (number of animals vaccinated per day) in systems where animals are concentrated and easy to reach, from those characterized by small herds that are scattered and less accessible. The resource requirements are classified in eight categories: vaccines, injection supplies, personnel, transport, maintenance and overhead, training, social mobilization, and surveillance and monitoring. This categorization allows identifying the most expensive components of a vaccination campaign, which is crucial to design cost-reduction strategies. The use of the tool is illustrated using data collected in collaboration with Senegalese Veterinary Services regarding vaccination against peste des petits ruminants. The average daily number of animals vaccinated per vaccination team was found to be crucial for the costs of the campaign so significant savings can be obtained by implementing training to improve the performance of vaccination teams. Copyright © 2017 Centre de cooperation internationale en recherche agronomique pour le developpement (CIRAD). Published by Elsevier B.V. All rights reserved.

  11. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  12. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  13. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
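
    A compact sketch of the core Grassmann Average iteration for the leading one-dimensional subspace: flip each unit observation to agree in sign with the current estimate and re-average with norm weights. This is a minimal reading of the idea, not the authors' released implementation; the trimmed variant (TGA) would swap the weighted mean for a trimmed mean.

      import numpy as np

      def grassmann_average(X, n_iter=30, seed=0):
          """Leading one-dimensional subspace of zero-mean data via sign-aligned,
          norm-weighted averaging of unit observation directions."""
          X = np.asarray(X, float)
          norms = np.linalg.norm(X, axis=1)
          U, w = X[norms > 0] / norms[norms > 0, None], norms[norms > 0]
          rng = np.random.default_rng(seed)
          q = rng.normal(size=X.shape[1])
          q /= np.linalg.norm(q)
          for _ in range(n_iter):
              signs = np.where(U @ q >= 0, 1.0, -1.0)   # flip observations toward q
              q = (w * signs) @ U
              q /= np.linalg.norm(q)
          return q

      # Zero-mean data dominated by the first coordinate direction
      rng = np.random.default_rng(6)
      data = rng.normal(size=(1000, 5)) * np.array([5.0, 1.0, 1.0, 1.0, 1.0])
      data -= data.mean(axis=0)
      print(np.round(grassmann_average(data), 2))   # ~ +/- first basis vector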

  14. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.

  15. Estimation of protein requirement for maintenance in adult parrots (Amazona spp.) by determining inevitable N losses in excreta.

    PubMed

    Westfahl, C; Wolf, P; Kamphues, J

    2008-06-01

    Especially in older pet birds, an unnecessary overconsumption of protein--presumably occurring in human custody--should be avoided in view of a potential decrease in the excretory organs' (liver, kidney) efficiency. Inevitable nitrogen (N)-losses enable the estimation of protein requirement for maintenance, because these losses have at least to be replaced to maintain N equilibrium. To determine the inevitable N losses in excreta of adult amazons (Amazona spp.), a frugivor-granivorous avian species from South America, adult amazons (n = 8) were fed a synthetic nearly N-free diet (in dry matter; DM: 37.8% starch, 26.6% sugar, 11.0% fat) for 9 days. Throughout the trial, feed and water intake were recorded, the amounts of excreta were measured and analysed for DM and ash content, N (Dumas analysis) and uric acid (enzymatic-photometric analysis) content. Effects of the N-free diet on body weight (BW) and protein-related blood parameters were quantified and compared with data collected during a previous 4-day period in which a commercial seed mixture was offered to the birds. After feeding an almost N-free diet for 9 days, under the conditions of a DM intake (20.1 g DM/bird/day) as in seeds and digestibility of organic matter comparable with those when fed seeds (82% and 76% respectively), it was possible to quantify the inevitable N losses via excrements to be 87.2 mg/bird/day or 172.5 mg/kg BW(0.75)/day. Assuming a utilization coefficient of 0.57 this leads to an estimated protein need of approximately 1.9 g/kg BW(0.75)/day (this value does not consider further N losses via feathers and desquamated cells; with the prerequisite that there is a balanced amino acid pattern).
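
    The protein figure follows from simple arithmetic on the reported nitrogen losses, using the standard nitrogen-to-protein factor of 6.25 and the utilization coefficient of 0.57 stated above.

      def maintenance_protein_g(n_loss_mg_per_kg075=172.5, utilization=0.57, n_to_protein=6.25):
          """Reproduce the abstract's arithmetic: inevitable N losses x 6.25 (N-to-protein)
          divided by the assumed utilization coefficient, in g per kg BW^0.75 per day."""
          return n_loss_mg_per_kg075 * n_to_protein / utilization / 1000.0

      print(round(maintenance_protein_g(), 2))   # ~1.9 g/kg BW^0.75/d, as in the abstract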

  16. Use of Continuous Glucose Monitoring to Estimate Insulin Requirements in Patients with Type 1 Diabetes Mellitus During a Short Course of Prednisone

    PubMed Central

    Bevier, Wendy C.; Zisser, Howard C.; Jovanovič, Lois; Finan, Daniel A.; Palerm, Cesar C.; Seborg, Dale E.; Doyle, Francis J.

    2008-01-01

    Background Insulin requirements to maintain normoglycemia during glucocorticoid therapy and stress are often difficult to estimate. To simulate insulin resistance during stress, adults with type 1 diabetes mellitus (T1DM) were given a three-day course of prednisone. Methods Ten patients (7 women, 3 men) using continuous subcutaneous insulin infusion pumps wore the Medtronic Minimed CGMS® (Northridge, CA) device. Mean (standard deviation) age was 43.1 (14.9) years, body mass index 23.9 (4.7) kg/m2, hemoglobin A1c 6.8% (1.2%), and duration of diabetes 18.7 (10.8) years. Each patient wore the CGMS for one baseline day (day 1), followed by three days of self-administered prednisone (60 mg/d; days 2–4), and one post-prednisone day (day 5). Results Analysis using Wilcoxon signed rank test (values are median [25th percentile, 75th percentile]) indicated a significant difference between day 1 and the mean of days on prednisone (days 2–4) for average glucose level (110.0 [81.0, 158.0] mg/dl vs 149.2 [137.7, 168.0] mg/dl; p = .022), area under the glucose curve and above the upper limit of 180 mg/dl per day (0.5 [0, 8.0] mg/dl·d vs 14.0 [7.7, 24.7] mg/dl·d; p = .002), and total daily insulin dose (TDI) (0.5 [0.4, 0.6] U/kg·d vs 0.9 [0.8, 1.0] U/kg·d; p = .002). In addition, the TDI was significantly different for day 1 vs day 5 (0.5 [0.4, 0.6] U/kg·d vs 0.6 [0.5, 0.8] U/kg·d; p = .002). Basal rates and insulin boluses were increased by an average of 69% (range: 30–100%) six hours after the first prednisone dose and returned to baseline amounts on the evening of day 4. Conclusions For adults with T1DM, insulin requirements during prednisone-induced insulin resistance may need to be increased by 70% or more to normalize blood glucose levels. PMID:19885233

  17. Tissue and external insulation estimates and their effects on prediction of energy requirements and of heat stress.

    PubMed

    Berman, A

    2004-05-01

    Published data were used to develop improved equations to predict tissue insulation (TI) and external insulation (EI) and their effects on maintenance requirements of Holstein cattle. These are used to calculate lower critical temperature (LCT), energy cost of exposure to temperatures below LCT, and excess heat accumulating in the body at temperatures above LCT. The National Research Council classifies TI by age groups and body condition score; and in the EI equation air velocity effects are linear and coat insulation values are derived from beef animals in cold climates. These lead to low LCT values, which are not compatible with known effects of environment on the performance of Holsteins in warm climates. Equations were developed to present TI as a function of body weight, improving prediction of TI for animals of similar age but differing in body weight. An equation was developed to predict rate of decrease of TI at ambient temperatures above LCT. Nonlinear equations were developed that account for wind effects as boundary layer insulation effects dependent on body weight and air velocity. Published data were used to develop adjustments for hair coat effects on EI in Holstein cows. While by NRC equations, wind has negligible effects on heat loss, the recalculated effects of air velocity on heat loss were consistent with published effects of forced ventilation on the responses of the Holstein cow. The derived LCT was higher by 10 to 20 degrees C than that calculated by NRC (2001) and accounted for known Holstein performance in temperate and warm climates. These equations pointed to tentative significant effects of cold (-10 degrees C) on energy requirements (7 Mcal/d) further increased by 1 m/s wind (15 Mcal/d), even in high-producing cows. Needs for increased heat dissipation and estimating heat stress development at ambient temperatures above the LCT are predicted. These equations can be used to revise NRC equations for heat exchange.

  18. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  19. The paired deuterated retinol dilution technique can be used to estimate the daily vitamin A intake required to maintain a targeted whole body vitamin A pool size in men.

    PubMed

    Haskell, Marjorie J; Jamil, Kazi M; Peerson, Janet M; Wahed, Mohammed A; Brown, Kenneth H

    2011-03-01

    The estimated average requirement (EAR) for vitamin A (VA) of adult males is based on the amount of dietary VA required to maintain adequate function and provide a modest liver VA reserve (0.07 μmol/g). In the present study, the paired-deuterated retinol dilution technique was used to estimate changes in VA pool size in Bangladeshi men from low-income, urban neighborhoods who had small initial VA pool sizes (0.059 ± 0.032 mmol, or 0.047 ± 0.025 μmol/g liver; n = 16). The men were supplemented for 60 d with 1 of 8 different levels of dietary VA, ranging from 100 to 2300 μg/d (2 men/dietary VA level). VA pool size was estimated before and after the supplementation period. The mean change (plus or minus) in VA pool size in the men was plotted against their corresponding levels of daily VA intake and a regression line was fit to the data. The level of intake at which the regression line crossed the x-axis (where estimates of VA pool size remained unchanged) was used as an estimate of the EAR. A VA intake of 254-400 μg/d was sufficient to maintain a small VA pool size (0.059 ± 0.032 mmol) in the Bangladeshi men, corresponding to a VA intake of 362-571 μg/d for a 70-kg U.S. man, which is lower than their current EAR of 625 μg/d. The data suggest that the paired-deuterated retinol dilution technique could be used for estimating the EAR for VA for population subgroups for which there are currently no direct estimates.
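
    The estimation step described above can be sketched in a few lines: fit a straight line to the change in vitamin A pool size against daily intake and take the x-intercept (the intake at which pool size is unchanged) as the EAR. The intake levels and responses below are invented; only the procedure mirrors the abstract.

        import numpy as np

        intake_ug_d = np.array([100, 200, 400, 600, 900, 1300, 1800, 2300], dtype=float)
        delta_pool_mmol = np.array([-0.015, -0.008, 0.001, 0.006, 0.013,
                                    0.020, 0.031, 0.042])   # hypothetical changes

        slope, intercept = np.polyfit(intake_ug_d, delta_pool_mmol, deg=1)
        ear_estimate = -intercept / slope   # intake where the fitted line crosses zero
        print(f"estimated EAR ~ {ear_estimate:.0f} ug/d")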

  20. Indicator Amino Acid-Derived Estimate of Dietary Protein Requirement for Male Bodybuilders on a Nontraining Day Is Several-Fold Greater than the Current Recommended Dietary Allowance.

    PubMed

    Bandegan, Arash; Courtney-Martin, Glenda; Rafii, Mahroukh; Pencharz, Paul B; Lemon, Peter Wr

    2017-02-08

    Background: Despite a number of studies indicating increased dietary protein needs in bodybuilders with the use of the nitrogen balance technique, the Institute of Medicine (2005) has concluded, based in part on methodologic concerns, that "no additional dietary protein is suggested for healthy adults undertaking resistance or endurance exercise." Objective: The aim of the study was to assess the dietary protein requirement of healthy young male bodybuilders (with ≥3 y training experience) on a nontraining day by measuring the oxidation of ingested l-[1-(13)C]phenylalanine to (13)CO2 in response to graded intakes of protein [indicator amino acid oxidation (IAAO) technique]. Methods: Eight men (means ± SDs: age, 22.5 ± 1.7 y; weight, 83.9 ± 11.6 kg; 13.0% ± 6.3% body fat) were studied at rest on a nontraining day, on several occasions (4-8 times) each with protein intakes ranging from 0.1 to 3.5 g ⋅ kg(-1) ⋅ d(-1), for a total of 42 experiments. The diets provided energy at 1.5 times each individual's measured resting energy expenditure and were isoenergetic across all treatments. Protein was fed as an amino acid mixture based on the protein pattern in egg, except for phenylalanine and tyrosine, which were maintained at constant amounts across all protein intakes. For 2 d before the study, all participants consumed 1.5 g protein ⋅ kg(-1) ⋅ d(-1). On the study day, the protein requirement was determined by identifying the breakpoint in the F(13)CO2 with graded amounts of dietary protein [mixed-effects change-point regression analysis of F(13)CO2 (labeled tracer oxidation in breath)]. Results: The Estimated Average Requirement (EAR) of protein and the upper 95% CI RDA for these young male bodybuilders were 1.7 and 2.2 g ⋅ kg(-1) ⋅ d(-1), respectively. Conclusion: These IAAO data suggest that the protein EAR and recommended intake for male bodybuilders at rest on a nontraining day exceed the current recommendations of the Institute of Medicine by ∼2
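
    The breakpoint idea can be sketched with a plain "broken-stick" least-squares fit: a declining segment below the breakpoint and a plateau above it, with the breakpoint chosen to minimize the total squared error. This is not the mixed-effects change-point model used in the study, and the data below are invented.

        import numpy as np

        protein = np.array([0.1, 0.5, 0.9, 1.2, 1.5, 1.8, 2.2, 2.6, 3.0, 3.5])
        f13co2  = np.array([9.5, 8.1, 6.6, 5.4, 4.4, 3.6, 3.4, 3.5, 3.3, 3.4])  # invented

        def sse_for_breakpoint(bp):
            # segment 1: declining line below bp; segment 2: plateau (mean) above bp
            left, right = protein <= bp, protein > bp
            if left.sum() < 2 or right.sum() < 2:
                return np.inf
            coeffs = np.polyfit(protein[left], f13co2[left], 1)
            resid_left = f13co2[left] - np.polyval(coeffs, protein[left])
            resid_right = f13co2[right] - f13co2[right].mean()
            return (resid_left ** 2).sum() + (resid_right ** 2).sum()

        candidates = np.linspace(0.5, 3.0, 251)
        breakpoint = candidates[np.argmin([sse_for_breakpoint(b) for b in candidates])]
        print(f"estimated breakpoint (EAR) ~ {breakpoint:.2f} g/kg/d")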

  1. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  2. Distribution of population averaged observables in stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-03-01

    Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population averaged gene expression levels as a function of population (sample) size for several stochastic gene expression models to find out to what extent population averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function (pgf) of observables and Large Deviation Theory, we calculate the distribution of population averaged mRNA and protein levels as a function of model parameters and population size. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences for different models. We calculate the skewness and coefficient of variation from the pgfs to estimate the sample size required for the population average to contain information about the underlying gene expression model. This is relevant to experiments where a large number of data points are unavailable.
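
    The basic behaviour of population averages can be seen with the simplest of the models mentioned, constitutive transcription, whose stationary mRNA copy number is Poisson distributed. The sketch below simulates the distribution of the population average for several sample sizes and reports its coefficient of variation and skewness; parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        mean_mrna = 5.0                    # stationary mean of the birth-death model
        sample_sizes = [1, 10, 100, 1000]
        n_replicates = 5000

        for n in sample_sizes:
            # each replicate: average mRNA count over a population of n cells
            averages = rng.poisson(mean_mrna, size=(n_replicates, n)).mean(axis=1)
            cv = averages.std() / averages.mean()
            skew = ((averages - averages.mean()) ** 3).mean() / averages.std() ** 3
            print(f"n={n:5d}  CV={cv:.3f}  skewness={skew:.3f}")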

  3. A Site-sPecific Agricultural water Requirement and footprint Estimator (SPARE:WATER 1.0)

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.

    2013-07-01

    The agricultural water footprint addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). By considering site-specific properties when calculating the crop water footprint, this methodology can be used to support decision making in the agricultural sector on local to regional scale. We therefore developed the spatial decision support system SPARE:WATER that allows us to quantify green, blue and grey water footprints on regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation methods rather than optimal conditions to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water is defined as the water needed to leach out salt from the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept where we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km3 yr-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional estimation of crop water footprints.
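
    The per-grid-cell bookkeeping behind such a footprint calculation can be sketched as below: green water is the consumed rainfall, blue water is the remaining crop water demand divided by the irrigation efficiency, and grey water is the extra water assumed necessary to leach salts from the root zone. The formulas and numbers are illustrative only, not the SPARE:WATER implementation.

        def cell_water_footprint(et_crop_mm, effective_rain_mm, irrigation_efficiency,
                                 applied_salt_mg_l, max_soil_salt_mg_l):
            green = min(et_crop_mm, effective_rain_mm)        # consumed rainfall
            net_irrigation = max(0.0, et_crop_mm - effective_rain_mm)
            blue = net_irrigation / irrigation_efficiency     # irrigation withdrawal
            # grey water: leaching requirement to keep the soil solution below a
            # tolerated salt concentration (simplified placeholder formulation)
            grey = blue * applied_salt_mg_l / max(max_soil_salt_mg_l, 1e-9)
            return green, blue, grey

        green, blue, grey = cell_water_footprint(et_crop_mm=650.0, effective_rain_mm=80.0,
                                                 irrigation_efficiency=0.7,
                                                 applied_salt_mg_l=800.0,
                                                 max_soil_salt_mg_l=2000.0)
        print(f"green={green:.0f} mm  blue={blue:.0f} mm  grey={grey:.0f} mm")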

  4. Rotational averaging of multiphoton absorption cross sections

    NASA Astrophysics Data System (ADS)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  5. Rotational averaging of multiphoton absorption cross sections.

    PubMed

    Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  6. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    The aim is to align the repeated single trials of the event-related potential (ERP) in order to obtain an improved estimate of the ERP. A new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning the random alpha waves, by explaining amplitude differences as latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
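
    The warp-averaging idea can be sketched as: align each single trial to a reference with dynamic time warping (DTW), then average the aligned trials. The code below is a bare-bones DTW on synthetic trials, without the trilinear filtering, normalization, and derivative features of the published method.

        import numpy as np

        def dtw_path(x, y):
            n, m = len(x), len(y)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = (x[i - 1] - y[j - 1]) ** 2
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            # backtrack the optimal warping path from (n, m) to (1, 1)
            i, j, path = n, m, []
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return path[::-1]

        def warp_average(trials):
            reference = trials[0].astype(float)
            warped = [reference]
            for trial in trials[1:]:
                aligned = np.zeros_like(reference)
                counts = np.zeros_like(reference)
                for i_ref, j_trial in dtw_path(reference, trial):
                    aligned[i_ref] += trial[j_trial]
                    counts[i_ref] += 1
                warped.append(aligned / np.maximum(counts, 1))
            return np.mean(warped, axis=0)

        # synthetic "trials": one component with jittered latency plus noise
        rng = np.random.default_rng(1)
        t = np.arange(200)
        trials = np.array([np.exp(-0.5 * ((t - 100 - rng.integers(-15, 16)) / 10.0) ** 2)
                           + 0.1 * rng.standard_normal(200) for _ in range(20)])
        average = warp_average(trials)
        print("peak of warp-average:", float(average.max()))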

  7. Capital Requirements Estimating Model (CREMOD) for electric utilities. Volume I. Methodology description, model description, and guide to model applications. [For each year up to 1990]

    SciTech Connect

    Collins, D E; Gammon, J; Shaw, M L

    1980-01-01

    The Capital Requirements Estimating Model for the Electric Utilities (CREMOD) is a system of programs and data files used to estimate the capital requirements of the electric utility industry for each year between the current one and 1990. CREMOD disaggregates new electric plant capacity levels from the Mid-term Energy Forecasting System (MEFS) Integrating Model solution over time using actual projected commissioning dates. It computes the effect of dispersal of new plant and capital expenditures over relatively long construction lead times on aggregate capital requirements for each year. Finally, it incorporates the effects of real escalation in the electric utility construction industry on these requirements and computes the necessary transmission and distribution expenditures. This model was used in estimating the capital requirements of the electric utility sector. These results were used in compilation of the aggregate capital requirements for the financing of energy development as published in the 1978 Annual Report to Congress. This volume, Vol. I, explains CREMOD's methodology, functions, and applications.

  8. Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction

    PubMed Central

    Ahlfors, Seppo P.; Hinrichs, Hermann

    2016-01-01

    Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
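
    The core template-subtraction idea can be sketched as below: around each cardiac cycle (located here from known R-peak positions rather than an ECG detector), build a moving-average template over neighbouring cycles and subtract it. The resampling step of the published method is omitted and all signals are synthetic.

        import numpy as np

        def subtract_cardiac_template(meg, r_peaks, pre, post, n_neighbours=10):
            cleaned = meg.copy()
            valid_peaks = [p for p in r_peaks if p - pre >= 0 and p + post <= len(meg)]
            epochs = np.array([meg[p - pre:p + post] for p in valid_peaks])
            for k, p in enumerate(valid_peaks):
                lo = max(0, k - n_neighbours)
                hi = min(len(epochs), k + n_neighbours + 1)
                template = epochs[lo:hi].mean(axis=0)   # moving average over cycles
                cleaned[p - pre:p + post] -= template
            return cleaned

        # toy data: a repeating cardiac artefact added to broadband "brain" noise
        rng = np.random.default_rng(2)
        fs, seconds = 250, 60
        meg = 0.5 * rng.standard_normal(fs * seconds)
        r_peaks = np.arange(fs, fs * seconds - fs, int(0.9 * fs))   # regular rhythm
        artefact = np.hanning(50) * 3.0
        for p in r_peaks:
            meg[p - 10:p + 40] += artefact
        clean = subtract_cardiac_template(meg, list(r_peaks), pre=10, post=40)
        print("signal variance before/after:",
              round(float(np.var(meg)), 3), round(float(np.var(clean)), 3))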

  9. Scaling registration of multiview range scans via motion averaging

    NASA Astrophysics Data System (ADS)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of scene or object requires registration of multiple range scans, which are obtained by range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, it presents a method to estimate overlap percentages of all scan pairs involved in multiview registration. Then, a variant of iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for these scan pairs, which contain high overlap percentages. Subsequently, the proposed motion averaging algorithm can transform these relative motions into global motions of multiview registration. In addition, it also introduces the parallel computation to increase the efficiency of multiview registration. Furthermore, it presents the error criterion for accuracy evaluation of multiview registration result, which can make it easy to compare results of different multiview registration approaches. Experimental results carried out with public available datasets demonstrate its superiority over related approaches.

  10. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  11. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    SciTech Connect

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
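
    The sample-size question can be illustrated with a plain normal-approximation formula: the number of repeated measurements needed so that the mean ED lies within a chosen relative precision at a chosen confidence. This is a simplified sketch, not the constrained Lagrange-multiplier scheme of the paper, and the assumed measurement SD is invented.

        import math
        from scipy.stats import norm

        def n_required(anticipated_ed_msv, sd_msv, rel_precision=0.05, confidence=0.95):
            z = norm.ppf(0.5 + confidence / 2.0)           # two-sided critical value
            margin = rel_precision * anticipated_ed_msv    # allowed absolute error
            return math.ceil((z * sd_msv / margin) ** 2)

        for ed in (4.0, 10.0):
            print(ed, "mSv ->", n_required(ed, sd_msv=0.6), "scans (assumed SD 0.6 mSv)")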

  12. Averaging and Metropolis iterations for positron emission tomography.

    PubMed

    Szirmay-Kalos, László; Magdics, Milán; Tóth, Balázs; Bükki, Tamás

    2013-03-01

    Iterative positron emission tomography (PET) reconstruction computes projections between the voxel space and the lines of response (LOR) space, which are mathematically equivalent to the evaluation of multi-dimensional integrals. The dimension of the integration domain can be very high if scattering needs to be compensated. Monte Carlo (MC) quadrature is a straightforward method to approximate high-dimensional integrals. As the numbers of voxels and LORs can be in the order of hundred millions and the projection also depends on the measured object, the quadratures cannot be precomputed, but Monte Carlo simulation should take place on-the-fly during the iterative reconstruction process. This paper presents modifications of the maximum likelihood, expectation maximization (ML-EM) iteration scheme to reduce the reconstruction error due to the on-the-fly MC approximations of forward and back projections. If the MC sample locations are the same in every iteration step of the ML-EM scheme, then the approximation error will lead to a modified reconstruction result. However, when random estimates are statistically independent in different iteration steps, then the iteration may either diverge or fluctuate around the solution. Our goal is to increase the accuracy and the stability of the iterative solution while keeping the number of random samples and therefore the reconstruction time low. We first analyze the error behavior of ML-EM iteration with on-the-fly MC projections, then propose two solutions: averaging iteration and Metropolis iteration. Averaging iteration averages forward projection estimates during the iteration sequence. Metropolis iteration rejects those forward projection estimates that would compromise the reconstruction and also guarantees the unbiasedness of the tracer density estimate. We demonstrate that these techniques allow a significant reduction of the required number of samples and thus the reconstruction time. The proposed methods are built into

  13. Investigation of Arterial Waves using Phase Averaging.

    PubMed

    Johnston, Clifton; Martinuzzi, Robert; Schaefer, Matthew

    2005-01-01

    In this paper, the development of objective criteria for data reduction, parameter estimation, and phenomenological description of arterial pressure pulses is presented. The additional challenge of distinguishing between the cyclical and incoherent contributions to the wave form is also considered. By applying the technique of phase averaging to a series of heart beats, a characteristic pulse was determined. It was shown that the beats from a paced heart are very similar, while beats from an unpaced heart vary significantly in time and amplitude. The appropriate choice of a reference point is critical in generating phase averages that embody the characteristic behaviour.
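
    Phase averaging can be sketched as: segment the signal at a reference point in each cycle, resample each beat onto a common phase axis, and average; the spread about that average reflects the incoherent contribution. The pulse shape, beat detection, and parameters below are synthetic simplifications.

        import numpy as np

        def phase_average(signal, cycle_starts, n_phase=100):
            beats = []
            for s, e in zip(cycle_starts[:-1], cycle_starts[1:]):
                beat = signal[s:e]
                phase = np.linspace(0, 1, len(beat))
                beats.append(np.interp(np.linspace(0, 1, n_phase), phase, beat))
            beats = np.array(beats)
            return beats.mean(axis=0), beats.std(axis=0)   # coherent + incoherent parts

        # synthetic arterial-like pulses with beat-to-beat variation
        rng = np.random.default_rng(3)
        fs = 200
        starts, segments = [0], []
        while starts[-1] < 20 * fs:
            period = int(fs * (0.8 + 0.1 * rng.standard_normal()))
            amplitude = 1.0 + 0.1 * rng.standard_normal()
            t = np.arange(period) / period
            segments.append(amplitude * np.sin(np.pi * t) ** 2)
            starts.append(starts[-1] + period)
        signal = np.concatenate(segments)
        mean_pulse, sd_pulse = phase_average(signal, starts)
        print("characteristic pulse peak:", round(float(mean_pulse.max()), 3))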

  14. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false What is required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... must guarantee the excess. The guarantor may be the manufacturer. The guarantor may also be an...

  15. 12 CFR 714.5 - What is required if you rely on an estimated residual value greater than 25% of the original cost...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false What is required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... must guarantee the excess. The guarantor may be the manufacturer. The guarantor may also be an...

  16. Global robust image rotation from combined weighted averaging

    NASA Astrophysics Data System (ADS)

    Reich, Martin; Yang, Michael Ying; Heipke, Christian

    2017-05-01

    In this paper we present a novel rotation averaging scheme as part of our global image orientation model. This model is based on homologous points in overlapping images and is robust against outliers. It is applicable to various kinds of image data and provides accurate initializations for a subsequent bundle adjustment. The computation of global rotations is a combined optimization scheme: First, rotations are estimated in a convex relaxed semidefinite program. Rotations are required to be in the convex hull of the rotation group SO(3), which in most cases leads to correct rotations. Second, the estimation is improved in an iterative least squares optimization in the Lie algebra of SO(3). In order to deal with outliers in the relative rotations, we developed a sequential graph optimization algorithm that is able to detect and eliminate incorrect rotations. From the beginning, we propagate covariance information which allows for a weighting in the least squares estimation. We evaluate our approach using both synthetic and real image datasets. Compared to recent state-of-the-art rotation averaging and global image orientation algorithms, our proposed scheme reaches a high degree of robustness and accuracy. Moreover, it is also applicable to large Internet datasets, which shows its efficiency.

  17. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can be first screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provided a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
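
    The meta-analysis strategy, in its simplest fixed-effect form, amounts to an inverse-variance weighted average of the per-experiment estimates on the log scale. The sketch below uses invented EC50 values and standard errors; a random-effects version would add a between-experiment variance component to the weights.

        import numpy as np

        ec50 = np.array([1.8, 2.4, 2.1, 3.0])         # EC50 per experiment (e.g. uM)
        se_log = np.array([0.15, 0.25, 0.10, 0.30])   # standard errors of log(EC50)

        log_ec50 = np.log(ec50)
        weights = 1.0 / se_log ** 2
        pooled_log = np.sum(weights * log_ec50) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))
        print(f"pooled EC50 ~ {np.exp(pooled_log):.2f}, "
              f"95% CI ({np.exp(pooled_log - 1.96 * pooled_se):.2f}, "
              f"{np.exp(pooled_log + 1.96 * pooled_se):.2f})")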

  18. Time-average dynamic speckle interferometry

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2014-05-01

    For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been applied successfully. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with a time-averaging method for the dynamic speckle interferometry of microscopic processes that eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. Results are given for an experiment on the high-cycle fatigue of steel and for an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.

  19. Differences in concentration lengths computed using band-averaged mass extinction coefficients and band-averaged transmittance

    NASA Astrophysics Data System (ADS)

    Farmer, W. Michael

    1990-09-01

    An understanding of how broad-band transmittance is affected by the atmosphere is crucial to accurately predicting how broad-band sensors such as FLIRs will perform. This is particularly true for sensors required to function in an environment where countermeasures such as smokes/obscurants have been used to limit sensor performance. A common method of estimating the attenuation capabilities of smokes/obscurants released in the atmosphere to defeat broad-band sensors is to use a band averaged extinction coefficient with concentration length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, and can lead to results for band averages of the relative transmittance that are significantly different from those obtained using the source spectra, sensor response, and normal atmospheric transmission. In this paper we discuss the differences that occur in predicting relative transmittance as a function of concentration length using band-averaged mass extinction coefficients or computing the band-averaged transmittance as a function of source spectra. Two examples are provided to illustrate the differences in results. The first example is applicable to 8- to 14-um band transmission through natural fogs. The second example considers 3- to 5-um transmission through phosphorus smoke produced at 17% and 90% relative humidity. The results show major differences in the prediction of concentration length values by the two methods when the relative transmittance falls below about 20%.
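
    The two prediction methods being contrasted can be shown side by side: (a) inserting a single band-averaged extinction coefficient into the Beer-Bouguer law versus (b) averaging the spectral transmittance weighted by a source/response spectrum. The spectra below are invented; by Jensen's inequality the two results diverge as the transmittance drops, as the abstract notes.

        import numpy as np

        wavelengths = np.linspace(8.0, 14.0, 61)                   # um
        alpha = 0.3 + 0.25 * np.sin((wavelengths - 8.0) / 2.0)     # mass extinction, m2/g
        weight = np.exp(-0.5 * ((wavelengths - 10.5) / 1.5) ** 2)  # source x response
        weight /= weight.sum()

        for cl in (1.0, 3.0, 6.0):                                 # concentration length, g/m2
            t_from_avg_alpha = np.exp(-np.sum(weight * alpha) * cl)    # method (a)
            t_band_averaged = np.sum(weight * np.exp(-alpha * cl))     # method (b)
            print(f"CL={cl:3.1f}  band-avg alpha: {t_from_avg_alpha:.3f}  "
                  f"band-avg transmittance: {t_band_averaged:.3f}")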

  20. Average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.

  1. Radial averages of astigmatic TEM images.

    PubMed

    Fernando, K Vince

    2008-10-01

    The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
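
    The simulation suggested at the end of the abstract can be sketched as multiplying an idealized, non-astigmatic radial average by a zeroth-order Bessel factor whose argument is a defocus-only aberration term with the defocus replaced by the defocus difference. The microscope parameters, the CTF convention, and the exact prefactor of the Bessel argument below are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy.special import j0

        wavelength_a = 0.025       # electron wavelength in Angstrom (~200 kV), approximate
        mean_defocus_a = 20000.0   # 2 um underfocus, in Angstrom
        defocus_diff_a = 2000.0    # astigmatism: difference between the axis defoci
        s = np.linspace(0.0, 0.2, 2000)   # spatial frequency, 1/Angstrom

        ideal_ctf_radial = np.sin(np.pi * wavelength_a * mean_defocus_a * s ** 2)
        bessel_factor = j0(np.pi * wavelength_a * defocus_diff_a * s ** 2)
        astigmatic_radial = ideal_ctf_radial * bessel_factor
        print("strongest attenuation of the radial average:",
              round(float(bessel_factor.min()), 3))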

  2. First Order Estimates of Energy Requirements for Pollution Control. Interagency Energy-Environment Research and Development Program Report.

    ERIC Educational Resources Information Center

    Barker, James L.; And Others

    This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…

  3. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the
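
    The basic mass-curve idea behind such storage estimates can be sketched with the classical sequent-peak calculation: the required capacity is the largest cumulative shortfall of inflow below demand. The report's nonsequential-mass-curve and frequency analyses are more involved; the monthly inflow volumes below are invented.

        def required_storage(inflows, demand):
            # inflows and demand in the same volume units per time step
            storage_deficit, max_deficit = 0.0, 0.0
            for q in inflows:
                # accumulate deficit when demand exceeds inflow; surplus refills storage
                storage_deficit = max(0.0, storage_deficit + demand - q)
                max_deficit = max(max_deficit, storage_deficit)
            return max_deficit

        monthly_inflow = [270, 225, 150, 75, 30, 15, 12, 24, 60, 120, 195, 240]
        print("storage needed for a demand of 90 per month:",
              required_storage(monthly_inflow, demand=90))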

  4. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  5. Estimating Temperature Retrieval Accuracy Associated With Thermal Band Spatial Resolution Requirements for Center Pivot Irrigation Monitoring and Management

    NASA Astrophysics Data System (ADS)

    Ryan, R. E.; Irons, J. R.; Allen, R.; Spruce, J.; Underwood, L. W.; Pagnutti, M.

    2006-12-01

    This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for thermal remote sensed data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, and direct measures of spatial resolution are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.

  6. Estimating Temperature Retrieval Accuracy Associated With Thermal Band Spatial Resolution Requirements for Center Pivot Irrigation Monitoring and Management

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Irons, James; Spruce, Joseph P.; Underwood, Lauren W.; Pagnutti, Mary

    2006-01-01

    This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for thermal remote sensed data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, and direct measures of spatial resolution are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.

  8. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material showing the possible wider application of such averages to guide sampling.
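
    The first of the two guidelines above reduces to a very small calculation: average the variogram ranges available for a parent material and sample at no more than half that average range. The parent-material names and range values below are invented placeholders, not the paper's results.

        variogram_ranges_m = {
            "chalk":     [310.0, 280.0, 350.0, 295.0],   # ranges of several properties, m
            "clay":      [180.0, 150.0, 210.0],
            "greensand": [420.0, 390.0],
        }
        for parent_material, ranges in variogram_ranges_m.items():
            average_range = sum(ranges) / len(ranges)
            suggested_interval = 0.5 * average_range      # sample at < half the range
            print(f"{parent_material:10s} average range {average_range:5.0f} m "
                  f"-> sample every ~{suggested_interval:.0f} m")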

  9. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  10. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. ESTIMATED SPACE REQUIREMENTS FOR FT. WAYNE FACILITY TO BE JOINTLY OCCUPIED BY INDIANA UNIVERSITY AND PURDUE UNIVERSITY.

    ERIC Educational Resources Information Center

    COLLINS, RALPH L.; AND OTHERS

    THIS REPORT PRESENTS THE RESULTS OF THE JOINT PLANNING COMMITTEE OF INDIANA UNIVERSITY AND PURDUE UNIVERSITY IN TERMS OF THE AMOUNT AND TYPE OF SPACE THAT WILL BE REQUIRED BY 1965 AND BY 1972 IN A FACILITY TO BE JOINTLY OCCUPIED BY THE TWO UNIVERSITIES AT FORT WAYNE. IN GENERAL, A SIX-STEP PROCEDURE WAS FOLLOWED--(1) EACH INSTITUTION,…

  13. Estimates for ELF effects: noise-based thresholds and the number of experimental conditions required for empirical searches.

    PubMed

    Weaver, J C; Astumian, R D

    1992-01-01

    Interactions between physical fields and biological systems present difficult conceptual problems. Complete biological systems, even isolated cells, are exceedingly complex. This argues against the pursuit of theoretical models, with the possible consequence that only experimental studies should be considered. In contrast, electroma