Sample records for estimated average requirement

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by a water right, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that the structural uncertainty among reference ET models is far more important than the parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm imposed by a water right, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
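
    The REA weighting scheme used in the study is not reproduced here; a minimal sketch of the general idea, iteratively down-weighting ensemble members that sit far from the consensus, is shown below. The function name, the convergence criterion, and the six example irrigation water requirements are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def rea_style_average(predictions, tol=1e-6, max_iter=100):
        """Iteratively reweighted ensemble average (REA-like sketch).

        predictions : per-model irrigation water requirement estimates
                      (e.g. mm per season).  Members far from the current
                      consensus receive smaller weights, loosely following
                      the spirit of reliability ensemble averaging.
        """
        predictions = np.asarray(predictions, dtype=float)
        weights = np.full(predictions.shape, 1.0 / predictions.size)
        consensus = np.sum(weights * predictions)
        for _ in range(max_iter):
            # Reliability factor: inverse distance from the current consensus.
            reliability = 1.0 / (np.abs(predictions - consensus) + 1e-12)
            weights = reliability / reliability.sum()
            new_consensus = np.sum(weights * predictions)
            if abs(new_consensus - consensus) < tol:
                break
            consensus = new_consensus
        return consensus, weights

    # Hypothetical irrigation water requirements (mm) from six ET models:
    irr = [380.0, 410.0, 395.0, 520.0, 402.0, 388.0]
    avg, w = rea_style_average(irr)
    print(f"equal-weight mean = {np.mean(irr):.1f} mm, REA-style mean = {avg:.1f} mm")
    ```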

  3. Clinical validity of the estimated energy requirement and the average protein requirement for nutritional status change and wound healing in older patients with pressure ulcers: A multicenter prospective cohort study.

    PubMed

    Iizaka, Shinji; Kaitani, Toshiko; Nakagami, Gojiro; Sugama, Junko; Sanada, Hiromi

    2015-11-01

    Adequate nutritional intake is essential for pressure ulcer healing. Recently, the estimated energy requirement (30 kcal/kg) and the average protein requirement (0.95 g/kg) necessary to maintain metabolic balance have been reported. The purpose was to evaluate the clinical validity of these requirements in older hospitalized patients with pressure ulcers by assessing nutritional status and wound healing. This multicenter prospective study carried out as a secondary analysis of a clinical trial included 194 patients with pressure ulcers aged ≥65 years from 29 institutions. Nutritional status including anthropometry and biochemical tests, and wound status by a structured severity tool, were evaluated over 3 weeks. Energy and protein intake were determined from medical records on a typical day and dichotomized by meeting the estimated average requirement. Longitudinal data were analyzed with a multivariate mixed-effects model. Meeting the energy requirement was associated with changes in weight (P < 0.001), arm muscle circumference (P = 0.003) and serum albumin level (P = 0.016). Meeting the protein requirement was associated with changes in weight (P < 0.001) and serum albumin level (P = 0.043). These markers decreased in patients who did not meet the requirement, but were stable or increased in those who did. Energy and protein intake were associated with wound healing for deep ulcers (P = 0.013 for both), improving exudates and necrotic tissue, but not for superficial ulcers. Estimated energy requirement and average protein requirement were clinically validated for prevention of nutritional decline and of impaired healing of deep pressure ulcers. © 2014 Japan Geriatrics Society.
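
    The two cut-points quoted above translate directly into a per-patient intake check: 30 kcal/kg of body weight for energy and 0.95 g/kg for protein. A minimal sketch (the 50-kg patient and the intake values are hypothetical):

    ```python
    def meets_requirements(weight_kg, energy_kcal, protein_g,
                           energy_per_kg=30.0, protein_per_kg=0.95):
        """Return (meets_energy, meets_protein) for one patient, using the
        per-kilogram requirements quoted in the abstract."""
        return (energy_kcal >= energy_per_kg * weight_kg,
                protein_g >= protein_per_kg * weight_kg)

    # Hypothetical 50-kg patient recorded at 1400 kcal and 45 g protein per day:
    print(meets_requirements(50.0, 1400.0, 45.0))   # (False, False): needs 1500 kcal and 47.5 g
    ```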

  4. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a four-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
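
    The scalar-weighted version of this problem has a compact solution worth sketching: the optimal average quaternion is the unit eigenvector associated with the largest eigenvalue of the weighted sum of quaternion outer products. The sketch below assumes a scalar-last quaternion convention and uses illustrative example values.

    ```python
    import numpy as np

    def average_quaternions(quats, weights=None):
        """Average unit quaternions (rows of `quats`, scalar-last convention).

        The average is the unit eigenvector belonging to the largest
        eigenvalue of M = sum_i w_i q_i q_i^T, which minimizes the
        weighted attitude-matrix cost function.
        """
        quats = np.asarray(quats, dtype=float)
        if weights is None:
            weights = np.ones(len(quats))
        M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
        eigvals, eigvecs = np.linalg.eigh(M)
        return eigvecs[:, -1]          # eigenvector of the largest eigenvalue

    # Two nearly identical attitudes with opposite sign conventions still
    # average correctly, because q and -q contribute identically to M:
    q1 = np.array([0.0, 0.0, 0.0, 1.0])
    q2 = -np.array([0.0, 0.0, np.sin(0.01), np.cos(0.01)])
    print(average_quaternions([q1, q2]))
    ```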

  5. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.

  6. Dietary Protein Requirement of Men >65 Years Old Determined by the Indicator Amino Acid Oxidation Technique Is Higher than the Current Estimated Average Requirement.

    PubMed

    Rafii, Mahroukh; Chapman, Karen; Elango, Rajavel; Campbell, Wayne W; Ball, Ronald O; Pencharz, Paul B; Courtney-Martin, Glenda

    2016-03-09

    The current estimated average requirement (EAR) and RDA for protein of 0.66 and 0.8 g·kg⁻¹·d⁻¹, respectively, for adults, including older men, are based on nitrogen balance data analyzed by monolinear regression. Recent studies in young men and older women that used the indicator amino acid oxidation (IAAO) technique suggest that those values may be too low. This observation is supported by 2-phase linear crossover analysis of the nitrogen balance data. The main objective of this study was to determine the protein requirement for older men by using the IAAO technique. Six men aged >65 y were studied; each individual was tested 7 times with protein intakes ranging from 0.2 to 2.0 g·kg⁻¹·d⁻¹ in random order for a total of 42 studies. The diets provided energy at 1.5 times the resting energy expenditure and were isocaloric. Protein was consumed hourly for 8 h as an amino acid mixture with the composition of egg protein with L-[1-¹³C]phenylalanine as the indicator amino acid. The group mean protein requirement was determined by applying a mixed-effects change-point regression analysis to F¹³CO₂ (label tracer oxidation in breath ¹³CO₂), which identified a breakpoint in F¹³CO₂ in response to graded intakes of protein. The estimated protein requirement and RDA for older men were 0.94 and 1.24 g·kg⁻¹·d⁻¹, respectively, which are not different from values we published using the same method in young men and older women. The current intake recommendations for older adults for dietary protein of 0.66 g·kg⁻¹·d⁻¹ for the EAR and 0.8 g·kg⁻¹·d⁻¹ for the RDA appear to be underestimated by ∼30%. Future longer-term studies should be conducted to validate these results. This trial was registered at clinicaltrials.gov as NCT01948492. © 2016 American Society for Nutrition.
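
    The mixed-effects change-point analysis itself is not reproduced here; a simplified fixed-effects sketch of the same idea, locating a breakpoint in F¹³CO₂ versus protein intake by grid search over candidate breakpoints, is shown below on synthetic data. The two-phase model, noise level, and breakpoint near 0.94 g·kg⁻¹·d⁻¹ are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def fit_breakpoint(x, y, candidates):
        """Two-phase linear model y = a + b*min(x - c, 0): a declining segment
        below the breakpoint c and a plateau above it.  The breakpoint is chosen
        by grid search to minimize the residual sum of squares; a and b come
        from ordinary least squares."""
        best = None
        for c in candidates:
            X = np.column_stack([np.ones_like(x), np.minimum(x - c, 0.0)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, c, coef)
        return best[1], best[2]        # breakpoint, (intercept, slope)

    # Synthetic F13CO2-style data: oxidation falls until ~0.94 g/kg/d, then plateaus.
    rng = np.random.default_rng(0)
    intake = np.linspace(0.2, 2.0, 42)
    truth = 1.2 - 0.8 * np.minimum(intake - 0.94, 0.0)
    f13co2 = truth + rng.normal(0, 0.03, intake.size)
    bp, coef = fit_breakpoint(intake, f13co2, np.linspace(0.4, 1.6, 121))
    print(f"estimated requirement (breakpoint) ≈ {bp:.2f} g/kg/d")
    ```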

  7. Robust estimation for class averaging in cryo-EM Single Particle Reconstruction.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2014-01-01

    Single Particle Reconstruction (SPR) for Cryogenic Electron Microscopy (cryo-EM) aligns and averages the images extracted from micrographs to improve the Signal-to-Noise ratio (SNR). Outliers compromise the fidelity of the averaging. We propose a robust cross-correlation-like w-estimator for combating the effect of outliers on the average images in cryo-EM. The estimator accounts for the natural variation of signal contrast among the images and eliminates the need for a threshold for outlier rejection. We show that the influence function of our estimator is asymptotically bounded. Evaluations of the estimator on simulated and real cryo-EM images show good performance in the presence of outliers.

  8. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  9. Reliability Estimates for Undergraduate Grade Point Average

    ERIC Educational Resources Information Center

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  10. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
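
    The arithmetic mean and harmonic mean estimators named above have simple Monte Carlo forms: AME averages the likelihood over draws from the prior, while HME takes the harmonic mean of the likelihood over posterior draws. A small sketch in log space for numerical stability (the thermodynamic integration estimator is more involved and omitted; the example log marginal likelihoods are hypothetical):

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def log_ame(loglik_prior_draws):
        """Arithmetic mean estimator: p(D) ≈ mean of L(theta) over prior draws."""
        n = len(loglik_prior_draws)
        return logsumexp(loglik_prior_draws) - np.log(n)

    def log_hme(loglik_posterior_draws):
        """Harmonic mean estimator: p(D) ≈ [mean of 1/L(theta) over posterior draws]^-1."""
        n = len(loglik_posterior_draws)
        return np.log(n) - logsumexp(-np.asarray(loglik_posterior_draws))

    def bma_weights(log_marginals, log_priors=None):
        """Posterior model probabilities from (log) marginal likelihoods."""
        log_marginals = np.asarray(log_marginals, dtype=float)
        if log_priors is None:
            log_priors = np.zeros_like(log_marginals)   # equal prior model probabilities
        log_post = log_marginals + log_priors
        return np.exp(log_post - logsumexp(log_post))

    # Hypothetical log marginal likelihoods for three conceptual groundwater models:
    print(bma_weights([-120.4, -118.9, -125.0]))
    ```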

  11. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdin, Kristine L.

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
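
    At its core the power calculation is the standard hydropower relation P = ρgQH, with Q the average annual streamflow and H the hydraulic head of the reach. A minimal sketch (the reach values and the optional efficiency factor are illustrative assumptions, not values from the dataset):

    ```python
    RHO_WATER = 1000.0   # kg/m^3
    G = 9.81             # m/s^2

    def hydropower_potential_kw(flow_m3s, head_m, efficiency=1.0):
        """Gross hydropower potential P = rho * g * Q * H, in kilowatts.
        Set efficiency < 1 to approximate turbine/generator losses."""
        return RHO_WATER * G * flow_m3s * head_m * efficiency / 1000.0

    # Hypothetical reach: 12 m^3/s average annual flow over 25 m of hydraulic head.
    print(f"{hydropower_potential_kw(12.0, 25.0):.0f} kW")   # ≈ 2943 kW
    ```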

  12. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) where the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm². Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm² and 59.5 ZAm², respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm² (2), and some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experiment methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and

  13. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...

  14. GIS Tools to Estimate Average Annual Daily Traffic

    DOT National Transportation Integrated Search

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  15. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
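
    The underlying approximation is simple: average annual base flow divided by drainage area gives a recharge depth. A minimal unit-conversion sketch (the basin values are hypothetical; a useful check is that 1 ft³/s per mi² sustained for a year corresponds to roughly 13.6 inches of water):

    ```python
    SECONDS_PER_YEAR = 365.25 * 24 * 3600       # average year
    SQFT_PER_SQMI = 5280.0 ** 2

    def recharge_inches_per_year(baseflow_cfs, drainage_area_sqmi):
        """Recharge ≈ average annual base flow divided by drainage area,
        expressed as a depth of water in inches per year."""
        volume_cuft = baseflow_cfs * SECONDS_PER_YEAR            # ft^3 per year
        depth_ft = volume_cuft / (drainage_area_sqmi * SQFT_PER_SQMI)
        return depth_ft * 12.0

    # Hypothetical basin: 35 ft^3/s of base flow draining 50 mi^2
    print(f"{recharge_inches_per_year(35.0, 50.0):.1f} in/yr")   # ≈ 9.5 in/yr
    ```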

  16. A diameter distribution approach to estimating average stand dominant height in Appalachian hardwoods

    Treesearch

    John R. Brooks

    2007-01-01

    A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...

  17. Uncertainties in Estimates of Fleet Average Fuel Economy : A Statistical Evaluation

    DOT National Transportation Integrated Search

    1977-01-01

    Research was performed to assess the current Federal procedure for estimating the average fuel economy of each automobile manufacturer's new car fleet. Test vehicle selection and fuel economy estimation methods were characterized statistically and so...

  18. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
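
    For a randomized assignment with imperfect uptake, the IV estimator referred to above reduces to the Wald form: the intention-to-treat risk difference divided by the difference in treatment uptake between arms. A small sketch with hypothetical proportions (not the trial's data):

    ```python
    def cace_wald(p_outcome_treat, p_outcome_control,
                  p_uptake_treat, p_uptake_control):
        """Complier average causal effect via the Wald/IV estimator:
        ITT risk difference scaled by the difference in treatment uptake."""
        itt = p_outcome_treat - p_outcome_control
        compliance = p_uptake_treat - p_uptake_control
        return itt / compliance

    # Hypothetical trial: 8% vs 15% contamination, 80% vs 5% uptake of the intervention.
    print(f"ITT RD = {0.08 - 0.15:+.2f}, CACE RD = {cace_wald(0.08, 0.15, 0.80, 0.05):+.3f}")
    ```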

  19. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
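
    The Toeplitz constraint amounts to replacing every entry on a given subdiagonal of the sample covariance with the average along that subdiagonal; the maximum-entropy extrapolation step is omitted here. A minimal NumPy sketch with a hypothetical six-element array and four snapshots:

    ```python
    import numpy as np

    def toeplitz_average(R):
        """Average the sample covariance R along each subdiagonal,
        returning a Toeplitz (and, for Hermitian R, Hermitian) matrix."""
        R = np.asarray(R)
        n = R.shape[0]
        T = np.zeros_like(R)
        for k in range(-(n - 1), n):
            idx = np.arange(max(0, -k), min(n, n - k))     # row indices of diagonal k
            T[idx, idx + k] = R[idx, idx + k].mean()
        return T

    # Sample covariance from a few snapshots of a 6-element array (hypothetical data):
    rng = np.random.default_rng(1)
    snapshots = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
    R = snapshots.conj().T @ snapshots / snapshots.shape[0]
    T = toeplitz_average(R)
    eigvals, eigvecs = np.linalg.eigh(T)   # eigenvectors feed the signal-subspace projector
    ```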

  20. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
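
    The combination step is generic: BIC values give approximate posterior model weights, and the BMA mean and variance separate within-model from between-model uncertainty. A sketch with hypothetical log-K estimates for the three AI models (the numbers are illustrative, chosen so that the NF model is nearly discarded, as described above):

    ```python
    import numpy as np

    def bma_combine(means, variances, bic):
        """Combine model predictions by Bayesian model averaging with
        BIC-based weights: w_k ∝ exp(-0.5 * ΔBIC_k).  Returns the BMA mean
        and total variance = within-model + between-model components."""
        means, variances, bic = map(np.asarray, (means, variances, bic))
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()
        mean = np.sum(w * means)
        within = np.sum(w * variances)
        between = np.sum(w * (means - mean) ** 2)
        return mean, within + between, w

    # Hypothetical log10-K estimates (m/d) from TS-FL, ANN and NF models at one location:
    mean, var, w = bma_combine(means=[-1.10, -0.85, -0.97],
                               variances=[0.04, 0.06, 0.05],
                               bic=[212.3, 212.8, 221.5])
    print(f"weights = {np.round(w, 3)}, BMA mean = {mean:.2f}, total variance = {var:.3f}")
    ```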

  1. Estimation of average daily traffic on local roads in Kentucky.

    DOT National Transportation Integrated Search

    2016-07-01

    Kentucky Transportation Cabinet (KYTC) officials use annual average daily traffic (AADT) to estimate intersection performance across the state maintained highway system. KYTC currently collects AADTs for state maintained roads but frequently lack...

  2. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.

  3. SU-E-T-364: Estimating the Minimum Number of Patients Required to Estimate the Required Planning Target Volume Margins for Prostate Glands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhtiari, M; Schmitt, J; Sarfaraz, M

    2015-06-15

    Purpose: To establish a minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using the skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism is used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups tends to converge to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation. Research Grant, Siemens AG.
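
    The Van Herk recipe referenced above combines the systematic (Σ) and random (σ) components of setup error, commonly as margin ≈ 2.5Σ + 0.7σ. A minimal sketch that derives both components from a patients-by-fractions table of daily shifts (the synthetic standard deviations are illustrative assumptions, not the study's data):

    ```python
    import numpy as np

    def van_herk_margin(shifts):
        """PTV margin from daily setup shifts (rows = patients, cols = fractions),
        using the common Van Herk recipe margin = 2.5*Sigma + 0.7*sigma, where
        Sigma is the SD of per-patient mean errors (systematic component) and
        sigma is the root-mean-square of per-patient SDs (random component)."""
        shifts = np.asarray(shifts, dtype=float)
        systematic = np.std(shifts.mean(axis=1), ddof=1)
        random_ = np.sqrt(np.mean(np.var(shifts, axis=1, ddof=1)))
        return 2.5 * systematic + 0.7 * random_

    # Synthetic superior-inferior shifts (cm) for 20 patients x 30 fractions:
    rng = np.random.default_rng(2)
    per_patient_bias = rng.normal(0.0, 0.3, size=(20, 1))      # systematic component
    daily = per_patient_bias + rng.normal(0.0, 0.25, size=(20, 30))
    print(f"estimated SI margin ≈ {van_herk_margin(daily):.2f} cm")
    ```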

  4. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  5. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45% and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation.

  6. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    PubMed

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; P<.001). The difference between EER(PAR) and EER(IOM) values ranged from 40 kcal/d (1.2% higher than EER(IOM)) in obese (body mass index [BMI] ≥30) men to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal weight (BMI <25) women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals (P<.001). The PAR protocol is an accurate method for deriving nationally representative estimates of EER and APAR values. These descriptive data provide novel quantitative baseline values for future investigations into associations of physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  7. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  8. Estimation of annual average daily traffic for off-system roads in Florida

    DOT National Transportation Integrated Search

    1999-07-28

    Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for the state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination...

  9. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question, temporal sampling, we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe and Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. References: Lawrence, K.P., C.G. Constable, and C.L. Johnson (2006), Geochem. Geophys. Geosyst., 7, Q07007, doi:10.1029/2005GC001181. Tauxe, L., and T. Yamazaki (2007), Treatise on Geophysics, vol. 5, Geomagnetism, Elsevier, Amsterdam, chap. 13, p. 509.

  10. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.

  11. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  12. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus the Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  13. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.

  14. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474

  15. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    PubMed

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms.

  16. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
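
    The AAPC defined here is a length-weighted combination of the segment-specific APCs on the log scale, converted back to a percent change. A small sketch (the two-segment example is hypothetical, not from the paper):

    ```python
    import numpy as np

    def aapc(apcs, segment_years):
        """Average annual percent change from segment-specific APCs (sAPCs).

        Each APC is converted back to a log-scale slope, the slopes are
        averaged with weights proportional to the number of years in each
        segment, and the result is converted back to a percent change."""
        apcs = np.asarray(apcs, dtype=float)
        w = np.asarray(segment_years, dtype=float)
        w = w / w.sum()
        log_slopes = np.log1p(apcs / 100.0)
        return 100.0 * np.expm1(np.sum(w * log_slopes))

    # Hypothetical joinpoint results: -1.2%/yr for 6 years, then +3.5%/yr for 4 years.
    print(f"AAPC = {aapc([-1.2, 3.5], [6, 4]):+.2f}% per year")
    ```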

  17. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2-GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  18. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  19. Approximate sample sizes required to estimate length distributions

    USGS Publications Warehouse

    Miranda, L.E.

    2007-01-01

    The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
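
    The bootstrap logic is easy to reproduce for a single statistic: resample a reference length distribution at a candidate sample size and check how often the estimate lands within 10% of the reference value at the desired confidence. A minimal sketch for mean length (the gamma-distributed reference lengths are synthetic, not the study's data):

    ```python
    import numpy as np

    def min_sample_size(reference_lengths, sizes, rel_error=0.10,
                        confidence=0.80, n_boot=2000, seed=0):
        """Smallest n for which the bootstrap mean length falls within
        `rel_error` of the reference mean in at least `confidence` of draws."""
        rng = np.random.default_rng(seed)
        ref = np.asarray(reference_lengths, dtype=float)
        target = ref.mean()
        for n in sizes:
            boots = rng.choice(ref, size=(n_boot, n), replace=True).mean(axis=1)
            coverage = np.mean(np.abs(boots - target) <= rel_error * target)
            if coverage >= confidence:
                return n
        return None

    # Synthetic reference length distribution (mm), right-skewed like many fish stocks:
    rng = np.random.default_rng(7)
    reference = rng.gamma(shape=4.0, scale=60.0, size=5000)
    print(min_sample_size(reference, sizes=range(10, 301, 10)))
    ```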

  20. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  1. Identification and estimation of survivor average causal effects.

    PubMed

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  2. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  3. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm cone beam CT (CBCT) acquisition has in the past been estimated by averaging the four peripheral dose measurements in a CTDI phantom and then applying the standard 2/3 peripheral and 1/3 central CTDIw weighting (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and the film dose values at the location of each of the dose bores. Results: The planar average doses obtained with the red and green pixel color calibration curves were each within 10% of the planar average dose estimated by applying the Dw method to the film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: Calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose in the central plane of a C-arm CBCT non-360° rotation. Research Grant, Siemens AG.
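
    The Dw weighting described in this record is a one-line calculation. Below is a minimal illustrative sketch (not the authors' software) that combines one central and four peripheral bore readings with the standard 1/3 central, 2/3 peripheral weights; the chamber readings are hypothetical.

    ```python
    import numpy as np

    def weighted_planar_dose(center_dose, peripheral_doses):
        """Classic CTDIw-style weighting: 1/3 central + 2/3 mean peripheral."""
        return center_dose / 3.0 + 2.0 * np.mean(peripheral_doses) / 3.0

    # Hypothetical ionization-chamber readings (mGy) at the central bore and
    # the four peripheral bores of a 16 cm CTDI phantom.
    d_center = 4.1
    d_peripheral = [6.3, 5.9, 6.1, 6.0]

    print(f"Dw estimate: {weighted_planar_dose(d_center, d_peripheral):.2f} mGy")
    ```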

  4. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
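
    As a small illustration of the default estimator discussed above, the sketch below computes an equal-weight 5-year moving average over hypothetical annual inventory values; the data and the window length are assumptions for demonstration only.

    ```python
    import numpy as np

    def moving_average(annual_estimates, window=5):
        """Equal-weight moving average of annual inventory estimates."""
        annual_estimates = np.asarray(annual_estimates, dtype=float)
        kernel = np.ones(window) / window
        # 'valid' keeps only windows fully covered by data.
        return np.convolve(annual_estimates, kernel, mode="valid")

    # Hypothetical annual volume estimates for one state (units arbitrary).
    volumes = [102.0, 98.5, 110.2, 105.7, 99.9, 108.4, 112.1]
    print(moving_average(volumes))  # three 5-year windows
    ```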

  5. Alternative Estimates of the Reliability of College Grade Point Averages. Professional File. Article 130, Spring 2013

    ERIC Educational Resources Information Center

    Saupe, Joe L.; Eimers, Mardy T.

    2013-01-01

    The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…

  6. Estimates of average annual tributary inflow to the lower Colorado River, Hoover Dam to Mexico

    USGS Publications Warehouse

    Owen-Joyce, Sandra J.

    1987-01-01

    Estimates of tributary inflow by basin or area and by surface water or groundwater are presented in this report and itemized by subreaches in tabular form. Total estimated average annual tributary inflow to the Colorado River between Hoover Dam and Mexico, excluding the measured tributaries, is 96,000 acre-ft or about 1% of the 7.5 million acre-ft/yr of Colorado River water apportioned to the States in the lower Colorado River basin. About 62% of the tributary inflow originates in Arizona, 30% in California, and 8% in Nevada. Tributary inflow is a small component in the water budget for the river. Most of the quantities of unmeasured tributary inflow were estimated in previous studies and were based on mean annual precipitation for 1931-60. Because mean annual precipitation for 1951-80 did not differ significantly from that of 1931-60, these tributary inflow estimates are assumed to be valid for use in 1984. Measured average annual runoff per unit drainage area on the Bill Williams River has remained the same. Surface water inflow from unmeasured tributaries is infrequent and is not captured in surface reservoirs in any of the States; it flows to the Colorado River gaging stations. Average annual runoff can be used in a water budget, although in wet years runoff may be large enough to affect the calculation of consumptive use and to be estimated from hydrographs. Estimates of groundwater inflow to the Colorado River valley are based on groundwater recharge estimates in the bordering areas, which have not significantly changed through time. In most areas adjacent to the Colorado River valley, groundwater pumpage is small and pumping has not significantly affected the quantity of groundwater discharged to the Colorado River valley. In some areas where groundwater pumpage exceeds the quantity of groundwater discharge and water levels have declined, the quantity of discharge, and thus groundwater inflow to the Colorado River valley, probably has decreased.

  7. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A38-25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  8. JMAT 2.0 Operating Room Requirements Estimation Study

    DTIC Science & Technology

    2011-05-25

    Report No. 11-10J, supported by the Office of the Assistant... (a) expected-value methodology for estimating OR requirements in a theater hospital; (b) algorithms for estimating a special case OR table requirement... assuming the probabilities of entering the OR are either 1 or 0; and (c) an Excel worksheet that calculates the special case OR table estimates

  9. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
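
    The needs-based logic described here ultimately divides total required service hours by the hours one full-time-equivalent psychiatrist can supply. The toy calculation below illustrates that arithmetic only; every number in it is a hypothetical placeholder, not a figure from the study, and the study's five-step method contains more structure than this.

    ```python
    # Toy needs-based manpower calculation (all numbers hypothetical):
    # total psychiatric hours required by patients divided by the clinical
    # hours one full-time-equivalent (FTE) psychiatrist can supply per year.
    population = 330_000_000
    prevalence = 0.05                 # fraction of population needing psychiatric care
    hours_per_patient_year = 4.0      # average psychiatric service hours per patient
    clinical_hours_per_fte = 1_500.0  # direct-care hours per psychiatrist per year

    patients = population * prevalence
    fte_required = patients * hours_per_patient_year / clinical_hours_per_fte
    print(f"{fte_required:,.0f} FTE psychiatrists")
    ```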

  10. [Estimation of average traffic emission factor based on synchronized incremental traffic flow and air pollutant concentration].

    PubMed

    Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng

    2014-04-01

    On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and air pollutant concentration during the morning rush hour, while meteorological conditions and the background pollutant concentration remain relatively stable, the relationship between the increase in traffic and the increase in air pollutant concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data and carbon monoxide (CO) concentration were collected to estimate average vehicle emission factors for CO. The results were compared with simulated emission factors from the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and by COPERT4 in August were 2.0 g·km⁻¹ and 1.2 g·km⁻¹, respectively, and in December were 5.5 g·km⁻¹ and 5.2 g·km⁻¹, respectively. The emission factors from the proposed approach and COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.

  11. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  12. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    PubMed

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  13. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    PubMed

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of our interest, subjects typically begin follow-up untreated; time-until-treatment, and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pre-treatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.
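
    The quantity compared in this record, restricted mean survival time (RMST), is the area under a survival curve up to a horizon tau. The sketch below shows only that integration step for two hypothetical projected curves; it is an illustration of the concept, not the authors' partly conditional estimator or their censoring correction.

    ```python
    import numpy as np

    def rmst(times, surv, tau):
        """Restricted mean survival time: area under S(t) from 0 to tau."""
        times = np.asarray(times, dtype=float)
        surv = np.asarray(surv, dtype=float)
        keep = times <= tau
        t = np.concatenate(([0.0], times[keep], [tau]))
        s = np.concatenate(([1.0], surv[keep], [surv[keep][-1]]))
        return np.trapz(s, t)

    # Hypothetical fitted post- and pre-treatment survival curves for one patient.
    t_grid = np.arange(1, 61)           # months
    s_post = np.exp(-0.01 * t_grid)     # projected survival with treatment
    s_pre = np.exp(-0.02 * t_grid)      # projected survival without treatment

    tau = 60.0
    delta_rmst = rmst(t_grid, s_post, tau) - rmst(t_grid, s_pre, tau)
    print(f"Difference in {tau:.0f}-month RMST: {delta_rmst:.1f} months")
    ```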

  14. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  15. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
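
    The record builds on the Hargreaves-Samani (HS) method, whose classic form estimates global solar radiation from the daily temperature range and extraterrestrial radiation. The sketch below implements only that standard HS formula (the paper's modified model is not reproduced here); the coefficient and inputs are illustrative values.

    ```python
    import math

    def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
        """Classic Hargreaves-Samani estimate of global solar radiation.

        t_max, t_min : monthly average daily max/min air temperature (deg C)
        ra           : extraterrestrial radiation (MJ m-2 day-1)
        k_rs         : empirical coefficient (~0.16 inland, ~0.19 coastal)
        """
        return k_rs * math.sqrt(max(t_max - t_min, 0.0)) * ra

    # Hypothetical July values for an inland station.
    print(f"{hargreaves_samani(t_max=31.0, t_min=18.0, ra=41.0):.1f} MJ m-2 day-1")
    ```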

  16. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on the horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at a location (Albuquerque, NM), and these values were applied into the model for predicting average hourly global solar radiations at four different locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) of the United States. The model performance was assessed using correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determinations (R2). The sensitivities of parameter to prediction were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e. MABE and RMSE) were less than 20%. The approach we proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface for different locations, with the use of readily available data (i.e. latitude and longitude of the location) as inputs.

  17. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    NASA Astrophysics Data System (ADS)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
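
    The averaging bias discussed above arises because the IPDA retrieval is non-linear (logarithmic) in the measured signals, so averaging signals before applying the log differs from averaging shot-wise retrievals. The toy simulation below illustrates that effect with hypothetical noisy on-line/off-line signals; it is not the MERLIN processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical noisy on-line / off-line return signals for 1000 shots.
    true_ratio = 0.8                      # P_on / P_off set by methane absorption
    p_off = 1.0 + 0.2 * rng.standard_normal(1000)
    p_on = true_ratio * (1.0 + 0.2 * rng.standard_normal(1000))

    # Differential absorption optical depth is proportional to -ln(P_on / P_off).
    daod_shotwise = np.mean(-np.log(p_on / p_off))           # average after the log
    daod_averaged = -np.log(np.mean(p_on) / np.mean(p_off))  # average before the log

    print(f"shot-wise mean DAOD : {daod_shotwise:.4f}")
    print(f"log of averaged sig.: {daod_averaged:.4f}")
    print(f"averaging bias      : {daod_shotwise - daod_averaged:.4f}")
    ```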

  18. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    NASA Astrophysics Data System (ADS)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
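
    The estimator analyzed in this record is a linear fit of log time-averaged MSD versus log lag time. A minimal sketch of that procedure is given below, applied to an ordinary Brownian trajectory (for which the exponent should be close to 1); the trajectory and lag range are illustrative choices, not the paper's FBM simulations.

    ```python
    import numpy as np

    def ta_msd(x, lag):
        """Time-averaged mean-square displacement of a 1-D trajectory at a given lag."""
        return np.mean((x[lag:] - x[:-lag]) ** 2)

    def estimate_alpha(x, lags):
        """Anomalous exponent from a linear fit of log TA-MSD versus log lag."""
        msd = np.array([ta_msd(x, k) for k in lags])
        slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
        return slope

    # Hypothetical trajectory: ordinary Brownian motion, for which alpha should be ~1.
    rng = np.random.default_rng(1)
    traj = np.cumsum(rng.standard_normal(10_000))
    print(f"estimated alpha: {estimate_alpha(traj, lags=np.arange(1, 21)):.2f}")
    ```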

  19. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, which requires averaging EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variability within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
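
    A common way to carry out the meta-analysis strategy mentioned above is an inverse-variance weighted average of per-experiment log EC50 estimates. The sketch below shows that fixed-effect pooling step with hypothetical inputs; it is not the authors' web application or their exact model.

    ```python
    import numpy as np

    def fixed_effect_mean_log_ec50(log_ec50, se):
        """Inverse-variance (fixed-effect) weighted average of log EC50 estimates."""
        log_ec50 = np.asarray(log_ec50, dtype=float)
        w = 1.0 / np.asarray(se, dtype=float) ** 2
        mean = np.sum(w * log_ec50) / np.sum(w)
        se_mean = np.sqrt(1.0 / np.sum(w))
        return mean, se_mean

    # Hypothetical log10 EC50 estimates (and standard errors) from four experiments
    # with complete dose-response information.
    log_ec50 = [-6.1, -5.9, -6.3, -6.0]
    se = [0.10, 0.15, 0.20, 0.12]
    m, s = fixed_effect_mean_log_ec50(log_ec50, se)
    print(f"pooled log10 EC50 = {m:.2f} +/- {s:.2f}")
    ```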

  20. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
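
    The core computation described above is the minimum-length (minimum-norm) solution of the underdetermined system Am = d. The sketch below builds a toy averaging matrix and recovers such a solution with a pseudoinverse; the averaging kernel and input signal are stand-ins, not a real amelogenesis model.

    ```python
    import numpy as np

    # Toy averaging matrix A: each measured enamel sample is the mean of three
    # consecutive "true" input values (a crude stand-in for amelogenesis + sampling).
    n_input, width = 12, 3
    A = np.zeros((n_input - width + 1, n_input))
    for i in range(A.shape[0]):
        A[i, i:i + width] = 1.0 / width

    true_input = np.sin(np.linspace(0, 2 * np.pi, n_input))  # hypothetical seasonal signal
    d = A @ true_input                                       # time-averaged "measured" profile

    # Minimum-length (minimum-norm) solution of A m = d via the pseudoinverse.
    m = np.linalg.pinv(A) @ d
    print(np.round(m, 2))
    ```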

  1. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies establish the association between mortality estimation and seismic intensity without considering population density. In China, however, the data are not always available, especially in the very urgent relief situation after a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls of earthquakes. The present paper employs the average population density to predict the final death tolls of earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  2. Calculation of weighted averages approach for the estimation of Ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
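
    The weighted-averages idea described above assigns each taxon the abundance-weighted mean of an environmental gradient and then rescales the resulting optima onto a 0-10 tolerance scale. The sketch below illustrates this for a single hypothetical gradient (BOD); the study itself used six chemical constituents, and all numbers here are invented.

    ```python
    import numpy as np

    def weighted_average_optimum(abundance, gradient):
        """Abundance-weighted average of an environmental gradient for one taxon."""
        abundance = np.asarray(abundance, dtype=float)
        gradient = np.asarray(gradient, dtype=float)
        return np.sum(abundance * gradient) / np.sum(abundance)

    def rescale_0_10(values):
        """Rescale taxon optima to a 0-10 tolerance-value scale."""
        values = np.asarray(values, dtype=float)
        return 10.0 * (values - values.min()) / (values.max() - values.min())

    # Hypothetical counts of three taxa at five sites and the sites' BOD (mg/L).
    bod = [1.0, 2.5, 4.0, 6.0, 8.0]
    counts = {"Taxon A": [40, 25, 10, 2, 0],
              "Taxon B": [5, 10, 20, 25, 15],
              "Taxon C": [0, 2, 8, 30, 45]}

    optima = {t: weighted_average_optimum(c, bod) for t, c in counts.items()}
    scores = rescale_0_10(list(optima.values()))
    for taxon, score in zip(optima, scores):
        print(f"{taxon}: tolerance value {score:.1f}")
    ```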

  3. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  4. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.

  5. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. A statistically significant bias reduction (at the 95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
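
    The comparison at the heart of this record is easy to reproduce on synthetic data: the mean of 1440 one-minute values versus (Tmax + Tmin)/2 versus 24 hourly samples. The sketch below does exactly that for one hypothetical diurnal cycle; it is illustrative only and does not use SURFRAD data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical 1-min surface air temperatures for one day: a sinusoidal
    # diurnal cycle plus weather "noise".
    minutes = np.arange(1440)
    temps = (15.0 + 8.0 * np.sin(2 * np.pi * (minutes - 540) / 1440)
             + np.cumsum(0.01 * rng.standard_normal(1440)))

    t_true = temps.mean()                          # mean of 1440 1-min observations
    t_minmax = 0.5 * (temps.max() + temps.min())   # (Tmax + Tmin) / 2
    t_hourly = temps[::60].mean()                  # 24 hourly observations

    print(f"mean of 1-min data : {t_true:.2f} C")
    print(f"(Tmax + Tmin)/2    : {t_minmax:.2f} C  (bias {t_minmax - t_true:+.2f} C)")
    print(f"24 hourly samples  : {t_hourly:.2f} C  (bias {t_hourly - t_true:+.2f} C)")
    ```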

  6. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. A statistically significant bias reduction (at the 95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.

  7. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  8. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  9. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation.

    PubMed

    Lee, Eugenia E; Stewart, Barclay; Zha, Yuanting A; Groen, Thomas A; Burkle, Frederick M; Kushner, Adam L

    2016-08-10

    Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually against the estimated number of surgical procedures required for climate-related natural disasters was performed. Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People's Republic of China, India, and the Philippines. The linear regression demonstrated a poor relationship between national surgical capacity and the estimated need for surgical care resulting from natural disasters, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands, given critical human and physical resource deficiencies.

  10. Prediction equation for estimating total daily energy requirements of special operations personnel.

    PubMed

    Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M

    2018-01-01

    Special Operations Forces (SOF) engage in a variety of military tasks, many of which produce high energy expenditures, leading to undesired energy deficits and loss of body mass. The ability to accurately estimate daily energy requirements would therefore be useful for logistical planning. The objective was to generate a predictive equation estimating the energy requirements of SOF. A retrospective analysis was performed of data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate and was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 kcal·d-1 (range: 3700 to 6300). Regression analysis revealed that physical activity level (r = 0.91; P < 0.05) and body mass (r = 0.28; P < 0.05; Model A), or fat-free mass (FFM; r = 0.32; P < 0.05; Model B), were the factors that most highly predicted energy expenditures. Predictive equations coupling PAF with body mass (Model A) and FFM (Model B) were correlated (r = 0.74 and r = 0.76, respectively) and did not differ (mean ± SEM: Model A, 4463 ± 65 kcal·d-1; Model B, 4462 ± 61 kcal·d-1) from DLW-measured energy expenditures. By quantifying and grouping SOF training exercises into activity factors, SOF energy requirements can be predicted with reasonable accuracy, and these equations can be used by dietetic/logistical personnel to plan appropriate feeding regimens to meet SOF nutritional requirements.

  11. Improvement of Method for Estimation of Site Amplification Factor Based on Average Shear-wave Velocity of Ground

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Makoto; Midorikawa, Saburoh

    The empirical equation for estimating the site amplification factor of ground motion by the average shear-wave velocity of ground (AVS) is examined. In the existing equations, the coefficient on dependence of the amplification factor on the AVS was treated as constant. The analysis showed that the coefficient varies with change of the AVS for short periods. A new estimation equation was proposed considering the dependence on the AVS. The new equation can represent soil characteristics that the softer soil has the longer predominant period, and can make better estimations for short periods than the existing method.

  12. A law of order estimation and leading-order terms for a family of averaged quantities on a multibaker chain system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp

    2014-06-15

    In this study, a family of local quantities defined on each partition and its averaging on a macroscopic small region, the site, are defined on a multibaker chain system. For these averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained, and this form enables us to obtain the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independency. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.

  13. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, CEk, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown CEk from the residuals during model calibration. The inferred CEk was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using CEk resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using CEk
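
    Once a selection criterion has been evaluated for each model (with whichever error covariance enters the likelihood), the averaging weights follow the standard information-criterion formula. The sketch below shows only that final weighting step with hypothetical KIC values; it does not implement the iterative two-stage covariance estimation described in the record.

    ```python
    import numpy as np

    def model_averaging_weights(criterion_values):
        """Weights from an information criterion (AIC, AICc, BIC or KIC):
        w_k = exp(-0.5 * delta_k) / sum_j exp(-0.5 * delta_j),
        where delta_k is the criterion value minus the minimum over models."""
        c = np.asarray(criterion_values, dtype=float)
        delta = c - c.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical KIC values for four alternative conceptual models.
    kic = [212.4, 214.1, 219.8, 230.5]
    print(np.round(model_averaging_weights(kic), 3))
    ```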

  14. Statistical average estimates of high latitude field-aligned currents from the STARE and SABRE coherent VHF radar systems

    NASA Astrophysics Data System (ADS)

    Kosch, M. J.; Nielsen, E.

    Two bistatic VHF radar systems, STARE and SABRE, have been employed to estimate ionospheric electric fields in the geomagnetic latitude range 61.1 - 69.3° (geographic latitude range 63.8 - 72.6°) over northern Scandinavia. 173 days of good backscatter from all four radars have been analysed during the period 1982 to 1986, from which the average ionospheric divergence electric field versus latitude and time is calculated. The average magnetic field-aligned currents are computed using an AE-dependent empirical model of the ionospheric conductance. Statistical Birkeland current estimates are presented for high and low values of the Kp and AE indices as well as positive and negative orientations of the IMF B z component. The results compare very favourably to other ground-based and satellite measurements.

  15. Estimated water requirements for gold heap-leach operations

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (requires water in potable form and for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as for agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.

  16. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There are a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, and civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g. pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration) or are “too dangerous, dull or dirty” for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV system total ownership cost, including hardware components, software design, and operations. The challenges of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element are discussed. The autonomous operation of UAVs is especially challenging from a software perspective.

  17. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not give accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in THF-based (tetrahydrofuran) SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from a conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS-equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
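
    Given a conversion of the calibration curve, the molecular weight averages follow from the usual SEC slice sums. The sketch below applies a hypothetical linear log-log conversion (the coefficients are placeholders, not the paper's equation) and then computes Mn and Mw from invented slice heights.

    ```python
    import numpy as np

    def convert_log_mw(log_mw_ps, a, b):
        """Linear conversion between calibration curves:
        log(M_polyol) = a + b * log(M_PS-apparent).  The coefficients a and b
        would come from regressing polyol standards against the PS curve."""
        return a + b * np.asarray(log_mw_ps, dtype=float)

    def mw_averages(heights, mw):
        """Number- and weight-average molecular weight from SEC slice data."""
        h = np.asarray(heights, dtype=float)
        m = np.asarray(mw, dtype=float)
        mn = h.sum() / np.sum(h / m)
        mw_avg = np.sum(h * m) / h.sum()
        return mn, mw_avg

    # Hypothetical detector heights and PS-apparent slice molecular weights.
    heights = [0.2, 1.0, 2.5, 1.8, 0.5]
    log_mw_ps = [3.9, 3.7, 3.5, 3.3, 3.1]

    log_mw_polyol = convert_log_mw(log_mw_ps, a=-0.35, b=1.02)  # hypothetical coefficients
    mn, mw_avg = mw_averages(heights, 10.0 ** log_mw_polyol)
    print(f"Mn ~ {mn:.0f}, Mw ~ {mw_avg:.0f}, PDI ~ {mw_avg / mn:.2f}")
    ```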

  18. Conservative Estimation of Whole-body Average SAR in Infant Model for 0.3-6GHz Far-Field Exposure

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Nagaya, Yoshio; Ito, Naoki; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi

    From an anatomically based Japanese model of a three-year-old child with a resolution of 1 mm, we developed a nine-month-old Japanese infant model by linear shrinking. With these models, we calculated the whole-body average specific absorption rate (WBA-SAR) for plane-wave exposure from 0.1 to 6 GHz. A conservative estimate of the WBA-SAR was also investigated by using three kinds of simple-shaped models: a cuboid, an ellipsoid, and a spheroid, whose parameters were determined from the three-year-old child model. As a result, the cuboid and ellipsoid were found to provide an overestimate of the WBA-SAR compared to the realistic model, whereas the spheroid provides an underestimate. Based on these findings for the different body models, we specified the incident power density required to produce a WBA-SAR of 0.08 W/kg, which is the basic restriction for public exposure in the guidelines of the International Commission on Non-Ionizing Radiation Protection.

  19. Progressive calibration and averaging for tandem mass spectrometry statistical confidence estimation: Why settle for a single decoy?

    PubMed Central

    Keich, Uri; Noble, William Stafford

    2017-01-01

    Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications calibration yields and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values and its variability throughout. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
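
    Target-decoy competition estimates the FDR at a score threshold from the number of decoy wins, and the averaging idea above replaces a single decoy draw with the mean over several independent draws. The sketch below illustrates only that flavor of the calculation on simulated scores; it is not the authors' a-TDC or Progressive Calibration algorithms.

    ```python
    import numpy as np

    def tdc_fdr(target_scores, decoy_scores, threshold):
        """Simple target-decoy FDR estimate at a score threshold:
        (decoys above threshold + 1) / (targets above threshold)."""
        n_t = np.sum(np.asarray(target_scores) >= threshold)
        n_d = np.sum(np.asarray(decoy_scores) >= threshold)
        return (n_d + 1) / max(n_t, 1)

    def averaged_fdr(target_scores, decoy_draws, threshold):
        """Average the estimate over several independently drawn decoy sets
        (the spirit of averaging: reduce the variability of a single draw)."""
        return np.mean([tdc_fdr(target_scores, d, threshold) for d in decoy_draws])

    # Hypothetical PSM scores: targets and three independent decoy databases.
    rng = np.random.default_rng(3)
    targets = np.concatenate([rng.normal(3.0, 1.0, 200), rng.normal(0.0, 1.0, 800)])
    decoy_draws = [rng.normal(0.0, 1.0, 1000) for _ in range(3)]

    print(f"single-decoy FDR : {tdc_fdr(targets, decoy_draws[0], threshold=2.0):.3f}")
    print(f"averaged FDR     : {averaged_fdr(targets, decoy_draws, threshold=2.0):.3f}")
    ```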

  20. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  1. How Well Can We Estimate Areal-Averaged Spectral Surface Albedo from Ground-Based Transmission in an Atlantic Coastal Area?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  2. The GAAS metagenomic tool and its estimations of viral and microbial average genome size in four major biomes.

    PubMed

    Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest

    2009-12-01

    Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.

  3. The balanced survivor average causal effect.

    PubMed

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  4. When Is the Local Average Treatment Close to the Average? Evidence from Fertility and Labor Supply

    ERIC Educational Resources Information Center

    Ebenstein, Avraham

    2009-01-01

    The local average treatment effect (LATE) may differ from the average treatment effect (ATE) when those influenced by the instrument are not representative of the overall population. Heterogeneity in treatment effects may imply that parameter estimates from 2SLS are uninformative regarding the average treatment effect, motivating a search for…

  5. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127

  6. Planning and Estimation of Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D

  7. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial noise and temporal assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied for the study of spatial variability with measurements taken over a week. In this work, continuous measurements of 1 year carried out in 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(And), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time.

  8. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
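
    A minimal sketch of the basic Grassmann Average for the leading one-dimensional subspace, following the sign-aligned weighted-mean iteration the abstract describes. The trimmed/robust variants and the scalability machinery of the paper are omitted; iteration count and tolerance are placeholders.

```python
import numpy as np

def grassmann_average(X, n_iter=100, tol=1e-10, seed=0):
    """Leading 1-D average subspace of zero-mean data X (n, d).

    Each row spans a 1-D subspace; the average subspace is found by iterating
    a sign-aligned, length-weighted mean of the unit directions."""
    rng = np.random.default_rng(seed)
    w = np.linalg.norm(X, axis=1)            # weights = vector lengths
    U = X / np.maximum(w[:, None], 1e-12)    # unit directions on the Grassmannian
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(U @ q)
        signs[signs == 0] = 1.0
        q_new = (signs * w) @ U              # weighted mean of sign-aligned directions
        q_new /= np.linalg.norm(q_new)
        converged = abs(q_new @ q) > 1 - tol
        q = q_new
        if converged:
            break
    return q
```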

  9. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes for smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors; analyses based on such estimates are therefore unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One approach often used is small area estimation (SAE), and among the many SAE methods is empirical best linear unbiased prediction (EBLUP). EBLUP with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduces the MSE in small area estimation.

  10. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  11. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    PubMed

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-07

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). The MADSR distribution in South Korea shows very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, which is a hybrid methodology combining CBR with an artificial neural network, multiregression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of determining whether or not to introduce the PV system and where to install it. It would also benefit contractors in a competitive bidding process by allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life cycle perspective.

  12. Performance analysis of cross-layer design with average PER constraint over MIMO fading channels

    NASA Astrophysics Data System (ADS)

    Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin

    2015-12-01

    In this article, a cross-layer design (CLD) scheme for multiple-input and multiple-output system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, which is based on the combination of the adaptive modulation and the automatic repeat request protocols. The design performance is also evaluated over wireless Rayleigh fading channel. With the constraint of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With different thresholds available, the analytical expressions of the average SE and PER are provided for the performance evaluation. To avoid the performance loss caused by the conventional single estimate, multiple outdated estimates (MOE) method, which utilises multiple previous channel estimation information, is presented for CLD to improve the system performance. It is shown that numerical simulations for average PER and SE are in consistent with the theoretical analysis and that the developed CLD with average PER constraint can meet the target PER requirement and show better performance in comparison with the conventional CLD with instantaneous PER constraint. Especially, the CLD based on the MOE method can obviously increase the system SE and reduce the impact of feedback delay greatly.

  13. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Provisions § 89.204 Averaging. (a) Requirements for Tier 1 engines rated at or above 37 kW. A manufacturer... credits obtained through trading. (b) Requirements for Tier 2 and later engines rated at or above 37 kW and Tier 1 and later engines rated under 37 kW. A manufacturer may use averaging to offset an emission...

  14. Determining storm sampling requirements for improving precision of annual load estimates of nutrients from a small forested watershed.

    PubMed

    Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi

    2012-08-01

    This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals during 24 h in 20 events, as well as weekly in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solid (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variance (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The differences in the number of storm events required for precise load estimates between dissolved and particulate nutrients were attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling resulted in fewer storm events than the simulation based on random sampling and showed that only three storm events were required to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design considering discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
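
    The precision criterion used in the abstract (the CV of replicate annual-load estimates as a function of the number of sampled storm events) can be sketched as follows. The linear form of the ∑L-∑Q regression and the projection of the annual load from annual cumulative discharge are simplifying assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

def annual_load_cv(sum_q_events, sum_l_events, annual_q, n_storms=5, n_boot=100, seed=0):
    """Bootstrap n_storms events, fit the cumulative-load vs cumulative-discharge
    regression, project the annual load from annual discharge, and return the CV
    of the replicate estimates (lower CV = more precise load estimate)."""
    rng = np.random.default_rng(seed)
    sum_q_events = np.asarray(sum_q_events, float)
    sum_l_events = np.asarray(sum_l_events, float)
    loads = []
    for _ in range(n_boot):
        idx = rng.choice(len(sum_q_events), size=n_storms, replace=True)
        slope, intercept = np.polyfit(sum_q_events[idx], sum_l_events[idx], 1)
        loads.append(slope * annual_q + intercept)
    loads = np.asarray(loads)
    return loads.std(ddof=1) / loads.mean()
```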

  15. Translating HbA1c measurements into estimated average glucose values in pregnant women with diabetes.

    PubMed

    Law, Graham R; Gilthorpe, Mark S; Secher, Anna L; Temple, Rosemary; Bilous, Rudolf; Mathiesen, Elisabeth R; Murphy, Helen R; Scott, Eleanor M

    2017-04-01

    This study aimed to examine the relationship between average glucose levels, assessed by continuous glucose monitoring (CGM), and HbA1c levels in pregnant women with diabetes to determine whether calculations of standard estimated average glucose (eAG) levels from HbA1c measurements are applicable to pregnant women with diabetes. CGM data from 117 pregnant women (89 women with type 1 diabetes; 28 women with type 2 diabetes) were analysed. Average glucose levels were calculated from 5-7 day CGM profiles (mean 1275 glucose values per profile) and paired with a corresponding (±1 week) HbA1c measure. In total, 688 average glucose-HbA1c pairs were obtained across pregnancy (mean six pairs per participant). Average glucose level was used as the dependent variable in a regression model. Covariates were gestational week, study centre and HbA1c. There was a strong association between HbA1c and average glucose values in pregnancy (coefficient 0.67 [95% CI 0.57, 0.78]), i.e. a 1% (11 mmol/mol) difference in HbA1c corresponded to a 0.67 mmol/l difference in average glucose. The random effects model that included gestational week as a curvilinear (quadratic) covariate fitted best, allowing calculation of a pregnancy-specific eAG (PeAG). This showed that an HbA1c of 8.0% (64 mmol/mol) gave a PeAG of 7.4-7.7 mmol/l (depending on gestational week), compared with a standard eAG of 10.2 mmol/l. The PeAG associated with maintaining an HbA1c level of 6.0% (42 mmol/mol) during pregnancy was between 6.4 and 6.7 mmol/l, depending on gestational week. The HbA1c-average glucose relationship is altered by pregnancy. Routinely generated standard eAG values do not account for this difference between pregnant and non-pregnant individuals and, thus, should not be used during pregnancy. Instead, the PeAG values deduced in the current study are recommended for antenatal clinical care.
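
    For comparison with the pregnancy-specific values quoted above, the routinely generated standard eAG follows the widely published ADAG linear relationship; a minimal check (the conversion constants are the standard published ones, not taken from this paper):

```python
def standard_eag_mmol_per_l(hba1c_percent):
    """Standard (non-pregnancy) estimated average glucose from HbA1c (%),
    via the ADAG relation eAG[mg/dL] = 28.7*HbA1c - 46.7, converted to mmol/l."""
    return (28.7 * hba1c_percent - 46.7) / 18.016

# An HbA1c of 8.0% gives a standard eAG of ~10.2 mmol/l, versus the
# pregnancy-specific 7.4-7.7 mmol/l reported in the study.
print(round(standard_eag_mmol_per_l(8.0), 1))  # -> 10.2
```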

  16. Estimation of protein requirements according to nitrogen balance for older hospitalized adults with pressure ulcers according to wound severity in Japan.

    PubMed

    Iizaka, Shinji; Matsuo, Junko; Konya, Chizuko; Sekine, Rie; Sugama, Junko; Sanada, Hiromi

    2012-11-01

    To estimate protein requirements in older hospitalized adults with pressure ulcers (PrU) according to systemic conditions and wound severity. Secondary nitrogen balance study over 3 days. Long-term care facility. Twenty-eight older adults with PrU using a urinary catheter. Nitrogen balance over 3 days was evaluated from habitual nitrogen intake measured using a food weighing record and nitrogen excretion from urine, feces and wound exudate. Nitrogen intake required to maintain nitrogen equilibrium was estimated as an average protein requirement using a linear mixed model. Nitrogen intake at nitrogen equilibrium was 0.151 gN/kg per day (95% confidence interval = 0.127-0.175 gN/kg per day) for all participants. The amount of protein loss from wound exudate contributed little to total nitrogen excretion. A Charlson comorbidity index of 4 or greater (the median value) was related to lower nitrogen intake at nitrogen equilibrium (P = .005). Severe PrU with heavy exudate amounts and measured wound areas of 7.9 cm(2) or greater (the median value) were related to higher nitrogen intake at nitrogen equilibrium in individuals with a Charlson comorbidity index of 3 or less (both P = .04). Larger wound area (correlation coefficient (r) = 0.55, P = .003) and heavier exudate volume (r = 0.53, P = .004) were associated with muscle protein hypercatabolism measured according to 3-methylhistidine/creatinine ratio. The average protein requirement is 0.95 g/kg per day for older hospitalized Japanese adults with PrU, but protein requirements depend on an individual's condition and wound severity and range from 0.75 to 1.30 g/kg per day. Severe PrU can require higher protein intakes because of muscle protein hypercatabolism rather than direct loss of protein from wound exudate. © 2012, Copyright the Authors Journal compilation © 2012, The American Geriatrics Society.
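
    The step from the nitrogen equilibrium point to the quoted protein requirement uses the standard 6.25 g-protein-per-g-nitrogen conversion factor; a one-line check of the arithmetic:

```python
NITROGEN_TO_PROTEIN = 6.25  # g protein per g nitrogen (standard conversion factor)

def protein_requirement_g_per_kg(nitrogen_intake_at_equilibrium_gN_per_kg):
    """Convert nitrogen intake at equilibrium (gN/kg/day) to protein (g/kg/day)."""
    return nitrogen_intake_at_equilibrium_gN_per_kg * NITROGEN_TO_PROTEIN

print(round(protein_requirement_g_per_kg(0.151), 2))  # -> 0.94, i.e. the ~0.95 g/kg per day reported
```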

  17. Estimating glomerular filtration rate (GFR) in children. The average between a cystatin C- and a creatinine-based equation improves estimation of GFR in both children and adults and enables diagnosing Shrunken Pore Syndrome.

    PubMed

    Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders

    2017-09-01

    Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C- (eGFRcystatin C) and a creatinine-based (eGFRcreatinine) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFRcystatin C and eGFRcreatinine plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFRcystatin C compared to eGFRcreatinine has been associated with higher mortality in adults. The present study was undertaken to elucidate if this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of a eGFRcystatin C and a eGFRcreatinine estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFRcystatin C and eGFRcreatinine may help identify pediatric patients with Shrunken Pore Syndrome.
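
    A minimal sketch of the two steps discussed: averaging a cystatin C-based and a creatinine-based estimate, and flagging a Shrunken Pore Syndrome pattern when eGFRcystatin C is disproportionately low. The 0.6 ratio cutoff below is an assumption for illustration (cutoffs of 60-70% are commonly used in the literature); the abstract does not fix one.

```python
def combined_egfr(egfr_cys, egfr_crea):
    """Arithmetic mean of a cystatin C-based and a creatinine-based GFR estimate."""
    return 0.5 * (egfr_cys + egfr_crea)

def shrunken_pore_pattern(egfr_cys, egfr_crea, ratio_cutoff=0.6):
    """Flag the Shrunken Pore Syndrome pattern: eGFR_cystatin C markedly lower
    than eGFR_creatinine (ratio cutoff assumed here; 0.6-0.7 is commonly used)."""
    return egfr_cys < ratio_cutoff * egfr_crea
```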

  18. Airport Surface Traffic Control Concept Formulation Study : Volume 4. Estimation of Requirements

    DOT National Transportation Integrated Search

    1975-07-01

    A detailed study of requirements was performed and is presented. This requirements effort provided an estimate of the performance requirements of a surveillance sensor that would be required in a TAGS (Tower Automated Ground Surveillance) system for ...

  19. Multi-Watt Average Power Nanosecond Microchip Laser and Power Scalability Estimates

    NASA Technical Reports Server (NTRS)

    Konoplev, Oleg A.; Vasilyev, Alexey A.; Seas, Antonios A.; Yu, Anthony W.; Li, Steven X.; Shaw, George B.; Stephen, Mark A.; Krainak, Michael A.

    2011-01-01

    We demonstrated up to 2 W average power, CW-pumped, passively Q-switched, 1.5 ns monolithic MCL with single-longitudinal-mode operation. We discuss laser design issues to bring the average power to 5-10 W and beyond.

  20. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  1. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to get an improved estimate of the ERP. A new implementation of the dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment due to aligning the random alpha waves, explaining amplitude differences in terms of latency differences, or the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of the dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
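
    The align-then-average idea can be sketched with a plain dynamic time warping implementation. This toy version omits the trilinear-model filtering, the signal normalization, and the derivative features that the paper uses, so it only illustrates the core mechanism of warping single trials onto a common template before averaging.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping between two 1-D signals; returns the alignment path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    i, j, path = n, m, []
    while i > 0 and j > 0:                     # backtrack the optimal path
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_average(trials):
    """Toy warp-average: align every trial to the plain average (the template),
    then re-average the warped trials. Trials are equal-length 1-D arrays (rows)."""
    trials = np.asarray(trials, float)
    template = trials.mean(axis=0)
    warped = np.zeros_like(trials)
    for k, trial in enumerate(trials):
        acc = np.zeros(trials.shape[1])
        cnt = np.zeros(trials.shape[1])
        for i, j in dtw_path(trial, template):
            acc[j] += trial[i]
            cnt[j] += 1
        warped[k] = acc / np.maximum(cnt, 1)
    return warped.mean(axis=0)
```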

  2. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGES

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
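
    Why the averaging period matters follows from the nonlinear wind-speed dependence of the mass transfer velocity: averaging the winds before applying a quadratic parameterization underestimates the average transfer velocity (Jensen's inequality). The sketch below uses a generic Wanninkhof-type quadratic relation and synthetic winds; the specific parameterizations and wind statistics of the study are not reproduced.

```python
import numpy as np

def transfer_velocity_cm_per_hr(u10, schmidt=660.0):
    """Quadratic (Wanninkhof-type) gas transfer velocity: k = 0.31*u10^2*(Sc/660)^-0.5."""
    return 0.31 * u10 ** 2 * (schmidt / 660.0) ** -0.5

rng = np.random.default_rng(1)
u = np.abs(rng.normal(7.0, 3.0, size=1000))        # synthetic "20-minute" winds, m/s
k_from_instantaneous = transfer_velocity_cm_per_hr(u).mean()
k_from_mean_wind = transfer_velocity_cm_per_hr(u.mean())
print(k_from_instantaneous / k_from_mean_wind)      # > 1: averaged winds give lower fluxes
```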

  3. Maps to estimate average streamflow and headwater limits for streams in U.S. Army Corps of Engineers, Mobile District, Alabama and adjacent states

    USGS Publications Warehouse

    Nelson, George H.

    1984-01-01

    U.S. Army Corps of Engineers permits are required for discharges of dredged or fill material downstream from the 'headwaters' of specified streams. The term 'headwaters' is defined as the point of a freshwater (non-tidal) stream above which the average flow is less than 5 cu ft/s. Maps of the Mobile District area showing (1) lines of equal average streamflow, and (2) lines of equal drainage areas required to produce an average flow of 5 cu ft/s are contained in this report. These maps are for use by the Corps of Engineers in their permitting program. (USGS)

  4. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. This method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validate this technique as being successful in humid and semi-arid catchments as the method has already been verified as effective in the former setting.

  5. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  6. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.

  7. Estimating force and power requirements for crosscut shearing of roundwood.

    Treesearch

    Rodger A. Arola

    1972-01-01

    Presents a procedure which, through the use of nomographs, permits rapid estimation of the force required to crosscut shear logs of various species and diameters with shear blades ranging in thickness from 1/4 to 7/8 inch. In addition, nomographs are included to evaluate hydraulic cylinder sizes, pump capacities, and motor horsepower requirements to effect the cut....

  8. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    PubMed

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in the quantitative ultrasound analysis since it is not only related to the estimations of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information of the scanned tissue. However, estimation performances of ultrasound attenuation are intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on the phase-compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and the weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values while the conventional methods estimate those within 2.96%. The proposed method is especially effective when we deal with the signal reflected from the deeper depth where the SNR level is lower or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
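
    A simplified sketch of the two proposed ingredients: compensating inter-segment shifts via the cross-correlation lag, and forming an energy-weighted average periodogram. Integer-sample alignment, circular shifting, and the energy weighting used here are simplifying assumptions, not the paper's exact estimator.

```python
import numpy as np

def phase_compensated_power_spectrum(segments):
    """Align each gated RF segment to the first one using the lag of the
    cross-correlation, then return an energy-weighted average periodogram."""
    segments = np.asarray(segments, float)
    ref = segments[0]
    n = segments.shape[1]
    spectra, weights = [], []
    for seg in segments:
        xcorr = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
        lag = int(np.argmax(xcorr)) - (n - 1)   # integer lag of best alignment
        aligned = np.roll(seg, -lag)            # circular shift (toy simplification)
        spectra.append(np.abs(np.fft.rfft(aligned)) ** 2 / n)
        weights.append(np.sum(seg ** 2))        # weight by segment energy (SNR proxy)
    spectra, weights = np.asarray(spectra), np.asarray(weights)
    return (weights[:, None] * spectra).sum(axis=0) / weights.sum()
```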

  9. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  10. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    NASA Astrophysics Data System (ADS)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single

  11. MARC ES: a computer program for estimating medical information storage requirements.

    PubMed

    Konoske, P J; Dobbins, R W; Gauker, E D

    1998-01-01

    During combat, documentation of medical treatment information is critical for maintaining continuity of patient care. However, knowledge of prior status and treatment of patients is limited to the information noted on a paper field medical card. The Multi-technology Automated Reader Card (MARC), a smart card, has been identified as a potential storage mechanism for casualty medical information. Focusing on data capture and storage technology, this effort developed a Windows program, MARC ES, to estimate storage requirements for the MARC. The program calculates storage requirements for a variety of scenarios using medical documentation requirements, casualty rates, and casualty flows and provides the user with a tool to estimate the space required to store medical data at each echelon of care for selected operational theaters. The program can also be used to identify the point at which data must be uploaded from the MARC if size constraints are imposed. Furthermore, this model can be readily extended to other systems that store or transmit medical information.

  12. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. The method to find close similar samples is done via warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoise image sequences with an additive white noise while respecting the visual details on the movie frames. We have also experimented with other types of noise with satisfactory results.

  13. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
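
    The model-averaging step can be sketched with the common BIC approximation to posterior model weights. The paper's BMA is fully Bayesian over a class of time-series models, so the snippet below shows only the generic combination rule for per-model estimates.

```python
import numpy as np

def bma_estimate(betas, variances, bics):
    """Combine per-model relative-risk estimates using BIC-approximated posterior
    model weights; returns the averaged estimate, its variance (within- plus
    between-model), and the weights."""
    bics = np.asarray(bics, float)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    betas, variances = np.asarray(betas, float), np.asarray(variances, float)
    mean = np.sum(w * betas)
    var = np.sum(w * (variances + (betas - mean) ** 2))
    return mean, var, w
```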

  14. Method for detection and correction of errors in speech pitch period estimates

    NASA Technical Reports Server (NTRS)

    Bhaskar, Udaya (Inventor)

    1989-01-01

    A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of received pitch period estimate since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct, while if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average calculated.
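
    The detect-and-correct rule described above translates almost directly into code. The substitution used when an estimate is rejected and the reset threshold are assumptions for illustration; the abstract says only that "a correction process is carried out" and that the count is "a preset number of times".

```python
class PitchPeriodTracker:
    """Accept a pitch period estimate if it lies within 0.75-1.25 times the
    running average of accepted nonzero estimates since the last reset;
    otherwise correct it, and reset after too many consecutive corrections
    (e.g. when the speaker changes)."""

    def __init__(self, max_consecutive_corrections=3):   # preset count (placeholder value)
        self.accepted = []          # nonzero estimates since the previous reset
        self.correction_run = 0
        self.max_run = max_consecutive_corrections

    def update(self, estimate):
        if estimate == 0:           # unvoiced frame: nothing to track
            return 0
        if not self.accepted:
            self.accepted.append(estimate)
            return estimate
        avg = sum(self.accepted) / len(self.accepted)
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self.correction_run = 0
            corrected = estimate
        else:
            self.correction_run += 1
            if self.correction_run > self.max_run:
                self.accepted.clear()            # discard old average, start anew
                self.correction_run = 0
                corrected = estimate
            else:
                corrected = avg                  # assumed correction: substitute the average
        self.accepted.append(corrected)
        return corrected
```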

  15. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Publications reported actual lifespan of products; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
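
    If, as is common for product lifespans, the distribution is taken to be a two-parameter Weibull, fixing the shape parameter reduces the estimation to a single scale parameter per country and year, and the average lifespan follows in closed form. The Weibull form and the shape value below are assumptions for illustration, not quoted from the paper.

```python
from math import gamma

def weibull_mean_lifespan(scale, shape):
    """Mean of a two-parameter Weibull lifespan distribution: scale * Gamma(1 + 1/shape)."""
    return scale * gamma(1.0 + 1.0 / shape)

# With a fixed (assumed) shape, only the scale needs estimating from the age profile.
print(round(weibull_mean_lifespan(scale=15.0, shape=2.5), 1))  # placeholder values
```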

  16. Estimating Wartime Support Resource Requirements. Statistical and Related Policy Issues.

    DTIC Science & Technology

    1984-07-01

    ...personnel responsible for producing requirements estimates. This work was conducted as part of the study effort "The Driving Inputs and Assumptions of..." ...represent another important explanation for high support costs. The Air Force has completed a detailed study of the reasons for growth in spares...

  17. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition

    PubMed Central

    Taylor, Brian A.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason

    2009-01-01

    The authors investigated the performance of the iterative Steiglitz–McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer–Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥ 5 for echo train lengths (ETLs) ≥ 4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo
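
    For the single-peak case the idea can be illustrated without the full multi-peak ARMA machinery: the echo train is one damped complex exponential, so a single pole fitted by least squares yields the chemical-shift frequency, T2*, and the complex amplitude. This is only the one-pole special case, not the Steiglitz-McBride iteration the paper uses.

```python
import numpy as np

def single_peak_parameters(echoes, delta_te):
    """Estimate frequency (Hz), T2* (s), and complex amplitude of a single peak
    from complex echo samples s[n] = A * z**n acquired at spacing delta_te,
    by fitting s[n+1] = z * s[n] in the least-squares sense."""
    s = np.asarray(echoes, complex)
    z = np.vdot(s[:-1], s[1:]) / np.vdot(s[:-1], s[:-1])   # least-squares pole
    freq_hz = np.angle(z) / (2 * np.pi * delta_te)
    t2_star = -delta_te / np.log(np.abs(z))                 # assumes |z| < 1 (decaying signal)
    amplitude = np.mean(s / z ** np.arange(len(s)))          # complex amplitude at t = 0
    return freq_hz, t2_star, amplitude
```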

  18. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York and covered the period of 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries where previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
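
    Base flow and the base-flow index can be illustrated with a generic one-parameter recursive digital filter (Lyne-Hollick/Nathan-McMahon type). This is not the PART, HYSEP, or BFI algorithm used in the study, only a common stand-in for hydrograph separation; a single forward pass is shown for brevity.

```python
import numpy as np

def baseflow_separation(q, alpha=0.925):
    """Split a daily streamflow series q into quickflow and base flow with a
    one-parameter digital filter; returns (baseflow, base-flow index)."""
    q = np.asarray(q, float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep quickflow within [0, Q]
    base = q - quick
    return base, base.sum() / q.sum()
```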

  19. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  20. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancements in the molecular Rayleigh scattering based technique have allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
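
    The distinction between the two averages is simple to state: the Reynolds average is the plain mean of velocity, while the Favre average is the density-weighted mean. A minimal sketch, using entirely synthetic and correlated density/velocity samples (all flow parameters below are invented), is:

    ```python
    import numpy as np

    # Sketch: Reynolds-averaged vs Favre (density-weighted) averaged velocity for
    # synthetic samples in which density and velocity fluctuations are correlated.
    rng = np.random.default_rng(0)
    n = 100_000
    rho = 1.2 + 0.1 * rng.standard_normal(n)                          # density samples
    u = 300.0 + 20.0 * rng.standard_normal(n) - 50.0 * (rho - 1.2)    # velocity correlated with density

    u_reynolds = u.mean()                      # <u>
    u_favre = (rho * u).mean() / rho.mean()    # <rho * u> / <rho>

    print(f"Reynolds average: {u_reynolds:.2f} m/s")
    print(f"Favre average:    {u_favre:.2f} m/s")
    print(f"Difference:       {u_favre - u_reynolds:.2f} m/s")
    ```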

  1. Doubly robust nonparametric inference on the average treatment effect.

    PubMed

    Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B

    2017-12-01

    Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
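
    As a concrete illustration of the double-robustness idea (not the targeted minimum loss-based estimator studied in the paper), the sketch below computes a generic augmented inverse-probability-weighted (AIPW) estimate of the average treatment effect on simulated data; the data-generating process and the nuisance models are assumptions made purely for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    # Sketch of an AIPW (doubly robust) estimator of the average treatment effect.
    rng = np.random.default_rng(1)
    n = 5000
    x = rng.standard_normal((n, 2))                                   # covariates
    p = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))           # true propensity
    a = rng.binomial(1, p)                                            # treatment indicator
    y = 1.0 * a + x[:, 0] + 0.5 * x[:, 1] + rng.standard_normal(n)    # outcome, true ATE = 1

    # Nuisance estimates: propensity score and outcome regressions.
    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
    m1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)
    m0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)

    # AIPW estimating equation: consistent if either nuisance model is correct.
    aipw = m1 - m0 + a * (y - m1) / ps - (1 - a) * (y - m0) / (1 - ps)
    print(f"AIPW ATE estimate: {aipw.mean():.3f} (true value 1.0)")
    ```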

  2. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    NASA Astrophysics Data System (ADS)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.

  3. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how the pollution exposure is estimated and which spatial autocorrelation model is used. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
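
    A minimal sketch of the averaging step itself: given effect estimates, variances, and posterior model weights from several candidate models (all values below are invented), Bayesian model averaging combines them into one estimate whose variance includes both within-model and between-model components.

    ```python
    import numpy as np

    # Sketch of combining a pollution-health effect across candidate models by
    # Bayesian model averaging. In practice the weights would come from posterior
    # model probabilities (e.g. approximated from marginal likelihoods or BIC).
    beta = np.array([0.012, 0.018, 0.009, 0.015])   # per-model effect estimates
    var = np.array([4e-5, 6e-5, 5e-5, 3e-5])        # per-model variances
    weights = np.array([0.35, 0.25, 0.15, 0.25])    # posterior model probabilities (sum to 1)

    beta_bma = np.sum(weights * beta)
    # Total variance = weighted within-model variance + between-model variance.
    var_bma = np.sum(weights * (var + (beta - beta_bma) ** 2))

    print(f"BMA effect estimate: {beta_bma:.4f} +/- {np.sqrt(var_bma):.4f}")
    ```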

  4. Estimation of the rain signal in the presence of large surface clutter

    NASA Technical Reports Server (NTRS)

    Ahamad, Atiq; Moore, Richard K.

    1994-01-01

    The principal limitation for the use of a spaceborne imaging SAR as a rain radar is the surface-clutter problem. Signals may be estimated in the presence of noise by averaging large numbers of independent samples. This method was applied to obtain an estimate of the rain echo by averaging a set of N_c samples of the clutter in a separate measurement and subtracting the clutter estimate from the combined estimate. The number of samples required for successful estimation (within 10-20%) for off-vertical angles of incidence appears to be prohibitively large. However, by appropriately degrading the resolution in both range and azimuth, the required number of samples can be obtained. For vertical incidence, the number of samples required for successful estimation is reasonable. In estimating the clutter it was assumed that the surface echo is the same outside the rain volume as it is within the rain volume. This may be true for the forest echo, but for convective storms over the ocean the surface echo outside the rain volume is very different from that within. It is suggested that the experiment be performed with vertical incidence over forest to overcome this limitation.
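
    A small simulation of the estimate-and-subtract idea, under an assumed exponential (speckle-like) power model with invented power levels and sample counts, shows why very large sample counts are needed when the clutter power dominates the rain power:

    ```python
    import numpy as np

    # Sketch: estimate the rain echo power by subtracting an independently averaged
    # clutter estimate from the combined (rain + clutter) estimate.
    rng = np.random.default_rng(2)
    p_rain, p_clutter = 1.0, 20.0     # mean rain and surface-clutter powers (arbitrary units)
    n, n_c = 5000, 5000               # independent samples of combined signal and of clutter

    # Exponentially distributed single-look powers (Rayleigh-amplitude speckle model).
    combined = rng.exponential(p_rain + p_clutter, n)
    clutter = rng.exponential(p_clutter, n_c)

    rain_hat = combined.mean() - clutter.mean()
    rel_err = abs(rain_hat - p_rain) / p_rain
    print(f"Estimated rain power: {rain_hat:.3f} (true {p_rain}), relative error {rel_err:.1%}")
    ```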

  5. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  6. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of six types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  7. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
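
    A minimal sketch of sample average approximation, using an invented newsvendor-style cost and SciPy's scalar optimizer as the deterministic solver, illustrates the basic recipe: draw scenarios, average the cost over them, and optimize the resulting deterministic surrogate.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Sketch of sample average approximation (SAA): replace E[F(x, xi)] by its
    # average over N sampled scenarios and hand the result to a standard optimizer.
    rng = np.random.default_rng(3)
    demand = rng.normal(100.0, 20.0, size=1000)     # N sampled demand scenarios
    c_over, c_under = 1.0, 3.0                       # overage / underage unit costs

    def saa_objective(x):
        # Sample average of F(x, xi) = c_over*max(x - xi, 0) + c_under*max(xi - x, 0)
        return np.mean(c_over * np.maximum(x - demand, 0.0)
                       + c_under * np.maximum(demand - x, 0.0))

    res = minimize_scalar(saa_objective, bounds=(0.0, 300.0), method="bounded")
    print(f"SAA solution: order {res.x:.1f} units, sampled cost {res.fun:.2f}")
    ```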

  8. Optimal weighted averaging of event related activity from acquisitions with artifacts.

    PubMed

    Vollero, Luca; Petrichella, Sara; Innello, Giulio

    2016-08-01

    In several biomedical applications that require the signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non time-locked background activities. The averaging aims at estimating the ERA activity under very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trial classification and removal stage. In this paper we propose, model, and evaluate a new approach that avoids trial removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, the weights can be tuned, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
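
    A simplified simulation of the idea (not the paper's optimal weight tuning) compares a plain ensemble average with a two-weight average that down-weights artifact-prone trials; the waveform, artifact model, and weight values are invented for illustration.

    ```python
    import numpy as np

    # Sketch of weighted trial averaging: keep artifact-prone trials but give them
    # a smaller weight than artifact-free trials.
    rng = np.random.default_rng(4)
    n_trials, n_samples = 200, 300
    t = np.linspace(0, 1, n_samples)
    era = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)               # time-locked activity

    trials = era + rng.standard_normal((n_trials, n_samples))      # background noise
    is_artifact = rng.random(n_trials) < 0.2                        # 20% artifact-prone trials
    trials[is_artifact] += 10 * rng.standard_normal((is_artifact.sum(), n_samples))

    w_clean, w_artifact = 1.0, 0.1                                  # the two weight classes
    weights = np.where(is_artifact, w_artifact, w_clean)

    weighted_avg = (weights[:, None] * trials).sum(axis=0) / weights.sum()
    plain_avg = trials.mean(axis=0)

    print("RMSE, plain average:   ", np.sqrt(np.mean((plain_avg - era) ** 2)))
    print("RMSE, weighted average:", np.sqrt(np.mean((weighted_avg - era) ** 2)))
    ```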

  9. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α) % confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design. However, when the half-life of a drug is long, a parallel design may be preferred. In this study, a two-sided interval estimate - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
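
    The decision rule reduces to checking whether a 90% two-sided confidence interval for the difference of log-transformed means lies inside (-0.223, 0.223). The sketch below applies Welch's (Satterthwaite) approximation to simulated parallel-design data; the sample sizes, means, and variances are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Sketch of the ABE interval check for a parallel design.
    rng = np.random.default_rng(5)
    test = rng.normal(4.00, 0.30, size=40)     # log(AUC) under the test formulation
    ref = rng.normal(4.05, 0.35, size=40)      # log(AUC) under the reference formulation

    diff = test.mean() - ref.mean()
    se = np.sqrt(test.var(ddof=1) / test.size + ref.var(ddof=1) / ref.size)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((test.var(ddof=1) / test.size) ** 2 / (test.size - 1)
                  + (ref.var(ddof=1) / ref.size) ** 2 / (ref.size - 1))
    t_crit = stats.t.ppf(0.95, df)             # 90% two-sided CI = two one-sided 5% tests

    lo, hi = diff - t_crit * se, diff + t_crit * se
    print(f"90% CI for mean difference: ({lo:.3f}, {hi:.3f})")
    print("ABE concluded" if (-0.223 < lo and hi < 0.223) else "ABE not concluded")
    ```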

  10. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
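
    For reference, the procedure being tested works as follows: fit the spatially averaged profile to the log law u(z) = (u*/kappa) ln(z/z0) and convert the fitted shear velocity into a boundary shear stress. A sketch with synthetic profile data (all numbers invented) is shown below; the paper's point is that such a fit can look excellent over bed forms and yet misestimate the measured total stress.

    ```python
    import numpy as np

    # Sketch of inferring shear velocity from a logarithmic fit to a spatially
    # averaged velocity profile.
    kappa, rho = 0.41, 1000.0                                # von Karman constant, water density
    z = np.array([0.02, 0.04, 0.08, 0.15, 0.25, 0.40])       # heights above mean bed (m)
    u = np.array([0.31, 0.38, 0.45, 0.51, 0.56, 0.61])       # spatially averaged velocities (m/s)

    # Linear regression of u against ln(z): slope = u*/kappa, intercept fixes z0.
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = kappa * slope
    z0 = np.exp(-intercept / slope)
    tau_b = rho * u_star**2

    print(f"u* = {u_star:.3f} m/s, z0 = {z0 * 1000:.2f} mm, tau_b = {tau_b:.2f} Pa")
    ```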

  11. Estimates of galactic cosmic ray shielding requirements during solar minimum

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.

    1990-01-01

    Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses, and dose equivalents behind various thicknesses of aluminum, water, and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.

  12. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  13. Applications of high average power nonlinear optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velsko, S.P.; Krupke, W.F.

    1996-02-05

    Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial/commercial applications requiring approximately 100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical 100-watt-class NLO-based laser systems are identified.

  14. Lessons Learned for Planning and Estimating Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn

    2011-01-01

    Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead projects to focus on hardware development schedules and costs, de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Even moderate yearly underestimates of the operations costs can present significant life cycle cost (LCC) impacts for deep space missions with long operational durations, and any LCC growth can directly impact the programs' ability to fund new missions. The D&NF Program Office at Marshall Space Flight Center recently studied cost overruns for 7 D&NF missions related to phase C/D development of operational capabilities and phase E mission operations. The goal was to identify the underlying causes for the overruns and develop practical mitigations to assist the D&NF projects in identifying potential operations risks and controlling the associated impacts to operations development and execution costs. The study found that the drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This presentation summarizes the study and the results, providing a set of lessons NASA can use to improve early estimation and validation of operations costs.

  15. A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra

    NASA Astrophysics Data System (ADS)

    Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.

    2015-04-01

    A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the “Windowed, MuLTiple-Peak, averaged-spectrum” or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method, which employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure, which is based upon 6366 modes that we computed using the WMLTP method on the 66 day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle 24 during mid-2010.

  16. Evaluation of a method estimating real-time individual lysine requirements in two lines of growing-finishing pigs.

    PubMed

    Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J

    2015-04-01

    The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be

  17. Estimates of the Average Number of Times Students Say They Cheated

    ERIC Educational Resources Information Center

    Liebler, Robert

    2017-01-01

    Data from published studies is used to recover information about the sample mean self-reported number of times cheated by college students. The sample means were estimated by fitting distributions to the reported data. The few estimated sample means thus recovered were roughly 2 or less.

  18. Estimating nutrient uptake requirements for soybean using QUEFTS model in China

    PubMed Central

    Yang, Fuqiang; Xu, Xinpeng; Wang, Wei; Ma, Jinchuan; Wei, Dan; He, Ping; Pampolino, Mirasol F.; Johnston, Adrian M.

    2017-01-01

    Estimating balanced nutrient requirements for soybean (Glycine max [L.] Merr) in China is essential for identifying optimal fertilizer application regimes to increase soybean yield and nutrient use efficiency. We collected datasets from field experiments in major soybean planting regions of China between 2001 and 2015 to assess the relationship between soybean seed yield and nutrient uptake, and to estimate nitrogen (N), phosphorus (P), and potassium (K) requirements for a target yield of soybean using the quantitative evaluation of the fertility of tropical soils (QUEFTS) model. The QUEFTS model predicted a linear–parabolic–plateau curve for the balanced nutrient uptake as the target yield increased from 3.0 to 6.0 t ha−1, and the linear part continued until the yield reached about 60–70% of the potential yield. To produce 1000 kg seed of soybean in China, 55.4 kg N, 7.9 kg P, and 20.1 kg K (N:P:K = 7:1:2.5) were required in the above-ground parts, and the corresponding internal efficiencies (IE, kg seed yield per kg nutrient uptake) were 18.1, 126.6, and 49.8 kg seed per kg N, P, and K, respectively. The QUEFTS model also simulated balanced N, P, and K removal by the seed of 48.3, 5.9, and 12.2 kg per 1000 kg of seed, respectively, accounting for 87.1%, 74.1%, and 60.8% of the total above-ground uptake, respectively. These results support fertilizer recommendations that improve the seed yield of soybean and avoid excessive or deficient nutrient supplies. Field validation indicated that the QUEFTS model could be used to estimate nutrient requirements which help develop fertilizer recommendations for soybean. PMID:28498839

  19. 18 CFR 284.270 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION...; (4) The estimated total amount and average daily amount of emergency natural gas to be purchased...

  20. 18 CFR 284.270 - Reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...

  1. 18 CFR 284.270 - Reporting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION...; (4) The estimated total amount and average daily amount of emergency natural gas to be purchased...

  2. 18 CFR 284.270 - Reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...

  3. 18 CFR 284.270 - Reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Reporting requirements. 284.270 Section 284.270 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... the emergency; (4) The estimated total amount and average daily amount of emergency natural gas to be...

  4. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and falls back to fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided that automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed for history-based prediction. The aim is to learn over time how jobs' runtime and memory evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
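
    A highly simplified stand-in for such history-based estimation (not the LHCb implementation; the feature names and the synthetic "history" are invented) trains a regressor on past jobs and predicts the CPU time of new submissions:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Sketch: learn CPU time from features recorded for past jobs, then predict it
    # for held-out jobs, as a proxy for the history-based estimation described above.
    rng = np.random.default_rng(6)
    n = 2000
    n_events = rng.integers(1_000, 100_000, n)        # events to process
    sw_version = rng.integers(0, 5, n)                # encoded software version
    sim_flag = rng.integers(0, 2, n)                  # simulation vs data-processing job

    cpu_seconds = (0.02 + 0.005 * sw_version + 0.03 * sim_flag) * n_events \
                  * rng.lognormal(0.0, 0.1, n)        # synthetic "historical" runtimes

    X = np.column_stack([n_events, sw_version, sim_flag])
    model = GradientBoostingRegressor().fit(X[:1500], cpu_seconds[:1500])

    pred = model.predict(X[1500:])
    mape = np.mean(np.abs(pred - cpu_seconds[1500:]) / cpu_seconds[1500:])
    print(f"Mean absolute percentage error on held-out jobs: {mape:.1%}")
    ```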

  5. Dietary Protein Intake in Young Children in Selected Low-Income Countries Is Generally Adequate in Relation to Estimated Requirements for Healthy Children, Except When Complementary Food Intake Is Low.

    PubMed

    Arsenault, Joanne E; Brown, Kenneth H

    2017-05-01

    Background: Previous research indicates that young children in low-income countries (LICs) generally consume greater amounts of protein than published estimates of protein requirements, but this research did not account for protein quality based on the mix of amino acids and the digestibility of ingested protein. Objective: Our objective was to estimate the prevalence of inadequate protein and amino acid intake by young children in LICs, accounting for protein quality. Methods: Seven data sets with information on dietary intake for children (6-35 mo of age) from 6 LICs (Peru, Guatemala, Ecuador, Bangladesh, Uganda, and Zambia) were reanalyzed to estimate protein and amino acid intake and assess adequacy. The protein digestibility-corrected amino acid score of each child's diet was calculated and multiplied by the original (crude) protein intake to obtain an estimate of available protein intake. Distributions of usual intake were obtained to estimate the prevalence of inadequate protein and amino acid intake for each cohort according to Estimated Average Requirements. Results: The prevalence of inadequate protein intake was highest in breastfeeding children aged 6-8 mo: 24% of Bangladeshi and 16% of Peruvian children. With the exception of Bangladesh, the prevalence of inadequate available protein intake decreased by age 9-12 mo and was very low in all sites (0-2%) after 12 mo of age. Inadequate protein intake in children <12 mo of age was due primarily to low energy intake from complementary foods, not inadequate protein density. Conclusions: Overall, most children consumed protein amounts greater than requirements, except for the younger breastfeeding children, who were consuming low amounts of complementary foods. These findings reinforce previous evidence that dietary protein is not generally limiting for children in LICs compared with estimated requirements for healthy children, even after accounting for protein quality. However, unmeasured effects of infection
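
    The quality adjustment described in the Methods is a one-line calculation: available protein equals crude protein intake multiplied by the diet's protein digestibility-corrected amino acid score (PDCAAS), which is then compared with the requirement. The numbers below are invented for a hypothetical child, not taken from the study's datasets.

    ```python
    # Sketch of the protein-quality adjustment: available = crude intake x PDCAAS.
    crude_protein_g = 14.0    # g/day from breast milk plus complementary foods (hypothetical)
    pdcaas = 0.80             # quality score of the day's dietary protein mix (hypothetical)
    ear_protein_g = 9.9       # illustrative requirement (g/day) for this child

    available_protein_g = crude_protein_g * pdcaas
    status = "adequate" if available_protein_g >= ear_protein_g else "inadequate"
    print(f"Available protein: {available_protein_g:.1f} g/day "
          f"({status} vs EAR of {ear_protein_g} g/day)")
    ```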

  6. Cosmological ensemble and directional averages of observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  7. Irrigation Requirement Estimation using MODIS Vegetation Indices and Inverse Biophysical Modeling; A Case Study for Oran, Algeria

    NASA Technical Reports Server (NTRS)

    Bounoua, L.; Imhoff, M.L.; Franks, S.

    2008-01-01

    the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence with an average frequency of occurrence of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, the drip irrigation resulted in less frequent irrigation events with an average water requirement about 57% less than that simulated during the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of the canopy interception loss compared to the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country level estimates downward to 17% or less

  8. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    PubMed Central

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  9. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    PubMed

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  10. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
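
    For context, the constant-property estimate referred to above is the textbook expression eta_max = (1 - Tc/Th)(sqrt(1 + ZT) - 1)/(sqrt(1 + ZT) + Tc/Th), with ZT evaluated at the average operating temperature as the abstract recommends. A short sketch follows; the temperatures and ZT values are example inputs, not the paper's 18 materials.

    ```python
    import numpy as np

    # Constant-property (single-ZT) estimate of maximum thermoelectric efficiency.
    def efficiency_constant_property(zt_avg, t_hot, t_cold):
        """Maximum efficiency with ZT taken at the average operating temperature."""
        carnot = 1.0 - t_cold / t_hot
        root = np.sqrt(1.0 + zt_avg)
        return carnot * (root - 1.0) / (root + t_cold / t_hot)

    t_hot, t_cold = 800.0, 300.0    # hot- and cold-side temperatures (K), illustrative
    for zt in (0.5, 1.0, 1.5):
        eta = efficiency_constant_property(zt, t_hot, t_cold)
        print(f"ZT = {zt:.1f}: eta_max = {eta:.1%}")
    ```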

  11. How Many Conformations Need To Be Sampled To Obtain Converged QM/MM Energies? The Curse of Exponential Averaging.

    PubMed

    Ryde, Ulf

    2017-11-14

    Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to study enzymatic reactions. They are often based on a set of minimized structures obtained on snapshots from a molecular dynamics simulation to include some dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ serves as an excellent convergence criterion.
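
    The conditioning problem is easy to reproduce. Assuming Gaussian energies with mean zero and standard deviation sigma, the exponential average converges to -sigma^2/(2kT); the sketch below (parameter values chosen for illustration, not taken from the paper) compares the raw exponential average and the second-order cumulant approximation for increasing numbers of snapshots.

    ```python
    import numpy as np

    # Sketch: convergence of the exponential (Zwanzig-type) average of Gaussian
    # QM/MM energies versus the second-order cumulant approximation.
    rng = np.random.default_rng(7)
    kT = 2.5        # kJ/mol at roughly 300 K
    sigma = 10.0    # kJ/mol spread of snapshot energies (deliberately large)
    exact = -sigma**2 / (2 * kT)    # analytic limit for a zero-mean Gaussian

    for n_snapshots in (10, 100, 1000, 10000):
        e = rng.normal(0.0, sigma, n_snapshots)
        exp_avg = -kT * np.log(np.mean(np.exp(-e / kT)))
        cumulant2 = e.mean() - e.var(ddof=1) / (2 * kT)
        print(f"N = {n_snapshots:5d}: exponential {exp_avg:8.1f}, "
              f"2nd-order cumulant {cumulant2:8.1f}, exact {exact:.1f} kJ/mol")
    ```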

  12. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes would always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements of minimum required PPD, maximum allowable POF, requirements on flaw size tolerance about the mean flaw size, and flaw size detectability requirements. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
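
    The binomial logic behind the standard 29-flaw demonstration can be checked in a few lines: if all 29 flaws must be detected, a technique whose true POD is only 0.90 passes less than 5% of the time, which is what supports the 90/95 claim for the demonstrated flaw size. The sketch below simply evaluates that probability of passing for a few assumed true POD values.

    ```python
    # Sketch of the binomial reasoning behind the 29-flaw point-estimate demonstration.
    true_pod_values = [0.90, 0.95, 0.98, 0.995]
    n_flaws = 29

    for pod in true_pod_values:
        prob_pass = pod ** n_flaws   # probability of passing the 29-of-29 demonstration (PPD)
        print(f"True POD {pod:.3f}: probability of passing 29/29 = {prob_pass:.3f}")
    ```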

  13. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium, balance between vegetation and climate, and non-equilibrium, water added through irrigation. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use, when soil salinity is not important and 66% in saline lands.

  14. Considerations for applying VARSKIN mod 2 to skin dose calculations averaged over 10 cm2.

    PubMed

    Durham, James S

    2004-02-01

    VARSKIN Mod 2 is a DOS-based computer program that calculates the dose to skin from beta and gamma contamination either directly on skin or on material in contact with skin. The default area for calculating the dose is 1 cm2. Recently, the U.S. Nuclear Regulatory Commission issued new guidelines for calculating shallow dose equivalent from skin contamination that require the dose to be averaged over 10 cm2. VARSKIN Mod 2 was not fully designed to calculate beta or gamma dose estimates averaged over 10 cm2, even though the program allows the user to calculate doses averaged over 10 cm2. This article explains why VARSKIN Mod 2 overestimates the beta dose when applied to 10 cm2 areas, describes a manual method for correcting the overestimate, and explains how to perform reasonable gamma dose calculations averaged over 10 cm2. The article also describes upgrades underway in Varskin 3.

  15. Estimating risk reduction required to break even in a health promotion program.

    PubMed

    Ozminkowski, Ronald J; Goetzel, Ron Z; Santoro, Jan; Saenz, Betty-Jo; Eley, Christine; Gorsky, Bob

    2004-01-01

    To illustrate a formula to estimate the amount of risk reduction required to break even on a corporate health promotion program. A case study design was implemented. Base year (2001) health risk and medical expenditure data from the company, along with published information on the relationships between employee demographics, health risks, and medical expenditures, were used to forecast demographics, risks, and expenditures for 2002 through 2011 and estimate the required amount of risk reduction. Motorola. 52,124 domestic employees. Demographics included age, gender, race, and job type. Health risks for 2001 were measured via health risk appraisal. Risks were noted as either high or low and related to exercise/eating habits, body weight, blood pressure, blood sugar levels, cholesterol levels, depression, stress, smoking/drinking habits, and seat belt use. Medical claims for 2001 were used to calculate medical expenditures per employee. Assuming a $282 per-employee program cost, Motorola employees would need to reduce their lifestyle-related health risks by 1.08% to 1.42% per year to break even on health promotion programming, depending upon the discount rate. Higher or lower program investments would change the risk reduction percentages. Employers can use information from published studies, along with their own data, to estimate the amount of risk reduction required to break even on their health promotion programs.

  16. Bioenergetics model for estimating food requirements of female Pacific walruses (Odobenus rosmarus divergens)

    USGS Publications Warehouse

    Noren, S.R.; Udevitz, M.S.; Jay, C.V.

    2012-01-01

    Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–9% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.

  17. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
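
    The sketch below illustrates the underlying idea for a fit that is linear in its parameters: estimate the parameters by ordinary weighted least squares, then propagate the full covariance matrix of the ensemble-averaged data into the parameter errors through a sandwich formula. This is a schematic on synthetic, correlated MSD-like data, not the authors' WLS-ICE software.

    ```python
    import numpy as np

    # Weighted least squares with covariance-aware error propagation (sketch).
    rng = np.random.default_rng(8)
    n_traj, n_t = 200, 50
    t = np.arange(1, n_t + 1) * 0.1
    D_true = 0.5
    # Correlated "squared displacement" trajectories (toy model, not real Brownian paths).
    traj = 2 * D_true * t + np.cumsum(rng.normal(0, 0.05, (n_traj, n_t)), axis=1)

    y = traj.mean(axis=0)                        # ensemble-averaged observable
    C = np.cov(traj, rowvar=False) / n_traj      # covariance of the average, with correlations

    J = np.column_stack([t, np.ones_like(t)])    # model y = 2*D*t + b, parameters (2D, b)
    W = np.diag(1.0 / np.diag(C))                # WLS weights from the variances only

    A = J.T @ W @ J
    theta = np.linalg.solve(A, J.T @ W @ y)
    # Sandwich covariance: parameter errors that account for correlated residuals.
    cov_theta = np.linalg.solve(A, J.T @ W @ C @ W @ J) @ np.linalg.inv(A)

    D_hat, D_err = theta[0] / 2, np.sqrt(cov_theta[0, 0]) / 2
    print(f"Estimated D = {D_hat:.3f} +/- {D_err:.3f} (true {D_true})")
    ```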

  18. The Effect of a "C" Average Grade Requirement on the Dropout Rate of Extracurricular Activities in the Anchorage Public Schools.

    ERIC Educational Resources Information Center

    Foldenauer, Jerry

    The purpose of this study was to determine if extracurricular activity participants in the Anchorage (Alaska) School District were dropping out of school at a greater rate as a result of having to meet a "C" average grade requirement for participation. Subjects of the study were all students in grades 9 through 12 who participated in…

  19. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537

  20. Average discharge, perennial flow initiation, and channel initiation - small southern Appalachian basins

    Treesearch

    B. Lane Rivenbark; C. Rhett Jackson

    2004-01-01

    Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...

  1. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used in 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
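
    As a simplified, non-Bayesian stand-in for the ADT-based model (the paper fits Bayesian Poisson-lognormal models), the sketch below fits a Poisson GLM of crash counts on log(volume) and speed variation using simulated segment data; all variables and coefficients are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Sketch of a crash-frequency model: Poisson GLM on simulated segment data.
    rng = np.random.default_rng(9)
    n_seg = 400
    log_volume = np.log(rng.uniform(20_000, 120_000, n_seg))   # log of ADT per segment
    speed_sd = rng.uniform(2.0, 12.0, n_seg)                    # standard deviation of speed

    mu = np.exp(-12.0 + 1.0 * log_volume + 0.05 * speed_sd)     # synthetic true crash rate
    crashes = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([log_volume, speed_sd]))
    fit = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
    print(fit.params)   # intercept, coefficient on log(volume), coefficient on speed SD
    ```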

  2. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
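
    The non-parametric way of gauging sampling-related uncertainty can be sketched by subsampling a continuous rain-rate record at a fixed revisit interval and comparing the resulting accumulation with the full-coverage value. The record below is synthetic (intermittent bursts with invented parameters), so the numbers only illustrate the procedure, not the paper's radar-based results.

    ```python
    import numpy as np

    # Sketch: sampling-related error of rainfall accumulations for several revisit intervals.
    rng = np.random.default_rng(10)
    dt_minutes = 5
    n_days = 30
    n_steps = n_days * 24 * 60 // dt_minutes
    # Intermittent synthetic rain rate (mm/h): mostly zero with occasional bursts.
    rain = np.where(rng.random(n_steps) < 0.05, rng.exponential(2.0, n_steps), 0.0)

    true_total = rain.sum() * dt_minutes / 60.0           # monthly accumulation (mm)

    for revisit_h in (1, 3, 6, 12):
        stride = revisit_h * 60 // dt_minutes
        errors = []
        for offset in range(stride):                       # every possible overpass phase
            sampled = rain[offset::stride]
            est_total = sampled.mean() * n_days * 24.0     # scale mean rate to a monthly total
            errors.append(est_total - true_total)
        rms = np.sqrt(np.mean(np.square(errors)))
        print(f"{revisit_h:2d} h sampling: RMS error {rms:5.1f} mm ({rms / true_total:.1%})")
    ```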

  3. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions

    PubMed Central

    Trites, Andrew W.; Rosen, David A. S.; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion—an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric—Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal—and they should
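
    The APBA definition quoted above reduces to simple arithmetic on per-stroke quantities; a minimal sketch with hypothetical stroke-level values:

    ```python
    # APBA as defined in the abstract: average gain in speed per flipper stroke
    # divided by the mean stroke-cycle duration. Stroke-level inputs are hypothetical.
    import numpy as np

    speed_gain_per_stroke = np.array([0.31, 0.28, 0.35, 0.30])  # m/s gained per stroke
    stroke_cycle_duration = np.array([0.82, 0.80, 0.85, 0.81])  # s per stroke cycle

    apba = speed_gain_per_stroke.mean() / stroke_cycle_duration.mean()  # m/s^2
    print(f"APBA ~ {apba:.2f} m/s^2")
    ```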

  4. Estimated Daily Average Per Capita Water Ingestion by Child and Adult Age Categories Based on USDA's 1994-96 and 1998 Continuing Survey of Food Intakes by Individuals (Journal Article)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...

  5. Egomotion Estimation with Optic Flow and Air Velocity Sensors

    DTIC Science & Technology

    2012-01-22

    ...method of distance and groundspeed estimation using an omnidirectional camera, but knowledge of the average scene distance is required. Flight height...varying wind and even over sloped terrain. Our method also does not require any prior knowledge of the environment or the flyer motion states. This

  6. Average Revisited in Context

    ERIC Educational Resources Information Center

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Viewpoint: observations on scaled average bioequivalence.

    PubMed

    Patterson, Scott D; Jones, Byron

    2012-01-01

    The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. Copyright © 2011 John Wiley & Sons, Ltd.
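
    For context, a minimal sketch of the standard (unscaled) average-bioequivalence TOST decision the abstract refers to: bioequivalence is concluded when the 90% confidence interval of the mean log(test/reference) ratio lies within ln(0.80) and ln(1.25). The AUC values are hypothetical, and the SABE scaling itself is not implemented here.

    ```python
    # Standard average-bioequivalence TOST logic (not the SABE extension discussed
    # above): conclude bioequivalence if the 90% CI of the mean log(test/reference)
    # ratio lies within ln(0.80)..ln(1.25). Paired AUC values are hypothetical.
    import numpy as np
    from scipy import stats

    auc_test = np.array([95.0, 102.0, 88.0, 110.0, 97.0, 105.0, 93.0, 99.0])
    auc_ref  = np.array([100.0, 98.0, 92.0, 104.0, 101.0, 99.0, 96.0, 103.0])

    d = np.log(auc_test) - np.log(auc_ref)        # within-subject log ratios
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t90 = stats.t.ppf(0.95, df=n - 1)             # 90% CI <=> two 5% one-sided tests
    ci = (d.mean() - t90 * se, d.mean() + t90 * se)

    lower, upper = np.log(0.80), np.log(1.25)
    print("90% CI of log ratio:", ci)
    print("bioequivalent (TOST):", lower < ci[0] and ci[1] < upper)
    ```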

  8. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
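
    The brute-force enumeration used above as a benchmark can be sketched as a small grid search over ARMA orders scored by AIC; statsmodels' Kalman-filter-based ARIMA estimator stands in for the likelihood evaluation, and a simulated series replaces the paper's data.

    ```python
    # Brute-force ARMA order selection by AIC (the enumeration baseline above).
    import warnings
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    warnings.filterwarnings("ignore")           # ARIMA can warn about convergence

    # Simulate an ARMA(1,1) series: y_t = 0.6*y_{t-1} + e_t + 0.4*e_{t-1}
    rng = np.random.default_rng(2)
    e = rng.standard_normal(700)
    y = np.zeros(700)
    for t in range(1, 700):
        y[t] = 0.6 * y[t - 1] + e[t] + 0.4 * e[t - 1]
    y = y[200:]                                  # discard burn-in

    best = None
    for p in range(4):
        for q in range(4):
            aic = ARIMA(y, order=(p, 0, q)).fit().aic
            if best is None or aic < best[0]:
                best = (aic, p, q)

    print(f"best ARMA order by AIC: p={best[1]}, q={best[2]} (AIC={best[0]:.1f})")
    ```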

  9. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
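
    The model-averaging step can be illustrated with approximate posterior model weights, w_i proportional to exp(-BIC_i/2), applied to per-model BMD estimates; the study itself uses MCMC-based posterior weights, and the BMD and BIC values below are hypothetical placeholders for fitted logistic and quantal-linear models.

    ```python
    # Model averaging of benchmark-dose (BMD) estimates via approximate posterior
    # model weights from BIC, w_i ~ exp(-BIC_i / 2). Values are hypothetical.
    import numpy as np

    models = {
        "logistic":       {"bmd": 1.8, "bic": 152.3},
        "quantal-linear": {"bmd": 0.9, "bic": 150.1},
    }

    bics = np.array([m["bic"] for m in models.values()])
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()

    bmd_ma = sum(w * m["bmd"] for w, m in zip(weights, models.values()))
    for (name, m), w in zip(models.items(), weights):
        print(f"{name:15s} weight={w:.2f} BMD={m['bmd']}")
    print(f"model-averaged BMD ~ {bmd_ma:.2f}")
    ```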

  10. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model given a specific case. Therefore, multiple plausible AI models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchy framework. This method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on public health. Management of this anomaly requires estimation of fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in

  11. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.

  12. [Simulation model for estimating the cancer care infrastructure required by the public health system].

    PubMed

    Gomes Junior, Saint Clair Santos; Almeida, Rosimary Terezinha

    2009-02-01

    To develop a simulation model using public data to estimate the cancer care infrastructure required by the public health system in the state of São Paulo, Brazil. Public data from the Unified Health System database regarding cancer surgery, chemotherapy, and radiation therapy, from January 2002-January 2004, were used to estimate the number of cancer cases in the state. The percentages recorded for each therapy in the Hospital Cancer Registry of Brazil were combined with the data collected from the database to estimate the need for services. Mixture models were used to identify subgroups of cancer cases with regard to the length of time that chemotherapy and radiation therapy were required. A simulation model was used to estimate the infrastructure required taking these parameters into account. The model indicated the need for surgery in 52.5% of the cases, radiation therapy in 42.7%, and chemotherapy in 48.5%. The mixture models identified two subgroups for radiation therapy and four subgroups for chemotherapy with regard to mean usage time for each. These parameters allowed the following infrastructure needs to be estimated: 147 operating rooms, 2 653 operating beds, 297 chemotherapy chairs, and 102 radiation therapy devices. These estimates suggest the need for a 1.2-fold increase in the number of chemotherapy services and a 2.4-fold increase in the number of radiation therapy services when compared with the parameters currently used by the public health system. A simulation model, such as the one used in the present study, permits better distribution of health care resources because it is based on specific, local needs.

  13. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  14. Estimating average growth trajectories in shape-space using kernel smoothing.

    PubMed

    Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W

    2003-06-01

    In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects of between 0 and 50 years of age is used to build the model.
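
    Kernel smoothing across a cross-sectional sample, as used above to recover an average growth trajectory without longitudinal data, can be sketched with a one-dimensional Nadaraya-Watson estimator; the age-measurement data and the bandwidth below are synthetic stand-ins for the shape-space coordinates.

    ```python
    # Nadaraya-Watson kernel smoothing of a measurement as a function of age:
    # a 1-D stand-in for smoothing shape-space coordinates across a population.
    import numpy as np

    rng = np.random.default_rng(3)
    ages = rng.uniform(0, 50, 400)
    measure = 100 + 20 * np.log1p(ages) + rng.normal(0, 5, ages.size)  # synthetic

    def kernel_smooth(x0, x, y, bandwidth=3.0):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel weights
        return np.sum(w * y) / np.sum(w)

    grid = np.arange(0, 51, 5)
    trajectory = [kernel_smooth(a, ages, measure) for a in grid]
    for a, v in zip(grid, trajectory):
        print(f"age {a:2d}: smoothed value {v:6.1f}")
    ```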

  15. Robust Estimation Based on Walsh Averages for the General Linear Model.

    DTIC Science & Technology

    1983-11-01

    estimate of β minimizing Σρ(Zi) has an influence function proportional to ψ(y) and its asymptotic variance-covariance matrix is E(ψ²)/(E(ψ′))²...in particular, on the influence function h(y) and quantities appearing in the asymptotic variance. Some comments are made on the one- and two-...for signed rank estimates. The function ρ₂(t) of (1.4) has derivative ψ₂(t) = −1 if t < −c, 0 if |t| < c, +1 if t > c. Then the influence function is h(t

  16. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  17. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  18. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following requirements...

  19. [Estimating the impacts of future climate change on water requirement and water deficit of winter wheat in Henan Province, China].

    PubMed

    Ji, Xing-jie; Cheng, Lin; Fang, Wen-song

    2015-09-01

    Based on the analysis of water requirement and water deficit (WD) during the development stages of winter wheat in the recent 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated by using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with the climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic impact factors of ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentage of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average values from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were heterogeneous because of differences in geographical and climatic environments. A possible tendency toward water resource deficiency may exist in Henan Province in the future.
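
    The single crop coefficient approach named above amounts to ETc = Kc × ET0 per growth stage, with the water deficit WD = ETc − Pe; the sketch below applies it to illustrative stage values (not the study's ET0, Kc or effective precipitation data).

    ```python
    # Crop water requirement and deficit with the single crop coefficient approach:
    # ETc = Kc * ET0 and WD = max(ETc - Pe, 0). Stage values are illustrative:
    # ET0 (mm), Kc, effective precipitation Pe (mm).
    stages = {
        "initial":     (60.0,  0.40, 35.0),
        "development": (110.0, 0.80, 55.0),
        "mid-season":  (150.0, 1.15, 70.0),
        "late-season": (90.0,  0.35, 40.0),
    }

    etc_total = wd_total = 0.0
    for name, (et0, kc, pe) in stages.items():
        etc = kc * et0                  # stage crop water requirement ETc (mm)
        wd = max(etc - pe, 0.0)         # stage water deficit WD (mm)
        etc_total += etc
        wd_total += wd
        print(f"{name:12s} ETc={etc:6.1f} mm  WD={wd:6.1f} mm")
    print(f"{'season':12s} ETc={etc_total:6.1f} mm  WD={wd_total:6.1f} mm")
    ```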

  20. Average dispersal success: linking home range, dispersal, and metapopulation dynamics to reserve design.

    PubMed

    Fagan, William F; Lutscher, Frithjof

    2006-04-01

    Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success approximation provides a framework through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
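
    The core of the average dispersal success approximation can be illustrated numerically: for a one-dimensional reserve, it is the probability, averaged over natal locations inside the reserve, that a disperser settles within the reserve. The Gaussian dispersal kernel and reserve lengths below are assumptions for illustration only.

    ```python
    # Average dispersal success for a 1-D reserve of length L with a Gaussian
    # dispersal kernel: mean over natal locations x of P(settle in [0, L] | x).
    import numpy as np
    from scipy.stats import norm

    def average_dispersal_success(L, sigma, n=400):
        x = np.linspace(0.0, L, n)                       # natal locations
        p_stay = norm.cdf(L - x, scale=sigma) - norm.cdf(-x, scale=sigma)
        return p_stay.mean()

    for L in (1.0, 5.0, 20.0):
        print(f"reserve length {L:5.1f} (in dispersal-sigma units): "
              f"S ~ {average_dispersal_success(L, sigma=1.0):.2f}")
    ```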

  1. Monte Carlo generated conversion factors for the estimation of average glandular dose in contact and magnification mammography

    NASA Astrophysics Data System (ADS)

    Koutalonis, M.; Delis, H.; Spyrou, G.; Costaridou, L.; Tzanakos, G.; Panayiotakis, G.

    2006-11-01

    Magnification mammography is a special technique used in the cases where breast complaints are noted by a woman or when an abnormality is found in a screening mammogram. The carcinogenic risk in mammography is related to the dose deposited in the glandular tissue of the breast rather than the adipose, and average glandular dose (AGD) is the quantity taken into consideration during a mammographic examination. Direct measurement of the AGD is not feasible during clinical practice and thus, the incident air KERMA on the breast surface is used to estimate the glandular dose, with the help of proper conversion factors. Additional conversion factors adapted for magnification and tube voltage are calculated, using Monte Carlo simulation. The effect of magnification degree, tube voltage, various anode/filter material combinations and glandularity on AGD is also studied, considering partial breast irradiation. Results demonstrate that the estimation of AGD utilizing conversion factors depends on these parameters, while the omission of correction factors for magnification and tube voltage can lead to significant underestimation or overestimation of AGD. AGD was found to increase with filter material's k-absorption edge, anode material's k-emission edge, tube voltage and magnification. Decrease of the glandularity of the breast leads to higher AGD due to the increased penetrating ability of the photon beam in thick breasts with low glandularity.

  2. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images.

    PubMed

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S

    2016-01-01

    Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise (SNR), contrast-to-noise ratios (CNR), and the distance between the end of visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became not significant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects.

  3. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise (SNR), contrast-to-noise ratios (CNR), and the distance between the end of visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became not significant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180

  4. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, which is referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
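
    A much-simplified sketch of the two ingredients described above: average consensus over a graph via a doubly stochastic mixing matrix, and a power iteration whose global normalization is obtained from consensus averages rather than a fusion centre. The ring graph, Metropolis-type weights and data are synthetic, and the exchange of the iterate among nodes is glossed over.

    ```python
    # (i) average consensus on a ring graph; (ii) power iteration using consensus
    # averages in place of centralized sums to estimate the dominant eigenvalue
    # of a sample covariance matrix. All data are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    K = 6                                   # number of sensing nodes (ring graph)

    # Doubly stochastic mixing weights for a ring: self 1/2, each neighbor 1/4
    W = np.zeros((K, K))
    for i in range(K):
        W[i, i] = 0.5
        W[i, (i - 1) % K] = 0.25
        W[i, (i + 1) % K] = 0.25

    def consensus_average(local_values, iters=60):
        x = np.array(local_values, dtype=float)
        for _ in range(iters):
            x = W @ x                       # each node mixes with its neighbors
        return x                            # every entry approaches the global mean

    # Node i owns row/column i of the K x K sample covariance matrix R.
    samples = rng.multivariate_normal(np.zeros(K), np.diag([4.0] + [1.0] * (K - 1)), 500)
    R = samples.T @ samples / samples.shape[0]

    v = np.ones(K) / np.sqrt(K)
    for _ in range(50):
        z = R @ v                           # node i computes z_i from its row of R
        mean_sq = consensus_average(z ** 2)[0]          # ~ (1/K) * ||z||^2
        v = z / np.sqrt(K * mean_sq)        # normalization without a fusion centre

    lam = v @ R @ v
    print(f"dominant eigenvalue: consensus power method ~ {lam:.2f}, "
          f"numpy ~ {np.linalg.eigvalsh(R)[-1]:.2f}")
    ```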

  5. [Development of weight-estimation formulae for the bedridden elderly requiring care].

    PubMed

    Oonishi, Reiko; Fujii, Kouji; Tsuda, Hiroko; Imai, Katsumi

    2012-01-01

    Bedridden elderly persons requiring care need special body-weight measurement implements, and body weighing becomes more difficult if they live in their own homes. Therefore, we tried to design new weight-estimation formulae using various anthropometric variables. The subjects were 33 male and 132 female elderly inpatients certified to be at care level 4 or 5. The body composition measurements included height, body weight, arm circumference, triceps skinfold thickness, subscapular skinfold thickness, calf circumference, and waist circumference. We studied the correlation between body weight and each anthropometric variable and age. In men, the highest correlation with body weight was shown by waist circumference (r=0.891, p<0.0001), followed by age (r=0.779, p<0.0001) and calf circumference (r=0.614, p<0.0001). In women, the highest correlation with body weight was shown by waist circumference (r=0.806, p<0.0001), followed by triceps skinfold thickness (r=0.723, p<0.0001) and arm circumference (r=0.662, p<0.0001). The weight-estimation formulae were obtained by multiple regression analysis. Formula for men: body weight=0.660×waist circumference (cm)+0.702×calf circumference (cm)+0.096×age (years)-26.917 (R(2)=0.862, p<0.001); formula for women: body weight=0.315×waist circumference (cm)+0.684×arm circumference (cm)+0.183×height (cm)-28.788 (R(2)=0.836, p<0.001). We successfully developed gender-specific weight-estimation formulae with high coefficients of determination. The results suggest that waist circumference, which is an index of visceral fat, is an effective anthropometric variable for estimating the body weight of bedridden elderly patients requiring care.
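
    The two published formulae can be applied directly; the example measurements below are hypothetical.

    ```python
    # The gender-specific weight-estimation formulae reported above, implemented
    # directly. Inputs are anthropometric measurements in cm and age in years.
    def estimated_weight_male(waist_cm, calf_cm, age_years):
        return 0.660 * waist_cm + 0.702 * calf_cm + 0.096 * age_years - 26.917

    def estimated_weight_female(waist_cm, arm_cm, height_cm):
        return 0.315 * waist_cm + 0.684 * arm_cm + 0.183 * height_cm - 28.788

    # Hypothetical example measurements:
    print(f"male estimate:   {estimated_weight_male(80.0, 30.0, 85):.1f} kg")
    print(f"female estimate: {estimated_weight_female(75.0, 22.0, 148.0):.1f} kg")
    ```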

  6. Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar

    DOE PAGES

    Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...

    2016-10-18

    Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity on target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.

  7. EURRECA-Estimating vitamin D requirements for deriving dietary reference values.

    PubMed

    Cashman, Kevin D; Kiely, Mairead

    2013-01-01

    The time course of EURRECA, from 2008 to 2012, overlapped considerably with the timeframe of the process undertaken by the North American Institute of Medicine (IOM) to revise dietary reference intakes for vitamin D and calcium (published November 2010). Therefore the aims of the vitamin D-related activities in EURRECA were formulated to address knowledge requirements that would complement the activities undertaken by the IOM and provide additional resources for risk assessors and risk management agencies charged with the task of setting dietary reference values for vitamin D. A total of three systematic reviews were carried out. The first, which pre-dated the IOM review process, identified and evaluated existing and novel biomarkers of vitamin D status and confirmed that circulating 25-hydroxyvitamin D (25(OH)D) concentration is a robust and reliable marker of vitamin D status. The second systematic review conducted a meta-analysis of the dose-response of serum 25(OH)D to vitamin D intake from randomized controlled trials (RCT) among adults to explore the most appropriate model of the vitamin D intake-serum 25(OH)D relationship to estimate requirements. The third review also carried out a meta-analysis to evaluate evidence of efficacy from RCT using foods fortified with vitamin D, and found they increased circulating 25(OH)D concentrations in a dose-dependent manner but identified a need for stronger data on the efficacy of vitamin D-fortified food on deficiency prevention and potential health outcomes, including adverse effects. Finally, narrative reviews provided estimates of the prevalence of inadequate intakes of vitamin D in adults and children from international dietary surveys, as well as a compilation of research requirements for vitamin D to inform current and future assessments of vitamin D requirements. [Supplementary materials are available for this article. Go to the publisher's online edition of Critical Reviews in Food Science and Nutrition for

  8. Protein requirements in healthy adults: a meta-analysis of nitrogen balance studies.

    PubMed

    Li, Min; Sun, Feng; Piao, Jian Hua; Yang, Xiao Guang

    2014-08-01

    The goal of this study was to analyze protein requirements in healthy adults through a meta-analysis of nitrogen balance studies. A comprehensive search for nitrogen balance studies of healthy adults published up to October 2012 was performed; each study was reviewed, and data were abstracted. The studies were first evaluated for heterogeneity. The average protein requirements were analyzed using the individual data of each included study. Study-site climate, age, sex, and dietary protein source were compared. Data for 348 subjects were gathered from 28 nitrogen balance studies. The natural logarithm of the requirement for the 348 individuals had a normal distribution with a mean of 4.66. The estimated average requirement was the exponentiation of the mean of the log requirement, 105.64 mg N/kg•d. No significant differences by adult age or source of dietary protein were observed, but there were significant differences by sex and the climate of the study site (P<0.05). The estimated average requirement and recommended nutrient intake of the healthy adult population were 105.64 mg N/kg•d (0.66 g high quality protein/kg•d) and 132.05 mg N/kg•d (0.83 g high quality protein/kg•d), respectively. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
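
    The reported figures follow from simple arithmetic on the lognormal fit: the EAR is exp(4.66) mg N/kg·d, nitrogen is converted to protein with the conventional 6.25 factor, and the reported RNI corresponds to the EAR scaled by roughly 1.25 (a factor inferred here from the two published values, not stated in the abstract).

    ```python
    # Worked arithmetic behind the reported EAR and RNI values.
    import math

    mean_log_requirement = 4.66                 # ln(mg N / kg / d)
    ear_mg_n = math.exp(mean_log_requirement)   # ~105.6 mg N/kg/d
    ear_protein_g = ear_mg_n * 6.25 / 1000.0    # ~0.66 g protein/kg/d

    rni_mg_n = ear_mg_n * 1.25                  # ~132 mg N/kg/d (inferred scaling)
    rni_protein_g = rni_mg_n * 6.25 / 1000.0    # ~0.83 g protein/kg/d

    print(f"EAR ~ {ear_mg_n:.1f} mg N/kg/d = {ear_protein_g:.2f} g protein/kg/d")
    print(f"RNI ~ {rni_mg_n:.1f} mg N/kg/d = {rni_protein_g:.2f} g protein/kg/d")
    ```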

  9. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...

  10. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...

  11. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...

  12. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...

  13. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...

  14. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.

  15. Protein and Amino Acid Requirements during Pregnancy.

    PubMed

    Elango, Rajavel; Ball, Ronald O

    2016-07-01

    Protein forms an essential component of a healthy diet in humans to support both growth and maintenance. During pregnancy, an exceptional stage of life defined by rapid growth and development, adequate dietary protein is crucial to ensure a healthy outcome. Protein deposition in maternal and fetal tissues increases throughout pregnancy, with most occurring during the third trimester. Dietary protein intake recommendations are based on factorial estimates because the traditional method of determining protein requirements, nitrogen balance, is invasive and undesirable during pregnancy. The current Estimated Average Requirement and RDA recommendations of 0.88 and 1.1 g · kg(-1) · d(-1), respectively, are for all stages of pregnancy. The single recommendation does not take into account the changing needs during different stages of pregnancy. Recently, with the use of the minimally invasive indicator amino acid oxidation method, we defined the requirements to be, on average, 1.2 and 1.52 g · kg(-1) · d(-1) during early (∼16 wk) and late (∼36 wk) stages of pregnancy, respectively. Although the requirements are substantially higher than current recommendations, our values are ∼14-18% of total energy and fit within the Acceptable Macronutrient Distribution Range. Using swine as an animal model we showed that the requirements for several indispensable amino acids increase dramatically during late gestation compared with early gestation. Additional studies should be conducted during pregnancy to confirm the newly determined protein requirements and to determine the indispensable amino acid requirements during pregnancy in humans. © 2016 American Society for Nutrition.

  16. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548... moisture content determination. (a) Determining average moisture content of the lot is not a requirement of... connection with grade analysis or as a separate determination. (b) Nuts shall be obtained from a randomly...

  17. Warfarin Dosing Algorithms Underpredict Dose Requirements in Patients Requiring ≥7 mg Daily: A Systematic Review and Meta-analysis.

    PubMed

    Saffian, S M; Duffull, S B; Wright, Dfb

    2017-08-01

    There is preliminary evidence to suggest that some published warfarin dosing algorithms produce biased maintenance dose predictions in patients who require higher than average doses. We conducted a meta-analysis of warfarin dosing algorithms to determine if there exists a systematic under- or overprediction of dose requirements for patients requiring ≥7 mg/day across published algorithms. Medline and Embase databases were searched up to September 2015. We quantified the proportion of over- and underpredicted doses in patients whose observed maintenance dose was ≥7 mg/day. The meta-analysis included 47 evaluations of 22 different warfarin dosing algorithms from 16 studies. The meta-analysis included data from 1,492 patients who required warfarin doses of ≥7 mg/day. All 22 algorithms were found to underpredict warfarin dosing requirements in patients who required ≥7 mg/day by an average of 2.3 mg/day, with a pooled estimate of underpredicted doses of 92.3% (95% confidence interval 90.3-94.1, I² = 24%). © 2017 American Society for Clinical Pharmacology and Therapeutics.

  18. Dietary Protein Intake in Young Children in Selected Low-Income Countries Is Generally Adequate in Relation to Estimated Requirements for Healthy Children, Except When Complementary Food Intake Is Low

    PubMed Central

    Arsenault, Joanne E; Brown, Kenneth H

    2017-01-01

    Background: Previous research indicates that young children in low-income countries (LICs) generally consume greater amounts of protein than published estimates of protein requirements, but this research did not account for protein quality based on the mix of amino acids and the digestibility of ingested protein. Objective: Our objective was to estimate the prevalence of inadequate protein and amino acid intake by young children in LICs, accounting for protein quality. Methods: Seven data sets with information on dietary intake for children (6–35 mo of age) from 6 LICs (Peru, Guatemala, Ecuador, Bangladesh, Uganda, and Zambia) were reanalyzed to estimate protein and amino acid intake and assess adequacy. The protein digestibility–corrected amino acid score of each child’s diet was calculated and multiplied by the original (crude) protein intake to obtain an estimate of available protein intake. Distributions of usual intake were obtained to estimate the prevalence of inadequate protein and amino acid intake for each cohort according to Estimated Average Requirements. Results: The prevalence of inadequate protein intake was highest in breastfeeding children aged 6–8 mo: 24% of Bangladeshi and 16% of Peruvian children. With the exception of Bangladesh, the prevalence of inadequate available protein intake decreased by age 9–12 mo and was very low in all sites (0–2%) after 12 mo of age. Inadequate protein intake in children <12 mo of age was due primarily to low energy intake from complementary foods, not inadequate protein density. Conclusions: Overall, most children consumed protein amounts greater than requirements, except for the younger breastfeeding children, who were consuming low amounts of complementary foods. These findings reinforce previous evidence that dietary protein is not generally limiting for children in LICs compared with estimated requirements for healthy children, even after accounting for protein quality. However, unmeasured effects
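
    The quality adjustment described above is a per-child multiplication of crude protein intake by the diet's protein digestibility-corrected amino acid score (PDCAAS), with adequacy then judged against an age-specific Estimated Average Requirement; the intakes, scores and EAR in the sketch are illustrative only.

    ```python
    # Available protein = crude protein intake * PDCAAS; prevalence of inadequacy
    # is the share of children whose available protein falls below the EAR.
    # All numbers are hypothetical placeholders.
    import numpy as np

    crude_protein_g_per_kg = np.array([1.4, 0.9, 1.1, 0.7, 1.6])  # per child
    pdcaas = np.array([0.80, 0.75, 0.85, 0.70, 0.90])             # diet quality
    ear_g_per_kg = 0.87                                           # hypothetical EAR

    available = crude_protein_g_per_kg * pdcaas
    prevalence_inadequate = np.mean(available < ear_g_per_kg) * 100
    print("available protein (g/kg/d):", np.round(available, 2))
    print(f"prevalence of inadequate available protein ~ {prevalence_inadequate:.0f}%")
    ```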

  19. Deterministic estimate of hypocentral pore fluid pressure of the M5.8 Pawnee, Oklahoma earthquake: Lower pre-injection pressure requires lower resultant pressure for slip

    NASA Astrophysics Data System (ADS)

    Levandowski, W. B.; Walsh, F. R. R.; Yeck, W.

    2016-12-01

    Quantifying the increase in pore-fluid pressure necessary to cause slip on specific fault planes can provide actionable information for stakeholders to potentially mitigate hazard. Although the M5.8 Pawnee earthquake occurred on a previously unmapped fault, we can retrospectively estimate the pore-pressure perturbation responsible for this event. We first estimate the normalized local stress tensor by inverting focal mechanisms surrounding the Pawnee Fault. Faults are generally well oriented for slip, with instabilities averaging 96% of maximum. Next, with an estimate of the weight of local overburden we solve for the pore pressure needed at the hypocenters. Specific to the Pawnee fault, we find that hypocentral pressure 43-104% of hydrostatic (accounting for uncertainties in all relevant parameters) would have been sufficient to cause slip. The dominant source of uncertainty is the pressure on the fault prior to fluid injection. Importantly, we find that lower pre-injection pressure requires lower resultant pressure to cause slip, decreasing from a regional average of 30% above hydrostatic pressure if the hypocenters begin at hydrostatic pressure to 6% above hydrostatic pressure with no pre-injection fluid. This finding suggests that underpressured regions such as northern Oklahoma are predisposed to injection-induced earthquakes. Although retrospective and forensic, similar analyses of other potentially induced events and comparisons to natural earthquakes will provide insight into the relative importance of fault orientation, the magnitude of the local stress field, and fluid-pressure migration in intraplate seismicity.
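
    The underlying Mohr-Coulomb reasoning can be sketched directly: a cohesionless fault slips once the pore pressure reaches P = σn − τ/μ. The stress values, friction coefficient and hydrostatic reference below are hypothetical, not the Pawnee-fault results.

    ```python
    # Pore pressure required for slip on a cohesionless fault:
    # slip when tau >= mu * (sigma_n - P)  =>  P_required = sigma_n - tau / mu.
    def pore_pressure_for_slip(sigma_n_MPa, tau_MPa, mu=0.6):
        return sigma_n_MPa - tau_MPa / mu

    sigma_n = 70.0        # fault-normal stress at hypocentral depth (MPa)
    tau = 30.0            # resolved shear stress on the fault plane (MPa)
    hydrostatic = 50.0    # hydrostatic pore pressure at that depth (MPa)

    p_req = pore_pressure_for_slip(sigma_n, tau)
    print(f"pore pressure required for slip ~ {p_req:.0f} MPa "
          f"({p_req / hydrostatic * 100:.0f}% of hydrostatic)")
    ```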

  20. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. On Estimation of the Survivor Average Causal Effect in Observational Studies when Important Confounders are Missing Due to Death

    PubMed Central

    Egleston, Brian L.; Scharfstein, Daniel O.; MacKenzie, Ellen

    2008-01-01

    We focus on estimation of the causal effect of treatment on the functional status of individuals at a fixed point in time t* after they have experienced a catastrophic event, from observational data with the following features: (1) treatment is imposed shortly after the event and is non-randomized, (2) individuals who survive to t* are scheduled to be interviewed, (3) there is interview non-response, (4) individuals who die prior to t* are missing information on pre-event confounders, (5) medical records are abstracted on all individuals to obtain information on post-event, pre-treatment confounding factors. To address the issue of survivor bias, we seek to estimate the survivor average causal effect (SACE), the effect of treatment on functional status among the cohort of individuals who would survive to t* regardless of whether or not assigned to treatment. To estimate this effect from observational data, we need to impose untestable assumptions, which depend on the collection of all confounding factors. Since pre-event information is missing on those who die prior to t*, it is unlikely that these data are missing at random (MAR). We introduce a sensitivity analysis methodology to evaluate the robustness of SACE inferences to deviations from the MAR assumption. We apply our methodology to the evaluation of the effect of trauma center care on vitality outcomes using data from the National Study on Costs and Outcomes of Trauma Care. PMID:18759833

  2. Estimation of the average exchanges in momentum and latent heat between the atmosphere and the oceans with Seasat observations

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1983-01-01

    Ocean-surface momentum flux and latent heat flux are determined from Seasat-A data from 1978 and compared with ship observations. Momentum flux was measured using the Seasat-A scatterometer system (SASS); heat flux, with the scanning multichannel microwave radiometer (SMMR). Ship measurements were quality selected and averaged to increase their reliability. The fluxes were computed using a bulk parameterization technique. It is found that although SASS effectively measures momentum flux, variations in atmospheric stability and sea-surface temperature cause deviations which are not accounted for by the present data-processing algorithm. The SMMR latent-heat-flux algorithm, while needing refinement, is shown to give estimates to within 35 W/sq m in its present form, which removes systematic error and uses an empirically determined transfer coefficient.
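
    The bulk parameterization referred to above expresses both fluxes in terms of near-surface wind and humidity: τ = ρ C_D U² and LE = ρ L_v C_E U (q_s − q_a). The transfer coefficients and humidity values in the sketch are typical textbook numbers, not the Seasat algorithm's.

    ```python
    # Bulk parameterization of momentum flux and latent heat flux.
    rho_air = 1.2          # air density, kg/m^3
    L_v = 2.5e6            # latent heat of vaporization, J/kg
    C_D, C_E = 1.3e-3, 1.2e-3   # bulk transfer coefficients (typical values)

    U = 8.0                       # near-surface wind speed, m/s
    q_sea, q_air = 0.020, 0.015   # saturation and air specific humidity, kg/kg

    tau = rho_air * C_D * U ** 2                                  # N/m^2
    latent_heat_flux = rho_air * L_v * C_E * U * (q_sea - q_air)  # W/m^2
    print(f"momentum flux ~ {tau:.3f} N/m^2, latent heat flux ~ {latent_heat_flux:.0f} W/m^2")
    ```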

  3. Water requirements of honeylocust (Gleditsia triacanthos f. inermis) in the urban forest

    Treesearch

    Howard G. Halverson; Donald F. Potts

    1981-01-01

    Water use by an urban tree was measured lysimetrically while water use by the same tree at a non-urban site was estimated by a model. Comparison of the measured and estimated water use showed that the urban honeylocust (Gleditsia triacanthos f. inermis) required an average of 155 percent of the water needed by the same tree...

  4. Cost of close the gap for vision of Indigenous Australians: On estimating the extra resources required.

    PubMed

    Hsueh, Ya-seng Arthur; Brando, Alex; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh

    2013-12-01

    To estimate the costs of the extra resources required to close the gap of vision between Indigenous and non-Indigenous Australians. Constructing comprehensive eye care pathways for Indigenous Australians with their related probabilities, to capture full eye care usage compared with current usage rate for cataract surgery, refractive error and diabetic retinopathy using the best available data. Urban and remote regions of Australia. The provision of eye care for cataract surgery, refractive error and diabetic retinopathy. Estimated cost needed for full access, estimated current spending and estimated extra cost required to close the gaps of cataract surgery, refractive error and diabetic retinopathy for Indigenous Australians. Total cost needed for full coverage of all three major eye conditions is $45.5 million per year in 2011 Australian dollars. Current annual spending is $17.4 million. Additional yearly cost required to close the gap of vision is $28 million. This includes extra-capped funds of $3 million from the Commonwealth Government and $2 million from the State and Territory Governments. Additional coordination costs per year are $13.3 million. Although available data are limited, this study has produced the first estimates that are indicative of what is needed for planning and providing equity in eye care. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.

  5. Mean size estimation yields left-side bias: Role of attention on perceptual averaging.

    PubMed

    Li, Kuei-An; Yeh, Su-Ling

    2017-11-01

    The human visual system can estimate mean size of a set of items effectively; however, little is known about whether information on each visual field contributes equally to the mean size estimation. In this study, we examined whether a left-side bias (LSB)-perceptual judgment tends to depend more heavily on left visual field's inputs-affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: A larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of LSB increased with stimulus-onset asynchrony (SOA), when spots on the left side were presented earlier than the right side. In contrast, the LSB vanished and then induced a reversed effect with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that LSB does have a significant influence on mean size estimation of a group of items, which is induced by a leftward attentional bias that enhances the prior entry effect on the left side.

  6. A comparison of average wages with age-specific wages for assessing indirect productivity losses: analytic simplicity versus analytic precision.

    PubMed

    Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J

    2017-07-01

    Numerous approaches are used to estimate indirect productivity losses using various wage estimates applied to poor health in working aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wages estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6 % were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children where average wage overestimated wages by 15 % and for 40-year-olds where it underestimated wages by 14 %. Large differences in projecting productivity losses exist when using the average wage applied over a lifetime. Specifically, use of average wages overestimates productivity losses between 8 and 15 % for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14 %. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
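
    The contrast between the two wage bases can be sketched by interpolating 5-year age-banded wages to annual values (here with a simple polynomial fit) and projecting discounted losses from an onset age under the human capital approach; the banded wages, discount rate and retirement age are illustrative, not the ONS figures used in the study.

    ```python
    # Age-specific vs average-wage projections of discounted productivity losses.
    import numpy as np

    band_midpoints = np.array([18, 23, 28, 33, 38, 43, 48, 53, 58, 63])
    band_wages = np.array([15, 22, 28, 32, 34, 35, 34, 32, 29, 25], dtype=float) * 1000

    coeffs = np.polyfit(band_midpoints, band_wages, deg=3)   # polynomial interpolation
    age_specific = lambda age: np.polyval(coeffs, age)
    average_wage = band_wages.mean()

    def lifetime_loss(onset_age, wage_fn, retirement=67, rate=0.03):
        ages = np.arange(onset_age, retirement)
        years_out = ages - onset_age
        return float(np.sum(wage_fn(ages) / (1 + rate) ** years_out))

    for onset in (25, 45):
        loss_age = lifetime_loss(onset, age_specific)
        loss_avg = lifetime_loss(onset, lambda a: np.full_like(a, average_wage, dtype=float))
        print(f"onset {onset}: age-specific {loss_age:,.0f} vs average-wage {loss_avg:,.0f}")
    ```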

  7. 40 CFR 98.315 - Procedures for estimating missing data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... measured parameters used in the GHG emissions calculations is required (e.g., carbon content values, etc... such estimates. (a) For each missing value of the monthly carbon content of calcined petroleum coke the substitute data value shall be the arithmetic average of the quality-assured values of carbon contents for...

  8. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data

  9. Estimating average alcohol consumption in the population using multiple sources: the case of Spain.

    PubMed

    Sordo, Luis; Barrio, Gregorio; Bravo, María J; Villalbí, Joan R; Espelt, Albert; Neira, Montserrat; Regidor, Enrique

    2016-01-01

    National estimates on per capita alcohol consumption are provided regularly by various sources and may have validity problems, so corrections are needed for monitoring and assessment purposes. Our objectives were to compare different alcohol availability estimates for Spain, to build the best estimate (actual consumption), characterize its time trend during 2001-2011, and quantify the extent to which other estimates (coverage) approximated actual consumption. Estimates were: alcohol availability from the Spanish Tax Agency (Tax Agency availability), World Health Organization (WHO availability) and other international agencies, self-reported purchases from the Spanish Food Consumption Panel, and self-reported consumption from population surveys. Analyses included calculating: between-agency discrepancy in availability, multisource availability (correcting Tax Agency availability by underestimation of wine and cider), actual consumption (adjusting multisource availability by unrecorded alcohol consumption/purchases and alcohol losses), and coverage of selected estimates. Sensitivity analyses were undertaken. Time trends were characterized by joinpoint regression. Between-agency discrepancy in alcohol availability remained high in 2011, mainly because of wine and spirits, although some decrease was observed during the study period. The actual consumption was 9.5 l of pure alcohol/person-year in 2011, decreasing 2.3 % annually, mainly due to wine and spirits. 2011 coverage of WHO availability, Tax Agency availability, self-reported purchases, and self-reported consumption was 99.5, 99.5, 66.3, and 28.0 %, respectively, generally with downward trends (last three estimates, especially self-reported consumption). The multisource availability overestimated actual consumption by 12.3 %, mainly due to tourism imbalance. Spanish estimates of per capita alcohol consumption show considerable weaknesses. Using uncorrected estimates, especially self-reported consumption, for

  10. Ground-water pumpage and artificial recharge estimates for calendar year 2000 and average annual natural recharge and interbasin flow by hydrographic area, Nevada

    USGS Publications Warehouse

    Lopes, Thomas J.; Evetts, David M.

    2004-01-01

    Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth

  11. The health economic burden that acute and chronic wounds impose on an average clinical commissioning group/health board in the UK.

    PubMed

    Guest, J F; Vowden, K; Vowden, P

    2017-06-02

    To estimate the patterns of care and related resource use attributable to managing acute and chronic wounds among a catchment population of a typical clinical commissioning group (CCG)/health board and corresponding National Health Service (NHS) costs in the UK. This was a sub-analysis of a retrospective cohort analysis of the records of 2000 patients in The Health Improvement Network (THIN) database. Patients' characteristics, wound-related health outcomes and health-care resource use were quantified for an average CCG/health board with a catchment population of 250,000 adults ≥18 years of age, and the corresponding NHS cost of patient management was estimated at 2013/2014 prices. An average CCG/health board was estimated to be managing 11,200 wounds in 2012/2013. Of these, 40% were considered to be acute wounds, 48% chronic and 12% lacking any specific diagnosis. The prevalence of acute, chronic and unspecified wounds was estimated to be growing at the rate of 9%, 12% and 13% per annum, respectively. Our analysis indicated that the current rate of wound healing must increase by an average of at least 1% per annum across all wound types in order to slow down the increasing prevalence. Otherwise, an average CCG/health board is predicted to manage ~23,200 wounds per annum by 2019/2020 and to spend a discounted £50 million (discounting expresses future payments in present-value terms) on managing these wounds and associated comorbidities. Real-world evidence highlights the substantial burden that acute and chronic wounds impose on an average CCG/health board. Strategies are required to improve the accuracy of diagnosis and healing rates.
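
    The ~23,200 figure quoted above can be roughly reproduced by compounding the stated per-category growth rates over the seven years from 2012/2013 to 2019/2020. The sketch below is a back-of-the-envelope check of that arithmetic, not the authors' model.

    ```python
    # Compound growth of wound prevalence from 2012/2013 to 2019/2020 (7 years)
    # at the stated per-category annual rates.
    acute, chronic, unspecified = 0.40 * 11_200, 0.48 * 11_200, 0.12 * 11_200
    rates = (0.09, 0.12, 0.13)

    projected = sum(n * (1 + r) ** 7 for n, r in zip((acute, chronic, unspecified), rates))
    print(round(projected))  # roughly 23,000, consistent with the quoted ~23,200 per annum
    ```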

  12. Protein and Amino Acid Requirements during Pregnancy

    PubMed Central

    Elango, Rajavel; Ball, Ronald O

    2016-01-01

    Protein forms an essential component of a healthy diet in humans to support both growth and maintenance. During pregnancy, an exceptional stage of life defined by rapid growth and development, adequate dietary protein is crucial to ensure a healthy outcome. Protein deposition in maternal and fetal tissues increases throughout pregnancy, with most occurring during the third trimester. Dietary protein intake recommendations are based on factorial estimates because the traditional method of determining protein requirements, nitrogen balance, is invasive and undesirable during pregnancy. The current Estimated Average Requirement and RDA recommendations of 0.88 and 1.1 g · kg−1 · d−1, respectively, are for all stages of pregnancy. The single recommendation does not take into account the changing needs during different stages of pregnancy. Recently, with the use of the minimally invasive indicator amino acid oxidation method, we defined the requirements to be, on average, 1.2 and 1.52 g · kg−1 · d−1 during early (∼16 wk) and late (∼36 wk) stages of pregnancy, respectively. Although the requirements are substantially higher than current recommendations, our values are ∼14–18% of total energy and fit within the Acceptable Macronutrient Distribution Range. Using swine as an animal model we showed that the requirements for several indispensable amino acids increase dramatically during late gestation compared with early gestation. Additional studies should be conducted during pregnancy to confirm the newly determined protein requirements and to determine the indispensable amino acid requirements during pregnancy in humans. PMID:27422521

  13. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, and it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, as a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
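
    For readers who want to experiment with the backward variant, the following is a minimal one-dimensional MFDMA sketch written from the steps summarized above (profile, backward moving average, segment-wise fluctuations, q-th order averaging). It is not the authors' implementation and omits the two-dimensional case.

    ```python
    import numpy as np

    def mfdma_backward(x, scales, qs):
        """One-dimensional multifractal detrending moving average (backward window, theta = 0).
        Returns h(q) from the slopes of log F_q(n) vs log n, and tau(q) = q*h(q) - 1.
        A minimal sketch of the algorithm outlined in the abstract, not the authors' code."""
        y = np.cumsum(x - np.mean(x))                 # profile of the mean-removed series
        logF = np.zeros((len(qs), len(scales)))
        for j, n in enumerate(scales):
            # backward moving average: mean of the current and previous n-1 profile points
            trend = np.convolve(y, np.ones(n) / n, mode="valid")   # length N - n + 1
            resid = y[n - 1:] - trend                              # detrended profile
            segs = resid[: (len(resid) // n) * n].reshape(-1, n)
            F2 = np.mean(segs ** 2, axis=1)                        # segment-wise fluctuation
            for i, q in enumerate(qs):
                if q == 0:
                    logF[i, j] = 0.5 * np.mean(np.log(F2))
                else:
                    logF[i, j] = np.log(np.mean(F2 ** (q / 2))) / q
        logn = np.log(scales)
        h = np.array([np.polyfit(logn, logF[i], 1)[0] for i in range(len(qs))])
        tau = np.asarray(qs) * h - 1
        return h, tau

    # Example: for Gaussian white noise h(q) should stay close to 0.5 for all q,
    # whereas a genuine multifractal measure would show a nonlinear tau(q).
    x = np.random.default_rng(0).standard_normal(2 ** 14)
    h, tau = mfdma_backward(x, scales=np.arange(16, 1025, 16), qs=np.arange(-4, 5))
    ```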

  14. The average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available analytical data from twelve locations on the moon are used to estimate the average amounts of the principal chemical elements (O, Na, Mg, Al, Si, Ca, Ti, and Fe) in the mare, the terra, and the average lunar surface regolith. These chemical elements comprise about 99% of the atoms on the lunar surface. The relatively small variability in the amounts of these elements at different mare (or terra) sites, and the evidence from the orbital measurements of Apollo 15 and 16, suggest that the lunar surface is much more homogeneous than the surface of the earth. The average chemical composition of the lunar surface may now be known as well as, if not better than, that of the solid part of the earth's surface.

  15. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.

  16. Estimating Recharge From Soil Water Tension Data

    NASA Astrophysics Data System (ADS)

    Sisson, J. B.; Gee, G. W.

    2001-12-01

    Effectively managing an aquifer requires accurate estimates of the ambient flux as well as the travel time of annual pulses to pass through the vadose zone. When soil water potential and/or water content data are available together with unsaturated hydraulic properties the ambient flux can be estimated using Darcy's Law. A field site, the Buried Waste Test Facility, located at Hanford WA was instrumented with advanced tensiometers to a depth of 20 ft bls and data obtained over a 2 year period. The unsaturated hydraulic properties were available at the closed bottom lysimeter from previous studies. The ambient flux was estimated from the rate of pumpage from the lysimeter to be 55 mm/y. Data from the tensiometers indicated a unit gradient in total water potential at depths greater than 4 m. Thus, the ambient flux was numerically equal to the unsaturated hydraulic conductivity. The data also clearly show the passage of wetting fronts beyond 2.3 m and with some imagination to depths beyond 4.3 m. Using the tensiometer data together with previously estimated hydraulic properties resulted in estimates of ambient flux that ranged from about 10 to 120 mm/y. These estimates were found to depend on the length of the period, for which soil water potentials were averaged, and on how the hydraulic conductivity was averaged.
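
    The unit-gradient argument above is simple enough to show in code: under dH/dz = 1, Darcy's law reduces to |q| = K(h), so the ambient flux follows directly from any conductivity model evaluated at the measured tension. The sketch below uses the van Genuchten-Mualem form with entirely hypothetical parameters rather than the site-specific lysimeter properties.

    ```python
    import numpy as np

    def k_unsat_vg_mualem(h_cm, ks_cm_per_yr, alpha, n):
        """Unsaturated hydraulic conductivity from the van Genuchten-Mualem model
        (assumed parameterization; the study used site-specific hydraulic properties)."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * abs(h_cm)) ** n) ** (-m)          # effective saturation
        return ks_cm_per_yr * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    # Under a unit total-potential gradient (dH/dz = 1), Darcy's law q = -K(h) * dH/dz
    # reduces to |q| = K(h): the ambient flux equals the conductivity at the measured tension.
    flux_mm_per_yr = 10.0 * k_unsat_vg_mualem(h_cm=-150.0, ks_cm_per_yr=300.0,
                                              alpha=0.02, n=1.8)
    ```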

  17. Estimation of evaporation from open water - A review of selected studies, summary of U.S. Army Corps of Engineers data collection and methods, and evaluation of two methods for estimation of evaporation from five reservoirs in Texas

    USGS Publications Warehouse

    Harwell, Glenn R.

    2012-01-01

    Organizations responsible for the management of water resources, such as the U.S. Army Corps of Engineers (USACE), are tasked with estimation of evaporation for water-budgeting and planning purposes. The USACE has historically used Class A pan evaporation data (pan data) to estimate evaporation from reservoirs, but many USACE Districts have been experimenting with other techniques as an alternative to collecting pan data. The energy-budget method generally is considered the preferred method for accurate estimation of open-water evaporation from lakes and reservoirs. Complex equations to estimate evaporation, such as the Penman, DeBruin-Keijman, and Priestley-Taylor, perform well when compared with energy-budget method estimates when all of the important energy terms are included in the equations and ideal data are collected. However, sometimes nonideal data are collected and energy terms, such as the change in the amount of stored energy and advected energy, are not included in the equations. When this is done, the corresponding errors in evaporation estimates are not quantifiable. Much simpler methods, such as the Hamon method and a method developed by the U.S. Weather Bureau (USWB) (renamed the National Weather Service in 1970), have been shown to provide reasonable estimates of evaporation when compared to energy-budget method estimates. Data requirements for the Hamon and USWB methods are minimal, and these methods sometimes perform well with remotely collected data. The Hamon method requires average daily air temperature, and the USWB method requires daily averages of air temperature, relative humidity, wind speed, and solar radiation. Estimates of annual lake evaporation from pan data are frequently within 20 percent of energy-budget method estimates. Results of evaporation estimates from the Hamon method and the USWB method were compared against historical pan data at five selected reservoirs in Texas (Benbrook Lake, Canyon Lake, Granger Lake, Hords Creek Lake, and Sam
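
    As an illustration of how little input the simpler methods need, one commonly cited form of the Hamon equation (an assumption here; the report may apply a differently calibrated coefficient) uses only mean daily air temperature and daylength.

    ```python
    import math

    def hamon_pet_mm_per_day(mean_air_temp_c, daylength_hours):
        """One commonly cited form of the Hamon method: potential evaporation from
        mean daily air temperature and daylength only (coefficient is an assumption)."""
        # saturation vapor pressure (kPa) at the mean daily air temperature
        e_s = 0.611 * math.exp(17.27 * mean_air_temp_c / (mean_air_temp_c + 237.3))
        return 29.8 * daylength_hours * e_s / (mean_air_temp_c + 273.2)

    # Example: a warm summer day with ~14 h of daylight and a 30 degree C mean temperature
    # gives an estimate on the order of a few millimetres per day.
    print(round(hamon_pet_mm_per_day(30.0, 14.0), 1))
    ```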

  18. Greenhouse gas emissions and the Australian diet--comparing dietary recommendations with average intakes.

    PubMed

    Hendrie, Gilly A; Ridoutt, Brad G; Wiedmann, Thomas O; Noakes, Manny

    2014-01-08

    Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient poor "non-core" foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe.

  19. Greenhouse Gas Emissions and the Australian Diet—Comparing Dietary Recommendations with Average Intakes

    PubMed Central

    Hendrie, Gilly A.; Ridoutt, Brad G.; Wiedmann, Thomas O.; Noakes, Manny

    2014-01-01

    Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient poor “non-core” foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe. PMID:24406846

  20. Genomic instability related to zinc deficiency and excess in an in vitro model: is the upper estimate of the physiological requirements recommended for children safe?

    PubMed

    Padula, Gisel; Ponzinibbio, María Virginia; Gambaro, Rocío Celeste; Seoane, Analía Isabel

    2017-08-01

    Micronutrients are important for the prevention of degenerative diseases due to their role in maintaining genomic stability. Therefore, there is international concern about the need to redefine the optimal mineral and vitamin requirements to prevent DNA damage. We analyzed the cytostatic, cytotoxic, and genotoxic effect of in vitro zinc supplementation to determine the effects of zinc deficiency and excess and whether the upper estimate of the physiological requirement recommended for children is safe. To achieve zinc deficiency, DMEM/Ham's F12 medium (HF12) was chelated (HF12Q). Lymphocytes were isolated from healthy female donors (age range, 5-10 yr) and cultured for 7 d as follows: negative control (HF12, 60 μg/dl ZnSO4); deficient (HF12Q, 12 μg/dl ZnSO4); lower level (HF12Q + 80 μg/dl ZnSO4); average level (HF12Q + 180 μg/dl ZnSO4); upper limit (HF12Q + 280 μg/dl ZnSO4); and excess (HF12Q + 380 μg/dl ZnSO4). The comet (quantitative analysis) and cytokinesis-block micronucleus cytome assays were used. Differences were evaluated with Kruskal-Wallis and ANOVA (p < 0.05). Olive tail moment, tail length, micronuclei frequency, and apoptotic and necrotic percentages were significantly higher in the deficient, upper limit, and excess cultures compared with the negative control, lower, and average limit ones. In vitro zinc supplementation at the lower and average limit (80 and 180 μg/dl ZnSO4) of the physiological requirement recommended for children proved to be the most beneficial in avoiding genomic instability, whereas the deficient, upper limit, and excess (12, 280, and 380 μg/dl) cultures increased DNA and chromosomal damage and apoptotic and necrotic frequencies.

  1. Time On Station Requirements: Costs, Policy Change, and Perceptions

    DTIC Science & Technology

    2016-12-01

  2. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy on electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
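
    The trimming idea is straightforward to sketch. The toy version below uses the Frobenius distance to the ordinary mean as the trimming criterion, whereas the letter defines trimmed averages under several metrics, including Riemannian ones.

    ```python
    import numpy as np

    def trimmed_average_scm(trials, trim_ratio=0.1):
        """Trimmed average of sample covariance matrices (SCMs): discard the SCMs farthest
        from the overall average and re-average the rest. Simplified sketch using the
        Frobenius (Euclidean) distance only."""
        scms = [np.cov(trial) for trial in trials]            # trial: channels x samples
        mean_scm = np.mean(scms, axis=0)
        dists = [np.linalg.norm(c - mean_scm, ord="fro") for c in scms]
        keep = np.argsort(dists)[: int(len(scms) * (1 - trim_ratio))]
        return np.mean([scms[i] for i in keep], axis=0)

    # Example with synthetic EEG-like trials (8 channels, 256 samples), two of them outliers.
    rng = np.random.default_rng(1)
    trials = [rng.standard_normal((8, 256)) for _ in range(20)]
    trials += [10 * rng.standard_normal((8, 256)) for _ in range(2)]
    reference = trimmed_average_scm(trials, trim_ratio=0.1)
    ```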

  3. An aeration control strategy for oxidation ditch processes based on online oxygen requirement estimation.

    PubMed

    Zhan, J X; Ikehata, M; Mayuzumi, M; Koizumi, E; Kawaguchi, Y; Hashimoto, T

    2013-01-01

    A feedforward-feedback aeration control strategy based on online oxygen requirement (OR) estimation is proposed for oxidation ditch (OD) processes, and it is further developed for intermittent-aeration OD processes, which are the most popular type in Japan. For calculating OR, concentrations of influent biochemical oxygen demand (BOD) and total Kjeldahl nitrogen (TKN) are estimated online from measurements of suspended solids (SS), and TKN is sometimes estimated from NH4-N. Mixed liquor suspended solids (MLSS) and temperature are used to estimate the oxygen required for endogenous respiration. A straightforward parameter named the aeration coefficient, Ka, is introduced as the only parameter that can be tuned automatically by feedback control or manually by the operators. Simulation with an activated sludge model was performed in comparison to fixed-interval aeration, and a satisfactory result for the OR control strategy was obtained. The OR control strategy has been implemented at seven full-scale OD plants, and improvements in nitrogen removal were obtained at all of these plants. Among them, the results obtained at the Yumoto wastewater treatment plant are presented, where continuous aeration had been applied previously. After implementing intermittent OR control, the total nitrogen concentration was reduced from more than 5 mg/L to under 2 mg/L, and the electricity consumption was reduced by 61.2% for aeration or 21.5% for the whole plant.

  4. Validation of energy requirement equations for estimation of breast milk consumption in infants.

    PubMed

    Schoen, Stefanie; Sichert-Hellert, Wolfgang; Kersting, Mathilde

    2009-12-01

    To test equations for calculating infants' energy requirements as a simple and reliable instrument for estimating the amount of breast milk consumed in epidemiological studies where test-weighing is not possible. Infants' energy requirements were calculated using three different equations based on reference data and compared with actual energy intakes assessed using the 3 d weighed dietary records of breast-fed infants from the DOrtmund Nutritional and Anthropometric Longitudinally Designed (DONALD) Study. A sub-sample of 323 infants from the German DONALD Study who were predominantly breast-fed for at least the first four months of life, and who had 3 d weighed dietary records and repeated body weight measurements within the first year of life. Healthy, term infants breast-fed for at least 4 months, 0-12 months of age. The overall differences between measured energy intake and calculated energy requirements were quite small, never more than 10 % of total energy intake, and smaller than the mean variance of energy intake between the three days of recording. The equation of best fit incorporated body weight and recent growth, while the worst fit was found for the equation not considering body weight. Breast milk consumption in fully and partially breast-fed infants can be reasonably quantified by calculating the infants' individual energy requirements via simple equations. This provides a feasible approach for estimating infant energy intake in epidemiological studies where test-weighing of breast milk is not possible.

  5. Comparison of Estimated Energy Requirements in Severely Burned Patients With Measurements by Using Indirect Calorimetry

    PubMed Central

    Tancheva, D.; Arabadziev, J.; Gergov, G.; Lachev, N.; Todorova, S.; Hristova, A.

    2005-01-01

    Summary Severe burn injuries give rise to an extreme state of physiological stress. No other trauma results in such an accelerated rate of tissue catabolism, loss of lean body mass, and depletion of energy and protein reserves. Heightened attention to energy needs is essential, and adequate nutritional support is a key element of the complex management of patients with major burns. The purpose of this study is to compare the results obtained by three of the most popular methods of estimating energy requirements in severely burned adult patients with measurements of resting energy expenditure (REE) by indirect calorimetry (IC). A prospective study was carried out of 20 patients (male/female ratio, 17/3; mean age, 37.83 ± 10.86 yr), without accompanying morbidities, with burn injuries covering a mean body surface area of 34.27 ± 11.55% and a mean abbreviated burn severity index of 7.44 ± 1.58. During the first 30 days after trauma, the energy requirements were estimated using the Curreri, Long, and Toronto formulas. Twice-weekly measurements of REE by IC were obtained. It was found that the Curreri and Long formulas overestimated the energy requirements in severely burned patients, as found by other investigators. However, no significant difference was found between the daily energy requirements calculated by the Toronto formula and the REE values measured by IC. It is concluded that the Toronto formula can be used as an alternative method for estimating the energy requirements of patients with major burns in cases where IC is not available or not applicable. PMID:21990973

  6. Average effect estimates remain similar as evidence evolves from single trials to high-quality bodies of evidence: a meta-epidemiologic study.

    PubMed

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Thaler, Kylie; Nussbaumer, Barbara; Sommer, Isolde; Lohr, Kathleen N

    2016-01-01

    The objective of our study was to use a diverse sample of medical interventions to assess empirically whether first trials rendered substantially different treatment effect estimates than reliable, high-quality bodies of evidence. We used a meta-epidemiologic study design using 100 randomly selected bodies of evidence from Cochrane reports that had been graded as high quality of evidence. To determine the concordance of effect estimates between first and subsequent trials, we applied both quantitative and qualitative approaches. For quantitative assessment, we used Lin's concordance correlation and calculated z-scores; to determine the magnitude of differences of treatment effects, we calculated standardized mean differences (SMDs) and ratios of relative risks. We determined qualitative concordance based on a two-tiered approach incorporating changes in statistical significance and magnitude of effect. First trials both overestimated and underestimated the true treatment effects in no discernible pattern. Nevertheless, depending on the definition of concordance, effect estimates of first trials were concordant with pooled subsequent studies in at least 33% but up to 50% of comparisons. The pooled magnitude of change as bodies of evidence advanced from single trials to high-quality bodies of evidence was 0.16 SMD [95% confidence interval (CI): 0.12, 0.21]. In 80% of comparisons, the difference in effect estimates was smaller than 0.5 SMDs. In first trials with large treatment effects (>0.5 SMD), however, estimates of effect substantially changed as new evidence accrued (mean change 0.68 SMD; 95% CI: 0.50, 0.86). Results of first trials often change, but the magnitude of change, on average, is small. Exceptions are first trials that present large treatment effects, which often dissipate as new evidence accrues. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonant frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide a conservative WBA-SAR compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is attributable to differences in body shape.

  8. Effect of confounding variables on hemodynamic response function estimation using averaging and deconvolution analysis: An event-related NIRS study.

    PubMed

    Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice

    2017-07-15

    Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNR, especially above -5 dB regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of
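
    For concreteness, a minimal FIR-based deconvolution (one of the two HRF estimators compared above) can be written as an ordinary least-squares fit of a lagged event-regressor matrix, with conventional averaging alongside for comparison. This sketch ignores drift regressors, autocorrelation, and the gamma basis sets also examined in the study.

    ```python
    import numpy as np

    def deconvolve_hrf_fir(signal, onsets, n_lags):
        """Estimate the HRF by finite-impulse-response (FIR) deconvolution: build a lagged
        design matrix from the event onsets and solve the least-squares problem."""
        n = len(signal)
        stick = np.zeros(n)
        stick[onsets] = 1.0                          # event indicator ("stick") function
        X = np.column_stack([np.roll(stick, lag) for lag in range(n_lags)])
        for lag in range(n_lags):                    # zero out wrap-around from np.roll
            X[:lag, lag] = 0.0
        X = np.column_stack([X, np.ones(n)])         # constant term
        beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
        return beta[:n_lags]                         # estimated HRF sampled at each lag

    def conventional_average(signal, onsets, n_lags):
        """Conventional averaging: mean of fixed-length epochs following each onset."""
        epochs = [signal[t:t + n_lags] for t in onsets if t + n_lags <= len(signal)]
        return np.mean(epochs, axis=0)
    ```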

  9. Human-experienced temperature changes exceed global average climate changes for all income groups

    NASA Astrophysics Data System (ADS)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

    Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
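
    The population weighting underlying these figures can be sketched in a few lines: each grid cell's projected change is weighted by the number of people living in it rather than by its area. The function below illustrates that idea only; it is not the authors' processing chain, and it uses an unweighted grid mean as a stand-in for the area-weighted global change.

    ```python
    import numpy as np

    def population_weighted_change(delta_t, population):
        """Population-experienced warming statistics from gridded fields: delta_t holds the
        projected local temperature change per grid cell, population the people in it."""
        w = population / population.sum()
        mean_change = np.sum(w * delta_t)                       # the "average human's" change
        order = np.argsort(delta_t.ravel())
        cdf = np.cumsum(w.ravel()[order])
        median_change = delta_t.ravel()[order][np.searchsorted(cdf, 0.5)]
        # share of people whose local change exceeds the (here unweighted) grid-mean change
        frac_above_grid_mean = w.ravel()[delta_t.ravel() > delta_t.mean()].sum()
        return mean_change, median_change, frac_above_grid_mean
    ```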

  10. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
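
    As one concrete example of the criterion-based family, Akaike weights can be computed directly from the information-criterion scores of the alternative conceptualizations. The sketch below shows only this AIC-based variant, not GLUE or the Bayesian/Kashyap criteria, and the scores and predictions are illustrative.

    ```python
    import numpy as np

    def aic_model_weights(aic_values):
        """Akaike-weight style model averaging: weights proportional to
        exp(-0.5 * delta_AIC) relative to the best (lowest-AIC) model."""
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Example: three alternative conceptual groundwater models and hypothetical AIC scores.
    weights = aic_model_weights([112.4, 115.1, 121.9])
    prediction = np.sum(weights * np.array([3.2, 2.7, 4.1]))   # weighted-average prediction
    ```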

  11. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  12. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...

  13. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...

  14. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...

  15. Methodology to Estimate the Longitudinal Average Attributable Fraction of Guideline-recommended Medications for Death in Older Adults With Multiple Chronic Conditions

    PubMed Central

    Zhan, Yilei; Cohen, Andrew B.; Tinetti, Mary E.; Trentalange, Mark; McAvay, Gail

    2016-01-01

    Background: Persons with multiple chronic conditions receive multiple guideline-recommended medications to improve outcomes such as mortality. Our objective was to estimate the longitudinal average attributable fraction for 3-year survival of medications for cardiovascular conditions in persons with multiple chronic conditions and to determine whether heterogeneity occurred by age. Methods: Medicare Current Beneficiary Survey participants (N = 8,578) with two or more chronic conditions, enrolled from 2005 to 2009 with follow-up through 2011, were analyzed. We calculated the longitudinal extension of the average attributable fraction for oral medications (beta blockers, renin–angiotensin system blockers, and thiazide diuretics) indicated for cardiovascular conditions (atrial fibrillation, coronary artery disease, heart failure, and hypertension), on survival adjusted for 18 participant characteristics. Models stratified by age (≤80 and >80 years) were analyzed to determine heterogeneity of both cardiovascular conditions and medications. Results: Heart failure had the greatest average attributable fraction (39%) for mortality. The fractional contributions of beta blockers, renin–angiotensin system blockers, and thiazides to improve survival were 10.4%, 9.3%, and 7.2% respectively. In age-stratified models, of these medications thiazides had a significant contribution to survival only for those aged 80 years or younger. The effects of the remaining medications were similar in both age strata. Conclusions: Most cardiovascular medications were attributed independently to survival. The two cardiovascular conditions contributing independently to death were heart failure and atrial fibrillation. The medication effects were similar by age except for thiazides that had a significant contribution to survival in persons younger than 80 years. PMID:26748093

  16. Estimating the Cost of Providing Foundational Public Health Services.

    PubMed

    Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava

    2017-12-28

    To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate costs of fully implementing FPHS. We use data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at their current attainment levels with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV=19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.

  17. Average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.

  18. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  19. Estimating Capacity Requirements for Mental Health Services After a Disaster Has Occurred: A Call for New Data

    PubMed Central

    Siegel, Carole E.; Laska, Eugene; Meisner, Morris

    2004-01-01

    Objectives. We sought to estimate the extended mental health service capacity requirements of persons affected by the September 11, 2001, terrorist attacks. Methods. We developed a formula to estimate the extended mental health service capacity requirements following disaster situations and assessed availability of the information required by the formula. Results. Sparse data exist on current services and supports used by people with mental health problems outside of the formal mental health specialty sector. There also are few systematically collected data on mental health sequelae of disasters. Conclusions. We recommend research-based surveys to understand service usage in non–mental health settings and suggest that federal guidelines be established to promote uniform data collection of a core set of items in studies carried out after disasters. PMID:15054009

  20. An Investigation of Online Homework: Required or Not Required?

    ERIC Educational Resources Information Center

    Wooten, Tommy; Dillard-Eggers, Jane

    2013-01-01

    In our research we investigate the use of online homework in principles of accounting classes where some classes required online homework while other classes did not. Users of online homework, compared to nonusers, had a higher grade point average and earned a higher grade in class. On average, both required and not-required users rated the online…

  1. Increased Protein Requirements in Female Athletes after Variable-Intensity Exercise.

    PubMed

    Wooding, Denise J; Packer, Jeff E; Kato, Hiroyuki; West, Daniel W D; Courtney-Martin, Glenda; Pencharz, Paul B; Moore, Daniel R

    2017-11-01

    Protein requirements are primarily studied in the context of resistance or endurance exercise with little research devoted to variable-intensity intermittent exercise characteristic of many team sports. Further, female populations are underrepresented in dietary sports science studies. We aimed to determine a dietary protein requirement in active females performing variable-intensity intermittent exercise using the indicator amino acid oxidation (IAAO) method. We hypothesized that these requirements would be greater than current IAAO-derived estimates in nonactive adult males. Six females (21.2 ± 0.8 yr, 68.8 ± 4.1 kg, 47.1 ± 1.2 mL O2·kg−1·min−1; mean ± SE) completed five to seven metabolic trials during the luteal phase of the menstrual cycle. Participants performed a modified Loughborough Intermittent Shuttle Test before consuming eight hourly mixed meals providing the test protein intake (0.2-2.66 g·kg−1·d−1), 6 g·kg−1·d−1 CHO and sufficient energy for resting and exercise-induced energy expenditure. Protein was provided as crystalline amino acid modeling egg protein with [13C]phenylalanine as the indicator amino acid. Phenylalanine turnover (Q) was determined from urinary [13C]phenylalanine enrichment. Breath 13CO2 excretion (F13CO2) was analyzed using mixed effects biphase linear regression with the breakpoint and upper 95% confidence interval approximating the estimated average requirement and recommended dietary allowance, respectively. Protein intake had no effect on Q (68.7 ± 7.3 μmol·kg−1·h−1; mean ± SE). F13CO2 displayed a robust biphase response (R = 0.66) with an estimated average requirement of 1.41 g·kg−1·d−1 and recommended dietary allowance of 1.71 g·kg−1·d−1. The protein requirement estimate of 1.41 and 1.71 g·kg−1·d−1 for females performing variable-intensity intermittent exercise is greater than the IAAO-derived estimates of adult males (0.93 and 1.2 g·kg−1·d−1) and at the upper range of the American College of Sports Medicine athlete recommendations (1.2-2.0 g·kg−1·d−1).
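
    The breakpoint logic of the IAAO analysis can be illustrated with a simplified two-phase model fitted by grid search; the study itself used mixed-effects biphase regression with random subject effects, which this sketch (with made-up intake and oxidation values) does not attempt to reproduce.

    ```python
    import numpy as np

    def biphase_breakpoint(intake, f13co2, candidates):
        """Grid-search estimate of the breakpoint in a biphase linear model
        y = b0 + b1 * min(x, c): oxidation falls with protein intake and then plateaus.
        The breakpoint approximates the estimated average requirement (EAR)."""
        best_c, best_sse = None, np.inf
        for c in candidates:
            X = np.column_stack([np.ones_like(intake), np.minimum(intake, c)])
            beta, *_ = np.linalg.lstsq(X, f13co2, rcond=None)
            sse = np.sum((f13co2 - X @ beta) ** 2)
            if sse < best_sse:
                best_c, best_sse = c, sse
        return best_c

    # Hypothetical test intakes (g/kg/d) and oxidation values for illustration only.
    intake = np.array([0.2, 0.6, 1.0, 1.2, 1.4, 1.6, 2.0, 2.66])
    f13co2 = np.array([9.5, 8.1, 6.4, 5.8, 5.2, 5.1, 5.2, 5.0])
    print(biphase_breakpoint(intake, f13co2, candidates=np.linspace(0.5, 2.5, 201)))
    ```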

  2. An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom.

    PubMed

    Lee, Hyunwoo; Lee, Hana; Whang, Mincheol

    2018-01-15

    Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could develop such a monitoring system. Although SCG has been presented with a lower accuracy, this novel cardiac indicator has been steadily proposed over traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
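
    A stripped-down version of the six-axis combination is sketched below: the L2 magnitudes of the accelerometer and gyroscope channels are z-scored, averaged, and the heart rate is read off the dominant spectral peak in a physiologically plausible band. The paper's full method additionally ensemble-averages beat waveforms, which is omitted here.

    ```python
    import numpy as np

    def heart_rate_bpm(acc, gyr, fs, band=(0.7, 3.0)):
        """Combine six-axis SCG signals and estimate heart rate from the dominant spectral
        peak. acc and gyr are (3, N) arrays sampled at fs Hz; simplified sketch only."""
        def zscore(x):
            return (x - x.mean()) / x.std()
        combined = 0.5 * (zscore(np.linalg.norm(acc, axis=0)) +
                          zscore(np.linalg.norm(gyr, axis=0)))
        spectrum = np.abs(np.fft.rfft(combined - combined.mean()))
        freqs = np.fft.rfftfreq(combined.size, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])     # plausible HR range, 42-180 bpm
        dominant = freqs[mask][np.argmax(spectrum[mask])]
        return 60.0 * dominant
    ```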

  3. Areal-Averaged Spectral Surface Albedo in an Atlantic Coastal Area: Estimation from Ground-Based Transmission

    DOE PAGES

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; ...

    2017-07-12

    Tower-based data combined with high-resolution satellite products have been used to produce surface albedo at various spatial scales over land. Because tower-based albedo data are available at only a few sites, surface albedos using these combined data are spatially limited. Moreover, tower-based albedo data are not representative of highly heterogeneous regions. To produce areal-averaged and spectrally-resolved surface albedo for regions with various degrees of surface heterogeneity, we have developed a transmission-based retrieval and demonstrated its feasibility for relatively homogeneous land surfaces. Here we demonstrate its feasibility for a highly heterogeneous coastal region. We use the atmospheric transmission measured during a 19-month period (June 2009 – December 2010) by a ground-based Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (0.415, 0.5, 0.615, 0.673 and 0.87 µm) at the Department of Energy’s Atmospheric Radiation Measurement (ARM) Mobile Facility (AMF) site located on Graciosa Island. We compare the MFRSR-retrieved areal-averaged surface albedo with albedo derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations, and also a composite-based albedo. Lastly, we demonstrate that these three methods produce similar spectral signatures of surface albedo; however, the MFRSR-retrieved albedo is higher on average (≤0.04) than the MODIS-based areal-averaged surface albedo and the largest difference occurs in winter.

  4. Areal-Averaged Spectral Surface Albedo in an Atlantic Coastal Area: Estimation from Ground-Based Transmission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor

    Tower-based data combined with high-resolution satellite products have been used to produce surface albedo at various spatial scales over land. Because tower-based albedo data are available at only a few sites, surface albedos using these combined data are spatially limited. Moreover, tower-based albedo data are not representative of highly heterogeneous regions. To produce areal-averaged and spectrally-resolved surface albedo for regions with various degrees of surface heterogeneity, we have developed a transmission-based retrieval and demonstrated its feasibility for relatively homogeneous land surfaces. Here we demonstrate its feasibility for a highly heterogeneous coastal region. We use the atmospheric transmission measured during a 19-month period (June 2009 – December 2010) by a ground-based Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (0.415, 0.5, 0.615, 0.673 and 0.87 µm) at the Department of Energy’s Atmospheric Radiation Measurement (ARM) Mobile Facility (AMF) site located on Graciosa Island. We compare the MFRSR-retrieved areal-averaged surface albedo with albedo derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations, and also a composite-based albedo. Lastly, we demonstrate that these three methods produce similar spectral signatures of surface albedo; however, the MFRSR-retrieved albedo is higher on average (≤0.04) than the MODIS-based areal-averaged surface albedo and the largest difference occurs in winter.

  5. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

    The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semi-automatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of 5 times higher than the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95 % confidence interval for mean PIMD-Average [0;2π] was estimated to 1.00 ± 0.13 mm (D.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a onetime estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of within subject difference between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year it was found that approximately 18 months follow up is required before a significant change of PIMD-Average [0;2π] can

  6. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  7. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
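
    A minimal sketch of the two uses of an average variogram described above, with invented ranges and sills standing in for real fitted variograms: half the average range gives a suggested sampling interval, and scaling each experimental variogram to a unit sill before averaging gives a standardized average variogram.

        import numpy as np

        # Hypothetical fitted variogram ranges (m) for soil properties on one parent material.
        ranges_m = np.array([210.0, 185.0, 240.0, 205.0])
        suggested_interval_m = 0.5 * ranges_m.mean()     # sample at no more than half the average range
        print(f"suggested sampling interval: about {suggested_interval_m:.0f} m")

        # Standardized average variogram: scale each variogram to a sill of 1, then average over properties.
        lags_m = np.arange(20, 401, 20)
        sills = np.array([0.80, 1.60, 0.05, 12.0])       # sills in each property's own units (assumed)

        def spherical(h, a):                             # spherical model, used here only to fake data
            h = np.minimum(h, a)
            return 1.5 * h / a - 0.5 * (h / a) ** 3

        experimental = np.array([s * spherical(lags_m, a) for s, a in zip(sills, ranges_m)])
        standardized_average = (experimental / sills[:, None]).mean(axis=0)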

  8. Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures

    NASA Astrophysics Data System (ADS)

    Emory, Michael; Larsson, Johan; Iaccarino, Gianluca

    2013-11-01

    Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.

  9. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  10. Remote Sensing Precision Requirements For FIA Estimation

    Treesearch

    Mark H. Hansen

    2001-01-01

    In this study the National Land Cover Data (NLCD) available from the Multi-Resolution Land Characteristics Consortium (MRLC) is used for stratification in the estimation of forest area, timberland area, and growing-stock volume from the first year (1999) of annual FIA data collected in Indiana, Iowa, Minnesota, and Missouri. These estimates show that with improvements...

  11. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  12. Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.

  13. Nut crop yield records show that budbreak-based chilling requirements may not reflect yield decline chill thresholds

    NASA Astrophysics Data System (ADS)

    Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.

    2015-06-01

    Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond (Prunus dulcis), pistachio (Pistacia vera), and walnut (Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32% higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.

  14. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
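
    A minimal sketch of the stepwise build-up described above (the data layout and sample threshold are assumptions, not the operational DPR code): starting from one 0.25 deg cell, neighbouring cells are added one at a time, always choosing the candidate that minimizes the variance of the pooled rain-free sigma 0 samples, until a minimum sample count is reached.

        import numpy as np

        def grow_region(samples_by_cell, start, neighbours, min_samples=50):
            """samples_by_cell: {cell_id: 1-D array of rain-free sigma-0 samples};
            neighbours: {cell_id: iterable of adjacent cell_ids}; start: initial 0.25-deg cell."""
            region = {start}
            pooled = samples_by_cell[start]
            while pooled.size < min_samples:
                candidates = {c for r in region for c in neighbours[r]} - region
                if not candidates:
                    break                                # nothing left to add
                # pick the candidate whose inclusion gives the smallest pooled variance
                best = min(candidates,
                           key=lambda c: np.var(np.concatenate([pooled, samples_by_cell[c]])))
                region.add(best)
                pooled = np.concatenate([pooled, samples_by_cell[best]])
            return region, pooled.mean(), pooled.std(ddof=1)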

  15. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
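
    A minimal simulation sketch of the simplest setting discussed above, a completely random sample size drawn from a finite set {n_1, …, n_L} independently of the data (not a data-dependent stopping rule): the ordinary sample average of normal outcomes shows essentially no bias, either marginally or conditionally on the realized sample size.

        import numpy as np

        rng = np.random.default_rng(0)
        sizes = np.array([10, 20, 40, 80])               # the finite set {n_1, ..., n_L} (assumed)
        true_mean, reps = 2.5, 20000

        means, realized = [], []
        for _ in range(reps):
            n = rng.choice(sizes)                        # sample size drawn independently of the data
            means.append(rng.normal(true_mean, 1.0, n).mean())
            realized.append(n)
        means, realized = np.array(means), np.array(realized)

        print("marginal bias:", means.mean() - true_mean)
        for n in sizes:                                  # bias conditional on the realized sample size
            print(f"bias | N = {n}:", means[realized == n].mean() - true_mean)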

  16. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    DOE PAGES

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    2016-11-23

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  17. Estimating the required logistical resources to support the development of a sustainable corn stover bioeconomy in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebadian, Mahmood; Sokhansanj, Shahabaddine; Webb, Erin

    In this paper, the logistical resources required to develop a bioeconomy based on corn stover in the USA are quantified, including field equipment, storage sites, transportation and handling equipment, workforce, corn growers, and corn lands. These resources are essential to mobilize large quantities of corn stover from corn fields to biorefineries. The logistical resources are estimated over the lifetime of the biorefineries. Seventeen corn-growing states are considered for the logistical resource assessment. Over 6.8 billion gallons of cellulosic ethanol can be produced annually from 108 million dry tons of corn stover in these states. The maximum number of required field equipment (i.e., choppers, balers, collectors, loaders, and tractors) is estimated to be 194 110 units with a total economic value of about 26 billion dollars. In addition, 40 780 trucks and flatbed trailers would be required to transport bales from corn fields and storage sites to biorefineries with a total economic value of 4.0 billion dollars. About 88 899 corn growers need to be contracted with an annual net income of over 2.1 billion dollars. About 1903 storage sites would be required to hold 53.1 million dry tons of inventory after the harvest season. These storage sites would take up about 35 320.2 acres and 4077 loaders with an economic value of 0.4 billion dollars would handle this inventory. The total required workforce to run the logistics operations is estimated to be 50 567. Furthermore, the magnitude of the estimated logistical resources demonstrates the economic and social significance of the corn stover bioeconomy in rural areas in the USA.

  18. Estimation of minimum ventilation requirement of dairy cattle barns for different outdoor temperature and its affects on indoor temperature: Bursa case.

    PubMed

    Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker

    2007-04-15

    In the study, 10 different dairy cattle barns with natural ventilation systems were investigated in terms of structural aspects. The VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 degrees C). Variation in indoor temperatures was also determined according to the above-mentioned conditions. In the investigated dairy cattle barns, provided that the minimum ventilation requirement was achieved for the -3, 0 and 1.7 degrees C outdoor design temperatures and 70% or 80% indoor relative humidity (IRH), the estimated indoor temperatures ranged from 2.2 to 12.2 degrees C for 70% IRH and from 4.3 to 15.0 degrees C for 80% IRH. Barn type, outdoor design temperature and indoor relative humidity significantly (p < 0.01) affect the indoor temperature. The highest ventilation requirement was calculated for the straw yard (13 879 m3 h-1) while the lowest was estimated for the tie-stall (6169.20 m3 h-1). Estimated minimum ventilation requirements per animal were significantly (p < 0.01) different according to the barn types. The effect of outdoor design temperatures on minimum ventilation requirements and minimum ventilation requirements per animal was found to be significant (p < 0.05, p < 0.01). Estimated indoor temperatures were in the thermoneutral zone (-2 to 20 degrees C). Therefore, it can be said that the use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.

  19. The Nexus between the Above-Average Effect and Cooperative Learning in the Classroom

    ERIC Educational Resources Information Center

    Breneiser, Jennifer E.; Monetti, David M.; Adams, Katharine S.

    2012-01-01

    The present study examines the above-average effect (Chambers & Windschitl, 2004; Moore & Small, 2007) in assessments of task performance. Participants completed self-estimates of performance and group estimates of performance, before and after completing a task. Participants completed a task individually and in groups. Groups were…

  20. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668

  1. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

  2. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.

  3. Average Dielectric Property Analysis of Complex Breast Tissue with Microwave Transmission Measurements

    PubMed Central

    Garrett, John D.; Fear, Elise C.

    2015-01-01

    Prior information about the average dielectric properties of breast tissue can be implemented in microwave breast imaging techniques to improve the results. Rapidly providing this information relies on acquiring a limited number of measurements and processing these measurements with efficient algorithms. Previously, systems were developed to measure the transmission of microwave signals through breast tissue, and simplifications were applied to estimate the average properties. These methods provided reasonable estimates, but they were sensitive to multipath. In this paper, a new technique to analyze the average properties of breast tissues while addressing multipath is presented. Three steps are used to process transmission measurements. First, the effects of multipath were removed. In cases where multipath is present, multiple peaks were observed in the time domain. A Tukey window was used to time-gate a single peak and, therefore, select a single path through the breast. Second, the antenna response was deconvolved from the transmission coefficient to isolate the response from the tissue in the breast interior. The antenna response was determined through simulations. Finally, the complex permittivity was estimated using an iterative approach. This technique was validated using simulated and physical homogeneous breast models and tested with results taken from a recent patient study. PMID:25585106
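
    A minimal sketch of the time-gating step described above, with a synthetic two-path signal and an assumed gate width (not the authors' processing chain): a Tukey window is centred on the dominant peak so that the later multipath arrival is suppressed before returning to the frequency domain.

        import numpy as np
        from scipy.signal.windows import tukey

        fs = 40e9                                        # sampling rate of the synthetic time signal (Hz)
        t = np.arange(0, 5e-9, 1 / fs)
        direct = np.exp(-((t - 1.0e-9) / 0.1e-9) ** 2)   # main transmission peak
        multipath = 0.4 * np.exp(-((t - 2.5e-9) / 0.1e-9) ** 2)
        signal = direct + multipath

        peak = np.argmax(np.abs(signal))                 # index of the dominant (direct-path) peak
        gate_len = int(0.8e-9 * fs)                      # gate width ~0.8 ns (assumed)
        window = np.zeros_like(signal)
        start = max(peak - gate_len // 2, 0)
        window[start:start + gate_len] = tukey(gate_len, alpha=0.5)

        gated = signal * window                          # multipath peak falls outside the gate
        spectrum = np.fft.rfft(gated)                    # gated transmission back in the frequency domain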

  4. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  5. 40 CFR 1051.740 - Are there special averaging provisions for snowmobiles?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES Averaging... redesignated engines. Calculate credits using this average emission level relative to the specific pollutant in... equation in § 1051.103), then your credits are the difference between the Phase 3 reduction requirement of...

  6. 40 CFR 1051.740 - Are there special averaging provisions for snowmobiles?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES Averaging... redesignated engines. Calculate credits using this average emission level relative to the specific pollutant in... equation in § 1051.103), then your credits are the difference between the Phase 3 reduction requirement of...

  7. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5 billion year old rocks and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planets, and interfered with the origination of life, life must have originated in the time interval between these impacts which increased with geologic time. Therefore, the maximum time required for the origination of life is the time that occurred between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10^34 and 2 x 10^35 were able to vaporize the oceans, using the most probable impact flux, it is found that the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life may have originated more than once.

  8. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  9. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.

  10. Body mass estimates of hominin fossils and the evolution of human body size.

    PubMed

    Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G

    2015-08-01

    Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Robust estimation of event-related potentials via particle filter.

    PubMed

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
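
    A minimal bootstrap (sequential importance resampling) particle-filter sketch with 400 particles, using a simple random-walk trend observed in heavy noise as a stand-in for the authors' trend/autoregressive/noise model; the state-space model and noise levels below are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)
        T, n_particles = 300, 400
        true_x = np.cumsum(rng.normal(0, 0.05, T))          # slowly drifting "ERP-like" trend
        y = true_x + rng.normal(0, 1.0, T)                  # noisy single-trial observations (low SNR)

        particles = rng.normal(0, 1.0, n_particles)
        estimate = np.empty(T)
        for t in range(T):
            particles += rng.normal(0, 0.05, n_particles)   # propagate through the trend model
            logw = -0.5 * (y[t] - particles) ** 2           # Gaussian observation log-likelihood (sigma = 1)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimate[t] = np.sum(w * particles)             # posterior-mean estimate of the trend
            idx = rng.choice(n_particles, n_particles, p=w) # multinomial resampling
            particles = particles[idx]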

  12. Conceptual Analysis of System Average Water Stability

    NASA Astrophysics Data System (ADS)

    Zhang, H.

    2016-12-01

    Averaging over time and area, the precipitation in an ecosystem (SAP - system average precipitation) depends on the average surface temperature and relative humidity (RH) in the system if uniform convection is assumed. RH depends on the evapotranspiration of the system (SAE - system average evapotranspiration). There is a non-linear relationship between SAP and SAE. Studying this relationship can lead to a mechanistic understanding of the ecosystem's health status and trend under different setups. If SAP is higher than SAE, the system will have a water runoff which flows out through rivers. If SAP is lower than SAE, irrigation is needed to maintain the vegetation status. This presentation will give a conceptual analysis of the stability in this relationship under different assumed areas, water or forest coverages, elevations and latitudes. This analysis shows that desert is a stable system. Water circulation in basins is also stabilized at a specific SAP based on the basin profile. It further shows that deforestation will reduce SAP, and can flip the system to an irrigation-required status. If no irrigation is provided, the system will automatically reduce to its stable point - desert, which is extremely difficult to turn around.

  13. Optimal firing rate estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
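
    A minimal sketch of Gaussian firing-rate smoothing in which the kernel width is tied to the average firing rate, as the optimal-bandwidth result above suggests; the spike train is synthetic and the exact mapping from "bandwidth" to kernel width is an assumption made here for illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        dt = 0.001                                          # 1 ms bins
        duration = 10.0                                     # seconds of synthetic recording
        spike_times = np.sort(np.random.default_rng(2).uniform(0, duration, 500))   # ~50 spikes/s
        edges = np.arange(0, duration + dt, dt)
        counts, _ = np.histogram(spike_times, edges)

        avg_rate = spike_times.size / duration              # average firing rate (spikes/s)
        sigma_s = 1.0 / avg_rate                            # kernel width ~ 1 / average rate (assumed reading)
        rate = gaussian_filter1d(counts / dt, sigma=sigma_s / dt)   # smoothed rate estimate in spikes/s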

  14. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  15. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements have been made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers have independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
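
    A minimal sketch of the three performance measures used above, assuming daily arrays of estimated and baseline discharge are available for the ice-affected period.

        import numpy as np

        def performance_measures(estimated, baseline):
            """estimated, baseline: daily discharge arrays covering the ice-affected period."""
            errors = estimated - baseline
            return {
                "average_estimated_discharge": estimated.mean(),
                "average_baseline_discharge": baseline.mean(),
                "mean_daily_error": errors.mean(),
                "std_daily_error": errors.std(ddof=1),
            }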

  16. Focus on Teacher Salaries: An Update on Average Salaries and Recent Legislative Actions in the SREB States.

    ERIC Educational Resources Information Center

    Gaines, Gale F.

    Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…

  17. Water requirements of the petroleum refining industry

    USGS Publications Warehouse

    Otts, Louis Ethelbert

    1964-01-01

    About 3,500 million gallons of water was withdrawn daily in 1955 for use by petroleum refineries in the United States. This was about 3 percent of the estimated daily withdrawal of industrial water in the United States in 1955. An average of 468 gallons of water was required to refine a barrel of crude oil, and the median was 95 gallons of water per barrel of crude charge; withdrawals ranged from 6.5 to 3,240 gallons per barrel. Ninety-one percent of the water requirements of the petroleum refineries surveyed was for cooling. One-third of the refineries reused their cooling water from 10 to more than 50 times. Only 17 refineries used once-through cooling systems. Refineries with recirculating cooling systems circulated about twice as much cooling water but needed about 25 times less makeup; however, they consumed about 24 times more water per barrel of charge than refineries using once-through cooling systems. The average noncracking refinery used about 375 gallons of water per barrel of crude, which is less than the 471-gallon average of refineries with cracking facilities. Refineries are composed of various processing units, and the water requirements of such units varied ; median makeup needs ranged from about 125 gallons per barrel for polymerization and alkylation units to 15.5 gallons per barrel for distillation units. Refinery-owned sources of water supplied 95 percent of the makeup-water requirements. Surface-water sources provided 86 percent of the makeup-water demand. Less than 1 percent of the makeup water was obtained from reprocessed municipal sewage.

  18. Perceived Average Orientation Reflects Effective Gist of the Surface.

    PubMed

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  19. Alcohol-impaired driving: average quantity consumed and frequency of drinking do matter.

    PubMed

    Birdsall, William C; Reed, Beth Glover; Huq, Syeda S; Wheeler, Laura; Rush, Sarah

    2012-01-01

    The objective of this article is to estimate and validate a logistic model of alcohol-impaired driving using previously ignored alcohol consumption behaviors, other risky behaviors, and demographic characteristics as independent variables. The determinants of impaired driving are estimated using the US Centers for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) surveys. Variables used in a logistic model to explain alcohol-impaired driving are not only standard sociodemographic variables and bingeing but also frequency of drinking and average quantity consumed, as well as other risky behaviors. We use interactions to understand how being female and being young affect impaired driving. Having estimated our model using the 1997 survey, we validated our model using the BRFSS data for 1999. Drinking 9 or more times in the past month doubled the odds of impaired driving. The greater average consumption of alcohol per session, the greater the odds of driving impaired, especially for persons in the highest quartile of alcohol consumed. Bingeing has the greatest effect on impaired driving. Seat belt use is the one risky behavior found to be related to such driving. Sociodemographic effects are consistent with earlier research. Being young (18-30) interacts with two of the alcohol consumption variables and being a woman interacts with always wearing a seat belt. Our model was robust in the validation analysis. All 3 dimensions of drinking behavior are important determinants of alcohol-impaired driving, including frequency and average quantity consumed. Including these factors in regressions improves the estimates of the effects of all variables.
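
    A minimal sketch of a logistic model in the spirit of the analysis above; the file name and variable names are hypothetical placeholders, not the BRFSS codebook, and the interaction terms mirror the young-by-consumption and female-by-seat-belt interactions described in the abstract.

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("brfss_subset.csv")          # assumed pre-built analysis file (hypothetical)
        model = smf.logit(
            "impaired_driving ~ binge + drink_freq + avg_quantity + seatbelt_always"
            " + young + female + young:drink_freq + young:avg_quantity + female:seatbelt_always",
            data=df,
        ).fit()
        print(model.summary())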

  20. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  1. 7 CFR 5.5 - Publication of season average, calendar year, and parity price data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... cases where preliminary marketing season average price data are used in estimating the adjusted base... parity price data. 5.5 Section 5.5 Agriculture Office of the Secretary of Agriculture DETERMINATION OF PARITY PRICES § 5.5 Publication of season average, calendar year, and parity price data. (a) New adjusted...

  2. A straightforward frequency-estimation technique for GPS carrier-phase time transfer.

    PubMed

    Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen

    2006-09-01

    Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
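
    A minimal sketch of the new frequency-estimation idea (illustration only, not the authors' software): fit a least-squares slope to the time-difference data of each processing batch, then average the per-batch frequencies instead of removing the batch-boundary discontinuities and concatenating.

        import numpy as np

        def batch_frequency(t, x):
            """Least-squares slope of time-difference data x(t); with t and x both in seconds
            (assumed), the slope is the dimensionless fractional frequency offset."""
            return np.polyfit(t, x, 1)[0]

        def mean_frequency(batches):
            """batches: iterable of (t, x) array pairs, one per 1-3 day processing batch."""
            freqs = [batch_frequency(t, x) for t, x in batches]
            return np.mean(freqs), freqs              # a length-weighted mean is an obvious variant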

  3. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    NASA Astrophysics Data System (ADS)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
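
    A minimal sketch of the residual/tidal split described above, with an assumed surfacing interval and cutoff period; the paper additionally models the tidal part with a Kalman filter based on the linear shallow-water equations, which is not reproduced here.

        import numpy as np
        from scipy.signal import butter, filtfilt

        dt_hours = 4.0                                   # one depth-averaged current per surfacing (~3-5 h)
        fs = 1.0 / dt_hours                              # samples per hour
        cutoff = 1.0 / 48.0                              # assumed residual cutoff: periods longer than ~2 days

        b, a = butter(N=1, Wn=cutoff / (fs / 2), btype="low")   # first-order Butterworth low-pass

        def split_currents(u):
            """u: time series of one depth-averaged current component (e.g. east)."""
            residual = filtfilt(b, a, u)                 # slowly varying (residual) component
            tidal = u - residual                         # the remainder is dominated by the tide
            return residual, tidal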

  4. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes and large eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
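
    The abstract states only that f_k is varied with grid spacing and the computed turbulence length scale; the sketch below uses one commonly quoted form of such an estimate, with an assumed order-one constant, and is not the exact expression implemented in PAB3D.

        import numpy as np

        def fk_estimate(delta, k, eps, c=1.0):
            """delta: local grid spacing; k, eps: RANS turbulent kinetic energy and dissipation rate.
            c is an order-one calibration constant (assumed here, not taken from PAB3D)."""
            lam = np.maximum(k, 1e-12) ** 1.5 / np.maximum(eps, 1e-12)   # turbulence length scale k^(3/2)/eps
            return np.clip(c * (delta / lam) ** (2.0 / 3.0), 0.0, 1.0)   # f_k = 1 recovers the RANS limit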

  5. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the
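
    For orientation, a minimal sketch of a related textbook calculation, the sequent-peak estimate of required storage for a constant demand; the report itself uses nonsequential-mass-curve techniques that additionally build in duration and frequency, which this sketch does not reproduce.

        import numpy as np

        def required_storage(inflow, demand):
            """inflow: array of period inflow volumes; demand: constant draft per period (same units)."""
            deficit = 0.0
            max_deficit = 0.0
            for q in inflow:
                deficit = max(deficit + demand - q, 0.0)   # cumulative shortfall, reset when storage refills
                max_deficit = max(max_deficit, deficit)
            return max_deficit                              # required active storage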

  6. 24 CFR 906.15 - Requirements applicable to a family purchasing a property under a homeownership program.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... requirements: (1) Cost/income ratio. On an average monthly estimate, the amount of the applicant's payments for... homeownership program. (a) Low-income requirement. Except in the case of a PHA's offer of first refusal to a... program must be a low-income family, as defined in section 3 of the 1937 Act (42 U.S.C. 1437a), at the...

  7. A model for light distribution and average solar irradiance inside outdoor tubular photobioreactors for the microalgal mass culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, F.G.A.; Camacho, F.G.; Perez, J.A.S.

    1997-09-05

    A mathematical model to estimate the solar irradiance profile and average light intensity inside a tubular photobioreactor under outdoor conditions is proposed, requiring only geographic, geometric, and solar position parameters. First, the length of the path into the culture traveled by any direct or disperse ray of light was calculated as the function of three variables: day of year, solar hour, and geographic latitude. Then, the phenomenon of light attenuation by biomass was studied considering Lambert-Beer's law (only considering absorption) and the monodimensional model of Cornet et al. (1900) (considering absorption and scattering phenomena). Due to the existence of differential wavelength absorption, none of the literature models are useful for explaining light attenuation by the biomass. Therefore, an empirical hyperbolic expression is proposed. The equations to calculate light path length were substituted in the proposed hyperbolic expression, reproducing light intensity data obtained in the center of the loop tubes. The proposed model was also likely to estimate the irradiance accurately at any point inside the culture. Calculation of the local intensity was thus extended to the full culture volume in order to obtain the average irradiance, showing how the higher biomass productivities in a Phaeodactylum tricornutum UTEX 640 outdoor chemostat culture could be maintained by delaying light limitation.
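
    A minimal sketch of averaging local irradiance over a tube cross-section for a parallel beam, using simple Beer-Lambert attenuation with assumed values; the paper finds that an empirical hyperbolic attenuation expression is needed instead, so this is only a geometric illustration of the volume-averaging step.

        import numpy as np

        R = 0.03                                   # tube radius, m (assumed)
        I0 = 1600.0                                # external irradiance on the tube (assumed units)
        mu = 60.0                                  # combined biomass attenuation Ka*Cb, 1/m (assumed)

        rng = np.random.default_rng(3)
        n = 200_000                                # Monte Carlo points uniform over the cross-section
        r = R * np.sqrt(rng.uniform(0.0, 1.0, n))
        theta = rng.uniform(0.0, 2.0 * np.pi, n)
        x, y = r * np.cos(theta), r * np.sin(theta)

        path = x + np.sqrt(R**2 - y**2)            # distance travelled inside the tube by a ray along +x
        average_irradiance = np.mean(I0 * np.exp(-mu * path))
        print(average_irradiance)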

  8. Estimation of fan pressure ratio requirements and operating performance for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Gloss, B. B.; Nystrom, D.

    1981-01-01

    The National Transonic Facility (NTF), a fan-driven, transonic, pressurized, cryogenic wind tunnel, will operate over the Mach number range of 0.10 to 1.20 with stagnation pressures varying from 1.00 to about 8.8 atm and stagnation temperatures varying from 77 to 340 K. The NTF is cooled to cryogenic temperatures by the injection of liquid nitrogen into the tunnel stream with gaseous nitrogen as the test gas. The NTF can also operate at ambient temperatures using a conventional chilled water heat exchanger with air or nitrogen as the test gas. The methods used in estimating the fan pressure ratio requirements are described. The estimated NTF operating envelopes at Mach numbers from 0.10 to 1.20 are presented.

  9. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...

  10. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...

  11. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...

  12. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...

  13. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...

  14. Orthodontic manpower requirements of Trinidad and Tobago.

    PubMed

    Bourne, C O

    2012-09-01

    A study was done to estimate the orthodontic manpower requirements of Trinidad and Tobago. A questionnaire was administered via e-mail to 9 of 11 orthodontists. Information from a population census, a report on the orthodontic treatment needs of children in Trinidad and Tobago and this questionnaire were used to calculate the number of orthodontists and chairside orthodontic assistants needed in Trinidad and Tobago. On average, 50 per cent of the 289 patients treated by each orthodontist in Trinidad and Tobago annually are children. Approximately 13,360 patients can be expected to demand orthodontic treatment every year in this country. The number of orthodontists and chairside assistants required to treat these patients was estimated to be 44 and 154, respectively. Currently, Trinidad and Tobago has only a quarter of the number of orthodontists and orthodontic chairside assistants required to treat the number of patients in need. As the demand is relatively high in Trinidad and Tobago and the number of orthodontists has increased slowly and inadequately for the past decade, the orthodontists are likely to remain adequately employed and happy with their jobs, unlike dentists who are currently in private practice for less than a year.

  15. [Thermal requirements and estimate of the annual number of generations of Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae) on strawberry crop].

    PubMed

    Nondillo, Aline; Redaelli, Luiza R; Botton, Marcos; Pinent, Silvia M J; Gitz, Rogério

    2008-01-01

    Frankliniella occidentalis (Pergande) is one of the major strawberry pests in southern Brazil. The insect causes russeting and withering of flowers and fruits, reducing their commercial value. In this work, the thermal requirements of the eggs, larvae and pupae of F. occidentalis were estimated. Thrips development was studied in folioles of strawberry plants at six constant temperatures (16, 19, 22, 25, 28 and 31°C) in controlled conditions (70 +/- 10% R.H. and 12:12 L:D). The number of annual generations of F. occidentalis was estimated for six strawberry production regions of Rio Grande do Sul State based on its thermal requirements. The developmental time of each F. occidentalis stage decreased as temperature increased. The best development rate was obtained when insects were reared at 25°C and 28°C. The lower threshold and the thermal requirement for the egg-to-adult stage were 9.9°C and 211.9 degree-days, respectively. Considering the thermal requirements of F. occidentalis, 10.7, 12.6, 13.1, 13.6, 16.5 and 17.9 generations/year were estimated, respectively, for Vacaria, Caxias do Sul, Farroupilha, Pelotas, Porto Alegre and Taquari producing regions located in Rio Grande do Sul State, Brazil.
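
    The generations-per-year figures follow from a standard degree-day accumulation. The sketch below uses the lower threshold (9.9°C) and egg-to-adult thermal constant (211.9 degree-days) reported in the abstract; the temperature series and function name are illustrative, not the authors' data.

    ```python
    def generations_per_year(daily_mean_temps, t_lower=9.9, thermal_constant=211.9):
        """Estimate annual generations from accumulated degree-days.
        t_lower and thermal_constant are the egg-to-adult values reported in the
        abstract; the daily mean temperature series here is illustrative."""
        degree_days = sum(max(t - t_lower, 0.0) for t in daily_mean_temps)
        return degree_days / thermal_constant

    # Hypothetical year with a constant 20 C daily mean temperature
    print(generations_per_year([20.0] * 365))   # ~17.4 generations
    ```

    A constant 20°C year accumulates about 3,687 degree-days, i.e. roughly 17 generations, which is close to the values reported for the warmest producing regions.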

  16. Using Bayes Model Averaging for Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
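
    A minimal sketch of the BMA predictive density described above: a weighted average of the ensemble members' PDFs. Gaussian kernels are used here purely for illustration (Sloughter, Gneiting and Raftery use a gamma kernel for wind speed), and the weights and spreads, which would normally be fitted by EM over a training period, are placeholders.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bma_pdf(y, member_forecasts, weights, sigmas):
        """BMA predictive density: a weighted average of the members' PDFs.
        Gaussian kernels are used for simplicity; weights and sigmas would
        normally be fitted by EM over a training period."""
        y = np.asarray(y, dtype=float)
        pdf = np.zeros_like(y)
        for f, w, s in zip(member_forecasts, weights, sigmas):
            pdf += w * norm.pdf(y, loc=f, scale=s)
        return pdf

    # Three ensemble members forecasting group wind speed (m/s), placeholder values
    grid = np.linspace(0, 20, 201)
    density = bma_pdf(grid, member_forecasts=[7.0, 8.5, 6.0],
                      weights=[0.5, 0.3, 0.2], sigmas=[1.5, 2.0, 1.8])
    print(density.sum() * (grid[1] - grid[0]))   # ~1.0, a proper predictive PDF
    ```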

  17. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the
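
    The paper's constrained Lagrange-multiplier scheme is not reproduced here, but the basic dependence of sample size on precision, confidence and measurement noise can be illustrated with the ordinary normal-approximation relation n >= (z * CV / precision)^2, as in the hypothetical sketch below.

    ```python
    from math import ceil
    from scipy.stats import norm

    def required_scans(cv, precision, confidence=0.95):
        """Simplest normal-approximation sample size for estimating a mean to a given
        relative precision: n >= (z * CV / precision)^2.  This is not the authors'
        constrained Lagrange-multiplier scheme, only an illustration of how n grows
        with tighter precision/confidence or with noisier (lower-ED) measurements."""
        z = norm.ppf(0.5 + confidence / 2.0)
        return ceil((z * cv / precision) ** 2)

    print(required_scans(cv=0.15, precision=0.05))   # ~35 scans for a 15% CV
    ```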

  18. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same

  19. A Note on Spatial Averaging and Shear Stresses Within Urban Canopies

    NASA Astrophysics Data System (ADS)

    Xie, Zheng-Tong; Fuka, Vladimir

    2018-04-01

    One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization for shear stresses (i.e. vertical momentum fluxes) including the dispersive stress and momentum sinks at these points. We used a case study with a packing density of 33% and checked rigorously the vertical variation of spatially-averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is not included in the averaging process, yields a greater time- and space-averaged total stress within the canopy and a more evident abrupt change at the top of the buildings than the comprehensive spatial average, in which the volume or area of the solid parts is included in the average.
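
    The two averages can be stated compactly: the intrinsic average divides by the fluid volume only, while the comprehensive average divides by the total volume, so the former is larger whenever the solid fraction is non-zero. The sketch below, with an invented stress field and a one-third solid mask, is a toy example under those assumptions, not the authors' simulation data.

    ```python
    import numpy as np

    def spatial_averages(field, solid_mask):
        """Intrinsic average: divide by the fluid volume only (solid cells excluded).
        Comprehensive average: divide by the total volume, counting solid cells as
        zeros.  field and solid_mask are 2-D horizontal slices (illustrative)."""
        fluid = ~solid_mask
        intrinsic = field[fluid].mean()
        comprehensive = np.where(fluid, field, 0.0).sum() / field.size
        return intrinsic, comprehensive

    # 33% packing density: one third of the plan area is occupied by buildings
    rng = np.random.default_rng(0)
    stress = rng.normal(0.1, 0.02, size=(30, 30))   # kinematic stress, illustrative
    mask = np.zeros((30, 30), dtype=bool)
    mask[:, :10] = True                             # solid (building) columns
    print(spatial_averages(stress, mask))           # intrinsic > comprehensive
    ```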

  20. 42 CFR 414.904 - Average sales price as the basis for payment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subsection (c), the term billing unit means the identifiable quantity associated with a billing and payment code, as established by CMS. (c) Single source drugs—(1) Average sales price. The average sales price... report as required by section 623(c) of the Medicare Prescription Drug, Improvement, and Modernization...

  1. 42 CFR 414.904 - Average sales price as the basis for payment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subsection (c), the term billing unit means the identifiable quantity associated with a billing and payment code, as established by CMS. (c) Single source drugs—(1) Average sales price. The average sales price... report as required by section 623(c) of the Medicare Prescription Drug, Improvement, and Modernization...

  2. Reported energy intake by weight status, day and estimated energy requirement among adults: NHANES 2003-2008

    USDA-ARS?s Scientific Manuscript database

    Objective: To describe energy intake reporting by gender, weight status, and interview sequence and to compare reported intakes to the Estimated Energy Requirement at different levels of physical activity. Methods: Energy intake was self-reported by 24-hour recall on two occasions (day 1 and day 2)...

  3. The effects of noise in cardiac diffusion tensor imaging and the benefits of averaging complex data.

    PubMed

    Scott, Andrew D; Nielles-Vallespin, Sonia; Ferreira, Pedro F; McGill, Laura-Ann; Pennell, Dudley J; Firmin, David N

    2016-05-01

    There is growing interest in cardiac diffusion tensor imaging (cDTI), but, unlike other diffusion MRI applications, there has been little investigation of the effects of noise on the parameters typically derived. One method of mitigating noise floor effects when there are multiple image averages, as in cDTI, is to average the complex rather than the magnitude data, but the phase contains contributions from bulk motion, which must be removed first. The effects of noise on the mean diffusivity (MD), fractional anisotropy (FA), helical angle (HA) and absolute secondary eigenvector angle (E2A) were simulated with various diffusion weightings (b values). The effect of averaging complex versus magnitude images was investigated. In vivo cDTI was performed in 10 healthy subjects with b = 500, 1000, 1500 and 2000 s/mm². A technique for removing the motion-induced component of the image phase present in vivo was implemented by subtracting a low-resolution copy of the phase from the original images before averaging the complex images. MD, FA, E2A and the transmural gradient in HA were compared for un-averaged, magnitude- and complex-averaged reconstructions. Simulations demonstrated an over-estimation of FA and MD at low b values and an under-estimation at high b values. The transition is relatively signal-to-noise ratio (SNR) independent and occurs at a higher b value for FA (b = 1000-1250 s/mm²) than MD (b ≈ 250 s/mm²). E2A is under-estimated at low and high b values with a transition at b ≈ 1000 s/mm², whereas the bias in HA is comparatively small. The under-estimation of FA and MD at high b values is caused by noise floor effects, which can be mitigated by averaging the complex data. Understanding the parameters of interest and the effects of noise informs the selection of the optimal b values. When complex data are available, they should be used to maximise the benefit from the acquisition of multiple averages. The combination of
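
    A minimal sketch of the complex-averaging idea described above: estimate the motion-induced phase from a smoothed (low-resolution) copy of each complex image, remove it, and only then average; magnitude averaging is shown for contrast. The smoothing width and function names are illustrative choices, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def complex_average(images, sigma=8.0):
        """Average repeated complex diffusion-weighted images after removing a
        low-resolution estimate of the motion-induced phase from each one.
        'images' is a list of 2-D complex arrays; sigma sets how smooth the phase
        estimate is (an illustrative choice, not the paper's value)."""
        corrected = []
        for img in images:
            # Low-resolution copy: smooth real and imaginary parts separately
            low_res = gaussian_filter(img.real, sigma) + 1j * gaussian_filter(img.imag, sigma)
            corrected.append(img * np.exp(-1j * np.angle(low_res)))  # remove bulk-motion phase
        return np.mean(corrected, axis=0)

    def magnitude_average(images):
        """Conventional magnitude averaging: the noise floor is rectified, not cancelled."""
        return np.mean([np.abs(img) for img in images], axis=0)
    ```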

  4. Capital requirements for the transportation of energy materials based on PIES Scenario estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gezen, A.; Kendrick, M.J.; Khan, S.S.

    In May 1978, Transportation and Economic Research Associates (TERA), Inc. completed a study in which information and methodologies were developed for the determination of capital requirements in the transportation of energy materials. This work was designed to aid EIA in the analysis of PIES solutions. The work consisted of the development of five algorithms which are used to estimate transportation-investment requirements associated with energy commodities and transportation modes. For the purpose of this analysis, TERA was provided with three PIES-solution scenarios for 1985. These are: Scenario A which assumes a high domestic economic rate of growth along with its corresponding high demand for petroleum, as well as a high domestic supply of petroleum; Scenario C which assumes a medium level of economic growth and petroleum demand and supply; and Scenario E which assumes a low level of economic growth and domestic demand and supply for petroleum. Two PIES-related outputs used in TERA's analysis are the ''COOKIE'' reports which present activity summaries by region and ''PERUSE'' printouts of solution files which give interregional flows by energy material. Only the transportation of four energy materials, crude oil, petroleum products, natural gas, and coal is considered. In estimating the capital costs of new or expanded capacity for the transportation of these materials, three transportation modes were examined: pipelines, water carriers (inland barge and deep draft vessels), and railroads. (MCW)

  5. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...

  6. Is it valid to calculate the 3-kilohertz threshold by averaging 2 and 4 kilohertz?

    PubMed

    Gurgel, Richard K; Popelka, Gerald R; Oghalai, John S; Blevins, Nikolas H; Chang, Kay W; Jackler, Robert K

    2012-07-01

    Many guidelines for reporting hearing results use the threshold at 3 kilohertz (kHz), a frequency not measured routinely. This study assessed the validity of estimating the missing 3-kHz threshold by averaging the measured thresholds at 2 and 4 kHz. The estimated threshold was compared to the measured threshold at 3 kHz individually and when used in the pure-tone average (PTA) of 0.5, 1, 2, and 3 kHz in audiometric data from 2170 patients. The difference between the estimated and measured thresholds for 3 kHz was within ± 5 dB in 72% of audiograms, ± 10 dB in 91%, and within ± 20 dB in 99% (correlation coefficient r = 0.965). The difference between the PTA threshold using the estimated threshold compared with using the measured threshold at 3 kHz was within ± 5 dB in 99% of audiograms (r = 0.997). The estimated threshold accurately approximates the measured threshold at 3 kHz, especially when incorporated into the PTA.
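
    The estimator itself is a one-line interpolation; the sketch below shows it together with the four-frequency pure-tone average used in the comparison. The audiogram values are invented for illustration.

    ```python
    def estimate_3khz(t2k, t4k):
        """Estimate the 3-kHz threshold as the mean of the 2- and 4-kHz thresholds (dB HL)."""
        return (t2k + t4k) / 2.0

    def pta(t500, t1k, t2k, t3k):
        """Four-frequency pure-tone average of 0.5, 1, 2 and 3 kHz (dB HL)."""
        return (t500 + t1k + t2k + t3k) / 4.0

    # Illustrative audiogram: 3 kHz was not measured, so it is interpolated
    t3k = estimate_3khz(t2k=30, t4k=50)      # 40 dB HL
    print(pta(20, 25, 30, t3k))              # 28.75 dB HL
    ```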

  7. Genetic Analysis of Milk Yield in First-Lactation Holstein Friesian in Ethiopia: A Lactation Average vs Random Regression Test-Day Model Analysis

    PubMed Central

    Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.

    2015-01-01

    The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Co-variance components were estimated using the average information restricted maximum likelihood method under single trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM whilst estimates from the RRM model ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had higher standard deviation compared to LAM indicating that the TD model makes efficient utilization of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different group of sires and cows and marked re-rankings were observed in top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217

  8. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  9. In-situ and path-averaged measurements of aerosol optical properties

    NASA Astrophysics Data System (ADS)

    van Binsbergen, Sven A.; Grossmann, Peter; February, Faith J.; Cohen, Leo H.; van Eijk, Alexander M. J.; Stein, Karin U.

    2017-09-01

    This paper compares in-situ and path-averaged measurements of the electro-optical transmission, with emphasis on aerosol effects. The in-situ sensors consisted of optical particle counters (OPC), the path-averaged data was provided by a 7-wavelength transmissometer (MSRT) and scintillometers (BLS). Data were collected at two sites: a homogeneous test site in Northern Germany, and over the inhomogeneous False Bay near Cape Town, South Africa. A retrieval algorithm was developed to infer characteristics of the aerosol size distribution (Junge approximation) from the MSRT data. A comparison of the various sensors suggests that the optical particle counters are over-optimistic in their estimate of the transmission. For the homogeneous test site, in-situ and path-averaged sensors yield similar results. For the inhomogeneous test site, sensors may react differently or temporally separated to meteorological events such as a change in wind speed and/or direction.

  10. Estimation of Fat-free Mass at Discharge in Preterm Infants Fed With Optimized Feeding Regimen.

    PubMed

    Larcade, Julie; Pradat, Pierre; Buffin, Rachel; Leick-Courtois, Charline; Jourdes, Emilie; Picaud, Jean-Charles

    2017-01-01

    The purpose of the present study was to validate a previously calculated equation (E1) that estimates infant fat-free mass (FFM) at discharge using data from a population of preterm infants receiving an optimized feeding regimen. Preterm infants born before 33 weeks of gestation between April 2014 and November 2015 in the tertiary care unit of Croix-Rousse Hospital in Lyon, France, were included in the study. At discharge, FFM was assessed by air displacement plethysmography (PEA POD) and was compared with FFM estimated by E1. FFM was estimated using a multiple linear regression model. Data on 155 preterm infants were collected. There was a strong correlation between the FFM estimated by E1 and FFM assessed by the PEA POD (r = 0.939). E1, however, underestimated the FFM (average difference: -197 g), and this underestimation increased as FFM increased. A new, more predictive equation is proposed (r = 0.950, average difference: -12 g). Although previous estimation methods were useful for estimating FFM at discharge, an equation adapted to present populations of preterm infants with "modern" neonatal care and nutritional practices is required for accuracy.

  11. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  12. Aerosol optical properties inferred from in-situ and path-averaged measurements

    NASA Astrophysics Data System (ADS)

    van Binsbergen, Sven A.; Grossmann, Peter; Cohen, Leo H.; van Eijk, Alexander M. J.; Stein, Karin U.

    2017-09-01

    This paper compares in-situ and path-averaged measurements of the electro-optical transmission, with emphasis on aerosol effects. The in-situ sensors consisted of optical particle counters (OPC) and a visibility meter, the path-averaged data was provided by a 7-wavelength transmissometer (MSRT) and a scintillometer (BLS). Data was collected at a test site in Northern Germany. A retrieval algorithm was developed to infer characteristics of the aerosol size distribution (Junge approximation) from the MSRT data. A comparison of the various sensors suggests that the optical particle counters are over-optimistic in their estimate of the transmission.

  13. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S. Mexico border and Central Asia and compared to estimates of irrigation water used.

  14. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
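
    A toy version of the most basic clustering approximation described above: each cluster's microstates are Boltzmann-weighted exactly, inter-cluster interactions are ignored, and an additive property (such as total charge) is summed over clusters. The two-state clusters and energies below are invented for illustration.

    ```python
    import math

    def cluster_average(energies_by_cluster, beta=1.0):
        """Boltzmann average of an additive property computed cluster by cluster:
        interactions inside each cluster are treated exactly, interactions between
        clusters are ignored, so the sum over all microstates factorizes into small
        per-cluster sums.  Each cluster is a list of (energy, property) microstates."""
        total = 0.0
        for states in energies_by_cluster:
            z = sum(math.exp(-beta * e) for e, _ in states)
            total += sum(p * math.exp(-beta * e) for e, p in states) / z
        return total

    # Two illustrative two-state clusters (e.g. deprotonated/protonated sites)
    clusters = [[(0.0, 0.0), (1.2, 1.0)], [(0.0, 0.0), (-0.5, 1.0)]]
    print(cluster_average(clusters, beta=1.0))   # average total charge, arbitrary units
    ```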

  15. A Geomagnetic Estimate of Mean Paleointensity

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte

    2004-01-01

    To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values ⟨R(n)⟩ = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of Earth's core. (This is compatible with a usually mainly geocentric axial dipolar field.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity ⟨F^2⟩. The sum also estimates ⟨F^2⟩ averaged over geologic time, in so far as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes.
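
    The quoted expectation spectrum can be summed directly. The sketch below implements ⟨R(n)⟩ = K[(n+1/2)/(n(n+1))](c/a)^(2n+4) and sums it through degree 12 to give ⟨F^2⟩; the amplitude K shown is a placeholder, whereas in the paper it is fitted to the observed spectrum.

    ```python
    import numpy as np

    def expected_spectrum(n, K, c=3480.0, a=6371.2):
        """Expectation value <R(n)> = K * [(n + 1/2) / (n (n + 1))] * (c/a)**(2n + 4)."""
        return K * ((n + 0.5) / (n * (n + 1.0))) * (c / a) ** (2 * n + 4)

    def expected_square_intensity(K, n_max=12):
        """Sum the calibrated spectrum through degree n_max to estimate <F^2>."""
        n = np.arange(1, n_max + 1)
        return expected_spectrum(n, K).sum()

    # K is a placeholder here; in the paper it is fitted to the observed R(n) spectrum
    print(np.sqrt(expected_square_intensity(K=1.0e10)))   # rms intensity, illustrative only
    ```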

  16. A Geomagnetic Estimate of Mean Paleointensity

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    2004-01-01

    To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate used the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values ⟨R(n)⟩ = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of the Earth's core. (This is compatible with a usually mainly geocentric axial dipolar field.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity ⟨F^2⟩. The sum also estimates ⟨F^2⟩ averaged over geologic time, in so far as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.

  17. Thermally induced distortion of a high-average-power laser system by an optical transport system

    NASA Astrophysics Data System (ADS)

    Chow, Robert; Ault, Linda E.; Taylor, John R.; Jedlovec, Don

    1999-11-01

    The atomic vapor laser isotope separation process uses high-average-power lasers that have the commercial potential to enrich uranium for the electric power utilities. The transport of the laser beam through the laser system to the separation chambers requires high-performance optical components, most of which have either fused silica or Zerodur as the substrate material. One of the requirements of the optical components is to preserve the wavefront quality of the laser beam that propagates over long distances. Full-aperture tests with the high-power process lasers and finite element analysis (FEA) have been performed on the transport optics. The wavefront distortions of the various sections of the transport path were measured with diagnostic Hartmann sensor packages. The FEA results were derived from an in-house thermal-structural-optical code which is linked to the commercially available CodeV program. In comparing the measured and predicted results, the bulk absorptance of fused silica was estimated to be about 50 ppm/cm in the visible wavelength regime. Wavefront distortions will be reported on optics made from fused silica and Zerodur substrate materials.

  18. The never ending road: improving, adapting and refining a needs-based model to estimate future general practitioner requirements in two Australian states.

    PubMed

    Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan

    2018-03-27

    Health workforce planning models have been developed to estimate the future health workforce requirements for the population they serve and have been used to inform policy decisions. The aim of this study was to adapt and further develop a need-based GP workforce simulation model to incorporate the current and estimated geographic distribution of patients and GPs. A need-based simulation model that estimates the supply of GPs and levels of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the differences in the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards with a shortage of 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences to its structure that allows within and cross jurisdictional comparisons of workforce estimations. It also provides greater insights into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.

  19. Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
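
    For reference, the Reynolds decomposition and the resulting averaged momentum equation are shown below in their standard incompressible, constant-property textbook form (the report's own notation may differ). The last term, the Reynolds stress, introduces more unknowns than equations, which is the closure problem discussed in the abstract.

    ```latex
    % Standard Reynolds decomposition and averaged (RANS) momentum equation;
    % textbook form, shown only to indicate where the closure problem arises.
    \begin{align*}
      u_i &= \bar{u}_i + u_i', \qquad \overline{u_i'} = 0 \\
      \frac{\partial \bar{u}_i}{\partial t}
        + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
      &= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
        + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j\,\partial x_j}
        - \frac{\partial \overline{u_i'\,u_j'}}{\partial x_j}
    \end{align*}
    ```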

  20. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the 134Cs/137Cs ratio method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors is estimated using the 134Cs/137Cs ratio method for measured radioactivities of 134Cs and 137Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured 134Cs/137Cs ratio from the contaminated soil is 0.996 ± 0.07 as of March 11, 2011. Based on the 134Cs/137Cs ratio method, the estimated burnup of the damaged fuels is approximately 17.2 ± 1.5 GWd/tHM. It is noted that the various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated 134Cs/137Cs ratios when the same evaluated nuclear data library (ENDF-B/VII.0) is used. The void fraction assumed in the depletion calculation has a major impact on the 134Cs/137Cs ratio compared with the differences between JENDL-4.0 and ENDF-B/VII.0. (authors)
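
    The method reduces to decay-correcting the measured activities to a common reference date and reading the burnup off a depletion-code curve of the 134Cs/137Cs ratio versus burnup. The sketch below uses nominal half-lives and a purely invented ratio-versus-burnup table; it is not the authors' calculation.

    ```python
    import numpy as np

    HALF_LIFE_CS134 = 2.065    # years (nominal)
    HALF_LIFE_CS137 = 30.07    # years (nominal)

    def decay_correct(activity, half_life_years, years_since_reference):
        """Correct a measured activity back to the reference date (e.g. 2011-03-11)."""
        lam = np.log(2.0) / half_life_years
        return activity * np.exp(lam * years_since_reference)

    def burnup_from_cs_ratio(ratio, ratio_curve, burnup_curve):
        """Interpolate burnup (GWd/tHM) from a depletion-code curve of the
        134Cs/137Cs activity ratio versus burnup.  The curve used below is purely
        illustrative; the paper uses SRAC2006/PIJ, SCALE6.0/TRITON and MVP-BURN."""
        return np.interp(ratio, ratio_curve, burnup_curve)

    # Hypothetical (not published) ratio-vs-burnup table and soil measurement
    ratio_curve  = np.array([0.3, 0.6, 0.9, 1.2, 1.5])
    burnup_curve = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
    measured = decay_correct(0.95, HALF_LIFE_CS134, 0.2) / decay_correct(1.0, HALF_LIFE_CS137, 0.2)
    print(burnup_from_cs_ratio(measured, ratio_curve, burnup_curve))
    ```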

  1. Pediatric chest and abdominopelvic CT: organ dose estimation based on 42 patient models.

    PubMed

    Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Paulson, Erik K; Frush, Donald P; Samei, Ehsan

    2014-02-01

    To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. The institutional review board approved this HIPAA-compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0-16 years; weight range, 2-80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDIvol). The relationships between CTDIvol-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. For organs within the image coverage, CTDIvol-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R² > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%-32%) mainly because of the effect of overranging. It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDIvol. These CTDIvol-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles across patient populations within a practice.
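
    Given the exponential fit reported above, a patient-specific organ dose estimate needs only the scan's CTDIvol and the patient's average diameter. The fit constants in the sketch below are placeholders, not values from the paper.

    ```python
    import numpy as np

    def organ_dose(ctdi_vol, diameter_cm, a, b):
        """Organ dose estimated from the scan's CTDIvol and an exponential fit of the
        CTDIvol-normalized organ dose coefficient versus average patient diameter:
            h(d) = a * exp(-b * d),   dose = CTDIvol * h(d)
        The fit constants a and b below are placeholders, not the paper's values."""
        return ctdi_vol * a * np.exp(-b * diameter_cm)

    # Illustrative: 5 mGy CTDIvol chest scan, 18 cm average patient diameter
    print(organ_dose(ctdi_vol=5.0, diameter_cm=18.0, a=2.2, b=0.04))   # mGy
    ```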

  2. Pediatric Chest and Abdominopelvic CT: Organ Dose Estimation Based on 42 Patient Models

    PubMed Central

    Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Paulson, Erik K.; Frush, Donald P.

    2014-01-01

    Purpose To estimate organ dose from pediatric chest and abdominopelvic computed tomography (CT) examinations and evaluate the dependency of organ dose coefficients on patient size and CT scanner models. Materials and Methods The institutional review board approved this HIPAA–compliant study and did not require informed patient consent. A validated Monte Carlo program was used to perform simulations in 42 pediatric patient models (age range, 0–16 years; weight range, 2–80 kg; 24 boys, 18 girls). Multidetector CT scanners were modeled on those from two commercial manufacturers (LightSpeed VCT, GE Healthcare, Waukesha, Wis; SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Organ doses were estimated for each patient model for routine chest and abdominopelvic examinations and were normalized by volume CT dose index (CTDIvol). The relationships between CTDIvol-normalized organ dose coefficients and average patient diameters were evaluated across scanner models. Results For organs within the image coverage, CTDIvol-normalized organ dose coefficients largely showed a strong exponential relationship with the average patient diameter (R2 > 0.9). The average percentage differences between the two scanner models were generally within 10%. For distributed organs and organs on the periphery of or outside the image coverage, the differences were generally larger (average, 3%–32%) mainly because of the effect of overranging. Conclusion It is feasible to estimate patient-specific organ dose for a given examination with the knowledge of patient size and the CTDIvol. These CTDIvol-normalized organ dose coefficients enable one to readily estimate patient-specific organ dose for pediatric patients in clinical settings. This dose information, and, as appropriate, attendant risk estimations, can provide more substantive information for the individual patient for both clinical and research applications and can yield more expansive information on dose profiles

  3. Estimation of crop water requirements using remote sensing for operational water resources management

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Spiliotopoulos, Marios; Tzabiras, John; Loukas, Athanasios; Mylopoulos, Nikitas

    2015-06-01

    An integrated modeling system, developed in the framework of "Hydromentor" research project, is applied to evaluate crop water requirements for operational water resources management at Lake Karla watershed, Greece. The framework includes coupled components for operation of hydrotechnical projects (reservoir operation and irrigation works) and estimation of agricultural water demands at several spatial scales using remote sensing. The study area was sub-divided into irrigation zones based on land use maps derived from Landsat 5 TM images for the year 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used to derive actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat TM imagery. Agricultural water needs were estimated using the FAO method for each zone and each control node of the system for a number of water resources management strategies. Two operational strategies of hydro-technical project development (present situation without operation of the reservoir and future situation with the operation of the reservoir) are coupled with three water demand strategies. In total, eight (8) water management strategies are evaluated and compared. The results show that, under the existing operational water resources management strategies, the crop water requirements are quite large. However, the operation of the proposed hydro-technical projects in Lake Karla watershed coupled with water demand management measures, like improvement of existing water distribution systems, change of irrigation methods, and changes of crop cultivation could alleviate the problem and lead to sustainable and ecological use of water resources in the study area.
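
    At its core, the per-zone water demand is a crop-coefficient calculation: the METRIC-derived ETrF scales the reference ET, and the shortfall after effective precipitation is the net irrigation requirement. The sketch below shows this bookkeeping with invented daily values; it is not the project's full FAO implementation.

    ```python
    def net_irrigation_requirement(etrf, etr, effective_precip):
        """Daily crop water requirement from a METRIC-style crop coefficient:
            ETc = ETrF * ETr,   net irrigation = max(ETc - effective precipitation, 0)
        Inputs are daily series in mm; the values used below are illustrative only."""
        return [max(kc * et0 - p, 0.0) for kc, et0, p in zip(etrf, etr, effective_precip)]

    # Three illustrative days for one irrigation zone
    print(net_irrigation_requirement(etrf=[0.4, 0.8, 1.05],
                                     etr=[6.0, 7.5, 8.0],
                                     effective_precip=[0.0, 2.0, 0.0]))
    # -> [2.4, 4.0, 8.4] mm of irrigation water
    ```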

  4. Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA.

    PubMed

    Wagner, Todd H; Chen, Shuo; Barnett, Paul G

    2003-09-01

    The U.S. Department of Veterans Affairs (VA) maintains discharge abstracts, but these do not include cost information. This article describes the methods the authors used to estimate the costs of VA medical-surgical hospitalizations in fiscal years 1998 to 2000. They estimated a cost regression with 1996 Medicare data restricted to veterans receiving VA care in an earlier year. The regression accounted for approximately 74 percent of the variance in cost-adjusted charges, and it proved to be robust to outliers and the year of input data. The beta coefficients from the cost regression were used to impute costs of VA medical-surgical hospital discharges. The estimated aggregate costs were reconciled with VA budget allocations. In addition to the direct medical costs, their cost estimates include indirect costs and physician services; both of these were allocated in proportion to direct costs. They discuss the method's limitations and application in other health care systems.

  5. Learning More from Educational Intervention Studies: Estimating Complier Average Causal Effects in a Relevance Intervention

    ERIC Educational Resources Information Center

    Nagengast, Benjamin; Brisson, Brigitte M.; Hulleman, Chris S.; Gaspard, Hanna; Häfner, Isabelle; Trautwein, Ulrich

    2018-01-01

    An emerging literature demonstrates that relevance interventions, which ask students to produce written reflections on how what they are learning relates to their lives, improve student learning outcomes. As part of a randomized evaluation of a relevance intervention (N = 1,978 students from 82 ninth-grade classes), we used Complier Average Causal…
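
    Where the record is cut off, the quantity being estimated is the Complier Average Causal Effect. The textbook Wald (instrumental-variable) estimator below, the intention-to-treat effect divided by the compliance rate, illustrates the idea; the study itself uses a more elaborate model, and the numbers are invented.

    ```python
    def cace_wald(mean_treated_arm, mean_control_arm, compliance_rate):
        """Wald (instrumental-variable) estimator of the Complier Average Causal Effect:
        the intention-to-treat effect divided by the compliance rate.  This is the
        textbook estimator, not the model used in the study."""
        itt_effect = mean_treated_arm - mean_control_arm
        return itt_effect / compliance_rate

    # Illustrative numbers: outcome means by randomized arm and 70% task compliance
    print(cace_wald(mean_treated_arm=0.35, mean_control_arm=0.20, compliance_rate=0.7))
    # -> an ITT effect of 0.15 scales to a CACE of ~0.214 for compliers
    ```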

  6. Robust Speech Enhancement Using Two-Stage Filtered Minima Controlled Recursive Averaging

    NASA Astrophysics Data System (ADS)

    Ghourchian, Negar; Selouani, Sid-Ahmed; O'Shaughnessy, Douglas

    In this paper we propose an algorithm for estimating noise in highly non-stationary noisy environments, which is a challenging problem in speech enhancement. This method is based on minima-controlled recursive averaging (MCRA) whereby an accurate, robust and efficient noise power spectrum estimation is demonstrated. We propose a two-stage technique to prevent the appearance of musical noise after enhancement. This algorithm filters the noisy speech to achieve a robust signal with minimum distortion in the first stage. Subsequently, it estimates the residual noise using MCRA and removes it with spectral subtraction. The proposed Filtered MCRA (FMCRA) performance is evaluated using objective tests on the Aurora database under various noisy environments. These measures indicate the higher output SNR and lower output residual noise and distortion.
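
    The recursive-averaging core of MCRA-style noise estimation can be written in a few lines: the noise spectrum is updated only to the extent that speech is judged absent in each frequency bin. In the sketch below the speech-presence probability is supplied by the caller as a placeholder; in MCRA/FMCRA it comes from tracking spectral minima.

    ```python
    import numpy as np

    def recursive_noise_update(noise_psd, frame_power, speech_prob, alpha=0.95):
        """One recursive-averaging step per frequency bin:
            alpha_tilde = alpha + (1 - alpha) * p_speech
            sigma_n^2  <- alpha_tilde * sigma_n^2 + (1 - alpha_tilde) * |Y|^2
        so the noise estimate is effectively frozen where speech is likely present.
        The speech-presence probability would come from minima tracking in MCRA;
        here it is a caller-supplied placeholder."""
        alpha_tilde = alpha + (1.0 - alpha) * speech_prob
        return alpha_tilde * noise_psd + (1.0 - alpha_tilde) * frame_power

    # Illustrative single frame: 8 bins, speech likely present in the middle bins
    noise = np.full(8, 0.01)
    frame = np.array([0.012, 0.011, 0.4, 0.9, 0.8, 0.3, 0.013, 0.01])
    p_speech = np.array([0.0, 0.0, 0.9, 1.0, 1.0, 0.8, 0.0, 0.0])
    print(recursive_noise_update(noise, frame, p_speech))
    ```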

  7. Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.

    PubMed

    Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S

    2010-03-01

    This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  8. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict
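
    The headline prey numbers come from a simple energy budget: population size times per-whale energetic requirement divided by the energy content of a Chinook salmon. The sketch below uses invented energetic values solely to show the arithmetic, including the cited 75% increase under the recovery goal.

    ```python
    def chinook_required(population, daily_energy_kcal, salmon_energy_kcal, days=365):
        """Back-of-envelope count of Chinook salmon needed per year by a killer whale
        population: (whales * daily energetic requirement * days) / energy per fish.
        The energetic values used below are placeholders, not the paper's estimates."""
        return population * daily_energy_kcal * days / salmon_energy_kcal

    # Illustrative: 87 whales, ~200,000 kcal/whale/day, ~16,000 kcal per Chinook
    baseline = chinook_required(87, 200_000, 16_000)
    recovery = baseline * 1.75          # the cited 75% increase in energetic needs
    print(round(baseline), round(recovery))
    ```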

  9. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  10. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are needed. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
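
    The pilot-guided estimate described above can be written directly from the abstract: the amplitude is the mean inner product with the known ASM, the noise variance is the mean squared sample minus the squared amplitude, and the combining ratio is their quotient. The frame length, amplitude and noise level in the sketch are illustrative.

    ```python
    import numpy as np

    def pilot_guided_combining_ratio(received, asm):
        """Pilot-guided estimate described in the abstract: the signal amplitude is the
        mean inner product of the received samples with the known ASM symbols (+/-1),
        the noise variance is the mean squared sample minus the squared amplitude,
        and the combining ratio is their quotient."""
        received = np.asarray(received, dtype=float)
        asm = np.asarray(asm, dtype=float)
        amplitude = np.mean(received * asm)
        variance = np.mean(received ** 2) - amplitude ** 2
        return amplitude / variance

    # Illustrative AWGN frame: BPSK ASM with amplitude 1.0 and noise sigma 0.8
    rng = np.random.default_rng(1)
    asm = rng.choice([-1.0, 1.0], size=256)
    rx = 1.0 * asm + rng.normal(0.0, 0.8, size=256)
    print(pilot_guided_combining_ratio(rx, asm))   # ~ 1.0 / 0.64 ~ 1.6
    ```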

  11. Average Emissivity Curve of BATSE Gamma-Ray Bursts with Different Intensities

    NASA Technical Reports Server (NTRS)

    Mitrofanov, Igor G.; Anfimov, Dimitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, W. S.; Pendleton, Geoffrey N.; Preece, Robert D.

    1998-01-01

    Six intensity groups with ~150 BATSE gamma-ray bursts each are compared using average emissivity curves. Time-stretch factors for each of the dimmer groups are estimated with respect to the brightest group, which serves as the reference, taking into account the systematics of counts-produced noise effects and choice statistics. The effect of stretching/intensity anti-correlation is found at the average back slopes of bursts with good statistical significance. A stretch factor of ~2 is found between the 150 dimmest bursts, with peak flux < 0.45 ph cm^-2 s^-1, and the 147 brightest bursts, with peak flux > 4.1 ph cm^-2 s^-1. On the other hand, only a marginally significant stretching effect of about 1.4 is seen at the average rise fronts.

  12. Estimating Depolarization with the Jones Matrix Quality Factor

    NASA Astrophysics Data System (ADS)

    Hilfiker, James N.; Hale, Jeffrey S.; Herzinger, Craig M.; Tiwald, Tom; Hong, Nina; Schöche, Stefan; Arwin, Hans

    2017-11-01

    Mueller matrix (MM) measurements offer the ability to quantify the depolarization capability of a sample. Depolarization can be estimated using terms such as the depolarization index or the average degree of polarization. However, these calculations require measurement of the complete MM. We propose an alternate depolarization metric, termed the Jones matrix quality factor, QJM, which does not require the complete MM. This metric provides a measure of how close, in a least-squares sense, a Jones matrix can be found to the measured Mueller matrix. We demonstrate and compare the use of QJM to other traditional calculations of depolarization for both isotropic and anisotropic depolarizing samples; including non-uniform coatings, anisotropic crystal substrates, and beetle cuticles that exhibit both depolarization and circular diattenuation.

  13. Thermally induced distortion of high average power laser system by an optical transport system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ault, L; Chow, R; Taylor, Jedlovec, D

    1999-03-31

    The atomic vapor laser isotope separation process uses high-average-power lasers that have the commercial potential to enrich uranium for the electric power utilities. The transport of the laser beam through the laser system to the separation chambers requires high-performance optical components, most of which have either fused silica or Zerodur as the substrate material. One of the requirements of the optical components is to preserve the wavefront quality of the laser beam that propagates over long distances. Full-aperture tests with the high-power process lasers and finite element analysis (FEA) have been performed on the transport optics. The wavefront distortions of the various sections of the transport path were measured with diagnostic Hartmann sensor packages. The FEA results were derived from an in-house thermal-structural-optical code which is linked to the commercially available CodeV program. In comparing the measured and predicted results, the bulk absorptance of fused silica was estimated to be about 50 ppm/cm in the visible wavelength regime. Wavefront distortions are reported on optics made from fused silica and Zerodur substrate materials.

  14. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  15. Robust w-Estimators for Cryo-EM Class Means.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2016-02-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
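
    The exact w-estimator is defined in the paper; as a rough illustration of the general idea of a threshold-free, smoothly down-weighted average of aligned images, the following sketch uses generic Cauchy-type weights (names and the weight function are assumptions, not the authors' choices):

```python
import numpy as np

def robust_image_mean(images, n_iter=20, c=2.0):
    """Threshold-free robust average of aligned images (rows = flattened images).

    Illustrative iteratively reweighted mean: each image's weight shrinks
    smoothly with its residual distance to the current mean, so outliers are
    down-weighted rather than hard-rejected.
    """
    images = np.asarray(images, dtype=float)
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm(images - mean, axis=1)    # per-image residual norm
        scale = np.median(resid) + 1e-12                 # robust scale estimate
        w = 1.0 / (1.0 + (resid / (c * scale)) ** 2)     # smooth down-weighting
        mean = (w[:, None] * images).sum(axis=0) / w.sum()
    return mean

# toy usage: 90 consistent "particles" plus 10 outliers
rng = np.random.default_rng(1)
signal = rng.normal(size=64)
good = signal + 0.3 * rng.normal(size=(90, 64))
bad = 5.0 * rng.normal(size=(10, 64))
print(np.linalg.norm(robust_image_mean(np.vstack([good, bad])) - signal))
```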

  16. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning constrains the possible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, because the level of xenon poisoning is limited. This raises the problem of choosing an appropriate segment of the time axis so that the optimal control problem remains consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates are plotted as functions of the xenon limitation. The boundaries of the interval of averaging are thereby defined more precisely.

  17. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion that arises from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  18. Stone Attenuation Values Measured by Average Hounsfield Units and Stone Volume as Predictors of Total Laser Energy Required During Ureteroscopic Lithotripsy Using Holmium:Yttrium-Aluminum-Garnet Lasers.

    PubMed

    Ofude, Mitsuo; Shima, Takashi; Yotsuyanagi, Satoshi; Ikeda, Daisuke

    2017-04-01

    To evaluate the predictors of the total laser energy (TLE) required during ureteroscopic lithotripsy (URS) using the holmium:yttrium-aluminum-garnet (Ho:YAG) laser for a single ureteral stone. We retrospectively analyzed the data of 93 URS procedures performed for a single ureteral stone in our institution from November 2011 to September 2015. We evaluated the association between TLE and preoperative clinical data, such as age, sex, body mass index, and noncontrast computed tomographic findings, including stone laterality, location, maximum diameter, volume, stone attenuation values measured using average Hounsfield units (HUs), and presence of secondary signs (severe hydronephrosis, tissue rim sign, and perinephric stranding). The mean maximum stone diameter, volume, and average HUs were 9.2 ± 3.8 mm, 283.2 ± 341.4 mm³, and 863 ± 297, respectively. The mean TLE and operative time were 2.93 ± 3.27 kJ and 59.1 ± 28.1 minutes, respectively. Maximum stone diameter, volume, average HUs, severe hydronephrosis, and tissue rim sign were significantly correlated with TLE (Spearman's rho analysis). Stepwise multiple linear regression analysis defining stone volume, average HUs, severe hydronephrosis, and tissue rim sign as explanatory variables showed that stone volume and average HUs were significant predictors of TLE (standardized coefficients of 0.565 and 0.320, respectively; adjusted R² = 0.55, F = 54.7, P < .001). Stone attenuation values measured by average HUs and stone volume were strong predictors of TLE during URS using Ho:YAG laser procedures. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
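
    As a rough sketch of the fusion idea, assuming per-frame measurement variances have already been derived from each algorithm's quality measures, a scalar random-walk Kalman filter can combine the individual F0 tracks (this is an illustrative simplification, not the paper's exact adaptive scheme):

```python
import numpy as np

def fuse_f0_tracks(estimates, meas_vars, q=4.0):
    """Fuse per-frame F0 estimates from several algorithms with a scalar
    random-walk Kalman filter.

    estimates : (n_frames, n_algorithms) F0 estimates in Hz
    meas_vars : (n_frames, n_algorithms) per-estimate variances (assumed given,
                e.g. derived from quality/performance measures)
    q         : process-noise variance controlling how fast F0 may drift
    """
    n_frames, _ = estimates.shape
    x, p = estimates[0].mean(), 100.0            # initial state and its variance
    fused = np.empty(n_frames)
    for t in range(n_frames):
        # combine the algorithms into one measurement by inverse-variance weighting
        w = 1.0 / meas_vars[t]
        z = np.sum(w * estimates[t]) / np.sum(w)
        r = 1.0 / np.sum(w)                      # variance of the combined measurement
        p = p + q                                # predict (random walk)
        k = p / (p + r)                          # Kalman gain
        x = x + k * (z - x)                      # update
        p = (1.0 - k) * p
        fused[t] = x
    return fused

# toy usage: three noisy trackers of a 120 -> 130 Hz glide
rng = np.random.default_rng(2)
truth = np.linspace(120.0, 130.0, 200)
est = truth[:, None] + rng.normal(scale=[1.0, 2.0, 4.0], size=(200, 3))
var = np.tile(np.array([1.0, 4.0, 16.0]), (200, 1))
print(np.abs(fuse_f0_tracks(est, var) - truth).mean())
```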

  20. Estimating Oxygen Needs for Childhood Pneumonia in Developing Country Health Systems: A New Model for Expecting the Unexpected

    PubMed Central

    Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling

    2014-01-01

    Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely
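
    A toy sketch of such a simulation, with the five demand factors as parameters (all names and default values are illustrative, not the published model):

```python
import numpy as np

def simulate_oxygen_demand(admissions_per_year=500, hypox_prev=0.15,
                           seasonality=0.5, mean_los_days=3.0,
                           flow_lpm=1.0, years=5, seed=0):
    """Toy hourly simulation of oxygen demand for childhood pneumonia.

    Admissions follow a Poisson process whose rate is modulated sinusoidally by
    `seasonality`; a fraction `hypox_prev` of admissions need oxygen at
    `flow_lpm` litres/minute for an exponentially distributed stay.
    """
    rng = np.random.default_rng(seed)
    hours = int(years * 365 * 24)
    t = np.arange(hours)
    rate = (admissions_per_year / (365.0 * 24.0)
            * (1.0 + seasonality * np.sin(2 * np.pi * (t / 24.0) / 365.0)))
    admissions = rng.poisson(rate)
    load = np.zeros(hours)                           # concurrent patients on oxygen
    for start in np.repeat(t, admissions):
        if rng.random() < hypox_prev:
            stay = max(1, int(rng.exponential(mean_los_days) * 24))
            load[start:start + stay] += 1
    demand_lph = load * flow_lpm * 60.0              # litres per hour
    avg = demand_lph.mean()
    return {"peak_patients": int(load.max()),
            "fraction_of_hours_above_average": float((demand_lph > avg).mean()),
            "average_lph": avg}

print(simulate_oxygen_demand())
```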

  1. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Department of Defense to rely upon information produced by the system that is needed for management purposes... management systems; and (4) Is subject to applicable financial control systems. Estimating system means the... estimates of costs and other data included in proposals submitted to customers in the expectation of...

  2. Preliminary evaluation of spectral, normal and meteorological crop stage estimation approaches

    NASA Technical Reports Server (NTRS)

    Cate, R. B.; Artley, J. A.; Doraiswamy, P. C.; Hodges, T.; Kinsler, M. C.; Phinney, D. E.; Sestak, M. L. (Principal Investigator)

    1980-01-01

    Several of the projects in the AgRISTARS program require crop phenology information, including classification, acreage and yield estimation, and detection of episodal events. This study evaluates several crop calendar estimation techniques for their potential use in the program. The techniques, although generic in approach, were developed and tested on spring wheat data collected in 1978. There are three basic approaches to crop stage estimation: historical averages for an area (normal crop calendars), agrometeorological modeling of known crop-weather relationships (agrometeorological, or agromet, crop calendars), and interpretation of spectral signatures (spectral crop calendars). In all, 10 combinations of planting and biostage estimation models were evaluated. Dates of stage occurrence are estimated with biases between -4 and +4 days, while root mean square errors range from 10 to 15 days. Results are inconclusive as to the superiority of any of the models, and further evaluation of the models with the 1979 data set is recommended.

  3. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. One way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). When the efficiency of irrigation water use is high, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change of soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When the efficiency of irrigation water use is low, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia, during the vegetation period of 2012 from April till September. All operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web

  4. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data extends down to 0.940, and that model averaging can tighten the credible upper limit, depending on prior assumptions.
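
    A minimal sketch of Bayesian model averaging over posterior samples, assuming equal prior model probabilities and log-evidences from nested sampling (all numbers below are toy values):

```python
import numpy as np

def model_averaged_samples(posterior_samples, log_evidences, n_draw=20000, seed=0):
    """Draw samples from the Bayesian-model-averaged posterior of one parameter.

    posterior_samples : list of 1-D arrays of posterior samples (one per model)
    log_evidences     : log model evidences, e.g. from nested sampling
    Equal prior model probabilities are assumed.
    """
    rng = np.random.default_rng(seed)
    log_w = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                   # posterior model probabilities
    counts = rng.multinomial(n_draw, w)            # draws allocated to each model
    return np.concatenate([rng.choice(s, size=n)
                           for s, n in zip(posterior_samples, counts)])

# toy example: two models that disagree slightly about the spectral index
rng = np.random.default_rng(3)
m1 = rng.normal(0.96, 0.01, 5000)                  # e.g. a simple power-law model
m2 = rng.normal(0.97, 0.02, 5000)                  # e.g. a more complex alternative
mixed = model_averaged_samples([m1, m2], log_evidences=[0.0, -1.5])
print(np.percentile(mixed, [2.5, 97.5]))           # model-averaged 95% credible interval
```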

  5. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724

  6. Estimating cull in northern hardwoods

    Treesearch

    W.M. Zillgitt; S.R. Gevorkiantz

    1946-01-01

    Cull in northern hardwood stands is often very heavy and is difficult to estimate. To help clarify this situation and aid the average cruiser to become more accurate in his estimates, the study reported here should prove very helpful.

  7. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  8. Generating large-scale estimates from sparse, in-situ networks: multi-scale soil moisture modeling at ARS watersheds for NASA’s soil moisture active passive (SMAP) calibration/validation mission

    USDA-ARS?s Scientific Manuscript database

    NASA’s SMAP satellite, launched in November of 2014, produces estimates of average volumetric soil moisture at 3, 9, and 36-kilometer scales. The calibration and validation process of these estimates requires the generation of an identically-scaled soil moisture product from existing in-situ networ...

  9. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
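
    A small illustrative sketch of the average-of-delta calculation, assuming results arrive as a flat table of patient identifier, time, and value (layout, column names, and control parameters are assumptions):

```python
import numpy as np
import pandas as pd

def average_of_delta(results, window=10):
    """Compute a rolling 'average of delta' quality-control series.

    results : DataFrame with columns ['patient_id', 'timestamp', 'value'].
    Returns the chronological series of deltas (current minus previous result
    for the same patient) and its rolling mean over `window` deltas; a drift
    of the rolling mean away from zero suggests a newly introduced assay bias.
    """
    df = results.sort_values(["patient_id", "timestamp"]).copy()
    df["delta"] = df.groupby("patient_id")["value"].diff()
    deltas = df.dropna(subset=["delta"]).sort_values("timestamp")["delta"]
    return deltas, deltas.rolling(window).mean()

# toy usage: a 1.0-unit bias appears halfway through the run
rng = np.random.default_rng(4)
n = 400
data = pd.DataFrame({
    "patient_id": rng.integers(0, 80, n),
    "timestamp": np.arange(n),
    "value": rng.normal(5.0, 0.5, n) + np.where(np.arange(n) >= 200, 1.0, 0.0),
})
deltas, avg_delta = average_of_delta(data)
print(float(avg_delta.abs().max()))   # largest for windows of deltas spanning the bias change
```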

  10. Estimating stand age for Douglas-fir.

    Treesearch

    Floyd A. Johnson

    1954-01-01

    Stand age for Douglas-fir has been defined as the average age of dominant and codominant trees. It is commonly estimated by measuring the age of several dominants and codominants and computing their arithmetic average.

  11. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  12. Estimates of lower-tropospheric divergence and average vertical motion in the Southern Great Plains region

    NASA Astrophysics Data System (ADS)

    Muradyan, P.; Coulter, R.; Kotamarthi, V. R.; Wang, J.; Ghate, V. P.

    2016-12-01

    Large-scale mean vertical motion affects the atmospheric stability and is an important component in cloud formation. Thus, the analysis of temporal variations in the long-term averages of large-scale vertical motion would provide valuable insights into weather and climate patterns. 915-MHz radar wind profilers (RWP) provide virtually unattended and almost uninterrupted long-term wind speed measurements. We use five years of RWP wind data from the Atmospheric Boundary Layer Experiments (ABLE) located within the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site from 1999 to 2004. Wind speed data from a triangular array of SGP A1, A2, and A5 ancillary sites are used to calculate the horizontal divergence field over the profiler network area using the line integral method. The distance between each vertex of this triangle is approximately 60km. Thus, the vertical motion profiles deduced from the divergence/convergence of horizontal winds over these spatial scales are of relevance to mesoscale dynamics. The wind data from RWPs are averaged over 1 hour time slice and divergence is calculated at each range gate from the lowest at 82 m to the highest at 2.3 km. An analysis of temporal variations in the long-term averages of the atmospheric divergence and vertical air motion for the months of August/September indicates an overall vertical velocity of -0.002 m/s with a standard deviation of 0.013 m/s, agreeing well with previous studies. Overall mean of the diurnal variation of vertical velocity for the study period from surface to 500 m height is 0.0018 m/s with a standard error of 0.00095 m/s. Seasonal mean daytime vertical winds suggest generally downward motion in Winter and upward motion in Summer. Validation of the derived divergence and vertical motion against a regional climate model (Weather Forecast and Research, WRF) at a spatial resolution of 12 km, as well as clear-sky vs. cloudy conditions comparisons will also be presented.
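
    A minimal sketch of the line-integral divergence calculation for a three-profiler triangle and the resulting vertical motion from continuity (coordinates and winds below are toy values, not ABLE data):

```python
import numpy as np

def triangle_divergence(xy, u, v):
    """Horizontal divergence from winds at the vertices of a profiler triangle,
    using the line-integral form: div = (1/A) * sum over edges of (v_avg . n) * L.

    xy   : (3, 2) vertex coordinates in metres, listed counter-clockwise
    u, v : wind components (m/s) at the three vertices for one range gate
    """
    xy = np.asarray(xy, dtype=float)
    d1, d2 = xy[1] - xy[0], xy[2] - xy[0]
    area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
    flux = 0.0
    for i in range(3):
        j = (i + 1) % 3
        edge = xy[j] - xy[i]
        length = np.hypot(edge[0], edge[1])
        normal = np.array([edge[1], -edge[0]]) / length     # outward normal for CCW ordering
        v_mid = 0.5 * np.array([u[i] + u[j], v[i] + v[j]])  # edge-average wind
        flux += float(v_mid @ normal) * length
    return flux / area

def vertical_velocity(divergence_profile, dz):
    """Integrate continuity (dw/dz = -div) upward from w = 0 at the surface."""
    return -np.cumsum(np.asarray(divergence_profile, dtype=float) * dz)

# toy example: ~60 km triangle, weakly divergent low-level flow
xy = [[0.0, 0.0], [60e3, 0.0], [30e3, 52e3]]
div = triangle_divergence(xy, u=[5.0, 5.3, 5.1], v=[0.0, 0.1, 0.3])
print(div, vertical_velocity([div] * 10, dz=100.0)[-1])     # divergence (1/s) and w near 1 km
```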

  13. Vitamin D Requirements for the Future-Lessons Learned and Charting a Path Forward.

    PubMed

    Cashman, Kevin D

    2018-04-25

    Estimates of dietary requirements for vitamin D or Dietary Reference Values (DRV) are crucial from a public health perspective in providing a framework for prevention of vitamin D deficiency and optimizing vitamin D status of individuals. While these important public health policy instruments were developed with the evidence-base and data available at the time, there are some issues that need to be clarified or considered in future iterations of DRV for vitamin D. This is important as it will allow for more fine-tuned and truer estimates of the dietary requirements for vitamin D and thus provide for more population protection. The present review will overview some of the confusion that has arisen in relation to the application and/or interpretation of the definitions of the Estimated Average Requirement (EAR) and Recommended Dietary Allowance (RDA). It will also highlight some of the clarifications needed and, in particular, how utilization of a new approach in terms of using individual participant-level data (IPD), over and beyond aggregated data, from randomised controlled trials with vitamin D may have a key role in generating these more fine-tuned and truer estimates, which is of importance as we move towards the next iteration of vitamin D DRVs.

  14. The Effect of Honors Courses on Grade Point Averages

    ERIC Educational Resources Information Center

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  15. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a nearly exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
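
    The initial wavelet/median step can be sketched as follows, using the common MAD rule on the diagonal Haar detail; the subsequent curve-fitting refinement described in the paper is not reproduced here:

```python
import numpy as np

def estimate_gaussian_noise_sigma(img):
    """Initial noise estimate from the diagonal detail of a one-level Haar
    transform: sigma ~= median(|HH|) / 0.6745 (robust MAD rule)."""
    img = np.asarray(img, dtype=float)
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]  # even dimensions
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    hh = (a - b - c + d) / 2.0          # orthonormal Haar diagonal detail coefficients
    return np.median(np.abs(hh)) / 0.6745

# toy usage: known sigma = 12 added to a smooth image
rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
clean = 128 + 60 * np.sin(6 * x) * np.cos(4 * y)
print(estimate_gaussian_noise_sigma(clean + rng.normal(0, 12, clean.shape)))
```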

  16. Average intragranular misorientation trends in polycrystalline materials predicted by a viscoplastic self-consistent approach

    DOE PAGES

    Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...

    2015-12-15

    Here, this work presents estimations of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically-based estimations of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.

  17. Choosing the best index for the average score intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient ICC(2) from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.

  18. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.

  19. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  20. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  1. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')^(-1) = Q^(-1) - Q^(-1)aa'Q^(-1)/(1 + a'Q^(-1)a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [Table-of-contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  2. Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals

    NASA Technical Reports Server (NTRS)

    de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.

    2015-01-01

    Isolated power plants with well characterized emissions serve as an ideal test case of methods to estimate emissions using satellite data. In this study we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.

  3. Treatment Effect Estimation Using Nonlinear Two-Stage Instrumental Variable Estimators: Another Cautionary Note.

    PubMed

    Chapman, Cole G; Brooks, John M

    2016-12-01

    To examine the settings of simulation evidence supporting use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are unique from relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.

  4. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
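
    A small sketch of how such optimal weights can be computed from a sample of bias-corrected errors, using the standard constrained least-squares solution (illustrative, not the paper's code):

```python
import numpy as np

def lagged_ensemble_weights(errors):
    """Weights (summing to one) that minimize the MSE of a lagged-ensemble mean.

    errors : (n_cases, n_lags) bias-corrected forecast errors, one column per
             lead time in the lagged ensemble.  The constrained least-squares
             solution is w = C^-1 1 / (1' C^-1 1), with C the error covariance
             across lead times; weights can come out negative when errors are
             large and strongly correlated across lead time.
    """
    c = np.cov(np.asarray(errors, dtype=float), rowvar=False)
    ones = np.ones(c.shape[0])
    ci1 = np.linalg.solve(c, ones)
    return ci1 / (ones @ ci1)

# toy example: error variance grows with lead time, errors correlated across lags
rng = np.random.default_rng(6)
shared = rng.normal(size=(500, 1))                  # error component common to all lags
scales = np.array([0.5, 0.7, 1.0, 1.4])
errors = shared * scales + rng.normal(size=(500, 4)) * scales
w = lagged_ensemble_weights(errors)
print(w, w.sum())                                   # weights sum to one
```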

  5. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

    Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types, using a direct nadir technique to estimate the hemispherical reflectance, are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those from the direct nadir technique, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  6. MANPOWER REQUIREMENTS AND DEMAND IN AGRICULTURE BY REGIONS AND NATIONALLY, WITH ESTIMATION OF VOCATIONAL TRAINING AND EDUCATIONAL NEEDS AND PRODUCTIVITY.

    ERIC Educational Resources Information Center

    ARCUS, PETER; HEADY, EARL O.

    THE PURPOSE OF THIS STUDY IS TO ESTIMATE THE MANPOWER REQUIREMENTS FOR THE NATION, FOR 144 REGIONS, THE TYPES OF SKILLS AND WORK ABILITIES REQUIRED BY AGRICULTURE IN THE NEXT 15 YEARS, AND THE TYPES AND AMOUNTS OF EDUCATION NEEDED. THE QUANTITATIVE ANALYSIS IS BEING MADE BY METHODS APPROPRIATE TO THE PHASES OF THE STUDY--(1) INTERRELATIONS AMONG…

  7. A spectral measurement method for determining white OLED average junction temperatures

    NASA Astrophysics Data System (ADS)

    Zhu, Yiting; Narendran, Nadarajah

    2016-09-01

    The objective of this study was to investigate an indirect method of measuring the average junction temperature of a white organic light-emitting diode (OLED) based on temperature sensitivity differences in the radiant power emitted by individual emitter materials (i.e., "blue," "green," and "red"). The measured spectral power distributions (SPDs) of the white OLED as a function of temperature showed amplitude decrease as a function of temperature in the different spectral bands, red, green, and blue. Analyzed data showed a good linear correlation between the integrated radiance for each spectral band and the OLED panel temperature, measured at a reference point on the back surface of the panel. The integrated radiance ratio of the spectral band green compared to red, (G/R), correlates linearly with panel temperature. Assuming that the panel reference point temperature is proportional to the average junction temperature of the OLED panel, the G/R ratio can be used for estimating the average junction temperature of an OLED panel.

  8. How many research nurses for how many clinical trials in an oncology setting? Definition of the Nursing Time Required by Clinical Trial-Assessment Tool (NTRCT-AT).

    PubMed

    Milani, Alessandra; Mazzocco, Ketti; Stucchi, Sara; Magon, Giorgio; Pravettoni, Gabriella; Passoni, Claudia; Ciccarelli, Chiara; Tonali, Alessandra; Profeta, Teresa; Saiani, Luisa

    2017-02-01

    Few resources are available to quantify clinical trial-associated workload, needed to guide staffing and budgetary planning. The aim of the study is to describe a tool to measure clinical trials nurses' workload expressed in time spent to complete core activities. Clinical trials nurses drew up a list of nursing core activities, integrating results from literature searches with personal experience. The final 30 core activities were timed for each research nurse by an outside observer during daily practice in May and June 2014. Average times spent by nurses for each activity were calculated. The "Nursing Time Required by Clinical Trial-Assessment Tool" was created as an electronic sheet that combines the average times per specified activities and mathematic functions to return the total estimated time required by a research nurse for each specific trial. The tool was tested retrospectively on 141 clinical trials. The increasing complexity of clinical research requires structured approaches to determine workforce requirements. This study provides a tool to describe the activities of a clinical trials nurse and to estimate the associated time required to deliver individual trials. The application of the proposed tool in clinical research practice could provide a consistent structure for clinical trials nursing workload estimation internationally. © 2016 John Wiley & Sons Australia, Ltd.

  9. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_TRMM-PFM-VIRS_Edition2B)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  10. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  11. Predicting Secchi disk depth from average beam attenuation in a deep, ultra-clear lake

    USGS Publications Warehouse

    Larson, G.L.; Hoffman, R.L.; Hargreaves, B.R.; Collier, R.W.

    2007-01-01

    We addressed potential sources of error in estimating the water clarity of mountain lakes by investigating the use of beam transmissometer measurements to estimate Secchi disk depth. The optical properties Secchi disk depth (SD) and beam transmissometer attenuation (BA) were measured in Crater Lake (Crater Lake National Park, Oregon, USA) at a designated sampling station near the maximum depth of the lake. A standard 20 cm black and white disk was used to measure SD. The transmissometer light source had a nearly monochromatic wavelength of 660 nm and a path length of 25 cm. We created a SD prediction model by regression of the inverse SD of 13 measurements recorded on days when environmental conditions were acceptable for disk deployment with BA averaged over the same depth range as the measured SD. The relationship between inverse SD and averaged BA was significant and the average 95% confidence interval for predicted SD relative to the measured SD was ±1.6 m (range = -4.6 to 5.5 m) or ±5.0%. Eleven additional sample dates tested the accuracy of the predictive model. The average 95% confidence interval for these sample dates was ±0.7 m (range = -3.5 to 3.8 m) or ±2.2%. The 1996-2000 time-series means for measured and predicted SD varied by 0.1 m, and the medians varied by 0.5 m. The time-series mean annual measured and predicted SD's also varied little, with intra-annual differences between measured and predicted mean annual SD ranging from -2.1 to 0.1 m. The results demonstrated that this prediction model reliably estimated Secchi disk depths and can be used to significantly expand optical observations in an environment where the conditions for standardized SD deployments are limited. © 2007 Springer Science+Business Media B.V.
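
    A minimal sketch of the prediction model's form, regressing inverse Secchi depth on averaged beam attenuation (the calibration pairs below are made up for illustration, not Crater Lake data):

```python
import numpy as np

def fit_inverse_sd_model(secchi_depths, avg_beam_attenuation):
    """Fit 1/SD = a + b * BA by ordinary least squares and return a predictor.

    secchi_depths        : measured Secchi disk depths (m)
    avg_beam_attenuation : beam attenuation averaged over the same depth range (1/m)
    """
    ba = np.asarray(avg_beam_attenuation, dtype=float)
    inv_sd = 1.0 / np.asarray(secchi_depths, dtype=float)
    b, a = np.polyfit(ba, inv_sd, 1)                     # slope, intercept
    return lambda ba_new: 1.0 / (a + b * np.asarray(ba_new, dtype=float))

# toy calibration pairs (illustrative values only)
sd = [38.0, 32.5, 30.1, 27.8, 25.0, 22.3]
ba = [0.045, 0.055, 0.060, 0.066, 0.074, 0.083]
predict_sd = fit_inverse_sd_model(sd, ba)
print(predict_sd([0.05, 0.07]))                          # predicted Secchi depths (m)
```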

  12. Atomic structure data based on average-atom model for opacity calculations in astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Trzhaskovskaya, M. B.; Nikulin, V. K.

    2018-03-01

    The influence of the plasma parameters on the electron structure of ions in astrophysical plasmas is studied on the basis of the average-atom model in the local thermodynamic equilibrium approximation. The relativistic Dirac-Slater method is used for the electron density estimation. The emphasis is on the investigation of the impact of the plasma temperature and density on the ionization stages required for calculations of the plasma opacities. The level population distributions and level energy spectra are calculated and analyzed for all ions with 6 ≤ Z ≤ 32 occurring in astrophysical plasmas. The plasma temperature range 2 - 200 eV and the density range 2 - 100 mg/cm3 are considered. The validity of the method used is supported by good agreement between our values of ionization stages for a number of ions, from oxygen up to uranium, and results obtained earlier by various methods, among which are more complicated procedures.

  13. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    PubMed

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.

  14. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (about 5-10% fewer trends detected in comparison with the reference data).
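
    A small sketch of how this bias can be quantified from hourly records (the diurnal cycle below is synthetic and only illustrates the effect):

```python
import numpy as np

def daily_temperature_bias(hourly):
    """Compare (Tmin + Tmax)/2 with the true 24-hour mean for each day.

    hourly : (n_days, 24) array of hourly temperatures.
    Returns the per-day difference; positive values mean the min/max average
    overestimates the daily mean temperature.
    """
    hourly = np.asarray(hourly, dtype=float)
    minmax_avg = 0.5 * (hourly.min(axis=1) + hourly.max(axis=1))
    return minmax_avg - hourly.mean(axis=1)

# toy example: a skewed diurnal cycle (short warm afternoon, long cool night)
rng = np.random.default_rng(7)
h = np.arange(24)
cycle = 20 + 8 * np.exp(-((h - 15) / 4.0) ** 2)      # peaks at 15:00
hourly = cycle + rng.normal(0, 0.5, size=(365, 24))
print(daily_temperature_bias(hourly).mean())          # > 0: min/max average runs warm here
```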

  15. Dynamic time warping-based averaging framework for functional near-infrared spectroscopy brain imaging studies

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Najafizadeh, Laleh

    2017-06-01

    We investigate the problem related to the averaging procedure in functional near-infrared spectroscopy (fNIRS) brain imaging studies. Typically, to reduce noise and to empower the signal strength associated with task-induced activities, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are averaged through a point-by-point conventional averaging technique. However, due to the existence of variable latencies in recorded activities, the use of the conventional averaging technique can lead to inaccuracies and loss of information in the averaged signal, which may result in inaccurate conclusions about the functionality of the brain. To improve the averaging accuracy in the presence of variable latencies, we present an averaging framework that employs dynamic time warping (DTW) to account for the temporal variation in the alignment of fNIRS signals to be averaged. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The framework is extensively tested on experimental data (obtained from both block design and event-related design experiments) as well as on simulated data. In all cases, it is shown that the DTW-based averaging technique outperforms the conventional-based averaging technique in estimating the location of task-induced active regions in the brain, suggesting that such advanced averaging methods should be employed in fNIRS brain imaging studies.

  16. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.

  17. A technique for estimating the absolute gain of a photomultiplier tube

    NASA Astrophysics Data System (ADS)

    Takahashi, M.; Inome, Y.; Yoshii, S.; Bamba, A.; Gunji, S.; Hadasch, D.; Hayashida, M.; Katagiri, H.; Konno, Y.; Kubo, H.; Kushida, J.; Nakajima, D.; Nakamori, T.; Nagayoshi, T.; Nishijima, K.; Nozaki, S.; Mazin, D.; Mashuda, S.; Mirzoyan, R.; Ohoka, H.; Orito, R.; Saito, T.; Sakurai, S.; Takeda, J.; Teshima, M.; Terada, Y.; Tokanai, F.; Yamamoto, T.; Yoshida, T.

    2018-06-01

    Detection of low-intensity light relies on the conversion of photons to photoelectrons, which are then multiplied and detected as an electrical signal. To measure the actual intensity of the light, one must know the factor by which the photoelectrons have been multiplied. To obtain this amplification factor, we have developed a procedure for estimating precisely the signal caused by a single photoelectron. The method utilizes the fact that the photoelectrons conform to a Poisson distribution. The average signal produced by a single photoelectron can then be estimated from the number of noise events, without requiring analysis of the distribution of the signal produced by a single photoelectron. The signal produced by one or more photoelectrons can be estimated experimentally without any assumptions. This technique, and an example of the analysis of a signal from a photomultiplier tube, are described in this study.
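    As a concrete illustration of the Poisson argument above: if photoelectron counts are Poisson distributed, the fraction of flashes that produce no photoelectrons (noise/pedestal-only events) gives the mean occupancy mu directly, and the mean single-photoelectron signal follows from the overall mean signal divided by mu, without fitting the single-photoelectron spectrum. The sketch below uses simulated, pedestal-subtracted charges with invented numbers (true_gain, mu_true); it is not the analysis code of the study:

      import numpy as np

      rng = np.random.default_rng(0)
      n_events, true_gain, mu_true = 100_000, 8.0, 1.5   # hypothetical charge units and occupancy

      n_pe = rng.poisson(mu_true, n_events)                       # photoelectrons per flash
      charge = n_pe * true_gain + rng.normal(0.0, 0.5, n_events)  # measured charge (pedestal already subtracted)

      n_zero = np.sum(n_pe == 0)          # in practice: events consistent with the pedestal peak
      mu_hat = -np.log(n_zero / n_events)          # Poisson: P(0) = exp(-mu)
      gain_hat = charge.mean() / mu_hat            # mean charge = mu * (mean 1-p.e. signal)

      print(f"estimated mu = {mu_hat:.3f}, estimated 1-p.e. signal = {gain_hat:.3f}")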

  18. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, John; Chorin, Alexandre J.; Crutchfield, William

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  19. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.

  20. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with a population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population within a 45 min response time threshold or to minimize the overall average response time to all persons, both in greenfield scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached all of the population within an average response time of 18 min, included the rapid response HEMS. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given minimum level of population coverage.
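    The coverage formulation above can be illustrated with a small greedy heuristic for the maximal covering location problem. The study used exact optimisation on census population data; the sketch below is only an intuition aid with hypothetical block centroids, candidate base sites, and an assumed cruise speed converting the 45 min threshold into a radius:

      import numpy as np

      rng = np.random.default_rng(1)
      blocks = rng.uniform(0, 500, size=(1000, 2))        # block centroids (km), invented
      pop = rng.integers(200, 800, size=1000)             # people per block
      candidates = rng.uniform(0, 500, size=(40, 2))      # candidate base locations
      radius_km = 150.0 * 45.0 / 60.0                     # 45 min at an assumed 150 km/h cruise speed

      dist = np.linalg.norm(blocks[:, None, :] - candidates[None, :, :], axis=2)
      covered_by = dist <= radius_km                      # blocks x candidates boolean matrix

      def greedy_mclp(covered_by, pop, n_bases):
          """Pick bases one at a time, each maximising newly covered population."""
          chosen, covered = [], np.zeros(len(pop), dtype=bool)
          for _ in range(n_bases):
              gains = ((covered_by & ~covered[:, None]) * pop[:, None]).sum(axis=0)
              best = int(np.argmax(gains))
              chosen.append(best)
              covered |= covered_by[:, best]
              print(f"base {best}: cumulative coverage {covered @ pop / pop.sum():.1%}")
          return chosen

      greedy_mclp(covered_by, pop, n_bases=7)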

  1. Stress Drop Estimates from Induced Seismic Events in the Fort Worth Basin, Texas

    NASA Astrophysics Data System (ADS)

    Jeong, S. J.; Stump, B. W.; DeShon, H. R.

    2017-12-01

    Since the beginning of Barnett shale oil and gas production in the Fort Worth Basin, there have been earthquake sequences, including multiple magnitude 3.0+ events near the DFW International Airport, Azle, Irving-Dallas, and throughout Johnson County (Cleburne and Venus). These shallow-depth earthquakes (2 to 8 km) have not exceeded magnitude 4.0 but have been widely felt; the close proximity of these earthquakes to a large population center motivates an assessment of the kinematics of the events in order to provide more accurate ground motion predictions. Previous studies have estimated average stress drops for the DFW airport and Cleburne earthquakes at 10 and 43 bars, respectively. Here, we calculate stress drops for the Azle, Irving-Dallas and Venus earthquakes using seismic data from local (≤25 km) and regional (>25 km) seismic networks. Events with magnitudes above 2.5 are chosen to ensure adequate signal-to-noise ratios. Stress drops are estimated by fitting the Brune earthquake model to the observed source spectrum, with corrections for propagation path effects and a local site effect using a high-frequency decay parameter, κ, estimated from the acceleration spectrum. We find that regional average stress drops are similar to those estimated using local data, supporting the appropriateness of the propagation path and site corrections. The average stress drop estimate is 72 bars, with individual estimates ranging from 7 to 240 bars. The results are consistent with global averages of 10 to 100 bars for intra-plate earthquakes and compatible with the stress drops of the DFW airport and Cleburne earthquakes. The stress drops show a slight breakdown in self-similarity with increasing moment magnitude. The breakdown of similarity for these events requires further study because of the limited magnitude range of the data. These results suggest that strong motions and seismic hazard from an injection-induced earthquake can be expected to be similar to those for tectonic events, taking into account the shallow depths of these events.
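    For reference, the Brune-model relationship used in this kind of analysis converts a fitted corner frequency and seismic moment into a stress drop. The sketch below uses illustrative input values (magnitude, corner frequency, shear-wave speed), not measurements from the study, and omits the spectral fitting and κ-correction steps:

      import numpy as np

      def stress_drop_bars(mw, fc_hz, beta_m_s=3500.0):
          m0 = 10.0 ** (1.5 * mw + 9.1)                 # seismic moment in N*m (Hanks & Kanamori)
          r = 2.34 * beta_m_s / (2.0 * np.pi * fc_hz)   # Brune source radius (m)
          delta_sigma_pa = 7.0 * m0 / (16.0 * r ** 3)   # stress drop (Pa)
          return delta_sigma_pa / 1.0e5                 # 1 bar = 1e5 Pa

      print(f"{stress_drop_bars(mw=3.0, fc_hz=5.0):.1f} bars")   # roughly 10 bars for these inputs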

  2. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  3. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
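    A minimal sketch of the distinction drawn above: given samples of wind speed and direction, the scalar mean speed, the vector mean speed, and the longitudinal and lateral velocity variances can be computed directly from the velocity components rather than from separately measured speed and direction statistics. The synthetic data and the simple angle convention below are illustrative assumptions, not the paper's formulation:

      import numpy as np

      def wind_stats(speed, direction_deg):
          # Mathematical angle convention used for simplicity; the meteorological
          # "direction the wind blows from" would differ by a fixed rotation.
          theta = np.deg2rad(direction_deg)
          u, v = speed * np.cos(theta), speed * np.sin(theta)   # horizontal components
          scalar_mean = speed.mean()
          vector_mean = np.hypot(u.mean(), v.mean())
          # Rotate into the mean-wind coordinate system.
          phi = np.arctan2(v.mean(), u.mean())
          along = u * np.cos(phi) + v * np.sin(phi)      # longitudinal component
          cross = -u * np.sin(phi) + v * np.cos(phi)     # lateral component
          return scalar_mean, vector_mean, along.var(), cross.var()

      rng = np.random.default_rng(2)
      speed = np.abs(rng.normal(1.0, 0.5, 1800))         # weak-wind conditions (m/s)
      direction = rng.normal(180.0, 40.0, 1800)          # meandering direction (deg)
      s_mean, v_mean, var_u, var_v = wind_stats(speed, direction)
      print(f"scalar mean {s_mean:.2f}, vector mean {v_mean:.2f}, "
            f"sigma_u^2 {var_u:.2f}, sigma_v^2 {var_v:.2f}")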

  4. Advanced proximal neoplasia of the colon in average-risk adults.

    PubMed

    Rabeneck, Linda; Paszat, Lawrence F; Hilsden, Robert J; McGregor, S Elizabeth; Hsieh, Eugene; Tinmouth, Jill M; Baxter, Nancy N; Saskin, Refik; Ruco, Arlinda; Stock, David

    2014-10-01

    Estimating risk for advanced proximal neoplasia (APN) based on distal colon findings can help identify asymptomatic persons who should undergo examination of the proximal colon after flexible sigmoidoscopy (FS) screening. We aimed to determine the risk of APN by most advanced distal finding among an average-risk screening population. Prospective, cross-sectional study. Teaching hospital and colorectal cancer screening center. A total of 4651 asymptomatic persons at average risk for colorectal cancer aged 50 to 74 years (54.4% women [n = 2529] with a mean [± standard deviation] age of 58.4 ± 6.2 years). All participants underwent a complete colonoscopy, including endoscopic removal of all polyps. We explored associations between several risk factors and APN. Logistic regression was used to identify independent predictors of APN. A total of 142 persons (3.1%) had APN, of whom 85 (1.8%) had isolated APN (with no distal findings). APN was associated with older age, a BMI >27 kg/m(2), smoking, distal advanced adenoma and/or cancer, and distal non-advanced tubular adenoma. Those with a distal advanced neoplasm were more than twice as likely to have APN compared with those without distal lesions. Distal findings used to estimate risk of APN were derived from colonoscopy rather than FS itself. In persons at average risk for colorectal cancer, the prevalence of isolated APN was low (1.8%). Use of distal findings to predict APN may not be the most effective strategy. However, incorporating factors such as age (>65 years), sex, BMI (>27 kg/m(2)), and smoking status, in addition to distal findings, should be considered for tailoring colonoscopy recommendations. Further evaluation of risk stratification approaches in other asymptomatic screening populations is warranted. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  5. A virtual pebble game to ensemble average graph rigidity.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the exhaustive ensemble-averaged PG calculation and the much simpler Maxwell constraint counting.

  6. The Problem With Estimating Public Health Spending.

    PubMed

    Leider, Jonathon P

    2016-01-01

    Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated that all state governmental agencies, including state health agencies, spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting LHDs spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria--most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates. Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to improve the accuracy and comparability of public health spending estimates.

  7. Occurrence of aflatoxin M1 in human milk samples in Vojvodina, Serbia: Estimation of average daily intake by babies.

    PubMed

    Radonić, Jelena R; Kocić Tanackov, Sunčica D; Mihajlović, Ivana J; Grujić, Zorica S; Vojinović Miloradov, Mirjana B; Škrinjar, Marija M; Turk Sekulić, Maja M

    2017-01-02

    The objectives of the study were to determine the aflatoxin M1 (AFM1) content in human milk samples in Vojvodina, Serbia, and to assess the risk of infants' exposure to aflatoxin contamination of food. The growth of Aspergillus flavus and production of aflatoxin B1 in corn samples resulted in higher concentrations of AFM1 in milk and dairy products in 2013, and correspondingly higher concentrations of AFM1 in human milk samples in 2013 and 2014 in Serbia. A total of 60 samples of human milk (colostrum and breast milk collected 4-8 months after delivery) were analyzed for the presence of AFM1 using the enzyme-linked immunosorbent assay (ELISA) method. The estimated daily intake of AFM1 through breastfeeding was calculated for the colostrum samples using an average intake of 60 mL/kg body weight (b.w.)/day on the third day of lactation. All breast milk samples collected 4-8 months after delivery and 36.4% of colostrum samples were contaminated with AFM1. Most of the contaminated colostrum samples (85%) and all of the breast milk samples collected 4-8 months after delivery had AFM1 concentrations above the maximum allowable concentration according to the Regulation on health safety of dietetic products. The mean daily intake of AFM1 from colostrum was 2.65 ng/kg b.w./day. The results of our study indicate a high risk of exposure for infants, who are at an early stage of development and vulnerable to toxic contaminants.
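    The intake calculation itself is a one-line unit conversion. The sketch below uses a hypothetical colostrum concentration (chosen only so the result lands near the reported mean) together with the stated 60 mL/kg b.w./day intake assumption:

      afm1_ng_per_l = 44.0          # hypothetical colostrum AFM1 concentration (ng/L)
      intake_ml_per_kg_bw = 60.0    # assumed intake on day 3 of lactation (mL/kg b.w./day)

      edi_ng_per_kg_bw = afm1_ng_per_l * intake_ml_per_kg_bw / 1000.0   # mL -> L
      print(f"estimated daily intake: {edi_ng_per_kg_bw:.2f} ng/kg b.w./day")   # 2.64 for these inputs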

  8. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
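    The size of the gain from averaging can be reproduced with a simple latent-variable simulation: each reading is a consistent case signal plus independent reader noise, and averaging readings cancels part of that noise, raising the area under the ROC curve. The simulation below is generic and uses invented variance components, not the fitted values reported above:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(8)
      n_cases, n_readings = 529, 36
      truth = rng.binomial(1, 0.3, n_cases)                      # abnormal (1) vs normal (0) cases
      case_signal = truth * 1.0 + rng.normal(0, 0.6, n_cases)    # systematic case-to-case variation
      ratings = case_signal[:, None] + rng.normal(0, 1.0, (n_cases, n_readings))  # reading noise

      auc_single = roc_auc_score(truth, ratings[:, 0])
      auc_avg = roc_auc_score(truth, ratings.mean(axis=1))
      print(f"single reading AUC {auc_single:.2f}, averaged AUC {auc_avg:.2f}")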

  9. Updated Estimates of the Average Financial Return on Master's Degree Programs in the United States

    ERIC Educational Resources Information Center

    Gándara, Denisa; Toutkoushian, Robert K.

    2017-01-01

    In this study, we provide updated estimates of the private and social financial return on enrolling in a master's degree program in the United States. In addition to returns for all fields of study, we show estimated returns to enrolling in master's degree programs in business and education, specifically. We also conduct a sensitivity analysis to…

  10. Models for estimating daily rainfall erosivity in China

    NASA Astrophysics Data System (ADS)

    Xie, Yun; Yin, Shui-qing; Liu, Bao-yuan; Nearing, Mark A.; Zhao, Ying

    2016-04-01

    The rainfall erosivity factor (R) represents the product of rainfall energy and maximum 30 min intensity for each event (EI30), accumulated by year. This rainfall erosivity index is widely used for empirical soil loss prediction. Its calculation, however, requires high temporal resolution rainfall data that are not readily available in many parts of the world. The purpose of this study was to parameterize models suitable for estimating erosivity from daily rainfall data, which are more widely available. One-minute resolution rainfall data recorded at sixteen stations over the eastern water erosion impacted regions of China were analyzed. The R-factor ranged from 781.9 to 8258.5 MJ mm ha-1 h-1 y-1. A total of 5942 erosive events from the one-minute resolution rainfall data of ten stations were used to parameterize three models, and 4949 erosive events from the other six stations were used for validation. A threshold of daily rainfall between days classified as erosive and non-erosive was suggested to be 9.7 mm based on these data. Two of the models (I and II) used power law functions that required only daily rainfall totals. Model I used different model coefficients in the cool season (Oct.-Apr.) and warm season (May-Sept.), and Model II was fitted with a sinusoidal curve of seasonal variation. Both Model I and Model II estimated the erosivity index for average annual, yearly, and half-month temporal scales reasonably well, with the symmetric mean absolute percentage error MAPEsym ranging from 10.8% to 32.1%. Model II predicted slightly better than Model I. However, the prediction efficiency for the daily erosivity index was limited, with the symmetric mean absolute percentage error being 68.0% (Model I) and 65.7% (Model II) and the Nash-Sutcliffe model efficiency being 0.55 (Model I) and 0.57 (Model II). Model III, which used the combination of daily rainfall amount and daily maximum 60-min rainfall, improved predictions significantly and produced a higher Nash-Sutcliffe model efficiency than Models I and II.
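    Models I and II above are seasonal power-law relations between daily rainfall and daily erosivity. The sketch below shows the general form only, with invented coefficients (alpha, beta) and the 9.7 mm erosive-day threshold from the abstract; the fitted parameters of the study are not reproduced here:

      # Hypothetical coefficients for a Model-I-style seasonal power law: EI_day = alpha * P**beta
      # for daily rainfall P above the erosive-day threshold.
      THRESHOLD_MM = 9.7
      COEFS = {"warm": (0.30, 1.70), "cool": (0.15, 1.60)}   # alpha, beta (invented values)

      def daily_erosivity(p_mm, month):
          if p_mm < THRESHOLD_MM:
              return 0.0
          season = "warm" if 5 <= month <= 9 else "cool"
          alpha, beta = COEFS[season]
          return alpha * p_mm ** beta

      annual_r = sum(daily_erosivity(p, m) for p, m in [(25.0, 7), (60.0, 8), (12.0, 1), (5.0, 3)])
      print(f"annual erosivity from the example days: {annual_r:.1f} MJ mm ha-1 h-1")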

  11. Application of a weighted-averaging method for determining paleosalinity: a tool for restoration of south Florida's estuaries

    USGS Publications Warehouse

    Wingard, G.L.; Hudley, J.W.

    2012-01-01

    A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing restoration targets based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations, averaged over a 3-year period, indicates that the SW-CWP estimates come, on average, within two salinity units of the observed salinity. The SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
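    The underlying computation is a weighted average of species salinity preferences, weighted by abundance and by the calibration weighting factors. The sketch below is generic and uses invented taxa, optima, weights, and counts rather than the calibrated values of the study:

      import numpy as np

      salinity_optimum = np.array([12.0, 25.0, 32.0, 18.0])  # preferred salinity per mollusc taxon (invented)
      species_weight = np.array([1.0, 2.0, 1.5, 1.0])        # calibration weighting factors (invented)
      counts = np.array([40, 10, 5, 25])                     # specimens of each taxon in a sample

      w = counts * species_weight
      estimate = np.sum(w * salinity_optimum) / np.sum(w)
      print(f"weighted-average paleosalinity estimate: {estimate:.1f}")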

  12. Estimation of the proteomic cancer co-expression sub networks by using association estimators.

    PubMed

    Erdoğan, Cihat; Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, association estimators, which strongly influence gene network inference methods and are used to determine molecular interactions, were examined within the co-expression network inference framework. Using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub-networks were identified. Proteomic data from the various cancer types were collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard for measuring the association estimators' performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the inferred co-expression networks with the disease-associated pathways. The MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used for estimating biological networks in the weighted correlation network analysis (WGCNA) package. Among the correlation-based methods, the best average success rate over the five cancer types was 60%, whereas the MI-based methods reached an average success rate of 71% with the James-Stein shrinkage (Shrink) estimator and 64% with the Schurmann-Grassberger (SG) estimator. Moreover, the hub genes and the inferred sub-networks are presented for the consideration of researchers and experimentalists.
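    The practical difference between the estimator families can be seen on a single gene/protein pair: correlation-based measures miss a purely nonlinear dependence that an MI estimator picks up. The sketch below uses synthetic data and a generic k-nearest-neighbour MI estimator from scikit-learn rather than the Shrink or SG estimators evaluated in the study:

      import numpy as np
      from scipy.stats import pearsonr, spearmanr
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(3)
      x = rng.normal(size=500)
      y = x ** 2 + 0.3 * rng.normal(size=500)   # nonlinear, essentially uncorrelated dependence

      r_p, _ = pearsonr(x, y)
      r_s, _ = spearmanr(x, y)
      mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]
      print(f"Pearson {r_p:.2f}, Spearman {r_s:.2f}, MI {mi:.2f} nats")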

  13. A data-driven model for estimating industry average numbers of hospital security staff.

    PubMed

    Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M

    2015-01-01

    In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.

  14. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevance, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
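    The failure mode described above is easy to reproduce: averaging Euler angles component-wise breaks down near the ±180 degree wrap, whereas averaging the orientations themselves does not. The sketch below uses scipy's Rotation.mean (a quaternion/matrix-based mean) as a stand-in for the Euclidean/Riemannian averaging methods discussed in the paper; the angles are invented:

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      euler_deg = np.array([[170.0, 20.0, 5.0],
                            [-170.0, 25.0, -5.0],   # same physical neighbourhood, wrapped sign
                            [175.0, 15.0, 0.0]])

      naive = euler_deg.mean(axis=0)                       # arithmetic average of the parameters
      proper = R.from_euler("xyz", euler_deg, degrees=True).mean().as_euler("xyz", degrees=True)
      print("naive:", naive)     # first angle ~58 degrees: badly wrong
      print("proper:", proper)   # first angle close to 178 degrees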

  15. Hyperspectral remote sensing of plant biochemistry using Bayesian model averaging with variable and band selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Kaiguang; Valle, Denis; Popescu, Sorin

    2013-05-15

    Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral-chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
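    One common way to operationalise model averaging over candidate band subsets is to weight each fitted model by exp(-BIC/2), which approximates its posterior probability. The sketch below applies that idea to synthetic "band" data with one- and two-band linear models; it illustrates the averaging principle only, not the authors' BMA implementation:

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(4)
      n, n_bands = 120, 6
      X = rng.normal(size=(n, n_bands))                            # synthetic reflectance "bands"
      y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 0.5, n)    # synthetic biochemical trait

      def fit_bic(cols):
          A = np.column_stack([np.ones(n), X[:, cols]])
          beta, *_ = np.linalg.lstsq(A, y, rcond=None)
          resid = y - A @ beta
          k = A.shape[1]
          bic = n * np.log(resid @ resid / n) + k * np.log(n)      # Gaussian-regression BIC
          return beta, bic

      models = [c for r in (1, 2) for c in combinations(range(n_bands), r)]
      fits = [fit_bic(list(c)) for c in models]
      bics = np.array([b for _, b in fits])
      w = np.exp(-0.5 * (bics - bics.min()))
      w /= w.sum()                                                 # approximate posterior model weights

      y0_hat = sum(wk * (np.concatenate(([1.0], X[0, list(c)])) @ beta)
                   for wk, c, (beta, _) in zip(w, models, fits))
      print("top band subset:", models[int(np.argmax(w))], "weight:", round(float(w.max()), 2))
      print("model-averaged prediction for sample 0:", round(float(y0_hat), 2), "vs actual", round(float(y[0]), 2))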

  16. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data can be used as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
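    The initial MSM-construction step referred to above amounts to counting transitions between discrete conformational states at a chosen lag time, normalising the counts into a transition matrix, and reading the equilibrium populations off its stationary distribution. The sketch below shows only that step, with a synthetic state trajectory; the experimental-data refinement that is the subject of the paper is not shown:

      import numpy as np

      def build_msm(state_traj, n_states, lag=1):
          """Row-stochastic transition matrix estimated from a discrete state trajectory."""
          counts = np.zeros((n_states, n_states))
          for a, b in zip(state_traj[:-lag], state_traj[lag:]):
              counts[a, b] += 1.0
          T = counts / counts.sum(axis=1, keepdims=True)
          # Equilibrium populations: left eigenvector of T with eigenvalue 1.
          evals, evecs = np.linalg.eig(T.T)
          pi = np.real(evecs[:, np.argmax(np.real(evals))])
          return T, pi / pi.sum()

      traj = np.random.default_rng(5).choice(3, size=5000, p=[0.5, 0.3, 0.2])
      T, pi = build_msm(traj, n_states=3, lag=1)
      print(np.round(T, 2), np.round(pi, 2))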

  17. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM2-MODIS_Edition2C)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  18. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM1-MODIS_Edition2C)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  19. CERES Monthly TOA and SRB Averages (SRBAVG) data in HDF-EOS Grid (CER_SRBAVG_Terra-FM1-MODIS_Edition2D)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2004-05-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  20. Soil Moisture Content Estimation using GPR Reflection Travel Time

    NASA Astrophysics Data System (ADS)

    Lunt, I. A.; Hubbard, S. S.; Rubin, Y.

    2003-12-01

    Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during four data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennae. GPR reflections were associated with a thin, low permeability clay layer located between 0.8 to 1.3 m below the ground surface that was calibrated with borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow, and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at borehole locations were used to calculate an average dielectric constant for soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 2 percent. We also investigated the estimation of VWC using reflections associated with an advancing water front, and found that estimates of average VWC to the water front could be obtained with similar accuracy. These results suggested that the two-way travel time to a GPR reflection associated with a geological surface or wetting front can be used under natural conditions to obtain estimates of average water content when borehole control is available. The GPR reflection method therefore has potential for monitoring soil water content over large areas and under variable hydrological conditions.
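    The travel-time calculation described above is short: the reflector depth and the two-way travel time give an average velocity, the velocity gives a bulk dielectric constant, and a petrophysical relation converts that to volumetric water content. The sketch below substitutes the widely used Topp et al. (1980) relation for the site-specific calibration of the study, and the input values are illustrative:

      C = 0.3   # speed of light in m/ns

      def vwc_from_twt(two_way_travel_time_ns, reflector_depth_m):
          velocity = 2.0 * reflector_depth_m / two_way_travel_time_ns   # average velocity (m/ns)
          kappa = (C / velocity) ** 2                                    # bulk dielectric constant
          # Topp et al. (1980) empirical relation, used here in place of the site-specific calibration.
          vwc = -5.3e-2 + 2.92e-2 * kappa - 5.5e-4 * kappa**2 + 4.3e-6 * kappa**3
          return kappa, vwc

      print(vwc_from_twt(two_way_travel_time_ns=30.0, reflector_depth_m=1.0))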

  1. Estimation of Radionuclide Concentrations and Average Annual Committed Effective Dose due to Ingestion for the Population in the Red River Delta, Vietnam.

    PubMed

    Van, Tran Thi; Bat, Luu Tam; Nhan, Dang Duc; Quang, Nguyen Hao; Cam, Bui Duy; Hung, Luu Viet

    2018-02-16

    Radioactivity concentrations of nuclides of the 232Th and 238U radioactive chains and of 40K, 90Sr, 137Cs, and 239+240Pu were surveyed for raw and cooked food of the population in the Red River delta region, Vietnam, using α- and γ-spectrometry and liquid scintillation counting techniques. The concentration of 40K in the cooked food was the highest of all the radionuclides, ranging from (23 ± 5) Bq kg-1 dw (rice) to (347 ± 50) Bq kg-1 dw (tofu). The 210Po concentration in the cooked food ranged from its limit of detection (LOD) of 5 mBq kg-1 dw (rice) to (4.0 ± 1.6) Bq kg-1 dw (marine bivalves). The concentrations of other nuclides of the 232Th and 238U chains in the food were low, ranging from the LOD of 0.02 Bq kg-1 dw to (1.1 ± 0.3) Bq kg-1 dw. The activity concentrations of 90Sr, 137Cs, and 239+240Pu in the food were minor compared to those of the natural radionuclides. The average annual committed effective dose to adults in the study region was estimated to range from 0.24 to 0.42 mSv a-1, with an average of 0.32 mSv a-1, to which rice, leafy vegetables, and tofu contributed up to 16.2%, 24.4%, and 21.3%, respectively. The committed effective doses to adults due to ingestion of a regular diet in the Red River delta region, Vietnam are within the range determined in other countries worldwide. This finding suggests that Vietnamese food is safe for human consumption with respect to radiation exposure.
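    The dose estimate is a sum over foods of activity concentration x annual consumption x ingestion dose coefficient. The sketch below reuses the 40K and 210Po concentrations quoted above but pairs them with assumed consumption rates and nominal ICRP-72-style adult dose coefficients, so the total is illustrative only (a partial food basket, not the study's full diet):

      foods = {
          # food:              (Bq/kg dw, kg dw consumed per year, Sv/Bq dose coefficient - assumed)
          "rice (40K)":        (23.0,  100.0, 6.2e-9),
          "tofu (40K)":        (347.0,  10.0, 6.2e-9),
          "bivalves (210Po)":  (4.0,     2.0, 1.2e-6),
      }

      dose_sv = sum(conc * cons * coeff for conc, cons, coeff in foods.values())
      print(f"annual committed effective dose from this partial basket: {dose_sv * 1000:.3f} mSv")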

  2. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  3. Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.

    PubMed

    Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required.

  4. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    PubMed Central

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required. PMID:24808945

  5. Estimating the residency expansion required to avoid projected primary care physician shortages by 2035.

    PubMed

    Petterson, Stephen M; Liaw, Winston R; Tran, Carol; Bazemore, Andrew W

    2015-03-01

    The purpose of this study was to calculate the projected primary care physician shortage, determine the amount and composition of residency growth needed, and estimate the impact of retirement age and panel size changes. We used the 2010 National Ambulatory Medical Care Survey to calculate utilization of ambulatory primary care services and the US Census Bureau to project demographic changes. To determine the baseline number of primary care physicians and the number retiring at 66 years, we used the 2014 American Medical Association Masterfile. Using specialty board and American Osteopathic Association figures, we estimated the annual production of primary care residents. To calculate shortages, we subtracted the accumulated primary care physician production from the accumulated number of primary care physicians needed for each year from 2015 to 2035. More than 44,000 primary care physicians will be needed by 2035. Current primary care production rates will be unable to meet demand, resulting in a shortage in excess of 33,000 primary care physicians. Given current production, an additional 1,700 primary care residency slots will be necessary by 2035. A 10% reduction in the ratio of population per primary care physician would require more than 3,000 additional slots by 2035, whereas changing the expected retirement age from 66 years to 64 years would require more than 2,400 additional slots. To eliminate projected shortages in 2035, primary care residency production must increase by 21% compared with current production. Delivery models that shift toward smaller ratios of population to primary care physicians may substantially increase the shortage. © 2015 Annals of Family Medicine, Inc.

  6. Estimating current and future global urban domestic material consumption

    NASA Astrophysics Data System (ADS)

    Baynes, Timothy Malcolm; Kaviti Musango, Josephine

    2018-06-01

    Urban material resource requirements are significant at the global level and these are expected to expand with future urban population growth. However, there are no global scale studies on the future material consumption of urban areas. This paper provides estimates of global urban domestic material consumption (DMC) in 2050 using three approaches based on: current gross statistics; a regression model; and a transition theoretic logistic model. All methods use UN urban population projections and assume a simple ‘business-as-usual’ scenario wherein historical aggregate trends in income and material flow continue into the future. A collation of data for 152 cities provided a year 2000 world average DMC/capita estimate, 12 tons/person/year (±22%), which we combined with UN population projections to produce a first-order estimation of urban DMC at 2050 of ~73 billion tons/year (±22%). Urban DMC/capita was found to be significantly correlated (R2 > 0.9) to urban GDP/capita and area per person through a power law relation used to obtain a second estimate of 106 billion tons (±33%) in 2050. The inelastic exponent of the power law indicates a global tendency for relative decoupling of direct urban material consumption with increasing income. These estimates are global and influenced by the current proportion of developed-world cities in the global population of cities (and in our sample data). A third method employed a logistic model of transitions in urban DMC/capita with regional resolution. This method estimated global urban DMC to rise from approximately 40 billion tons/year in 2010 to ~90 billion tons/year in 2050 (modelled range: 66–111 billion tons/year). DMC/capita across different regions was estimated to converge from a range of 5–27 tons/person/year in the year 2000 to around 8–17 tons/person/year in 2050. The urban population does not increase proportionally during this period and thus the global average DMC/capita increases from ~12 to ~14 tons/person/year.
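    The first-order estimate above is a simple multiplication, which also shows the urban population that the figure implies. The sketch below uses only the numbers quoted in the abstract (12 tons/person/year and ~73 billion tons/year):

      dmc_per_capita_t = 12.0            # year-2000 world average urban DMC (tons/person/year)
      urban_dmc_2050_t = 73e9            # first-order 2050 estimate quoted above (tons/year)

      implied_urban_population = urban_dmc_2050_t / dmc_per_capita_t
      print(f"implied 2050 urban population: {implied_urban_population / 1e9:.1f} billion")   # ~6.1 billion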

  7. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.

  8. Lead Coolant Test Facility Systems Design, Thermal Hydraulic Analysis and Cost Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soli Khericha; Edwin Harvego; John Svoboda

    2012-01-01

    The Idaho National Laboratory prepared preliminary technical and functional requirements (T&FR), a thermal hydraulic design, and a cost estimate for a lead coolant test facility. The purpose of this small scale facility is to simulate lead coolant fast reactor (LFR) coolant flow in an open lattice geometry core using seven electrical rods and liquid lead or lead-bismuth eutectic coolant. Based on a review of current world lead or lead-bismuth test facilities and the research needs listed in the Generation IV Roadmap, five broad areas of requirements were identified: (1) Develop and Demonstrate Feasibility of Submerged Heat Exchanger; (2) Develop and Demonstrate Open-lattice Flow in Electrically Heated Core; (3) Develop and Demonstrate Chemistry Control; (4) Demonstrate Safe Operation; and (5) Provision for Future Testing. This paper discusses the preliminary design of systems, thermal hydraulic analysis, and simplified cost estimate. The facility thermal hydraulic design is based on a maximum simulated core power of 420 kW using seven electrical heater rods, with an average linear heat generation rate of 300 W/cm. The core inlet temperature for liquid lead or Pb/Bi eutectic is 420 °C. The design includes approximately seventy-five data measurements such as pressure, temperature, and flow rates. The preliminary estimated cost of construction of the facility is $3.7M (in 2006 $). It is also estimated that the facility will require two years to be constructed and ready for operation.
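    A quick arithmetic check of the design numbers quoted above (total simulated core power, number of heater rods, and average linear heat generation rate) gives the implied heated length per rod:

      total_power_w = 420_000
      n_rods = 7
      linear_rate_w_per_cm = 300

      heated_length_cm = total_power_w / n_rods / linear_rate_w_per_cm
      print(f"implied heated length per rod: {heated_length_cm:.0f} cm")   # 200 cm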

  9. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
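    For intuition only: the simplest estimate of heritability from twin data is Falconer's formula, which contrasts identical (MZ) and fraternal (DZ) twin correlations. The study itself used full biometrical modelling, and the correlations below are invented:

      r_mz, r_dz = 0.55, 0.30          # hypothetical twin correlations for facial averageness
      h2 = 2.0 * (r_mz - r_dz)         # additive genetic variance share (Falconer's formula)
      c2 = r_mz - h2                   # shared-environment share
      print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {1 - r_mz:.2f}")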

  10. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  11. Tisseel does not reduce postoperative drainage, length of stay, and transfusion requirements for lumbar laminectomy with noninstrumented fusion versus laminectomy alone.

    PubMed

    Epstein, Nancy E

    2015-01-01

    Typically, fibrin sealants (FSs) and fibrin glues (FGs) are used to strengthen dural repairs during spinal surgery. In 2014, Epstein demonstrated that one FS/FG, Tisseel (Baxter International Inc., Westlake Village, CA, USA) equalized the average times to drain removal and length of stay (LOS) for patients with versus without excess bleeding (e.g. who did not receive Tisseel) undergoing multilevel laminectomies with 1-2 level noninstrumented fusions (LamF).[6]. Here Tisseel was utilized to promote hemostasis for two populations; 39 patients undergoing average 4.4 level lumbar laminectomies with average 1.3 level noninstrumented fusions (LamF), and 48 patients undergoing average 4.0 level laminectomies alone (Lam). We compared the average operative time, estimated blood loss (EBL), postoperative drainage, LOS, and transfusion requirements for the LamF versus Lam groups. The average operative times, EBL, postoperative drainage, LOS, and transfusion requirements were all greater for LamF versus Lam patients; operative times (4.1 vs. 3.0 h), average EBL (192.3 vs. 147.9 cc), drainage (e.g. day 1; 199.6 vs. 167.4 cc; day 2; 172.9 vs. 63.9 cc), average LOS (4.6 vs. 2.5 days), and transfusion requirements (11 LamF patients; 18 Units [U] RBC versus 2 Lam patients; 3 U RBC). Utilizing Tisseel to facilitate hemostasis in LamF versus Lam still resulted in greater operative times, EBL, postoperative average drainage, LOS, and transfusion requirements for patients undergoing the noninstrumented fusions. Although Tisseel decreases back bleeding within the spinal canal, it does not reduce blood loss from LamF decorticated transverse processes.

  12. ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?

    PubMed Central

    Polsky, Daniel; Manning, Willard G.

    2011-01-01

    Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives for overcoming these challenges. Using simulations, we find that in moderate-size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even if a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a ‘proof by contradiction’ approach, proves that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no ‘magic bullets’ when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment. PMID:22199462
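
    For illustration, the minimal Python sketch below computes an inverse-propensity-weighted (IPW) estimate of the average treatment effect for a cost outcome, one of the estimators discussed in this record. It is not the authors' simulation code; the function name and the logistic propensity model are illustrative assumptions.

      # Minimal IPW sketch (illustrative; not the paper's implementation).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ipw_ate(X, treat, cost):
          # X: covariate matrix; treat: 0/1 treatment indicator; cost: outcome (e.g. health care cost).
          treat = np.asarray(treat, dtype=float)
          cost = np.asarray(cost, dtype=float)
          ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
          w_t, w_c = treat / ps, (1.0 - treat) / (1.0 - ps)      # inverse-propensity weights
          return np.sum(w_t * cost) / np.sum(w_t) - np.sum(w_c * cost) / np.sum(w_c)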

  13. Is cepstrum averaging applicable to circularly polarized electric-field data?

    NASA Astrophysics Data System (ADS)

    Tunnell, T.

    1990-04-01

    In FY 1988 a cepstrum averaging technique was developed to eliminate the ground reflections from charged particle beam (CPB) electromagnetic pulse (EMP) data. The work was done for the Los Alamos National Laboratory Project DEWPOINT at SST-7. The technique averages the cepstra of horizontally and vertically polarized electric field data (i.e., linearly polarized electric field data). This cepstrum averaging technique was programmed into the FORTRAN codes CEP and CEPSIM. Steve Knox, the principal investigator for Project DEWPOINT, asked the authors to determine if the cepstrum averaging technique could be applied to circularly polarized electric field data. The answer is, Yes, but some modifications may be necessary. There are two aspects to this answer that we need to address, namely, the Yes and the modifications. First, regarding the Yes, the technique is applicable to elliptically polarized electric field data in general: circular polarization is a special case of elliptical polarization. Secondly, regarding the modifications, greater care may be required in computing the phase in the calculation of the complex logarithm. The calculation of the complex logarithm is the most critical step in cepstrum-based analysis. This memorandum documents these findings.
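
    To make the averaging step concrete, the sketch below computes a complex cepstrum for each polarization component, with explicit phase unwrapping (the complex-logarithm step the memorandum flags as critical), and then averages the two cepstra. This is an illustrative reconstruction, not the CEP/CEPSIM FORTRAN code; the small offset added before the logarithm is an assumption to avoid log(0).

      import numpy as np

      def complex_cepstrum(x):
          X = np.fft.fft(x)
          log_mag = np.log(np.abs(X) + 1e-30)       # magnitude part of the complex logarithm
          phase = np.unwrap(np.angle(X))            # careful phase computation (the critical step)
          return np.real(np.fft.ifft(log_mag + 1j * phase))

      def averaged_cepstrum(e_pol1, e_pol2):
          # Average the cepstra of two polarization components of the measured field.
          return 0.5 * (complex_cepstrum(e_pol1) + complex_cepstrum(e_pol2))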

  14. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.

  15. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel; Fita, Ignacio

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  16. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    NASA Astrophysics Data System (ADS)

    Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel

    2016-03-01

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
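
    The thermal smoothing effect can be demonstrated numerically: the sketch below compares the time-average of a Lennard-Jones pair energy over fluctuating distances with the potential evaluated at the average distance. The well depth, atomic diameter, average distance, and Gaussian spread are assumed illustrative values, not parameters from the paper.

      import numpy as np

      def lj(r, eps=0.1, sigma=3.5):
          # Lennard-Jones pair potential; eps (kcal/mol) and sigma (Angstrom) are assumed values.
          return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

      rng = np.random.default_rng(0)
      r_avg, r_spread = 4.0, 0.3                    # assumed average distance and thermal spread
      r = rng.normal(r_avg, r_spread, 100_000)      # fluctuating inter-atomic distances
      r = r[r > 3.0]                                # discard unphysically short samples
      print("time-averaged energy:      ", lj(r).mean())
      print("energy at average distance:", lj(r_avg))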

  17. Averaging cross section data so we can fit it

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
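
    A minimal sketch of this kind of smoothing is shown below: the fluctuating cross section is replaced at each energy by a Lorentzian-weighted local average. The width gamma is an assumed illustrative value, and the function is not part of EMPIRE.

      import numpy as np

      def lorentzian_smooth(energy, xs, gamma=50.0):
          # energy: grid (keV); xs: pointwise cross section; gamma: assumed smoothing width (keV).
          energy = np.asarray(energy, dtype=float)
          xs = np.asarray(xs, dtype=float)
          smoothed = np.empty_like(xs)
          for i, e0 in enumerate(energy):
              w = (gamma / 2.0) / ((energy - e0) ** 2 + (gamma / 2.0) ** 2)   # Lorentzian weights
              smoothed[i] = np.sum(w * xs) / np.sum(w)                         # weighted local average
          return smoothed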

  18. Accuracy of Visual Estimation of LASIK Flap Thickness.

    PubMed

    Brenner, Jason E; Fadlallah, Ali; Hatch, Kathryn M; Choi, Catherine; Sayegh, Rony R; Kouyoumjian, Paul; Wu, Simon; Frangieh, George T; Melki, Samir A

    2017-11-01

    To assess the accuracy of surgeons' visual estimation of LASIK flap thickness when created by a femtosecond laser by comparing it to ultrasound measurements. Surgeons were asked to visually estimate the thickness of a femtosecond flap during the procedure. Total corneal thickness was measured by ultrasound pachymetry prior to the procedure and the stromal bed was similarly measured after flap lifting. The estimates from three experienced surgeons (cornea fellowship trained and more than 5 years in practice) were compared to those of three cornea fellows, with each surgeon evaluating 20 eyes (120 total). Surgeons were not told the thickness of the flaps unless required for safety reasons. The average difference between visual and ultrasonic estimation of LASIK flap thickness was 15.20 μm. The flap was 10 μm thicker than estimated in 37% of eyes, 20 μm thicker in 17% of eyes, and 30 μm thicker in 10% of eyes. The largest deviation was 53 μm. There was no statistically significant difference between the accuracy of experienced surgeons and fellows (P = .51). There are significant differences between surgeons' visual estimates and ultrasonic measurements of LASIK flap thickness. Relying on these visual estimates may lead to deeper excimer laser ablation than intended. This could lead to thinner residual stromal beds and higher percent tissue altered than planned. The authors recommend that surgeons measure flaps intraoperatively to maximize accuracy and safety. [J Refract Surg. 2017;33(11):765-767.]. Copyright 2017, SLACK Incorporated.

  19. Estimation of Critical Population Support Requirements.

    DTIC Science & Technology

    1984-05-30

    Report prepared for the Federal Emergency Management Agency, May 30, 1984. The study concerns ensuring the availability of industrial production required to support the population, maintain national defense capabilities, and perform command and control activities during a national emergency.

  20. Estimating rupture distances without a rupture

    USGS Publications Warehouse

    Thompson, Eric M.; Worden, Charles

    2017-01-01

    Most ground motion prediction equations (GMPEs) require distances that are defined relative to a rupture model, such as the distance to the surface projection of the rupture (RJB) or the closest distance to the rupture plane (RRUP). There are a number of situations in which GMPEs are used where it is either necessary or advantageous to derive rupture distances from point-source distance metrics, such as hypocentral (RHYP) or epicentral (REPI) distance. For ShakeMap, it is necessary to provide an estimate of the shaking levels for events without rupture models, and before rupture models are available for events that eventually do have rupture models. In probabilistic seismic hazard analysis, it is often convenient to use point-source distances for gridded seismicity sources, particularly if a preferred orientation is unknown. This avoids the computationally cumbersome task of computing rupture-based distances for virtual rupture planes across all strikes and dips for each source. We derive average rupture distances conditioned on REPI, magnitude, and (optionally) back azimuth, for a variety of assumed seismological constraints. Additionally, we derive adjustment factors for GMPE standard deviations that reflect the added uncertainty in the ground motion estimation when point-source distances are used to estimate rupture distances.

  1. Time Averaged Transmitter Power and Exposure to Electromagnetic Fields from Mobile Phone Base Stations

    PubMed Central

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-01-01

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
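
    The duty factor itself is a simple ratio of the time-averaged transmitted power to the maximum output power. The sketch below shows the calculation on a hypothetical 24 h record of per-minute power; the numbers are illustrative, not the Swisscom measurements.

      import numpy as np

      p_max = 20.0                                                 # W, maximum output power for the setting
      p_t = np.random.default_rng(1).uniform(2.0, 11.0, 24 * 60)   # simulated per-minute average power over 24 h
      duty_factor = p_t.mean() / p_max                             # time-averaged power / maximum power
      print(f"duty factor F = {duty_factor:.2f}")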

  2. EnviroAtlas - Average Annual Precipitation 1981-2010 by HUC12 for the Conterminous United States

    EPA Pesticide Factsheets

    This EnviroAtlas dataset provides the average annual precipitation by 12-digit Hydrologic Unit (HUC). The values were estimated from maps produced by the PRISM Climate Group, Oregon State University. The original data was at the scale of 800 m grid cells representing average precipitation from 1981-2010 in mm. The data was converted to inches of precipitation and then zonal statistics were estimated for a final value of average annual precipitation for each 12 digit HUC. For more information about the original dataset please refer to the PRISM website at http://www.prism.oregonstate.edu/. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  3. Estimating the Cost to do a Cost Estimate

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Buchanan, H. R.

    1998-01-01

    This article provides a model for estimating the cost required to do a cost estimate. Overruns may lead to cancellation of a project. In 1991, we completed a study on the cost of doing cost estimates for the class of projects normally encountered in the development and implementation of equipment at the network of tracking stations operated by the Jet Propulsion Laboratory (JPL) for NASA.

  4. Strengthening the public health workforce: An estimation of the long-term requirements for public health specialists in Serbia.

    PubMed

    Santric Milicevic, Milena; Vasic, Milena; Edwards, Matt; Sanchez, Cristina; Fellows, John

    2018-06-01

    At the beginning of the 21st century, planning public health workforce requirements came into the focus of policy makers. The need for improved provision of essential public health services, driven by a challenging burden of non-communicable diseases and causes of death and disability within Serbia, calls for a much-needed estimation of the requirements for public health professionals. Mid- and long-term public health specialists' supply and demand estimations out to 2025 were developed based on national staffing standards and regional distribution of the workforce in public health institutes of Serbia. By 2025, the supply of specialists, taking into account an attrition rate of -1%, reaches the staffing standard. However, a slight increase in attrition rates has the impact of revealing supply shortage risks. Demand side projections show that public health institutes require an annual input of 10 specialists, or a 2.1% annual growth rate, in order for the four public health fields to achieve a headcount of 487 by 2025 as well as counteract workforce attrition rates. Shortage and poor distribution of public health specialists underline the urgent need for workforce recruitment and retention in public health institutes in order to ensure the coordination, management, surveillance and provision of essential public health services over the next decade. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Estimation of the proteomic cancer co-expression sub networks by using association estimators

    PubMed Central

    Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, the association estimators, which have significant influence on gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. By using the proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub networks were identified. Proteomic data from various cancer types was collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators' performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used in the estimation of biological networks in the weighted correlation network analysis (WGCNA) package. In correlation-based methods, the best average success rate for five cancer types was 60%, while in MI-based methods the average success ratio was 71% for the James-Stein shrinkage (Shrink) estimator and 64% for the Schurmann-Grassberger (SG) estimator. Moreover, the hub genes and the inferred sub networks are presented for the consideration of researchers and experimentalists. PMID:29145449
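
    For orientation, the sketch below shows a basic histogram (plug-in) mutual information estimate between two expression profiles. It is a generic estimator, not the shrinkage (Shrink) or Schurmann-Grassberger (SG) estimators evaluated in the paper, and the bin count is an arbitrary choice.

      import numpy as np

      def mutual_information(x, y, bins=10):
          # Plug-in (histogram) estimate of mutual information, in nats.
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))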

  6. Development of the Average Likelihood Function for Code Division Multiple Access (CDMA) Using BPSK and QPSK Symbols

    DTIC Science & Technology

    2015-01-01

    The purpose of this research is to establish a foundation for new classification and estimation of CDMA signals. Keywords: DS/CDMA signals, BPSK, QPSK. Report period: October 2013 - October 2014.

  7. Study of space shuttle EVA/IVA support requirements. Volume 1: Technical summary report

    NASA Technical Reports Server (NTRS)

    Copeland, R. J.; Wood, P. W., Jr.; Cox, R. L.

    1973-01-01

    Results are summarized which were obtained for equipment requirements for the space shuttle EVA/IVA pressure suit, life support system, mobility aids, vehicle support provisions, and energy 4 support. An initial study of tasks, guidelines, and constraints and a special task on the impact of a 10 psia orbiter cabin atmosphere are included. Supporting studies not related exclusively to any one group of equipment requirements are also summarized. Representative EVA/IVA task scenarios were defined based on an evaluation of missions and payloads. Analysis of the scenarios resulted in a total of 788 EVA/IVA's in the 1979-1990 time frame, for an average of 1.3 per shuttle flight. Duration was estimated to be under 4 hours on 98% of the EVA/IVA's, and distance from the airlock was determined to be 70 feet or less 96% of the time. Payload water vapor sensitivity was estimated to be significant on 9%-17% of the flights. Further analysis of the scenarios was carried out to determine specific equipment characteristics, such as suit cycle and mobility requirements.

  8. Benchmarking wide swath altimetry-based river discharge estimation algorithms for the Ganges river system

    NASA Astrophysics Data System (ADS)

    Bonnema, Matthew G.; Sikder, Safat; Hossain, Faisal; Durand, Michael; Gleason, Colin J.; Bjerklie, David M.

    2016-04-01

    The objective of this study is to compare the effectiveness of three algorithms that estimate discharge from remotely sensed observables (river width, water surface height, and water surface slope) in anticipation of the forthcoming NASA/CNES Surface Water and Ocean Topography (SWOT) mission. SWOT promises to provide these measurements simultaneously, and the river discharge algorithms included here are designed to work with these data. Two algorithms were built around Manning's equation, the Metropolis Manning (MetroMan) method, and the Mean Flow and Geomorphology (MFG) method, and one approach uses hydraulic geometry to estimate discharge, the at-many-stations hydraulic geometry (AMHG) method. A well-calibrated and ground-truthed hydrodynamic model of the Ganges river system (HEC-RAS) was used as reference for three rivers from the Ganges River Delta: the main stem of Ganges, the Arial-Khan, and the Mohananda Rivers. The high seasonal variability of these rivers due to the Monsoon presented a unique opportunity to thoroughly assess the discharge algorithms in light of typical monsoon regime rivers. It was found that the MFG method provides the most accurate discharge estimations in most cases, with an average relative root-mean-squared error (RRMSE) across all three reaches of 35.5%. It is followed closely by the Metropolis Manning algorithm, with an average RRMSE of 51.5%. However, the MFG method's reliance on knowledge of prior river discharge limits its application on ungauged rivers. In terms of input data requirement at ungauged regions with no prior records, the Metropolis Manning algorithm provides a more practical alternative over a region that is lacking in historical observations as the algorithm requires less ancillary data. The AMHG algorithm, while requiring the least prior river data, provided the least accurate discharge measurements with an average wet and dry season RRMSE of 79.8% and 119.1%, respectively, across all rivers studied. This poor
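
    The relative root-mean-squared error used to score the algorithms can be computed as below. This assumes the common definition of RRMSE as the RMSE normalized by the mean reference discharge, which may differ in detail from the paper's exact formulation; the example discharges are made up.

      import numpy as np

      def rrmse(q_est, q_ref):
          # Relative RMSE: RMSE of the estimates normalized by the mean reference discharge.
          q_est, q_ref = np.asarray(q_est, dtype=float), np.asarray(q_ref, dtype=float)
          return np.sqrt(np.mean((q_est - q_ref) ** 2)) / np.mean(q_ref)

      print(rrmse([3200.0, 4100.0, 2800.0], [3000.0, 4500.0, 2600.0]))   # made-up discharges (m^3/s)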

  9. Robust w-Estimators for Cryo-EM Class Means

    PubMed Central

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
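
    The general idea of a threshold-free, outlier-robust class average can be sketched with iteratively reweighted least squares, as below. The Cauchy-type weight function here is a stand-in for illustration; it is not the specific w-estimator weight function proposed in the paper.

      import numpy as np

      def robust_class_mean(images, n_iter=20, c=2.0):
          # images: stack of aligned images, shape (n, H, W); returns a robust weighted average.
          imgs = np.asarray(images, dtype=float)
          mean = imgs.mean(axis=0)
          for _ in range(n_iter):
              resid = np.linalg.norm((imgs - mean).reshape(len(imgs), -1), axis=1)   # per-image residual
              scale = np.median(resid) + 1e-12
              w = 1.0 / (1.0 + (resid / (c * scale)) ** 2)      # Cauchy-type weights downweight outliers
              mean = np.tensordot(w, imgs, axes=1) / w.sum()    # weighted average image
          return mean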

  10. Estimating average shock pressures recorded by impactite samples based on universal stage investigations of planar deformation features in quartz - Sources of error and recommendations

    NASA Astrophysics Data System (ADS)

    Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.

    2018-01-01

    Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal stage measurements and crystallographically indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains into types (A-E) based on the PDF orientation pattern, and then calculation of a mean shock pressure for each sample.

  11. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average mu-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the

  12. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating

  13. Number of 24-hour diet recalls needed to estimate energy intake.

    PubMed

    Ma, Yunsheng; Olendzki, Barbara C; Pagoto, Sherry L; Hurley, Thomas G; Magner, Robert P; Ockene, Ira S; Schneider, Kristin L; Merriam, Philip A; Hébert, James R

    2009-08-01

    Twenty-four-hour diet recall interviews (24HRs) are used to assess diet and to validate other diet assessment instruments. Therefore it is important to know how many 24HRs are required to describe an individual's intake. Seventy-nine middle-aged white women completed seven 24HRs over a 14-day period, during which energy expenditure (EE) was determined by the doubly labeled water method (DLW). Mean daily intakes were compared to DLW-derived EE using paired t tests. Linear mixed models were used to evaluate the effect of call sequence and day of the week on 24HR-derived energy intake while adjusting for education, relative body weight, social desirability, and an interaction between call sequence and social desirability. Mean EE from DLW was 2115 kcal/day. Adjusted 24HR-derived energy intake was lowest at call 1 (1501 kcal/day); significantly higher energy intake was observed at calls 2 and 3 (2246 and 2315 kcal/day, respectively). Energy intake on Friday was significantly lower than on Sunday. Averaging energy intake from the first two calls better approximated true energy expenditure than did the first call, and averaging the first three calls further improved the estimate (p=0.02 for both comparisons). Additional calls did not improve estimation. Energy intake is underreported on the first 24HR. Three 24HRs appear optimal for estimating energy intake.

  14. Survival estimates for reintroduced populations of the Chiricahua Leopard Frog (Lithobates chiricahuensis)

    USGS Publications Warehouse

    Howell, Paige E; Hossack, Blake R.; Muths, Erin L.; Sigafus, Brent H.; Chandler, Richard B.

    2016-01-01

    Global amphibian declines have been attributed to a number of factors including disease, invasive species, habitat degradation, and climate change. Reintroduction is one management action that is commonly used with the goal of recovering imperiled species. The success of reintroductions varies widely, and evaluating their efficacy requires estimates of population viability metrics, such as underlying vital rates and trends in abundance. Although rarely quantified, assessing vital rates for recovering populations provides a more mechanistic understanding of population growth than numerical trends in population occupancy or abundance. We used three years of capture-mark-recapture data from three breeding ponds and a Cormack-Jolly-Seber model to estimate annual apparent survival for reintroduced populations of the federally threatened Chiricahua Leopard Frog (Lithobates chiricahuensis) at the Buenos Aires National Wildlife Refuge (BANWR), in the Altar Valley, Arizona, USA. To place our results in context, we also compiled published survival estimates for other ranids. Average apparent survival of Chiricahua Leopard Frogs at BANWR was 0.27 (95% CI [0.07, 0.74]) and average individual capture probability was 0.02 (95% CI [0, 0.05]). Our apparent survival estimate for Chiricahua Leopard Frogs is lower than for most other ranids and is not consistent with recent research that showed metapopulation viability in the Altar Valley is high. We suggest that low apparent survival may be indicative of high emigration rates. We recommend that future research should estimate emigration rates so that actual, rather than apparent, survival can be quantified to improve population viability assessments of threatened species following reintroduction efforts.

  15. Radiotelemetry to estimate stream life of adult chum salmon in the McNeil River, Alaska

    USGS Publications Warehouse

    Peirce, Joshua M.; Otis, Edward O.; Wipfli, Mark S.; Follmann, Erich H.

    2011-01-01

    Estimating salmon escapement is one of the fundamental steps in managing salmon populations. The area-under-the-curve (AUC) method is commonly used to convert periodic aerial survey counts into annual salmon escapement indices. The AUC requires obtaining accurate estimates of stream life (SL) for target species. Traditional methods for estimating SL (e.g., mark–recapture) are not feasible for many populations. Our objective in this study was to determine the average SL of chum salmon Oncorhynchus keta in the McNeil River, Alaska, through radiotelemetry. During the 2005 and 2006 runs, 155 chum salmon were fitted with mortality-indicating radio tags as they entered the McNeil River and tracked until they died. A combination of remote data loggers, aerial surveys, and foot surveys were used to determine the location of fish and provide an estimate of time of death. Higher predation resulted in tagged fish below McNeil Falls having a significantly shorter SL (12.6 d) than those above (21.9 d). The streamwide average SL (13.8 d) for chum salmon at the McNeil River was lower than the regionwide value (17.5 d) previously used to generate AUC indices of chum salmon escapement for the McNeil River. We conclude that radiotelemetry is an effective tool for estimating SL in rivers not well suited to other methods.
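
    The AUC escapement index described here amounts to integrating the periodic survey counts over time and dividing by the average stream life. A minimal sketch follows, using made-up survey counts (not McNeil River data) together with the 13.8-day streamwide stream life reported in the study.

      import numpy as np

      def auc_escapement(survey_days, counts, stream_life_days):
          days = np.asarray(survey_days, dtype=float)
          c = np.asarray(counts, dtype=float)
          auc = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(days))   # fish-days under the count curve
          return auc / stream_life_days

      print(auc_escapement([0, 10, 20, 30, 40], [0, 500, 1200, 600, 0], 13.8))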

  16. The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Pituch, Keenan A.

    2009-01-01

    The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…

  17. MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning

    PubMed Central

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge. PMID:25822851

  18. MRI-based intelligence quotient (IQ) estimation with sparse learning.

    PubMed

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.

  19. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
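
    The consensus structure can be sketched in a stripped-down form: one proximal update driven by the likelihood, one driven by the prior, and an averaging (consensus) step. In the sketch below a simple arithmetic average stands in for the Kalman-smoother averaging used in the paper, and the Gaussian likelihood and L1 prior are illustrative choices, not the paper's models.

      import numpy as np

      def consensus_admm(y, prox_lik, prox_prior, rho=1.0, n_iter=200):
          x = np.zeros_like(y)                               # consensus (MAP) estimate
          z1, u1 = np.zeros_like(y), np.zeros_like(y)        # likelihood copy and scaled dual
          z2, u2 = np.zeros_like(y), np.zeros_like(y)        # prior copy and scaled dual
          for _ in range(n_iter):
              z1 = prox_lik(x - u1, rho)                     # estimate driven by the likelihood
              z2 = prox_prior(x - u2, rho)                   # estimate driven by the prior
              x = 0.5 * ((z1 + u1) + (z2 + u2))              # "averaging" (consensus) step
              u1 += z1 - x
              u2 += z2 - x
          return x

      # Example: Gaussian likelihood around noisy data y with an L1 (sparsity-promoting) prior.
      rng = np.random.default_rng(0)
      y = rng.normal(0.0, 1.0, 50)
      prox_lik = lambda v, rho: (rho * v + y) / (rho + 1.0)                              # prox of 0.5*||z - y||^2
      prox_prior = lambda v, rho: np.sign(v) * np.maximum(np.abs(v) - 0.1 / rho, 0.0)    # soft threshold
      x_map = consensus_admm(y, prox_lik, prox_prior)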

  20. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is by the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation, RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha-1 y-1. Based on the result, soil erosion was classified into a soil erosion severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors have been broken into two categories, soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP), which have been computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
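
    The RUSLE calculation itself is a cell-by-cell product of the factor rasters, A = R * K * LS * C * P. The sketch below applies it to small illustrative arrays; the factor values are placeholders, not the Kallar watershed layers.

      import numpy as np

      def rusle_soil_loss(R, K, LS, C, P):
          # Cell-by-cell annual soil loss A = R * K * LS * C * P.
          return R * K * LS * C * P

      R = np.full((3, 3), 620.0)    # rainfall erosivity (placeholder values)
      K = np.full((3, 3), 0.28)     # soil erodibility
      LS = np.full((3, 3), 1.4)     # slope length and steepness
      C = np.full((3, 3), 0.35)     # cover management
      P = np.full((3, 3), 0.8)      # conservation practice
      print(rusle_soil_loss(R, K, LS, C, P))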

  1. RHIC BPM system average orbit calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michnoff,R.; Cerniglia, P.; Degen, C.

    2009-05-04

    RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
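
    As a simple illustration of the general idea (not the actual RHIC algorithm), a continuously updated average over turn-by-turn BPM data can be computed as a moving average with a programmable window length:

      import numpy as np

      def running_average_orbit(positions, window):
          # positions: turn-by-turn readings for one BPM; window: number of turns per average.
          kernel = np.ones(window) / window
          return np.convolve(positions, kernel, mode="valid")   # continuously updated average orbit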

  2. Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM

    NASA Astrophysics Data System (ADS)

    Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng

    2015-07-01

    We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol decision-aided estimation and the ICI phase noise time-average approximation. An additional initial decision process with suitable threshold is introduced in order to suppress the decision error symbols. Our proposed ICI mitigation scheme is proved to be effective in removing the ICI for a simulated CO-OFDM with 16-QAM modulation format. With the slightly high computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at a relatively wide laser line-width and high OSNR.

  3. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    NASA Astrophysics Data System (ADS)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. Here we use, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
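
    The conventional annual-maxima/GEV approach that MEVD is contrasted with can be sketched in a few lines with SciPy; the annual maxima below are made-up values, not TRMM or rain-gauge data.

      import numpy as np
      from scipy.stats import genextreme

      annual_max = np.array([62, 85, 71, 99, 58, 110, 76, 91, 68, 83], dtype=float)  # mm/day, made up
      shape, loc, scale = genextreme.fit(annual_max)             # fit a GEV to the annual maxima
      print(genextreme.isf(1.0 / 50.0, shape, loc, scale))       # approximate 50-year daily rainfall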

  4. 41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property... average fuel economy standards we must meet? (a) Yes. 49 U.S.C. 32917 and Executive Order 12375 require that each executive agency meet the fleet average fuel economy standards in place as of January 1 of...

  5. 41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property... average fuel economy standards we must meet? (a) Yes. 49 U.S.C. 32917 and Executive Order 12375 require that each executive agency meet the fleet average fuel economy standards in place as of January 1 of...

  6. 41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property... average fuel economy standards we must meet? (a) Yes. 49 U.S.C. 32917 and Executive Order 12375 require that each executive agency meet the fleet average fuel economy standards in place as of January 1 of...

  7. 41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property... average fuel economy standards we must meet? (a) Yes. 49 U.S.C. 32917 and Executive Order 12375 require that each executive agency meet the fleet average fuel economy standards in place as of January 1 of...

  8. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...

  9. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...

  12. Estimating mineral requirements of Nellore beef bulls fed with or without inorganic mineral supplementation and the influence on mineral balance.

    PubMed

    Zanetti, D; Godoi, L A; Estrada, M M; Engle, T E; Silva, B C; Alhadas, H M; Chizzotti, M L; Prados, L F; Rennó, L N; Valadares Filho, S C

    2017-04-01

    The objectives of this study were to quantify the mineral balance of Nellore cattle fed with and without Ca, P, and micromineral (MM) supplementation and to estimate the net and dietary mineral requirement for cattle. Nellore cattle (n = 51; 270.4 ± 36.6 kg initial BW and 8 mo age) were assigned to 1 of 3 groups: reference (n = 5), maintenance (n = 4), and performance (n = 42). The reference group was slaughtered prior to the experiment to estimate initial body composition. The maintenance group was used to collect values of animals at low gain and reduced mineral intake. The performance group was assigned to 1 of 6 treatments: sugarcane as the roughage source with a concentrate supplement composed of soybean meal and soybean hulls with and without Ca, P, and MM supplementation; sugarcane as the roughage source with a concentrate supplement composed of soybean meal and ground corn with and without Ca, P, and MM supplementation; and corn silage as the roughage source with a concentrate supplement composed of soybean meal and ground corn with and without Ca, P, and MM supplementation. Orthogonal contrasts were adopted to compare mineral intake, fecal and urinary excretion, and apparent retention among treatments. Maintenance requirements and true retention coefficients were generated with the aid of linear regression between mineral intake and mineral retention. Mineral composition of the body and gain requirements was assessed using nonlinear regression between body mineral content and mineral intake. Mineral intake and fecal and urinary excretion were measured. Intakes of Ca, P, S, Cu, Zn, Mn, Co, and Fe were reduced in the absence of Ca, P, and MM supplementation (P < 0.05). Fecal excretion of Ca, Cu, Zn, Mn, and Co was also reduced in treatments without supplementation (P < 0.01). Overall, excretion and apparent absorption and retention coefficients were reduced when minerals were not supplied (P < 0.05). The use of the true retention coefficient instead of the true
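
    The regression step described above (linear regression of mineral retention on mineral intake, giving a retention coefficient from the slope and a maintenance-related estimate from the intercept) can be sketched as follows; the intake and retention values are illustrative, not data from the study.

      import numpy as np

      intake = np.array([10.2, 14.8, 19.5, 25.1, 30.4])   # g/d mineral intake (illustrative)
      retained = np.array([1.1, 3.0, 4.8, 7.2, 9.1])      # g/d mineral retention (illustrative)
      slope, intercept = np.polyfit(intake, retained, 1)
      print("true retention coefficient (slope):", slope)
      print("intake at zero retention (maintenance proxy):", -intercept / slope)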

  13. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (P n) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate P n of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is P n. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using P n values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that P n under PL with high frequencies and duty ratios was comparable to, but did not exceed, P n under continuous light, and also showed that P n under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate P n under various light environments where PPFD changes cyclically.
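
    A toy version of such a pool-based kinetic model is sketched below: intermediates accumulate during light periods and are consumed during dark periods, with both rates depending on the current pool size. The rate constants, pool capacity, and parameter values are assumed for illustration only and are not the fitted parameters of the paper.

      def simulated_pn(ppfd_avg, frequency, duty, k_in=0.02, k_out=5.0,
                       pool_max=1.0, t_end=10.0, dt=1e-4):
          # Both the accumulation and consumption rates depend on the current pool size.
          period = 1.0 / frequency
          ppfd_on = ppfd_avg / duty                      # PPFD during the light fraction of each cycle
          pool = consumed = 0.0
          for step in range(int(t_end / dt)):
              t = step * dt
              light = (t % period) < duty * period
              fill = k_in * ppfd_on * (1.0 - pool / pool_max) if light else 0.0   # PI accumulation
              use = k_out * pool                                                   # PI consumption
              pool = max(pool + (fill - use) * dt, 0.0)
              consumed += use * dt
          return consumed / t_end                        # proxy for time-averaged Pn

      print(simulated_pn(ppfd_avg=200.0, frequency=100.0, duty=0.5))   # pulsed light
      print(simulated_pn(ppfd_avg=200.0, frequency=100.0, duty=1.0))   # continuous-light limit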

  14. Child Mortality Estimation: Estimating Sex Differences in Childhood Mortality since the 1970s

    PubMed Central

    Sawyer, Cheryl Chriss

    2012-01-01

    Introduction: Producing estimates of infant (under age 1 y), child (age 1–4 y), and under-five (under age 5 y) mortality rates disaggregated by sex is complicated by problems with data quality and availability. Interpretation of sex differences requires nuanced analysis: girls have a biological advantage against many causes of death that may be eroded if they are disadvantaged in access to resources. Earlier studies found that girls in some regions were not experiencing the survival advantage expected at given levels of mortality. In this paper I generate new estimates of sex differences for the 1970s to the 2000s. Methods and Findings: Simple fitting methods were applied to male-to-female ratios of infant and under-five mortality rates from vital registration, surveys, and censuses. The sex ratio estimates were used to disaggregate published series of both-sexes mortality rates that were based on a larger number of sources. In many developing countries, I found that sex ratios of mortality have changed in the same direction as historically occurred in developed countries, but typically had a lower degree of female advantage for a given level of mortality. Regional average sex ratios weighted by numbers of births were found to be highly influenced by China and India, the only countries where both infant mortality and overall under-five mortality were estimated to be higher for girls than for boys in the 2000s. For the less developed regions (comprising Africa, Asia excluding Japan, Latin America/Caribbean, and Oceania excluding Australia and New Zealand), on average, boys' under-five mortality in the 2000s was about 2% higher than girls'. A number of countries were found to still experience higher mortality for girls than boys in the 1–4-y age group, with concentrations in southern Asia, northern Africa/western Asia, and western Africa. In the more developed regions (comprising Europe, northern America, Japan, Australia, and New Zealand), I found that the sex
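
    As a worked example of the disaggregation step only (the fitting of the sex ratios themselves is not reproduced), a both-sexes mortality rate can be split into sex-specific rates given an estimated male-to-female mortality ratio and a sex ratio at birth; the input values below are illustrative.

```python
def split_by_sex(q_both, ratio_m_to_f, srb=1.05):
    """Disaggregate a both-sexes mortality rate into male and female rates.

    q_both        -- both-sexes probability of dying (e.g. under-five q5)
    ratio_m_to_f  -- estimated male-to-female mortality ratio (q_m / q_f)
    srb           -- sex ratio at birth (male births per female birth)

    The both-sexes rate is a births-weighted average of the sex-specific rates:
    q_both = (srb * q_m + q_f) / (srb + 1).
    """
    q_f = q_both * (srb + 1.0) / (srb * ratio_m_to_f + 1.0)
    q_m = ratio_m_to_f * q_f
    return q_m, q_f

# Illustrative values only: q5 of 60 per 1000 and a male-to-female ratio of 1.15
q_m, q_f = split_by_sex(0.060, 1.15)
print(round(q_m * 1000, 1), round(q_f * 1000, 1))  # ~64.1 and ~55.7 per 1000
```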

  15. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  16. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    PubMed

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms, 700 Hz–7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore, the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are
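
    A minimal numerical sketch of why hardware averaging helps and why the benefit saturates: uncorrelated amplifier noise averages down as 1/√N, while the electrode (source-resistance) thermal noise is common to all inputs and does not. The noise figures used below are round illustrative numbers, not the paper's measurements.

```python
import numpy as np

def total_noise_uvrms(n_amps, amp_noise=2.0, source_noise=0.5):
    """Input-referred noise (uVrms) with N amplifiers averaged in hardware.

    Uncorrelated amplifier noise scales as 1/sqrt(N); the electrode (source
    resistance) thermal noise is common to all inputs and does not average
    down, which is why the improvement saturates.  Values are illustrative.
    """
    return np.sqrt(amp_noise ** 2 / n_amps + source_noise ** 2)

for n in (1, 2, 3, 4, 8, 16):
    print(n, round(float(total_noise_uvrms(n)), 3))
```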

  17. Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average

    EPA Pesticide Factsheets

    This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information, each state has been divided into climate divisions, which are zones that share similar climate features. For more information: www.epa.gov/climatechange/science/indicators

  18. Bayesian block-diagonal variable selection and model averaging

    PubMed Central

    Papaspiliopoulos, O.; Rossell, D.

    2018-01-01

    Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501

  19. Averaging, passage through resonances, and capture into resonance in two-frequency systems

    NASA Astrophysics Data System (ADS)

    Neishtadt, A. I.

    2014-10-01

    Applying small perturbations to an integrable system leads to its slow evolution. For an approximate description of this evolution the classical averaging method prescribes averaging the rate of evolution over all the phases of the unperturbed motion. This simple recipe does not always produce correct results, because of resonances arising in the process of evolution. The phenomenon of capture into resonance consists in the system starting to evolve in such a way as to preserve the resonance property once it has arisen. This paper is concerned with application of the averaging method to a description of evolution in two-frequency systems. It is assumed that the trajectories of the averaged system intersect transversally the level surfaces of the frequency ratio and that certain other conditions of general position are satisfied. The rate of evolution is characterized by a small parameter ε. The main content of the paper is a proof of the following result: outside a set of initial data with measure of order √ε the averaging method describes the evolution to within O(√ε |ln ε|) for periods of time of order 1/ε. This estimate is sharp. The exceptional set of measure √ε contains the initial data for phase points captured into resonance. A description of the motion of such phase points is given, along with a survey of related results on averaging. Examples of capture into resonance are presented for some problems in the dynamics of charged particles. Several open problems are stated. Bibliography: 65 titles.
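
    For orientation, the classical two-frequency averaging prescription referred to above can be written in its standard textbook form (the generic setup, not the paper's specific theorem):

```latex
% Classical averaging for a two-frequency system with slow variables I and fast phases phi
\dot I = \varepsilon f(I,\varphi), \qquad
\dot\varphi = \omega(I) + \varepsilon g(I,\varphi), \qquad \varphi \in \mathbb{T}^2,
\qquad\Longrightarrow\qquad
\dot J = \varepsilon \langle f\rangle(J), \qquad
\langle f\rangle(J) = \frac{1}{(2\pi)^2}\int_{\mathbb{T}^2} f(J,\varphi)\,\mathrm{d}\varphi .
```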

  20. Determination of protein and amino acid requirements of lactating sows using a population-based factorial approach.

    PubMed

    Strathe, A V; Strathe, A B; Theil, P K; Hansen, C F; Kebreab, E

    2015-08-01

    Determination of appropriate nutritional requirements is essential to optimize the productivity and longevity of lactating sows. The current recommendations for requirements do not consider the large variation between animals. Therefore, the aim of this study was to determine the amino acid recommendations for lactating sows using a stochastic modeling approach that integrates population variation and uncertainty of key parameters into establishing nutritional recommendations. The requirement for individual sows was calculated using a factorial approach by adding the requirements for maintenance and milk. The energy balance of the sows was either negative or zero, depending on whether feed intake was a limiting factor. Some parameters in the model were sow-specific and others were population-specific, depending on the state of knowledge. Each simulation was for 1000 sows, repeated 100 times using Monte Carlo simulation techniques. BW, back fat thickness of the sow, litter size (LS), average litter gain (LG), dietary energy density and feed intake were inputs to the model. The model was tested using results from the literature, and the values were all within ±1 s.d. of the estimated requirements. Simulations were made for groups of low- (LS=10 (s.d.=1), LG=2 kg/day (s.d.=0.6)), medium- (LS=12 (s.d.=1), LG=2.5 kg/day (s.d.=0.6)) and high-producing (LS=14 (s.d.=1), LG=3.5 kg/day (s.d.=0.6)) sows, with the average requirement as the outcome. In another simulation, the requirements were estimated for each week of lactation. The results were given as the median and s.d. The average daily standardized ileal digestible (SID) protein and lysine requirements for low-, medium- and high-producing sows were 623 (CV=2.5%) and 45.1 (CV=4.8%); 765 (CV=4.9%) and 54.7 (CV=7.0%); and 996 (CV=8.5%) and 70.8 g/day (CV=9.6%), respectively. The SID protein and lysine requirements were lowest at week 1, intermediate at weeks 2 and 4, and highest at week 3 of lactation. The
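
    A minimal Monte Carlo sketch of the population-based factorial idea, with the requirement of each simulated sow written as a maintenance term plus a milk term; both coefficients and the input distributions are hypothetical placeholders rather than the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_lysine_requirements(n_sows=1000, lg_mean=2.5, lg_sd=0.6,
                                  bw_mean=230.0, bw_sd=25.0):
    """Population-based factorial sketch of the SID lysine requirement (g/day).

    requirement = maintenance (scaled by metabolic body weight) + milk output
    (scaled by litter gain).  Both coefficients below are assumed placeholders,
    not the published model parameters.
    """
    bw = rng.normal(bw_mean, bw_sd, n_sows)   # sow body weight, kg
    lg = rng.normal(lg_mean, lg_sd, n_sows)   # litter gain, kg/day
    maintenance = 0.036 * bw ** 0.75          # g SID lysine/day (assumed coefficient)
    milk = 22.0 * lg                          # g SID lysine/day (assumed coefficient)
    return maintenance + milk

req = simulated_lysine_requirements()
print(f"median {np.median(req):.1f} g/day, CV {100 * req.std() / req.mean():.1f}%")
```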

  1. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure of merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (zcut, Δz) plane, where zcut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying zcut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (zcut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization. (2) Flux-averaging JLA samples at zcut ≥ 0.4 yields tighter DE constraints than the case without FA. (3) Using FA can significantly reduce the redshift evolution of β. (4) The best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
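
    A generic sketch of the flux-averaging step (not the JLA-specific likelihood machinery): SNe above zcut are grouped into bins of width Δz, their distance moduli are converted to fluxes, the fluxes are averaged, and each bin average is converted back to a distance modulus at the mean bin redshift.

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Flux-average SN distance moduli above z_cut in redshift bins of width dz.

    Flux is proportional to 10**(-0.4 * mu); SNe below z_cut are passed through
    unchanged.  This is a generic sketch of the technique, not the JLA pipeline.
    """
    z = np.asarray(z, dtype=float)
    mu = np.asarray(mu, dtype=float)
    keep = z < z_cut
    z_out, mu_out = list(z[keep]), list(mu[keep])
    hi_z, hi_mu = z[~keep], mu[~keep]
    if hi_z.size:
        edges = np.arange(z_cut, hi_z.max() + 2 * dz, dz)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (hi_z >= lo) & (hi_z < hi)
            if sel.any():
                mean_flux = np.mean(10.0 ** (-0.4 * hi_mu[sel]))
                z_out.append(hi_z[sel].mean())
                mu_out.append(-2.5 * np.log10(mean_flux))
    return np.array(z_out), np.array(mu_out)

# Toy usage with synthetic redshifts and a rough distance-modulus shape.
z = np.linspace(0.05, 1.2, 200)
mu = 5 * np.log10(4300.0 * z * (1 + z / 2)) + 25
zb, mub = flux_average(z, mu)
print(len(z), "->", len(zb), "points after flux averaging")
```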

  2. Dynamic testing and test anxiety amongst gifted and average-ability children.

    PubMed

    Vogelaar, Bart; Bakker, Merel; Elliott, Julian G; Resing, Wilma C M

    2017-03-01

    Dynamic testing has been proposed as a testing approach that is less disadvantageous for children who may be potentially subject to bias when undertaking conventional assessments. For example, those who encounter high levels of test anxiety, or who are unfamiliar with standardized test procedures, may fail to demonstrate their true potential or capabilities. While dynamic testing has proven particularly useful for special groups of children, it has rarely been used with gifted children. We investigated whether it would be useful to conduct a dynamic test to measure the cognitive abilities of intellectually gifted children. We also investigated whether test anxiety scores would be related to a progression in the children's test scores after dynamic training. Participants were 113 children aged between 7 and 8 years from several schools in the western part of the Netherlands. The children were categorized as either gifted or average-ability and split into an unguided practice or a dynamic testing condition. The study employed a pre-test-training-post-test design. Using linear mixed modelling analysis with a multilevel approach, we inspected the growth trajectories of children in the various conditions and examined the impact of ability and test anxiety on progression and training benefits. Dynamic testing proved to be successful in improving the scores of the children, although no differences in training benefits were found between gifted and average-ability children. Test anxiety was shown to influence the children's rate of change across all test sessions and their improvement in performance accuracy after dynamic training. © 2016 The British Psychological Society.

  3. Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets

    NASA Astrophysics Data System (ADS)

    Charonko, John; Prestridge, Kathy

    2014-11-01

    Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems, including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing that are important for modeling these flows, such as the PDFs of density and density gradients, turbulent kinetic energy, Favre-averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra, were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
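
    For reference, the Favre (density-weighted) average used for such variable-density statistics is the standard definition below, with overbars denoting Reynolds averages; this is textbook notation rather than anything specific to the experiment.

```latex
% Favre (density-weighted) averaging: standard definitions
\tilde{u} = \frac{\overline{\rho u}}{\overline{\rho}}, \qquad
u = \tilde{u} + u'', \qquad
\widetilde{u''_i u''_j} = \frac{\overline{\rho\, u''_i u''_j}}{\overline{\rho}} .
```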

  4. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    PubMed

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.

  5. Department of Transportation, National Highway Traffic Safety Administration : light truck average fuel economy standard, model year 1999

    DOT National Transportation Integrated Search

    1997-04-18

    Section 32902(a) of title 49, United States Code, requires the Secretary of Transportation to prescribe by regulation, at least 18 months in advance of each model year, average fuel economy standards (known as "Corporate Average Fuel Economy" or "CAF...

  6. Direct estimation of the oxygen requirements of Achromobacter xylosoxidans for aerobic degradation of monoaromatic hydrocarbons (BTEX) in a bioscrubber.

    PubMed

    Nielsen, David R; McLellan, P James; Daugulis, Andrew J

    2006-08-01

    The O2 requirements for biomass production and supplying maintenance energy demands during the degradation of both benzene and ethylbenzene by Achromobacter xylosoxidans Y234 were measured using a newly proposed technique involving a bioscrubber. Using this approach, relevant microbial parameter estimates were directly and simultaneously obtained via linear regression of pseudo steady-state data. For benzene and ethylbenzene, the biomass yield on O2, Y(X/O2), was estimated on a cell dry weight (CDW) basis as 1.96 ± 0.25 mg CDW/mg O2 and 0.98 ± 0.17 mg CDW/mg O2, while the specific rate of O2 consumption for maintenance, m(O2), was estimated as 0.041 ± 0.008 mg O2/(mg CDW·h) and 0.053 ± 0.022 mg O2/(mg CDW·h), respectively.
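
    The linear-regression step behind such estimates is the classical Pirt-type relation, in which the specific O2 uptake rate is regressed on the specific growth rate; the slope gives 1/Y(X/O2) and the intercept gives m(O2). The sketch below uses synthetic pseudo steady-state data, not the paper's measurements.

```python
import numpy as np

# Pirt-type relation: q_O2 = mu / Y_XO2 + m_O2.  Regressing the specific O2
# uptake rate (q_O2) on the specific growth rate (mu) gives slope = 1/Y_XO2
# and intercept = m_O2.  The data below are synthetic.
rng = np.random.default_rng(0)
mu = np.linspace(0.02, 0.20, 8)                 # specific growth rate, 1/h
true_Y, true_m = 1.96, 0.041                    # mg CDW/mg O2, mg O2/(mg CDW h)
q_o2 = mu / true_Y + true_m + rng.normal(0, 0.003, mu.size)

slope, intercept = np.polyfit(mu, q_o2, 1)
print("Y_X/O2 ~", round(1 / slope, 2), "mg CDW/mg O2")
print("m_O2   ~", round(intercept, 3), "mg O2/(mg CDW h)")
```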

  7. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  8. SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure

    PubMed Central

    Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.

    2017-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341

  9. SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.

    PubMed

    Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S

    2017-03-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.

  10. A Comparison of Several Techniques For Estimating The Average Volume Per Acre For Multipanel Data With Missing Panels

    Treesearch

    Dave Gartner; Gregory A. Reams

    2001-01-01

    As Forest Inventory and Analysis changes from a periodic survey to a multipanel annual survey, a transition will occur where only some of the panels have been resurveyed. Several estimation techniques use data from the periodic survey in addition to the data from the partially completed multipanel data. These estimation techniques were compared using data from two...

  11. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
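
    The double BOOT procedure itself is not reproduced here; as a simpler reference point for how model-averaged effect estimates are formed, the sketch below combines hypothetical candidate-model estimates with Akaike (AIC) weights.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights w_k ~ exp(-0.5 * (AIC_k - AIC_min)), normalised to sum to 1."""
    aic = np.asarray(aic, dtype=float)
    rel = np.exp(-0.5 * (aic - aic.min()))
    return rel / rel.sum()

# Hypothetical PM-mortality effect estimates (percent increase per 10 ug/m3)
# from four candidate models, with their AIC values (illustrative numbers).
betas = np.array([0.55, 0.48, 0.61, 0.50])
aics = np.array([1012.3, 1010.8, 1015.1, 1011.2])

w = akaike_weights(aics)
print("weights:", np.round(w, 3))
print("model-averaged effect:", round(float(w @ betas), 3))
```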

  12. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; and (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and
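
    Regression equations of this kind are typically fitted as power laws in log space, Q = a·W^b; the sketch below fits such a relation to synthetic data (not the Montana dataset) and reports an approximate standard error of estimate in percent.

```python
import numpy as np

# Fit Q = a * W^b by least squares in log space (synthetic data, not the
# Montana channel-width dataset).
rng = np.random.default_rng(3)
width = np.array([5.0, 8.0, 12.0, 20.0, 35.0, 60.0, 90.0])              # channel width, ft
q100 = 12.0 * width ** 1.6 * np.exp(rng.normal(0.0, 0.3, width.size))   # peak discharge, cfs

b, log_a = np.polyfit(np.log(width), np.log(q100), 1)
a = np.exp(log_a)
print(f"Q100 ~ {a:.1f} * W^{b:.2f}")

# Approximate standard error of estimate, expressed in percent as such
# reports commonly do.
resid = np.log(q100) - (log_a + b * np.log(width))
se_pct = 100.0 * (np.exp(resid.std(ddof=2)) - 1.0)
print(f"standard error of estimate ~ {se_pct:.0f}%")
```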

  13. Calibrating recruitment estimates for mourning doves from harvest age ratios

    USGS Publications Warehouse

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in

  14. Estimating the resources required in the roll-out of universal access to antiretroviral treatment in Zimbabwe.

    PubMed

    Hallett, T B; Gregson, S; Dube, S; Mapfeka, E S; Mugurungi, O; Garnett, G P

    2011-12-01

    The objective was to develop projections of the resources required (person-years of drug supply and healthcare worker time) for universal access to antiretroviral treatment (ART) in Zimbabwe. A stochastic mathematical model of disease progression, diagnosis, clinical monitoring and survival in HIV infected individuals. The number of patients receiving ART is determined by many factors, including the strategy of the ART programme (method of initiation, frequency of patient monitoring, ability to include patients diagnosed before ART became available), other healthcare services (referral rates from antenatal clinics, uptake of HIV testing), demographic and epidemiological conditions (past and future trends in incidence rates and population growth) as well as the medical impact of ART (average survival and the relationship with CD4 count when initiated). The variations in these factors lead to substantial differences in long-term projections; with universal access by 2010 and no further prevention interventions, between 370 000 and almost 2 million patients could be receiving treatment in 2030, a fivefold difference. Under universal access, by 2010 each doctor will initiate ART for up to two patients every day and the case-load for nurses will at least triple as more patients enter care and start treatment. The resources required by ART programmes are great and depend on the healthcare systems and the demographic/epidemiological context. This leads to considerable uncertainty in long-term projections and large variation in the resources required in different countries and over time. Understanding how current practices relate to future resource requirements can help optimise ART programmes and inform long-term public health planning.

  15. A switched systems approach to image-based estimation

    NASA Astrophysics Data System (ADS)

    Parikh, Anup

    With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been
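
    The average dwell-time notion invoked in such analyses is conventionally defined as follows (standard switched-systems form, stated for reference rather than taken from the dissertation): the number of switches Nσ(t, T) on any interval (t, T) is bounded by a chatter bound N0 plus the interval length divided by the average dwell time τa.

```latex
% Average dwell-time condition (standard form): tau_a is the average dwell time,
% N_0 the chatter bound, and N_sigma(t,T) the number of switches on (t,T).
N_\sigma(t, T) \le N_0 + \frac{T - t}{\tau_a} \qquad \text{for all } T \ge t \ge 0 .
```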

  16. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant

  17. Weighted south-wide average pulpwood prices

    Treesearch

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
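
    A minimal illustration of the weighting itself, with hypothetical prices and production volumes; the production-weighted mean down-weights high-price, low-volume states relative to the simple average.

```python
# Production-weighted vs. unweighted average pulpwood price (hypothetical numbers).
prices = {"AL": 22.5, "GA": 24.0, "MS": 20.0, "TX": 18.5}      # $/cord, illustrative
production = {"AL": 5.2, "GA": 8.1, "MS": 3.0, "TX": 6.7}      # million cords, illustrative

unweighted = sum(prices.values()) / len(prices)
weighted = sum(prices[s] * production[s] for s in prices) / sum(production.values())
print(round(unweighted, 2), round(weighted, 2))
```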

  18. An Improved Internal Consistency Reliability Estimate.

    ERIC Educational Resources Information Center

    Cliff, Norman

    1984-01-01

    The proposed coefficient is derived by assuming that the average Goodman-Kruskal gamma between items of identical difficulty would be the same for items of different difficulty. An estimate of covariance between items of identical difficulty leads to an estimate of the correlation between two tests with identical distributions of difficulty.…

  19. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

    Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks, as though no field data were available, and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average
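
    For reference, the Rational Method estimate mentioned above takes the form Q = C·i·A; in U.S. customary units (i in in/hr, A in acres) the conversion factor is approximately 1, so Q comes out directly in cfs. The inputs below are illustrative.

```python
def rational_peak_discharge(c, intensity_in_hr, area_acres):
    """Rational Method estimate of peak discharge.

    Q (cfs) ~ C * i (in/hr) * A (acres); the exact unit conversion factor is
    about 1.008 and is conventionally taken as 1.  Inputs are illustrative.
    """
    return c * intensity_in_hr * area_acres

print(rational_peak_discharge(c=0.35, intensity_in_hr=2.5, area_acres=120))  # ~105 cfs
```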

  20. The ETA-II induction linac as a high-average-power FEL driver

    NASA Astrophysics Data System (ADS)

    Nexsen, W. E.; Atkinson, D. P.; Barrett, D. M.; Chen, Y.-J.; Clark, J. C.; Griffith, L. V.; Kirbie, H. C.; Newton, M. A.; Paul, A. C.; Sampayan, S.; Throop, A. L.; Turner, W. C.

    1990-10-01

    The Experimental Test Accelerator II (ETA-II) is the first induction linac designed specifically to FEL requirements. It is primarily intended to demonstrate induction accelerator technology for high-average-power, high-brightness electron beams, and will be used to drive a 140 and 250 GHz microwave FEL for plasma heating experiments in the Microwave Tokamak Experiment (MTX) at LLNL. Its features include high-vacuum design which allows the use of an intrinsically bright dispenser cathode, induction cells designed to minimize BBU growth rate, and careful attention to magnetic alignment to minimize radial sweep due to beam corkscrew. The use of magnetic switches allows high-average-power operation. At present ETA-II is being used to drive 140 GHz plasma heating experiments. These experiments require nominal beam parameters of 6 MeV energy, 2 kA current, 20 ns pulse width and a brightness of 1 × 10⁸ A/(m rad)² at the wiggler with a pulse repetition frequency (prf) of 0.5 Hz. Future 250 GHz experiments require beam parameters of 10 MeV energy, 3 kA current, 50 ns pulse width and a brightness of 1 × 10⁸ A/(m rad)² with a 5 kHz prf for 0.5 s. In this paper we discuss the present status of ETA-II parameters and the phased development program necessary to satisfy these future requirements.

  1. Recent developments in high average power driver technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses.

  2. Characterisation of plastic microbeads in facial scrubs and their estimated emissions in Mainland China.

    PubMed

    Cheung, Pui Kwan; Fok, Lincoln

    2017-10-01

    Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density number, population data, facial scrub usage rate, sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment
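
    A back-of-the-envelope version of the two-source accounting (direct discharge with untreated sewage plus incomplete WWTP removal); every input passed in below is an illustrative assumption, not a figure from the study.

```python
def annual_microbead_emissions(population, usage_rate, grams_per_person_year,
                               beads_per_gram, sewage_treatment_rate,
                               wwtp_removal_efficiency):
    """Rough two-source accounting of microbead emissions (beads/year).

    Direct emissions: beads carried by untreated sewage.
    WWTP emissions:   beads escaping treatment due to incomplete removal.
    All parameter values passed in are illustrative assumptions.
    """
    beads_used = population * usage_rate * grams_per_person_year * beads_per_gram
    direct = beads_used * (1.0 - sewage_treatment_rate)
    wwtp = beads_used * sewage_treatment_rate * (1.0 - wwtp_removal_efficiency)
    return direct + wwtp

total = annual_microbead_emissions(population=1.38e9, usage_rate=0.2,
                                   grams_per_person_year=50.0,
                                   beads_per_gram=20860,
                                   sewage_treatment_rate=0.95,
                                   wwtp_removal_efficiency=0.75)
print(f"{total:.2e} beads/year")
```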

  3. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.

  4. 16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE"...

  5. 16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE"...

  6. 16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE"...

  7. 16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING RULE"...

  8. Protein requirements in male adolescent soccer players.

    PubMed

    Boisseau, N; Vermorel, M; Rance, M; Duché, P; Patureau-Mirand, P

    2007-05-01

    Few investigations have studied protein metabolism in children and adolescent athletes, which makes it difficult to assess daily recommended dietary protein allowances in this population. The problem in paediatric competitors is determining the additional protein needs resulting from intensive physical training. The aim of this investigation was to determine the protein requirement of 14-year-old male adolescent soccer players. Healthy male adolescent soccer players (n = 11; 13.8 ± 0.1 years) participated in a short-term repeated nitrogen balance study. Diets were designed to provide protein at three levels: 1.4, 1.2 and 1.0 g protein per kg body weight (BW). Nutrient and energy intakes were assessed from 4 day food records corresponding to 4 day training periods during 3 weeks. Urine was collected during four consecutive days and analysed for nitrogen. The nitrogen balances were calculated from mean daily protein intake, mean urinary nitrogen excretion and estimated faecal and integumental nitrogen losses. Nitrogen balance increased with both protein intake and energy balance. At energy equilibrium, the daily protein intake needed to balance nitrogen losses was 1.04 g/kg per day. This corresponds to an estimated average requirement (EAR) for protein of 1.20 g/kg per day and a recommended daily allowance (RDA) of 1.40 g/kg per day, assuming a daily nitrogen deposition of 11 mg/kg. The results of the present study suggest that the protein requirements of 14-year-old male athletes are above the RDA for non-active male adolescents.
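
    The bookkeeping behind a nitrogen balance estimate of this kind is sketched below: intake nitrogen is protein divided by 6.25, and balance is intake minus urinary, faecal and integumental losses. The loss values used are illustrative, not the study's measurements.

```python
def nitrogen_balance(protein_intake_g_per_kg, urinary_n_mg_per_kg,
                     faecal_n_mg_per_kg=20.0, integumental_n_mg_per_kg=8.0):
    """Daily nitrogen balance in mg N/kg body weight.

    Intake N is protein / 6.25; losses are urinary plus estimated faecal and
    integumental (skin, sweat) nitrogen.  The loss defaults are illustrative.
    """
    intake_n = protein_intake_g_per_kg / 6.25 * 1000.0   # mg N/kg per day
    losses = urinary_n_mg_per_kg + faecal_n_mg_per_kg + integumental_n_mg_per_kg
    return intake_n - losses

# e.g. 1.2 g protein/kg/day with 150 mg urinary N/kg/day (illustrative values)
print(round(nitrogen_balance(1.2, 150.0), 1), "mg N/kg per day")
```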

  9. Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system

    NASA Astrophysics Data System (ADS)

    Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye

    2017-12-01

    In this paper, we focus on analysis of the preamble-based joint estimation for channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the noise impact on the estimation accuracy, we proposed an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots within multiple FBMC frames. The laser-frequency offset is estimated according to the phase of this average. After correcting LFO, the final channel response is also acquired by averaging channel estimation results within multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
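
    A generic illustration of frequency-offset estimation from the phase of a pilot cross-correlation (not the authors' FBMC/OQAM preamble design): two identical pilot blocks separated by a known number of samples acquire a phase rotation proportional to the offset, which is recovered from the angle of their correlation. Averaging this correlation over multiple frames, as the paper proposes, would further suppress noise; a single frame is shown, and all signal parameters are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two identical pilot blocks separated by D samples acquire a phase rotation
# of 2*pi*f_off*D/fs; the offset is recovered from the angle of their
# cross-correlation.  Parameters below are assumptions for illustration.
fs = 1e9                   # sample rate, Hz (assumed)
D = 128                    # spacing between the two pilot copies, in samples
f_off_true = 2.0e6         # true carrier/laser frequency offset, Hz (assumed)

pilot = (rng.choice([-1, 1], D) + 1j * rng.choice([-1, 1], D)) / np.sqrt(2)
tx = np.concatenate([pilot, pilot])

n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * f_off_true * n / fs)
rx += 0.05 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

corr = np.sum(np.conj(rx[:D]) * rx[D:2 * D])
f_off_est = np.angle(corr) * fs / (2 * np.pi * D)   # valid for |f_off| < fs/(2*D)
print(f"estimated offset: {f_off_est / 1e6:.3f} MHz")
```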

  10. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
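
    A minimal Latin hypercube sampler of the kind such a scheme could use to place its parameter hypotheses is sketched below; it is a generic implementation, not the GRAPE code, and the example parameter ranges are arbitrary.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample of a box-shaped parameter space.

    bounds is a list of (low, high) pairs, one per parameter dimension.
    Each dimension is split into n_samples equal strata; one point is drawn
    per stratum and the strata are permuted independently per dimension,
    so adding dimensions does not require more samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    dims = len(bounds)
    u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# e.g. 8 parameter hypotheses over two uncertain parameters (mass, damping)
samples = latin_hypercube(8, [(0.5, 2.0), (0.01, 0.2)], np.random.default_rng(0))
print(np.round(samples, 3))
```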

  11. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  12. Dielectric method of high-resolution gas hydrate estimation

    NASA Astrophysics Data System (ADS)

    Sun, Y. F.; Goldberg, D.

    2005-02-01

    In-situ dielectric properties of natural gas hydrate are measured for the first time in the Mallik 5L-38 Well in the Mackenzie Delta, Canada. The average dielectric constant of the hydrate zones is 9, ranging from 5 to 20. The average resistivity is >5 ohm·m in the hydrate zones, ranging from 2 to 10 ohm·m at a 1.1 GHz dielectric tool frequency. The dielectric logs show similar trends to the sonic and induction resistivity logs, but exhibit inherently higher vertical resolution (<5 cm). The average in-situ hydrate saturation in the well is about 70%, ranging from 20% to 95%. The dielectric estimates are overall in agreement with induction estimates, but the induction log tends to overestimate hydrate content by up to 15%. Dielectric estimates could be used as a better proxy of in-situ hydrate saturation in modeling hydrate dynamics. The fine-scale structure in hydrate zones could help reveal hydrate formation history.

  13. Proof of age required--estimating age in adults without birth records.

    PubMed

    Phillips, Christine; Narayanasamy, Shanti

    2010-07-01

    Many adults from refugee source countries do not have documents of birth, either because they have been lost in flight, or because the civil infrastructure is too fragile to support routine recording of birth. In Western countries, date of birth is used as a basic identifier, and access to services and support tends to be age regulated. Doctors are not infrequently asked to write formal reports estimating the true age of adult refugees; however, there are no existing guidelines to assist in this task. This article provides an overview of methods to estimate age in living adults and outlines recommendations for best practice. Age should be estimated through physical examination; life history, matching local or national events with personal milestones; and existing nonformal documents. The accuracy of age estimation should be subject to three tests: biological plausibility, historical plausibility, and corroboration from reputable sources.

  14. Development of high-average-power DPSSL with high beam quality

    NASA Astrophysics Data System (ADS)

    Nakai, Sadao; Kanabe, Tadashi; Kawashima, Toshiyuki; Yamanaka, Masanobu; Izawa, Yasukazu; Nakatuka, Masahiro; Kandasamy, Ranganathan; Kan, Hirofumi; Hiruma, Teruo; Niino, Masayuki

    2000-08-01

    Recent progress in high-power diode lasers is opening new fields for lasers and their applications. We are developing high-average-power diode-pumped solid-state lasers (DPSSLs) for laser fusion power plants, for space propulsion, and for various industrial applications. The common requirements of our High Average-power Laser for Nuclear-fusion Application (HALNA) are a large pulse energy at a relatively low repetition rate of a few tens of Hz, beam quality on the order of the diffraction limit, and an efficiency of more than 10%. We constructed HALNA 10 (10 J x 10 Hz) and tested its performance to clarify the scalability to higher-power systems. In a preliminary experiment we obtained 8.5 J of output energy at 0.5 Hz with a beam quality of twice the diffraction-limited far-field pattern.
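
    For orientation, the 10 J x 10 Hz design point corresponds to 100 W of average output power; the sketch below simply multiplies pulse energy by repetition rate, using the figures quoted in the abstract.

        def average_power_w(pulse_energy_j, rep_rate_hz):
            """Average optical power of a pulsed laser: energy per pulse times pulses per second."""
            return pulse_energy_j * rep_rate_hz

        print(average_power_w(10.0, 10.0))   # HALNA 10 design point: 100 W
        print(average_power_w(8.5, 0.5))     # preliminary experiment: 4.25 W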

  15. A metabolic nitrogen balance study for 40 d and evaluation of the menstrual cycle on protein requirement in young Nigerian women.

    PubMed

    Egun, G N; Atinmo, T

    1993-09-01

    A long-term N balance study was carried out to determine the adequacy of an estimated protein requirement level recommended for young healthy Nigerian women and the effect of the menstrual cycle on the requirement. Eleven healthy young women, aged 25 (SD 2.6) years, were fed a diet providing 0.6 g protein (N x 6.25)/kg per d and an average energy intake of 0.17 (SD 0.012) MJ/kg per d. Urine, faeces, sweat and menstrual fluids were collected for estimation of N balance. Menstrual N loss varied among individuals, ranging from 46 to 124 mg N/d with an average of 89 (SD 21.8) mg N/d. Individual N balance was found to vary according to the day of the menstrual cycle: positive N balances were recorded at about ovulation, while negative balances were observed just before the onset of menstruation. The average N balance ranged from +8.49 (SD 5.64) to -430 (SD 7.84) mg N/kg per d. Nevertheless, an overall cumulative positive N balance of +5.7 (SD 6.98) mg N/kg per d, which did not change significantly with time, was observed for the last 5 d of two consecutive 20 d diet periods, although three subjects were in negative N balance. Blood biochemical measurements were stable except for one subject who had elevated serum aspartate aminotransferase (EC 2.6.1.1) levels. These findings suggest that our estimate of the protein requirement was sufficient to achieve N balance equilibrium in the majority (70%) of young women. However, to satisfy 97.5% of the population, slight adjustments might be necessary in energy intake, since subjects who were in cumulative negative N balance also lost weight.
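
    The balance calculation itself is simple bookkeeping: nitrogen intake (protein divided by 6.25, following the N x 6.25 convention in the abstract) minus urinary, faecal, sweat and menstrual losses. The sketch below uses the study's intake level but hypothetical loss values.

        def nitrogen_balance_mg_per_kg(protein_g_per_kg, losses_mg_per_kg):
            """Daily N balance in mg N/kg: N intake (protein / 6.25) minus summed losses."""
            intake_mg_per_kg = protein_g_per_kg / 6.25 * 1000.0
            return intake_mg_per_kg - sum(losses_mg_per_kg.values())

        # 0.6 g protein/kg per d corresponds to 96 mg N/kg per d of intake;
        # the individual loss values below are hypothetical, not study data
        losses = {"urine": 70.0, "faeces": 15.0, "sweat": 5.0, "menstrual": 1.5}
        print(nitrogen_balance_mg_per_kg(0.6, losses))   # positive -> net N retention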

  16. Redshift drift in an inhomogeneous universe: averaging and the backreaction conjecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koksbang, S.M.; Hannestad, S., E-mail: koksbang@phys.au.dk, E-mail: sth@phys.au.dk

    2016-01-01

    An expression for the average redshift drift in a statistically homogeneous and isotropic dust universe is given. The expression takes the same form as the expression for the redshift drift in FLRW models. It is used for a proof-of-principle study of the effects of backreaction on redshift drift measurements by combining the expression with two-region models. The study shows that backreaction can lead to positive redshift drift at low redshifts, exemplifying that a positive redshift drift at low redshifts does not require dark energy. Moreover, the study illustrates that models without a dark energy component can have an average redshift drift observationally indistinguishable from that of the standard model according to the currently expected precision of ELT measurements. In an appendix, spherically symmetric solutions to Einstein's equations with inhomogeneous dark energy and matter are used to study deviations from the average redshift drift and effects of local voids.
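
    For context, the FLRW form that the averaged expression is said to mirror is the standard Sandage-Loeb relation (written here from general knowledge, not quoted from the paper):

        \frac{\mathrm{d}z}{\mathrm{d}t_0} = (1+z)\,H_0 - H(z)

    In a matter-only Einstein-de Sitter model, H(z) = H_0 (1+z)^{3/2}, so the drift is negative at all z > 0; a measured positive low-redshift drift is therefore usually read as a dark energy signature, which is why the backreaction result above is notable.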

  17. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information.

    PubMed

    Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units oftentimes hold small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates, except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
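
    As a rough illustration of the area level idea (not the authors' exact model specification), a Fay-Herriot-type EBLUP shrinks each management unit's direct field estimate toward a LiDAR-based regression prediction, with weights determined by the model and sampling variances. All inputs below are hypothetical.

        import numpy as np

        def area_level_eblup(direct_est, x, beta, sigma2_v, psi):
            """Fay-Herriot-type area level EBLUP: shrink each unit's direct field
            estimate toward its regression prediction x @ beta.  gamma near 1
            (precise direct estimate) keeps the field value; gamma near 0 leans
            on the auxiliary model.  Variance components are taken as given."""
            gamma = sigma2_v / (sigma2_v + psi)          # per-unit shrinkage weight
            return gamma * direct_est + (1.0 - gamma) * (x @ beta)

        # hypothetical management units: direct volume estimates (m3/ha), an
        # intercept plus one LiDAR height covariate, fitted coefficients, and
        # model (sigma2_v) and sampling (psi) variances
        direct_est = np.array([310.0, 120.0, 255.0])
        x = np.array([[1.0, 22.5], [1.0, 9.1], [1.0, 18.3]])
        beta = np.array([15.0, 13.0])
        sigma2_v, psi = 400.0, np.array([900.0, 2500.0, 400.0])
        print(area_level_eblup(direct_est, x, beta, sigma2_v, psi))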

  18. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information

    PubMed Central

    Monleon, Vicente J.; Temesgen, Hailemariam; Ford, Kevin R.

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units oftentimes hold small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates, except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey’s height, root mean squared errors (RMSEs) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates. PMID:29216290

  19. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous work has demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of the native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. We apply the VPG to proteins for the first time, across a nonredundant dataset of 272 protein structures. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
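
    The mean field replacement can be pictured with a toy sketch: instead of sampling which constraints are present in each conformation, every body-body edge simply carries the expected number of bars. The interaction probabilities and bar counts below are invented for illustration and do not come from the paper.

        def expected_bars(interactions):
            """Mean field edge weight: the average number of bars between two rigid
            bodies, i.e. the sum of (formation probability x bars if formed)."""
            return sum(prob * bars for prob, bars in interactions)

        # hypothetical ensemble statistics for three body-body contacts:
        # each entry is (probability the interaction is present, bars it contributes)
        edges = {
            ("body_A", "body_B"): [(0.90, 5), (0.30, 2)],   # strong H-bond plus a weak contact
            ("body_B", "body_C"): [(0.50, 5)],
            ("body_A", "body_C"): [(0.10, 2)],
        }
        effective_network = {edge: expected_bars(terms) for edge, terms in edges.items()}
        print(effective_network)   # weighted edges the VPG analyzes instead of sampling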

  20. Estimates of Ground-Water Recharge in Wadis of Arid, Mountainous Areas Using the Chloride Mass-Balance Approach

    NASA Astrophysics Data System (ADS)

    Wood, W. W.

    2001-05-01

    Evaluation of ground-water supply in arid areas requires estimation of annual recharge. Traditional physically based hydrologic estimates of ground-water recharge carry large uncertainties when applied in arid, mountainous environments because of infrequent, intense rainfall events, destruction of water-measuring structures by those events, and the consequently short periods of hydrologic record. To avoid these problems and reduce the uncertainty of recharge estimates, a chloride mass-balance (CMB) approach was used to provide a time-integrated estimate. Seven basins with dry stream beds (wadis) in the Asir and Hijaz Mountains, western Saudi Arabia, were selected to evaluate the method. Precipitation among the basins ranged from less than 70 mm/y to nearly 320 mm/y. Rain collected at 35 locations in these basins averaged 2.0 mg/L chloride, while ground water from 140 locations in the wadi alluvium averaged 200 mg/L chloride. This ratio of chloride concentration in precipitation to that in ground water suggests that, on average, approximately 1 percent of the rainfall is recharged, while the remainder is lost to evaporation. Ground-water recharge from precipitation in individual basins ranged from less than 1 to nearly 4 percent and was directly proportional to total precipitation. Independent calculations of recharge using Darcy's law were consistent with these findings and are within the range typically found in other arid areas of the world. Development of ground water has lowered the water level beneath the wadis and provided more storage, thus minimizing chloride loss from the basin by river discharge. Any loss of chloride from the basin results in an overestimate of the recharge flux by the CMB approach. In well-constrained systems in arid, mountainous areas, where the mass of chloride entering and leaving the basin is known or can be reasonably estimated, the CMB approach provides a rapid, inexpensive method for estimating time-averaged recharge.
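
    The arithmetic behind the roughly 1 percent figure follows directly from the chloride mass balance: the recharge fraction is the ratio of the chloride concentration in rain to that in ground water, and the recharge flux is that fraction times precipitation. The sketch below reuses the averages quoted in the abstract; the 200 mm/y precipitation value is only an example.

        def cmb_recharge(precip_mm_per_y, cl_rain_mg_l, cl_groundwater_mg_l):
            """Chloride mass-balance recharge: the recharged fraction of precipitation
            equals the rain/ground-water chloride concentration ratio."""
            fraction = cl_rain_mg_l / cl_groundwater_mg_l
            return fraction, precip_mm_per_y * fraction

        # basin-average values from the abstract: 2.0 mg/L chloride in rain and
        # 200 mg/L in wadi-alluvium ground water; 200 mm/y precipitation is an example
        fraction, recharge = cmb_recharge(200.0, 2.0, 200.0)
        print(f"recharge fraction = {fraction:.1%}, recharge = {recharge:.1f} mm/y")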