Science.gov

Sample records for estimated average requirement

  1. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm set by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  2. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural uncertainty among the reference ET models is far more important than the parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm set by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
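
    The reliability-weighting idea is easy to sketch. Below is a minimal Python/numpy illustration of weighting ensemble members by their agreement with a reference and comparing threshold-exceedance frequencies; the weighting rule is a simplified stand-in for the full REA procedure, and all numbers are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic irrigation water requirements (IWR, mm): 6 ET models x 5 crop
      # coefficient sets = 30 ensemble members over 1000 grid cells.
      members = rng.normal(loc=420, scale=60, size=(30, 1000))
      reference = rng.normal(loc=400, scale=20, size=1000)  # stand-in "observed" IWR

      # REA-style reliability weights: members deviating less from the reference
      # get larger weights (a simplification of the published REA weighting).
      bias = np.abs(members - reference).mean(axis=1)
      weights = (1.0 / bias**2) / np.sum(1.0 / bias**2)

      equal_avg = members.mean(axis=0)
      rea_avg = weights @ members

      # Exceedance frequency of a 400 mm water-right threshold for both averages.
      for name, avg in [("equal", equal_avg), ("REA", rea_avg)]:
          print(f"{name:>5}: share of cells with IWR > 400 mm = {np.mean(avg > 400.0):.2f}")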

  3. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  4. Phase-based direct average strain estimation for elastography.

    PubMed

    Ara, Sharmin R; Mohsin, Faisal; Alam, Farzana; Rupa, Sharmin Akhtar; Awwal, Rayhana; Lee, Soo Yeol; Hasan, Md Kamrul

    2013-11-01

    In this paper, a phase-based direct average strain estimation method is developed. A mathematical model is presented to calculate axial strain directly from the phase of the zero-lag cross-correlation function between the windowed pre-compression and stretched post-compression analytic signals. Unlike phase-based conventional strain estimators, in which strain is computed from the displacement field, strain here is computed in one step using the secant algorithm, by exploiting the direct phase-strain relationship. To maintain strain continuity, instead of using the instantaneous phase of the interrogative window alone, an average phase function is defined using the phases of the neighboring windows, under the assumption that the strain is essentially similar in close physical proximity to the interrogative window. The method accounts for the effect of lateral shift without requiring a prior estimate of the applied strain. Moreover, the strain can be computed in both the compression and relaxation phases of the applied pressure. The performance of the proposed strain estimator is analyzed in terms of the quality metrics elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe), and mean structural similarity (MSSIM), using a finite element modeling simulation phantom. The results reveal that the proposed method performs satisfactorily in all three indices for up to 2.5% applied strain. Comparative results using simulation and experimental phantom data, as well as in vivo breast data of benign and malignant masses, also demonstrate that the strain image quality of our method is better than that of other reported techniques.

  5. Sample size bias in retrospective estimates of average duration.

    PubMed

    Smith, Andrew R; Rule, Shanon; Price, Paul C

    2017-03-25

    People often estimate the average duration of several events (e.g., on average, how long it takes to drive from home to the office). While there is a great deal of research investigating duration estimates for a single event, few studies have examined estimates when people must average across numerous stimuli or events. The current studies were designed to fill this gap by examining how people's estimates of average duration were influenced by the number of stimuli being averaged (i.e., the sample size). Based on research investigating the sample size bias, we predicted that participants' judgments of average duration would increase as the sample size increased. Across four studies, we demonstrated a sample size bias for estimates of average duration with different judgment types (numeric estimates and comparisons), study designs (between- and within-subjects), and paradigms (observing images and performing tasks). The results are consistent with the more general notion that psychological representations of magnitudes in one dimension (e.g., quantity) can influence representations of magnitudes in another dimension (e.g., duration).

  6. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  7. Estimating a weighted average of stratum-specific parameters.

    PubMed

    Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul

    2008-10-30

    This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
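
    As a concrete illustration of the trade-off discussed above, the following sketch (all numbers invented; the plug-in MSE criterion is a simplified stand-in for the paper's design-based estimate and ignores the covariance between the two component estimators) shrinks the known-weight estimator toward the precision-weighted one and picks the tuning parameter that minimizes the estimated MSE.

      import numpy as np

      # Stratified sample: known frame weights w, stratum sample sizes n,
      # sample means ybar, and sample variances s2 (all invented).
      w = np.array([0.5, 0.3, 0.2])
      n = np.array([40, 25, 10])
      ybar = np.array([3.8, 4.1, 3.2])
      s2 = np.array([1.1, 0.9, 1.4])

      theta_w = w @ ybar                       # weighted sum of stratum means
      prec = n / s2                            # precision weights
      theta_p = (prec @ ybar) / prec.sum()     # precision-weighted mean

      var_w = np.sum(w**2 * s2 / n)            # design variance of theta_w
      var_p = 1.0 / prec.sum()                 # approximate variance of theta_p

      # Adaptive estimator: shrink theta_w toward theta_p by lam in [0, 1] and
      # choose lam minimizing a plug-in MSE estimate (bias measured against the
      # weighted-sum target).
      lams = np.linspace(0.0, 1.0, 101)
      mse = (1 - lams)**2 * var_w + lams**2 * (var_p + (theta_p - theta_w)**2)
      lam = lams[np.argmin(mse)]
      theta_adapt = (1 - lam) * theta_w + lam * theta_p
      print(f"adaptive estimate {theta_adapt:.3f} with lam = {lam:.2f}")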

  8. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

    We demonstrate that the high-frequency acceleration spectral level ao of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern Fs: ao = 1.37 Fs (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments, and recurrence intervals of the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
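
    Rearranged for slip, the relation gives Δu = ao r / (1.37 Fs β). A toy Python evaluation with assumed values (none taken from the abstract):

      # Invert a_o = 1.37 * Fs * (beta / r) * du for the average slip du.
      Fs = 0.6        # radiation-pattern coefficient (assumed)
      beta = 3500.0   # shear-wave speed, m/s (assumed)
      r = 10_000.0    # hypocentral distance, m (assumed)
      a_o = 1.0e-3    # high-frequency acceleration spectral level (assumed units)

      du = a_o * r / (1.37 * Fs * beta)
      print(f"average slip: {100 * du:.2f} cm")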

  9. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty within individual parameterization methods (the within-parameterization variance) and the uncertainty from using different parameterization methods (the between-parameterization variance). Moreover, the generalized parameterization (GP) method is considered in a geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method; the adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
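
    The within/between variance decomposition described above combines per-method means and variances with the posterior model probabilities. A minimal numpy sketch, with all probabilities and moments assumed for illustration:

      import numpy as np

      # BMA combination of K parameterization methods at one location. The
      # posterior model probabilities p would come from the NLSE step; here
      # they are assumed, as are the per-method means and variances of
      # log-hydraulic-conductivity.
      p = np.array([0.5, 0.3, 0.2])          # posterior model probabilities
      mean_k = np.array([-4.2, -3.9, -4.6])  # conditional means per method
      var_k = np.array([0.30, 0.25, 0.50])   # within-parameterization variances

      bma_mean = p @ mean_k
      within = p @ var_k                        # within-parameterization part
      between = p @ (mean_k - bma_mean)**2      # between-parameterization part
      bma_var = within + between
      print(f"BMA mean = {bma_mean:.3f}, BMA variance = {bma_var:.3f}")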

  10. Advising Students about Required Grade-Point Averages

    ERIC Educational Resources Information Center

    Moore, W. Kent

    2006-01-01

    Sophomores interested in professional colleges with grade-point average (GPA) standards for admission to upper division courses will need specific and realistic information concerning the requirements. Specifically, those who fall short of the standard must assess the likelihood of achieving the necessary GPA for professional program admission.…

  11. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  12. Variations in Nimbus-7 cloud estimates. Part I: Zonal averages

    SciTech Connect

    Weare, B.C.

    1992-12-01

    Zonal averages of low, middle, high, and total cloud amount estimates derived from Nimbus-7 measurements have been analyzed for the six-year period April 1979 through March 1985. The globally and zonally averaged values of six-year annual means and standard deviations of total cloud amount and of a proxy for cloud-top height are illustrated. Separate means for day and night and for land and sea are also shown. The globally averaged value of intra-annual variability of total cloud amount is greater than 7%, and that for cloud height is greater than 0.3 km; the corresponding interannual variabilities are more than one-third of these values. Important latitudinal differences in variability are illustrated. The dominant empirical orthogonal functions of the intra-annual variations of total cloud amount and height show strong annual cycles, indicating that in the tropics increases in total cloud amount of up to about 30% are often accompanied by increases in cloud height of up to 1.2 km. This positive link is also evident in the dominant empirical orthogonal function of interannual variations of a total cloud/cloud height complex, which shows a large coherent variation in total cloud cover of about 10% coupled with changes in cloud height of about 1.1 km associated with the 1982-83 El Niño-Southern Oscillation event.

  13. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  14. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and deteriorates the quality of urban life; thus, adequate methods for the spatial and temporal assessment of noise are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements carried out over one year at 21 different locations in Madrid (Spain), a city of more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(And), L(Aden), and L(A24)). Significant differences between the diurnal and nocturnal sound levels also show functional stratification in these categories. This functional stratification therefore offers advantages from both spatial and temporal perspectives by reducing the number of sampling points and the measurement time.

  15. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability, and the heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood by TIE show significantly less variability than those from the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
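
    The estimators compared above are straightforward to sketch on a toy conjugate model where power posteriors can be sampled exactly. The following Python sketch is illustrative only (the groundwater setting is replaced by a normal-mean model): AME averages the likelihood over prior draws, HME takes a harmonic mean over posterior draws, and TIE integrates the expected log-likelihood along a temperature ladder by the trapezoidal rule.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy conjugate model: y ~ N(theta, sigma^2), prior theta ~ N(0, tau^2),
      # so every power posterior p_t ∝ L^t * prior is normal and easy to sample.
      sigma, tau = 1.0, 2.0
      y = rng.normal(0.5, sigma, size=50)
      n, s = len(y), y.sum()

      def loglik(theta):
          theta = np.atleast_1d(theta)
          return (-0.5 * n * np.log(2 * np.pi * sigma**2)
                  - 0.5 * ((y[:, None] - theta[None, :])**2).sum(axis=0) / sigma**2)

      def power_posterior_draws(t, size=4000):
          prec = t * n / sigma**2 + 1.0 / tau**2
          return rng.normal((t * s / sigma**2) / prec, prec**-0.5, size=size)

      ame = np.log(np.exp(loglik(rng.normal(0.0, tau, size=4000))).mean())
      hme = -np.log(np.exp(-loglik(power_posterior_draws(1.0))).mean())

      ts = np.linspace(0.0, 1.0, 21)                          # temperature ladder
      e_t = np.array([loglik(power_posterior_draws(t)).mean() for t in ts])
      tie = np.sum(0.5 * (e_t[1:] + e_t[:-1]) * np.diff(ts))  # trapezoidal rule
      print(f"log marginal likelihood: AME={ame:.2f} HME={hme:.2f} TIE={tie:.2f}")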

  16. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
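
    For intuition, the IV logic reduces to a Wald-type ratio: the intention-to-treat effect divided by the compliance rate. A tiny sketch with invented numbers, not the trial's data:

      # Wald-type IV estimator of CACE: ITT effect scaled by compliance.
      risk_treat, risk_ctrl = 0.10, 0.29   # outcome risks by assignment (invented)
      compliance = 0.80                    # share of assigned units that complied

      itt_rd = risk_treat - risk_ctrl      # intention-to-treat risk difference
      cace_rd = itt_rd / compliance        # complier average causal effect
      print(f"ITT RD = {itt_rd:+.3f}, CACE RD = {cace_rd:+.3f}")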

  17. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    DTIC Science & Technology

    1977-02-01

    …maximizing the same have been proposed (i) in the time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and (ii) in the frequency domain by… moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. … Astrom, K. J. (1970), Introduction to…

  18. Asymptotic Properties of Some Estimators in Moving Average Models

    DTIC Science & Technology

    1975-09-08

    …consider a different approach due to Durbin (1959), based on approximating the moving average of order q by an autoregression of order k (k ~ q). This… method shows good statistical properties. The paper by Durbin does not treat in detail the role of k in the parameters of the limiting normal… k) confirming some of the examples presented by Durbin. The parallel analysis with k = k(T) was also attempted, but at this point no complete…

  19. Effect of wind averaging time on wind erosivity estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...

  20. Estimating the average time for inter-continental transport of air pollutants

    NASA Astrophysics Data System (ADS)

    Liu, Junfeng; Mauzerall, Denise L.

    2005-06-01

    We estimate the average time required for inter-continental transport of atmospheric tracers based on simulations with the global chemical tracer model MOZART-2 driven with NCEP meteorology. We represent the average transport time by a ratio of the concentration of two tracers with different lifetimes. We find that average transport times increase with tracer lifetimes. With tracers of 1- and 2-week lifetimes the average transport time from East Asia (EA) to the surface of western North America (NA) in April is 2-3 weeks, approximately a half week longer than transport from NA to western Europe (EU) and from EU to EA. We develop an 'equivalent circulation' method to estimate a timescale which has little dependence on tracer lifetimes and obtain similar results to those obtained with short-lived tracers. Our findings show that average inter-continental transport times, even for tracers with short lifetimes, are on average 1-2 weeks longer than rapid transport observed in plumes.
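
    The two-tracer ratio diagnostic can be made concrete with an idealized derivation: if both tracers are co-emitted with equal source strengths and decay exponentially with lifetimes tau1 and tau2, then c1/c2 = exp(-t (1/tau1 - 1/tau2)) at the receptor, and the transport time t follows from the logarithm of the observed ratio. A sketch under these simplifying assumptions (the ratio value is invented):

      import numpy as np

      tau1, tau2 = 7.0, 14.0   # tracer lifetimes, days (1- and 2-week tracers)
      ratio = 0.35             # observed c1/c2 at the receptor (assumed value)

      # Solve c1/c2 = exp(-t * (1/tau1 - 1/tau2)) for the transport time t.
      t = np.log(1.0 / ratio) / (1.0 / tau1 - 1.0 / tau2)
      print(f"implied average transport time: {t:.1f} days")

    With these assumed values the implied time is about two weeks, consistent in magnitude with the April EA-to-NA estimate quoted above.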

  1. Average Lifetime of an Intelligent Civilization Estimated on its…

    NASA Astrophysics Data System (ADS)

    Kompanichenko, Vladimir

    In the cycle of existence of all natural systems possessing an excess of free energy with respect to the environment (stars, living organisms, social systems, etc.), four universal stages can be distinguished: growth, internal development, stationary state, and ageing (Kompanichenko, Futures, 1994, 26/5). In the context of this approach, human civilization, which originated 10000 years ago, is going through the natural cycle of its development. An analogy is drawn between two active systems of different hierarchical levels: a human (constructed of 60 trillion autonomous living systems, the cells) and the human community (consisting of 6 billion autonomous living systems, the people). In the cycle of the existence of a human, each of the four stages accounts for roughly 25% of the lifespan: growth 0-18 years, internal development and reaching maturity 18-36 years, stationary state 36-54 years, ageing 54-72 years. Humankind is now approaching the limits of its growth and thus corresponds to a 16-17-year-old person. We can therefore assume that during the 10000 years of its existence human civilization has passed about 25% of its cycle of development, leaving 30000 years of normative existence ahead (middle estimate). Actual existence can range between 300 years and 3 million years, depending on how reasonably and consciously humankind approaches its future. According to the Drake equation, with the normative estimate L = 30000 years there should exist at least several thousand intelligent civilizations in our Galaxy.

  2. Analysis of the Estimators of the Average Coefficient of Dominance of Deleterious Mutations

    PubMed Central

    Fernández, B.; García-Dorado, A.; Caballero, A.

    2004-01-01

    We investigate the sources of bias that affect the most commonly used methods of estimation of the average degree of dominance (h) of deleterious mutations, focusing on estimates from segregating populations. The main emphasis is on the effect of the finite size of the populations, but other sources of bias are also considered. Using diffusion approximations to the distribution of gene frequencies in finite populations as well as stochastic simulations, we assess the behavior of the estimators obtained from populations at mutation-selection-drift balance under different mutational scenarios and compare averages of h for newly arisen and segregating mutations. Because of genetic drift, the inferences concerning newly arisen mutations based on the mutation-selection balance theory can have substantial upward bias depending upon the distribution of h. In addition, estimates usually refer to h weighted by the homozygous deleterious effect in different ways, so that inferences are complicated when these two variables are negatively correlated. Due to both sources of bias, the widely used regression of heterozygous on homozygous means underestimates the arithmetic mean of h for segregating mutations, in contrast to their repeatedly assumed equality in the literature. We conclude that none of the estimators from segregating populations provides, under general conditions, a useful tool to ascertain the properties of the degree of dominance, either for segregating or for newly arisen deleterious mutations. Direct estimates of the average h from mutation-accumulation experiments are shown to suffer some bias caused by purging selection but, because they do not require assumptions on the causes maintaining segregating variation, they appear to give a more reliable average dominance for newly arisen mutations. PMID:15514075

  3. Analysis of the estimators of the average coefficient of dominance of deleterious mutations.

    PubMed

    Fernández, B; García-Dorado, A; Caballero, A

    2004-10-01

    We investigate the sources of bias that affect the most commonly used methods of estimation of the average degree of dominance (h) of deleterious mutations, focusing on estimates from segregating populations. The main emphasis is on the effect of the finite size of the populations, but other sources of bias are also considered. Using diffusion approximations to the distribution of gene frequencies in finite populations as well as stochastic simulations, we assess the behavior of the estimators obtained from populations at mutation-selection-drift balance under different mutational scenarios and compare averages of h for newly arisen and segregating mutations. Because of genetic drift, the inferences concerning newly arisen mutations based on the mutation-selection balance theory can have substantial upward bias depending upon the distribution of h. In addition, estimates usually refer to h weighted by the homozygous deleterious effect in different ways, so that inferences are complicated when these two variables are negatively correlated. Due to both sources of bias, the widely used regression of heterozygous on homozygous means underestimates the arithmetic mean of h for segregating mutations, in contrast to their repeatedly assumed equality in the literature. We conclude that none of the estimators from segregating populations provides, under general conditions, a useful tool to ascertain the properties of the degree of dominance, either for segregating or for newly arisen deleterious mutations. Direct estimates of the average h from mutation-accumulation experiments are shown to suffer some bias caused by purging selection but, because they do not require assumptions on the causes maintaining segregating variation, they appear to give a more reliable average dominance for newly arisen mutations.

  4. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) in which the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based strictly on paleointensities from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data from IZZI-modified paleointensity experiments on lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows in which the expected magnetic field strength was accurately recovered when certain selection parameters were followed. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and

  5. Estimation of the Area of a Reverberant Plate Using Average Reverberation Properties

    NASA Astrophysics Data System (ADS)

    Achdjian, Hossep; Moulin, Emmanuel; Benmeddour, Farouk; Assaad, Jamal

    This paper presents an original method for estimating the area of thin plates of arbitrary geometrical shape. The method relies on the acquisition and ensemble processing of reverberated elastic signals from a few sensors. The acoustical Green's function in a reverberant solid medium is modeled by a nonstationary random process based on the image-sources method. In this way, mathematical expectations of the signal envelopes can be analytically related to reverberation properties and structural parameters such as plate area, group velocity, or source-receiver distance. A simple curve fit applied to an ensemble average over N realizations of the late envelopes then allows estimation of a global term involving the values of the structural parameters. From simple statistical modal arguments, it is shown that the obtained relation depends on the plate area and not on the plate shape. Finally, by considering an additional relation obtained from the early characteristics of the reverberation signals (treated in a deterministic way), it is possible to deduce the area value. This estimation is performed without geometrical measurements and requires access to only a small portion of the plate. Furthermore, the method requires neither time measurement nor trigger synchronization between the input channels of the instrumentation (i.e., between measured signals), thus implying low hardware constraints. Experimental results obtained on metallic plates with free boundary conditions and on embedded window glasses will be presented. Areas of up to several square meters are correctly estimated with a relative error of a few percent.

  6. Estimates of zonally averaged tropical diabatic heating in AMIP GCM simulations. PCMDI report No. 25

    SciTech Connect

    Boyle, J.S.

    1995-07-01

    An understanding of the processes that generate atmospheric diabatic heating rates is basic to an understanding of the time-averaged general circulation of the atmosphere as well as circulation anomalies. Knowledge of the sources and sinks of atmospheric heating enables a fuller understanding of the nature of the atmospheric circulation. An actual assessment of the diabatic heating rates in the atmosphere is a difficult problem that has been approached in a number of ways. One way is to estimate the total diabatic heating by estimating the individual components associated with radiative fluxes, latent heat release, and sensible heat fluxes; an example of this approach is provided by Newell. Another approach is to estimate the net heating rates from the balance required of the mass and wind variables as routinely observed and analyzed. This budget computation has been done using the thermodynamic equation and, more recently, using the vorticity and thermodynamic equations together. Schaak and Johnson compute the heating rates through the integration of the isentropic mass continuity equation. The heating estimates arrived at by all these methods are severely handicapped by uncertainties in the observational data and analyses. In addition, the estimates of the individual heating components suffer an additional source of error from the parameterizations used to approximate these quantities.

  7. The Estimation of Theta in the Integrated Moving Average Time-Series Model.

    ERIC Educational Resources Information Center

    Martin, Gerald R.

    Through Monte Carlo procedures, three different techniques for estimating the parameter theta (proportion of the "shocks" remaining in the system) in the Integrated Moving Average (0,1,1) time-series model are compared in terms of (1) the accuracy of the estimates, (2) the independence of the estimates from the true value of theta, and…

  8. A comparison of spatial averaging and Cadzow's method for array wavenumber estimation

    SciTech Connect

    Harris, D.B.; Clark, G.A.

    1989-10-31

    We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.

  9. Weighted interframe averaging-based channel estimation for orthogonal frequency division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan

    2015-10-01

    Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
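
    A minimal numpy sketch of interframe averaging on top of least-squares (LS) channel estimates follows; the static channel, the triangular weight profile, and the noise level are all assumptions for illustration, not the paper's design.

      import numpy as np

      rng = np.random.default_rng(3)

      # Per-subcarrier LS channel estimates from 7 adjacent OFDM frames,
      # averaged with weights centered on the current frame.
      n_sc, n_frames = 64, 7
      h_true = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)
      noise = 0.3 * (rng.normal(size=(n_frames, n_sc))
                     + 1j * rng.normal(size=(n_frames, n_sc)))
      h_ls = h_true + noise                        # noisy per-frame LS estimates

      w = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])  # triangular weights
      w = w / w.sum()
      h_wifa = w @ h_ls                            # weighted interframe average

      mse_ls = np.mean(np.abs(h_ls[3] - h_true) ** 2)
      mse_wifa = np.mean(np.abs(h_wifa - h_true) ** 2)
      print(f"LS MSE: {mse_ls:.4f}   WIFA MSE: {mse_wifa:.4f}")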

  10. Estimation of the global average temperature with optimally weighted point gauges

    NASA Technical Reports Server (NTRS)

    Hardin, James W.; Upson, Robert B.

    1993-01-01

    This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
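
    The optimal-weighting problem becomes a small constrained quadratic program once a covariance model for the temperature field is chosen. The sketch below uses a 1-D toy field with exponential covariance in place of the sphere (all parameters assumed) and solves for the weights with a Lagrange multiplier.

      import numpy as np

      # Minimize E[(Tbar - sum_i w_i T_i)^2] subject to sum(w) = 1, given a
      # field covariance K; gauges sit at a few points of a 1-D toy domain.
      m, L = 200, 30.0
      x = np.arange(m, dtype=float)
      K = np.exp(-np.abs(x[:, None] - x[None, :]) / L)   # field covariance
      gauges = np.array([10, 60, 90, 150, 180])          # gauge locations

      C = K[np.ix_(gauges, gauges)]       # gauge-gauge covariance
      c = K[gauges].mean(axis=1)          # gauge-vs-global-mean covariance
      var_mean = K.mean()                 # variance of the true spatial mean

      # Lagrange solution of the constrained quadratic program.
      Cinv_c = np.linalg.solve(C, c)
      Cinv_1 = np.linalg.solve(C, np.ones(len(gauges)))
      lam = (1.0 - Cinv_1 @ c) / (Cinv_1 @ np.ones(len(gauges)))
      w = Cinv_c + lam * Cinv_1

      def mse(w):
          return var_mean - 2 * w @ c + w @ C @ w

      print("optimal weights:", np.round(w, 3))
      print(f"MSE equal: {mse(np.full(len(gauges), 0.2)):.4f}  optimal: {mse(w):.4f}")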

  11. [Adaptive moving averaging based estimation of single event-related potentials].

    PubMed

    Qi, C; Liang, D; Jiang, X

    2001-03-01

    Event-related potentials (ERP) are pertinent to medical research and clinical diagnosis, and the estimation of single event-related potentials (sERP) is the objective of ERP processing. A new technique, an adaptive moving-averaging-based method for the estimation of sERP, is presented. After the properties of the background noise are analyzed via zero crossings, the window length of the moving average is set adaptively, according to the maximum width of the impulse noise, for each set of recorded raw data. Experiments performed on real recorded data demonstrate that the performance of the sERP estimation is excellent, so the proposed method is suitable for sERP processing.

  12. Robust estimation for class averaging in cryo-EM Single Particle Reconstruction.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2014-01-01

    Single Particle Reconstruction (SPR) for Cryogenic Electron Microscopy (cryo-EM) aligns and averages the images extracted from micrographs to improve the Signal-to-Noise ratio (SNR). Outliers compromise the fidelity of the averaging. We propose a robust cross-correlation-like w-estimator for combating the effect of outliers on the average images in cryo-EM. The estimator accounts for the natural variation of signal contrast among the images and eliminates the need for a threshold for outlier rejection. We show that the influence function of our estimator is asymptotically bounded. Evaluations of the estimator on simulated and real cryo-EM images show good performance in the presence of outliers.

  13. Noninvasive average flow and differential pressure estimation for an implantable rotary blood pump using dimensional analysis.

    PubMed

    Lim, Einly; Karantonis, Dean M; Reizes, John A; Cloherty, Shaun L; Mason, David G; Lovell, Nigel H

    2008-08-01

    Accurate noninvasive average flow and differential pressure estimation of implantable rotary blood pumps (IRBPs) is an important practical element for their physiological control. While most attempts at developing flow and differential pressure estimate models have involved purely empirical techniques, dimensional analysis utilizes theoretical principles of fluid mechanics that provides valuable insights into parameter relationships. Based on data obtained from a steady flow mock loop under a wide range of pump operating points and fluid viscosities, flow and differential pressure estimate models were thus obtained using dimensional analysis. The algorithm was then validated using data from two other VentrAssist IRBPs. Linear correlations between estimated and measured pump flow over a flow range of 0.5 to 8.0 L/min resulted in a slope of 0.98 ( R(2) = 0.9848). The average flow error was 0.20 +/- 0.14 L/min (mean +/- standard deviation) and the average percentage error was 5.79%. Similarly, linear correlations between estimated and measured pump differential pressure resulted in a slope of 1.027 ( R(2) = 0.997) over a pressure range of 60 to 180 mmHg. The average differential pressure error was 1.84 +/- 1.54 mmHg and the average percentage error was 1.51%.

  14. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
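
    For scalar weights, this style of optimal average reduces to an eigenvalue problem: the average is the eigenvector with the largest eigenvalue of the weighted outer-product matrix M = sum_i w_i q_i q_i^T, which also handles the q versus -q sign ambiguity. A short numpy sketch with made-up inputs:

      import numpy as np

      def average_quaternions(quats, weights):
          q = np.asarray(quats, dtype=float)
          q = q / np.linalg.norm(q, axis=1, keepdims=True)  # unit quaternions
          M = np.einsum('i,ij,ik->jk', weights, q, q)       # weighted outer sum
          vals, vecs = np.linalg.eigh(M)                    # ascending eigenvalues
          return vecs[:, -1]                                # dominant eigenvector

      # Three nearby attitudes; the third is reported with a flipped sign.
      quats = [[0.99, 0.01, 0.00, 0.14],
               [0.98, 0.02, 0.01, 0.17],
               [-0.99, -0.01, 0.00, -0.15]]
      print(np.round(average_quaternions(quats, np.ones(3)), 4))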

  15. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
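
    The underlying arithmetic is simple: average annual base flow divided by drainage area, converted to inches per year. A one-off Python example with invented values, not an actual Wisconsin site:

      # Recharge approximated as base flow per unit drainage area.
      baseflow_cfs = 120.0    # average annual base flow, cubic feet per second
      area_mi2 = 305.0        # drainage area, square miles

      ft3_per_year = baseflow_cfs * 86_400 * 365.25
      area_ft2 = area_mi2 * 5280.0**2
      recharge_in_per_yr = ft3_per_year / area_ft2 * 12.0
      print(f"recharge ~ {recharge_in_per_yr:.1f} in/yr")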

  16. Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory

    ERIC Educational Resources Information Center

    Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena

    2013-01-01

    This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…

  17. Estimation of genetic parameters for average daily gain using models with competition effects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Components of variance for ADG with models including competition effects were estimated from data provided by Pig Improvement Company on 11,235 pigs from 4 selected lines of swine. Fifteen pigs with average age of 71 d were randomly assigned to a pen by line and sex and taken off test after approxi...

  18. Assessment of estimated daily intakes of benzoates for average and high consumers in Korea.

    PubMed

    Yoon, Hae Jung; Cho, Yang Hee; Park, Juyeon; Lee, Chang Hee; Park, Sung Kwan; Cho, Young Ju; Han, Ki Won; Lee, Jong Ok; Lee, Chul Won

    2003-02-01

    A study was performed to evaluate the estimated daily intakes (EDI) of benzoates for the average and high (90th percentile) consumers by age and sex categories in Korea. The estimation of daily intakes of benzoates was based on individual dietary intake data from the National Health and Nutrition Survey in 1998 and on the determination of benzoates in eight food categories. The EDI of benzoates for average consumers of different age groups ranged from 0.009 to 0.025 mg kg(-1) bw day(-1). For high consumers, the range of EDI of benzoates was 0.195-1.878 mg kg(-1) bw day(-1). The intakes represented 0.18-0.50% of the acceptable daily intake (ADI) of benzoates for average consumers and 3.9-37.6% of the ADI for high consumers. Foods that contributed most to the daily intakes of benzoates were mixed beverages and soy sauce in Korea.
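
    The EDI computation itself is a weighted sum over food categories divided by body weight. A small sketch with invented concentrations and intakes; the 5 mg kg(-1) bw day(-1) ADI for benzoates is the JECFA value consistent with the percentages above:

      import numpy as np

      conc = np.array([180.0, 90.0, 40.0])      # benzoate, mg per kg food (invented)
      intake = np.array([0.005, 0.010, 0.002])  # kg of each food per day (invented)
      bw = 55.0                                 # body weight, kg

      edi = (conc * intake).sum() / bw          # mg per kg bw per day
      adi = 5.0                                 # ADI for benzoates, mg/kg bw/day
      print(f"EDI = {edi:.3f} mg/kg bw/day ({100 * edi / adi:.1f}% of ADI)")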

  19. How ants use quorum sensing to estimate the average quality of a fluctuating resource.

    PubMed

    Franks, Nigel R; Stuttard, Jonathan P; Doran, Carolina; Esposito, Julian C; Master, Maximillian C; Sendova-Franks, Ana B; Masuda, Naoki; Britton, Nicholas F

    2015-07-08

    We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures.

  20. Limitations of the spike-triggered averaging for estimating motor unit twitch force: a theoretical analysis.

    PubMed

    Negro, Francesco; Yavuz, Utku Ş; Farina, Dario

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have previously been discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, we first show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is strongly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even at high discharge rates. However, for physiological inter-spike interval variability and discharge rates, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that selecting the triggers that are most distant from the previous and next discharges, as is often suggested, is not an effective way of improving STA estimates and can in some cases even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of the motor unit force twitch.
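
    Plain STA is easy to state in code: average fixed-length force segments that follow each discharge. The synthetic example below (assumed twitch shape, spike statistics, and noise) also shows why correlated, overlapping twitches bias the estimate, which is the central point above.

      import numpy as np

      rng = np.random.default_rng(4)

      # Unit twitch (alpha-function shape) and a synthetic spike train whose
      # inter-spike intervals are shorter than the twitch duration, so twitches
      # overlap in the force signal, as they do physiologically.
      twitch = np.exp(-np.arange(300) / 60.0) * (np.arange(300) / 60.0)
      spikes = np.cumsum(rng.integers(80, 140, size=200))   # spike times, samples
      force = rng.normal(0.0, 0.2, size=spikes[-1] + 400)   # measurement noise
      for t in spikes:
          force[t:t + 300] += twitch                        # superimpose twitches

      # Spike-triggered average with a crude baseline correction.
      segments = np.array([force[t:t + 300] for t in spikes])
      sta = segments.mean(axis=0) - segments[:, 0].mean()
      print(f"true peak: {twitch.max():.3f}   STA peak: {sta.max():.3f}")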

  1. Optimal estimators and asymptotic variances for nonequilibrium path-ensemble averages

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Chodera, John D.

    2009-10-01

    Existing optimal estimators of nonequilibrium path-ensemble averages are shown to fall within the framework of extended bridge sampling. Using this framework, we derive a general minimal-variance estimator that can combine nonequilibrium trajectory data sampled from multiple path-ensembles to estimate arbitrary functions of nonequilibrium expectations. The framework is also applied to obtain asymptotic variance estimates, which are a useful measure of statistical uncertainty. In particular, we develop asymptotic variance estimates pertaining to Jarzynski's equality for free energies and to the Hummer-Szabo expressions for the potential of mean force, calculated from uni- or bidirectional path samples. These estimators are demonstrated on a model single-molecule pulling experiment. In these simulations, the asymptotic variance expression is found to characterize accurately the confidence intervals around estimators when the bias is small. Hence, the confidence intervals are described inaccurately for unidirectional estimates with large bias; for this model, however, the expression largely reflects the true error in a bidirectional estimator derived by Minh and Adib.
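
    As a reference point for the estimators discussed above, here is a minimal Monte Carlo sketch of Jarzynski's equality applied to synthetic unidirectional work values, with a bootstrap spread standing in for the asymptotic variance analysis; all parameters are invented.

      import numpy as np

      rng = np.random.default_rng(5)

      # Jarzynski's equality: exp(-dF/kT) = <exp(-W/kT)> over nonequilibrium
      # work values W drawn from the forward protocol.
      kT = 0.593                             # kcal/mol near 298 K
      work = rng.normal(3.0, 1.5, size=500)  # synthetic forward work, kcal/mol

      dF = -kT * np.log(np.mean(np.exp(-work / kT)))

      # Bootstrap resampling as a crude stand-in for the asymptotic variance.
      boot = [-kT * np.log(np.mean(np.exp(-rng.choice(work, work.size) / kT)))
              for _ in range(200)]
      print(f"dF = {dF:.2f} kcal/mol  (bootstrap SD {np.std(boot):.2f})")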

  2. [Estimation of average traffic emission factor based on synchronized incremental traffic flow and air pollutant concentration].

    PubMed

    Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng

    2014-04-01

    On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter describing the state of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and air pollutant concentration during the morning rush hour, while meteorological conditions and the background pollution concentration remain relatively stable, the relationship between the increase in traffic and the increase in pollutant concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data, and carbon monoxide (CO) concentration were collected to estimate average vehicle emission factors for CO, and the results were compared with emission factors simulated by the COPERT4 model. The average emission factors estimated by the proposed approach and by COPERT4 were 2.0 g x km(-1) and 1.2 g x km(-1) in August, and 5.5 g x km(-1) and 5.2 g x km(-1) in December, respectively; the two sets of factors showed close values and similar seasonal trends. The proposed method eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.

  3. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen, and clusters are constructed with the average linkage technique. Average linkage requires a distance between clusters, calculated as the average distance over all pairs of points, one from each group. This average distance is not robust when there is an outlier, so the distance used in average linkage needs to be modified to overcome the outlier problem: an outlier detection rule based on the MADn criterion is applied, the average distance is recalculated without the outliers, and the distance in average linkage is then calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the constructed branches. This study finds that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when there is an outlier; the two techniques perform similarly when there is no outlier.
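
    A minimal sketch of the modified-linkage idea follows, assuming a simple MADn screen (with the conventional 1.4826 normalization and a 2.24 cutoff) applied to the pooled inter-cluster distances before averaging; this is a simplified stand-in for the full MOM-based procedure.

      import numpy as np

      def robust_average_linkage(A, B, c=2.24):
          # Pairwise distances between the two clusters, flattened.
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).ravel()
          med = np.median(d)
          madn = 1.4826 * np.median(np.abs(d - med))   # normalized MAD
          keep = np.abs(d - med) <= c * madn           # drop flagged outliers
          return d[keep].mean()

      rng = np.random.default_rng(6)
      A = rng.normal(0.0, 1.0, size=(10, 2))
      B = rng.normal(4.0, 1.0, size=(12, 2))
      B[0] = [40.0, 40.0]                              # plant one outlier

      usual = np.linalg.norm(A[:, None] - B[None], axis=2).mean()
      print(f"usual linkage:  {usual:.2f}")
      print(f"robust linkage: {robust_average_linkage(A, B):.2f}")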

  4. Bayesian and Frequentist Estimation of the Average Capacity of Log Normal Wireless Optical Channels

    NASA Astrophysics Data System (ADS)

    Katsis, A.; Nistazakis, H. E.; Tombras, G. S.

    2008-11-01

    We investigate the average (ergodic) capacity of practical free-space optical communication channels using the frequentist and the Bayesian approach. We concentrate on the cases of weak and moderate atmospheric turbulence leading to channels modeled by Log-Normal distributed intensity fading and derive closed-form expressions and estimation procedures for their achievable capacity and the important parameters of interest. Each methodology is reviewed in terms of their analytic convenience and their accuracy is also discussed.
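
    A frequentist Monte Carlo check of the average capacity takes only a few lines; the SNR value, the scintillation level, and the use of an intensity-squared electrical SNR are assumptions for illustration, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(7)

      # Average (ergodic) capacity C = E[log2(1 + SNR * I^2)] with log-normal
      # intensity I normalized so that E[I] = 1.
      snr_db, sigma_x = 10.0, 0.25          # mean SNR and log-amplitude s.d.
      snr = 10 ** (snr_db / 10)
      I = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, size=200_000))

      capacity = np.mean(np.log2(1 + snr * I**2))
      print(f"average capacity ~ {capacity:.3f} bit/s/Hz")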

  5. Estimation of annual average daily traffic for off-system roads in Florida. Final report

    SciTech Connect

    Shen, L.D.; Zhao, F.; Ospina, D.I.

    1999-07-28

    Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination of roadway geometry, congestion management, pavement design, safety considerations, etc. AADT is also used to estimate statewide vehicle miles traveled on all roads, and is used by local governments and environmental protection agencies to determine compliance with the 1990 Clean Air Act Amendment. Additionally, AADT is reported annually by the Florida Department of Transportation (FDOT) to the Federal Highway Administration. In the past, considerable effort has been made in obtaining traffic counts to estimate AADT on state roads. However, traffic counts are often not available on off-system roads, and less attention has been paid to the estimation of AADT in the absence of counts. Current estimates rely on comparisons with roads that are subjectively considered to be similar. Such comparisons are inherently subject to large errors and may not be repeated often enough to remain current. Therefore, a better method is needed for estimating AADT for off-system roads in Florida. This study investigates the possibility of establishing one or more models for estimating AADT for off-system roads in Florida.

  6. Estimating the path-average rainwater content and updraft speed along a microwave link

    NASA Technical Reports Server (NTRS)

    Jameson, Arthur R.

    1993-01-01

    There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A38 − A25, to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of the polarization propagation differential phase shift Φ_DP divided by (1 − R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Φ_DP) produces more accurate estimates of W than does A38 − A25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  7. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528

  8. An Estimate of the Average Number of Recessive Lethal Mutations Carried by Humans

    PubMed Central

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-01-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes. PMID:25697177

  9. An estimate of the average number of recessive lethal mutations carried by humans.

    PubMed

    Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly

    2015-04-01

    The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes.

  10. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
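    For context, the Hargreaves–Samani (HS) baseline that the new model builds on estimates global radiation from the diurnal temperature range alone. A minimal sketch follows; the coefficient k_rs and the new model's own refinements are not specified in the abstract, so the values used here are the conventional ones:

    ```python
    import numpy as np

    def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
        """Hargreaves-Samani estimate of global solar radiation.

        Rs = k_rs * sqrt(Tmax - Tmin) * Ra
        t_max, t_min : monthly average daily max/min air temperature (deg C)
        ra           : extraterrestrial radiation (MJ m^-2 day^-1)
        k_rs         : empirical coefficient (~0.16 interior, ~0.19 coastal)
        """
        return k_rs * np.sqrt(t_max - t_min) * ra

    print(hargreaves_samani(t_max=28.0, t_min=16.0, ra=38.0))  # ~21 MJ m^-2 day^-1
    ```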

  11. microclim: Global estimates of hourly microclimate based on long-term monthly climate averages

    PubMed Central

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764

  12. A temperature-based model for estimating monthly average daily global solar radiation in China.

    PubMed

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China.

  13. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix of measurement errors, C_E, to estimate the negative log likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix of total errors (model errors plus measurement errors), C_Ek, to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of the model selection criteria and the model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
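    A hedged sketch of how selection-criterion values are typically turned into averaging weights (the generic exp(−Δ/2) form; the study's KIC-based weighting with the inferred total-error covariance involves more machinery than shown here):

    ```python
    import numpy as np

    def ic_model_weights(ic_values):
        """Convert information-criterion values (AIC/AICc/BIC/KIC) to weights.

        w_k = exp(-0.5 * (IC_k - IC_min)) / sum_j exp(-0.5 * (IC_j - IC_min))
        Small IC differences spread the weight; large ones concentrate it,
        which is the 'winner takes all' behavior the paper diagnoses.
        """
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    print(ic_model_weights([1510.2, 1512.8, 1530.0]))  # weight collapses onto model 1
    ```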

  14. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses a transmission function (Tf,ij), developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model to predict average hourly global solar radiation at four other locations in the United States (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA). Model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well: correlation coefficients (r) range from 0.96 to 0.99, and coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (MABE and RMSE) were less than 20%. The approach proposed here can be useful for predicting average hourly global solar radiation on the horizontal surface at different locations, using readily available data (the latitude and longitude of the location) as inputs.

  15. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  16. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There is a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, and civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g., pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration) or are "too dangerous, dull or dirty" for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV systems' total ownership cost, including hardware components, software design, and operations. The challenges of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element are discussed. The autonomous operation of UAVs is especially challenging from a software perspective.

  17. Fetal cardiac time intervals estimated on fetal magnetocardiograms: single cycle analysis versus average beat inspection.

    PubMed

    Comani, Silvia; Alleva, Giovanna

    2007-01-01

    Fetal cardiac time intervals (fCTI) depend on fetal growth and development, and may reveal useful information for fetuses affected by growth retardation, structural cardiac defects or long QT syndrome. Fetal cardiac signals with a signal-to-noise ratio (SNR) of at least 15 dB were retrieved from fetal magnetocardiography (fMCG) datasets with a system based on independent component analysis (ICA). An automatic method was used to detect the onset and offset of the cardiac waves on single cardiac cycles of each signal, and the fCTI were quantified for each heartbeat; long rhythm strips were used to calculate average fCTI and their variability for single fetal cardiac signals. The aim of this work was to compare the outcomes of this system with the estimates of fCTI obtained with a classical method based on the visual inspection of averaged beats; note that no fCTI variability can be measured from averaged beats. A total of 25 fMCG datasets (fetal age from 22 to 37 weeks) were evaluated, and 1768 cardiac cycles were used to compute fCTI. The real differences between the values obtained with single cycle analysis and with visual inspection of averaged beats were very small for all fCTI: comparable with the signal resolution (+/-1 ms) for the QRS complex and QT interval, and always <5 ms for the PR interval, ST segment and T wave. The coefficients of determination between the fCTI estimated with the two methods ranged between 0.743 and 0.917. Conversely, inter-observer differences were larger, with coefficients of determination ranging between 0.463 and 0.807, confirming the high performance of the automated single cycle analysis, which is also rapid and unaffected by observer-dependent bias.

  18. Calculation of weighted averages approach for the estimation of Ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
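    A minimal sketch of the weighted averages approach, assuming a taxon's tolerance value is the abundance-weighted mean of a site-level pollution gradient score; how the six chemical constituents are combined into that score is not specified in the abstract:

    ```python
    import numpy as np

    def weighted_average_tolerance(abundance, gradient, lo=0.0, hi=10.0):
        """Abundance-weighted mean of a pollution gradient, rescaled to 0-10.

        abundance : counts of one taxon at each site
        gradient  : pollution score of each site (e.g., derived from the six
                    chemical constituents; higher = more polluted)
        """
        a = np.asarray(abundance, dtype=float)
        g = np.asarray(gradient, dtype=float)
        optimum = (a * g).sum() / a.sum()               # weighted average
        return lo + (hi - lo) * (optimum - g.min()) / (g.max() - g.min())

    # Taxon found mostly at clean sites -> low Ping tolerance value
    print(weighted_average_tolerance([40, 10, 2, 0], [1.0, 3.0, 6.0, 9.0]))
    ```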

  19. Beyond intent to treat (ITT): A complier average causal effect (CACE) estimation primer.

    PubMed

    Peugh, James L; Strotman, Daniel; McGrady, Meghan; Rausch, Joseph; Kashikar-Zuck, Susmita

    2017-02-01

    Randomized controlled trials (RCTs) have long been the gold standard for allowing causal inferences to be made regarding the efficacy of a treatment under investigation, but traditional RCT data analysis perspectives do not take into account a common reality: imperfect participant compliance with treatment. Recent advances in both maximum likelihood parameter estimation and mixture modeling methodology have enabled treatment effects to be estimated, in the presence of less than ideal levels of participant compliance, via a Complier Average Causal Effect (CACE) structural equation mixture model. CACE is described in contrast to the "intent to treat" (ITT), "per protocol", and "as treated" RCT data analysis perspectives. CACE model assumptions, specification, estimation, and interpretation are all demonstrated with simulated data generated from a randomized controlled trial of cognitive-behavioral therapy for juvenile fibromyalgia. CACE analysis model figures, linear model equations, and Mplus estimation syntax examples are all provided. Data needed to reproduce the analyses in this article are available as supplemental materials (online only) in the Appendix.
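    A hedged sketch of the simplest CACE logic — the Bloom instrumental-variables estimator for one-sided noncompliance, not the structural equation mixture model the primer demonstrates:

    ```python
    import numpy as np

    def cace_bloom(y_treat, y_control, compliance_rate):
        """CACE = ITT effect / compliance rate (one-sided noncompliance).

        Assumes randomization and that never-takers receive no treatment
        effect (exclusion restriction). The mixture-model approach in the
        article estimates the same quantity with explicit compliance classes.
        """
        itt = np.mean(y_treat) - np.mean(y_control)
        return itt / compliance_rate

    rng = np.random.default_rng(42)
    y_c = rng.normal(0.0, 1.0, 500)           # control arm outcomes
    y_t = rng.normal(0.3, 1.0, 500)           # treatment arm, diluted by noncompliance
    print(cace_bloom(y_t, y_c, compliance_rate=0.6))  # ITT scaled up to compliers
    ```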

  20. [Estimation of the Average Glandular Dose Using the Mammary Gland Image Analysis in Mammography].

    PubMed

    Otsuka, Tomoko; Teramoto, Atsushi; Asada, Yasuki; Suzuki, Shoichi; Fujita, Hiroshi; Kamiya, Satoru; Anno, Hirofumi

    2016-05-01

    Currently, the average glandular dose is evaluated quantitatively on the basis of data measured with phantoms, not on the basis of the mammary gland structure of an individual patient. However, mammary gland structures differ from patient to patient, and the glandular dose of an individual patient cannot be obtained by existing methods. In this study, we present an automated method for estimating the average glandular dose from the mammary gland structure measured automatically on the mammogram. In this method, the mammary gland structure is extracted by a Gabor filter, and the mammary region is segmented by automated thresholding. For the evaluation, mammograms of 100 patients diagnosed as category 1 were collected. Using these mammograms, we compared the mammary gland ratio measured by the proposed method with visual evaluation; the two agreed in 78% of the cases. Furthermore, the mammary gland ratio and average glandular dose agreed well among patients with the same breast thickness. These results show that the proposed method may be useful for estimating the average glandular dose for individual patients.

  1. Spontaneous BOLD event triggered averages for estimating functional connectivity at resting state

    PubMed Central

    Tagliazucchi, Enzo; Balenzuela, Pablo; Fraiman, Daniel; Montoya, Pedro; Chialvo, Dante R.

    2010-01-01

    Recent neuroimaging studies have demonstrated that spontaneous brain activity reflects, to a large extent, the same activation patterns measured in response to cognitive and behavioral tasks. This correspondence between activation and rest has been explored with a large repertoire of computational methods, ranging from analysis of pairwise interactions between areas of the brain to the global brain networks yielded by independent component analysis. In this paper we describe an alternative method based on averaging the BOLD signal at a region of interest (target) triggered by spontaneous increments in activity at another brain area (seed). The resting BOLD event triggered averages ("rBeta") can be used to estimate functional connectivity at resting state. Using two simple examples, we illustrate how the analysis of the average response triggered by spontaneous increases/decreases in the BOLD signal is sufficient to capture the aforementioned correspondence in a variety of circumstances. The computation of the nonlinear response during rest described here allows for a direct comparison with results obtained during task performance, providing an alternative measure of functional interaction between brain areas.
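    A minimal sketch of the event-triggered averaging itself, assuming events are upward crossings of a z-scored seed signal above a fixed threshold; the window length and threshold here are illustrative, not the paper's:

    ```python
    import numpy as np

    def rbeta(seed, target, thresh=1.0, pre=2, post=8):
        """Average the target BOLD signal around threshold crossings of the seed.

        seed, target : 1-D time series (same length, same TR)
        thresh       : z-score the seed must cross upward to mark an event
        pre, post    : samples kept before/after each event
        Returns the event-triggered average of the target, shape (pre+post+1,).
        """
        z = (seed - seed.mean()) / seed.std()
        events = np.where((z[1:] >= thresh) & (z[:-1] < thresh))[0] + 1
        events = events[(events >= pre) & (events < len(target) - post)]
        segments = np.stack([target[e - pre : e + post + 1] for e in events])
        return segments.mean(axis=0)

    rng = np.random.default_rng(0)
    s = rng.normal(size=500)
    t = 0.5 * np.roll(s, 2) + rng.normal(scale=0.5, size=500)  # lagged coupling
    print(rbeta(s, t))  # response peaks ~2 samples after the seed event
    ```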

  2. [Computer planning of average per capita requirements of the population in nutrients and energy].

    PubMed

    Meerovikh, R I; Gilianskaia, S K; Efanova, N P

    1975-01-01

    The expediency, feasibility and effectiveness of employing computers for multivariant calculations of the average per capita requirement of the population for basic nutrients and energy are substantiated. The procedures involved in preparing the initial information are described, and the fundamental parts of the program, written in Fortran and tentatively tested on an EC-1020 computer, are discussed. The devised program may be adopted as a standard one.

  3. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
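    A minimal sketch of the minimum-length inversion, assuming the averaging matrix A is known and using the Moore–Penrose pseudoinverse; the paper's regularization choices may differ:

    ```python
    import numpy as np

    def invert_profile(A, d):
        """Minimum-norm solution m of A m = d for an underdetermined system.

        A : (n_samples, n_input) averaging matrix describing amelogenesis
            and laboratory sampling
        d : measured (time-averaged) isotope profile
        """
        return np.linalg.pinv(A) @ d

    # Toy example: each measurement averages 3 consecutive input values
    n = 8
    A = np.zeros((n - 2, n))
    for i in range(n - 2):
        A[i, i : i + 3] = 1.0 / 3.0
    true_m = np.sin(np.linspace(0, np.pi, n))   # hypothetical input signal
    d = A @ true_m
    print(invert_profile(A, d))                  # smoothed reconstruction of true_m
    ```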

  4. Noise estimation from averaged diffusion weighted images: Can unbiased quantitative decay parameters assist cancer evaluation?

    PubMed Central

    Dikaios, Nikolaos; Punwani, Shonit; Hamy, Valentin; Purpura, Pierpaolo; Rice, Scott; Forster, Martin; Mendes, Ruheena; Taylor, Stuart; Atkinson, David

    2014-01-01

    Purpose Multiexponential decay parameters are estimated from diffusion-weighted-imaging that generally have inherently low signal-to-noise ratio and non-normal noise distributions, especially at high b-values. Conventional nonlinear regression algorithms assume normally distributed noise, introducing bias into the calculated decay parameters and potentially affecting their ability to classify tumors. This study aims to accurately estimate noise of averaged diffusion-weighted-imaging, to correct the noise induced bias, and to assess the effect upon cancer classification. Methods A new adaptation of the median-absolute-deviation technique in the wavelet-domain, using a closed form approximation of convolved probability-distribution-functions, is proposed to estimate noise. Nonlinear regression algorithms that account for the underlying noise (maximum probability) fit the biexponential/stretched exponential decay models to the diffusion-weighted signal. A logistic-regression model was built from the decay parameters to discriminate benign from metastatic neck lymph nodes in 40 patients. Results The adapted median-absolute-deviation method accurately predicted the noise of simulated (R2 = 0.96) and neck diffusion-weighted-imaging (averaged once or four times). Maximum probability recovers the true apparent-diffusion-coefficient of the simulated data better than nonlinear regression (up to 40%), whereas no apparent differences were found for the other decay parameters. Conclusions Perfusion-related parameters were best at cancer classification. Noise-corrected decay parameters did not significantly improve classification for the clinical data set though simulations show benefit for lower signal-to-noise ratio acquisitions. PMID:23913479
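    A hedged sketch of the classic wavelet-domain MAD noise estimate that the adapted method builds on (Donoho's rule on the finest detail coefficients; the paper's adaptation for averaged, non-normal magnitude data adds a convolved-PDF correction not shown here):

    ```python
    import numpy as np
    import pywt

    def wavelet_mad_sigma(signal, wavelet="db2"):
        """Estimate additive noise sigma from the finest wavelet detail level.

        sigma_hat = median(|d|) / 0.6745, where d are the level-1 detail
        coefficients; 0.6745 makes the MAD consistent for Gaussian noise.
        """
        _, detail = pywt.dwt(signal, wavelet)
        return np.median(np.abs(detail)) / 0.6745

    rng = np.random.default_rng(1)
    clean = np.exp(-np.linspace(0, 5, 1024))        # decaying DWI-like signal
    noisy = clean + rng.normal(scale=0.05, size=1024)
    print(wavelet_mad_sigma(noisy))                  # ~0.05
    ```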

  5. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering population density. In China, however, such data are not always available, especially in the very urgent relief situation following a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze death tolls for earthquakes. The present paper employs the average population density to predict final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  6. Planning and Estimation of Operations Support Requirements

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (Phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four of the five missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D&NF…

  7. Autoregressive moving average modeling for spectral parameter estimation from a multigradient echo chemical shift acquisition.

    PubMed

    Taylor, Brian A; Hwang, Ken-Pin; Hazle, John D; Stafford, R Jason

    2009-03-01

    The authors investigated the performance of the iterative Steiglitz-McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments, with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer-Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver, and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥5 for echo train lengths (ETLs) ≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates were obtained for the chemical shift (<0.01 ppm) and amplitude (<1.0%) with ≥4 echoes, and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo…

  8. Nonlinear models for estimating GSFC travel requirements

    NASA Technical Reports Server (NTRS)

    Buffalano, C.; Hagan, F. J.

    1974-01-01

    A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a database of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each major type of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.

  9. A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra

    NASA Astrophysics Data System (ADS)

    Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.

    2015-04-01

    A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the "Windowed, MuLTiple-Peak, averaged-spectrum" or WMLTP method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method that employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure based upon 6366 modes that we computed using the WMLTP method on the 66-day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and the reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer, during the rising phase of Solar Cycle 24 in mid-2010, from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators.

  10. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  11. The Average Distance between Item Values: A Novel Approach for Estimating Internal Consistency

    ERIC Educational Resources Information Center

    Sturman, Edward D.; Cribbie, Robert A.; Flett, Gordon L.

    2009-01-01

    This article presents a method for assessing the internal consistency of scales that works equally well with short and long scales, namely, the average proportional distance. The method provides information on the average distance between item scores for a particular scale. In this article, we sought to demonstrate how this relatively simple…

  12. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGES

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  13. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    SciTech Connect

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.

  14. Another Failure to Replicate Lynn's Estimate of the Average IQ of Sub-Saharan Africans

    ERIC Educational Resources Information Center

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    In his comment on our literature review of data on the performance of sub-Saharan Africans on Raven's Progressive Matrices, Lynn (this issue) criticized our selection of samples of primary and secondary school students. On the basis of the samples he deemed representative, Lynn concluded that the average IQ of sub-Saharan Africans stands at 67…

  15. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  16. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...

  17. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
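    A minimal sketch of the constant-property side of the comparison, using the standard maximum-efficiency formula with ZT evaluated at the mean temperature (consistent with the paper's recommendation; the cumulative/average-property model itself is more involved and is not reproduced here):

    ```python
    import numpy as np

    def eta_max_constant_property(zt_avg, t_hot, t_cold):
        """Maximum thermoelectric conversion efficiency, constant-property model.

        eta = (1 - Tc/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
        with ZT taken at the average temperature (Th + Tc) / 2.
        """
        carnot = 1.0 - t_cold / t_hot
        root = np.sqrt(1.0 + zt_avg)
        return carnot * (root - 1.0) / (root + t_cold / t_hot)

    print(eta_max_constant_property(zt_avg=1.0, t_hot=500.0, t_cold=300.0))  # ~0.08
    ```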

  18. 31 CFR 205.23 - What requirements apply to estimates?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 31 (Money and Finance: Treasury), Regulations Relating to Money and Finance (Continued), Treasury-State Agreement, § 205.23 — What requirements apply to estimates? The following requirements…

  19. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average method on the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively using a catheter. C-SBP was calculated from the scaled right brachial artery pressure waveforms acquired with a VaSera VS-1500, using an N/6 moving average, and compared with the invasively measured C-SBP obtained within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), the invasively measured C-SBP was significantly lower than the right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated with the N/6 moving average method from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not significantly differ (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method applied to the noninvasively acquired brachial artery waveform, calibrated by the cuff-based brachial SBP, was an accurate, convenient and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which makes the method highly feasible in routine clinical practice.
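    A minimal sketch of the N/6 moving average step, assuming N is the number of samples in one cardiac cycle and that C-SBP is read as the peak of the filtered, cuff-calibrated brachial waveform; the details of the VaSera implementation are not in the abstract:

    ```python
    import numpy as np

    def central_sbp_estimate(brachial_wave, n_cycle):
        """Estimate central SBP as the peak of an N/6-point moving average.

        brachial_wave : one cuff-calibrated cardiac cycle (mm Hg)
        n_cycle       : number of samples in the cycle (N)
        """
        window = max(1, n_cycle // 6)
        kernel = np.ones(window) / window
        smoothed = np.convolve(brachial_wave, kernel, mode="same")
        return smoothed.max()

    # Synthetic brachial-like pulse: sharp systolic peak over a slower wave
    t = np.linspace(0, 1, 500)
    wave = 80 + 45 * np.exp(-((t - 0.15) / 0.05) ** 2) + 15 * np.exp(-((t - 0.4) / 0.15) ** 2)
    print(wave.max(), central_sbp_estimate(wave, n_cycle=500))  # smoothing lowers the peak
    ```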

  20. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    …"Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate…

  1. Estimation of Critical Population Support Requirements.

    DTIC Science & Technology

    1984-05-30

    …ensure the availability of industrial production required to support the population, maintain national defense capabilities and perform command and control activities during a national emergency such as a threat of a…

  2. Does the orbit-averaged theory require a scale separation between periodic orbit size and perturbation correlation length?

    SciTech Connect

    Zhang, Wenlu; Lin, Zhihong

    2013-10-15

    Using the canonical perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions and verifies the widely accepted notion that orbit averaging effects greatly reduce the microturbulent transport of energetic particles in a tokamak. Therefore, a recent claim [Hauff and Jenko, Phys. Rev. Lett. 102, 075004 (2009); Jenko et al., ibid. 107, 239502 (2011)] stating that the orbit-averaged theory requires a scale separation between equilibrium orbit size and perturbation correlation length is erroneous.

  3. Estimation of heat load in waste tanks using average vapor space temperatures

    SciTech Connect

    Crowe, R.D.; Kummerer, M.; Postma, A.K.

    1993-12-01

    This report describes a method for estimating the total heat load in a high-level waste tank with passive ventilation. The method relates the total heat load in the tank to the vapor space temperature and the depth of waste in the tank: Q_total = C_f (T_vapor − T_air), where C_f = R_o · k_soil · A / (z_tank − z_surface); R_o is the ratio of the total heat load to the heat flowing out the top of the tank (a function of waste height); A is the cross-sectional area of the tank; k_soil is the thermal conductivity of the soil; (z_tank − z_surface) is the effective depth of soil covering the top of the tank; and (T_vapor − T_air) is the mean temperature difference between the vapor space and the ambient air at the surface. Three terms — the depth, the area, and the ratio — can be developed from geometrical considerations. The temperature difference is measured for each individual tank. The remaining term, the thermal conductivity, is estimated from the time-dependent component of the temperature signals, i.e., the periodic oscillations in the vapor space temperatures. Finally, using this equation, the total heat load for each of the ferrocyanide Watch List tanks is estimated, providing a consistent way to rank ferrocyanide tanks according to heat load.
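    A minimal sketch of the calculation, with illustrative numbers only; the tank geometry, soil conductivity, and R_o would come from the report's tables:

    ```python
    import math

    def tank_heat_load(t_vapor, t_air, r_o, k_soil, diameter, soil_depth):
        """Total heat load (W) from the vapor-space temperature method.

        Q_total = C_f * (T_vapor - T_air),
        C_f = R_o * k_soil * A / soil_depth
        """
        area = math.pi * (diameter / 2.0) ** 2      # tank cross-section, m^2
        c_f = r_o * k_soil * area / soil_depth      # W/K
        return c_f * (t_vapor - t_air)

    # Hypothetical values: 23 m tank, 2 m soil cover, k_soil = 1 W/m/K
    print(tank_heat_load(t_vapor=35.0, t_air=15.0, r_o=1.5, k_soil=1.0,
                         diameter=23.0, soil_depth=2.0))  # ~6 kW
    ```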

  4. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 48 (Federal Acquisition Regulations System), Provisions and Clauses, 252.215-7002 — Cost estimating system requirements. As prescribed in… …historical costs, and other analyses used to generate cost estimates. (b) General. The Contractor shall…

  5. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  6. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.

  7. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam is presented, based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, the support reactions, the deflections at the supports, and the magnitudes of the distributed loads on the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and magnitudes of distributed loads.

  8. Estimates of the average strength of natural selection are not inflated by sampling error or publication bias.

    PubMed

    Knapczyk, Frances N; Conner, Jeffrey K

    2007-10-01

    Kingsolver et al.'s review of phenotypic selection gradients from natural populations provided a glimpse of the form and strength of selection in nature and how selection on different organisms and traits varies. Because this review's underlying database could be a key tool for answering fundamental questions concerning natural selection, it has spawned discussion of potential biases inherent in the review process. Here, we explicitly test for two commonly discussed sources of bias: sampling error and publication bias. We model the relationship between variance among selection gradients and sample size that sampling error alone would produce, by subsampling large empirical data sets containing measurements of traits and fitness. We find that this relationship is not mimicked by the review data set and therefore conclude that sampling error does not bias estimates of the average strength of selection. Using graphical tests, we find evidence for bias against publishing weak estimates of selection only among very small studies (N<38). However, this evidence is counteracted by excess weak estimates in larger studies. Thus, estimates of the average strength of selection from the review are less biased than is often assumed. Devising and conducting straightforward tests for different biases allows concern to be focused on the most troublesome factors.

  9. A Budyko framework for estimating how spatial heterogeneity and lateral moisture redistribution affect average evapotranspiration rates as seen from the atmosphere

    NASA Astrophysics Data System (ADS)

    Rouholahnejad Freund, Elham; Kirchner, James W.

    2017-01-01

    Most Earth system models are based on grid-averaged soil columns that do not communicate with one another, and that average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). These models also typically ignore topographically driven lateral redistribution of water (either as groundwater or surface flows), both within and between model grid cells. Here, we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged evapotranspiration (ET) as seen from the atmosphere over heterogeneous landscapes. Our approach uses Budyko curves, as a simple model of ET as a function of atmospheric forcing by P and PET. From these Budyko curves, we derive a simple sub-grid closure relation that quantifies how spatial heterogeneity affects average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimations of average ET. For a sample high-relief grid cell in the Himalayas, this overestimation bias is shown to be roughly 12 %; for adjacent lower-relief grid cells, it is substantially smaller. We use a similar approach to derive sub-grid closure relations that quantify how lateral redistribution of water could alter average ET as seen from the atmosphere. We derive expressions for the maximum possible effect of lateral redistribution on average ET, and the amount of lateral redistribution required to achieve this effect, using only estimates of P and PET in possible source and recipient locations as inputs. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET (and models that overlook lateral redistribution will underestimate average ET). Conversely, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average ET.
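
    The aggregation bias follows from the concavity of the Budyko curve: the ET computed from averaged forcing exceeds the average of the sub-grid ETs. A minimal sketch, assuming the classical Budyko (1974) formulation; the two sub-grid cells and their forcings are illustrative numbers, not values from the paper.

      import numpy as np

      def budyko_et(P, PET):
          """Long-term ET from the Budyko curve; phi = PET/P is the aridity index."""
          phi = PET / P
          return P * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

      # Two equal-area sub-grid cells: a wet high-altitude half and a dry low half.
      P = np.array([2000.0, 400.0])     # mm/yr
      PET = np.array([800.0, 1600.0])   # mm/yr

      et_of_means = budyko_et(P.mean(), PET.mean())  # what a grid-averaged model sees
      mean_of_ets = budyko_et(P, PET).mean()         # ET of the heterogeneous cell
      print(et_of_means, mean_of_ets)                # the first exceeds the second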

  10. Targeted estimation and inference for the sample average treatment effect in trials with and without pair-matching.

    PubMed

    Balzer, Laura B; Petersen, Maya L; van der Laan, Mark J

    2016-09-20

    In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial.

  11. Noninvasive average flow estimation for an implantable rotary blood pump: a new algorithm incorporating the role of blood viscosity.

    PubMed

    Malagutti, Nicolò; Karantonis, Dean M; Cloherty, Shaun L; Ayre, Peter J; Mason, David G; Salamonsen, Robert F; Lovell, Nigel H

    2007-01-01

    The effect of blood hematocrit (HCT) on a noninvasive flow estimation algorithm was examined in a centrifugal implantable rotary blood pump (iRBP) used for ventricular assistance. An average flow estimator, based on three parameters (input electrical power, pump speed, and HCT), was developed. Data were collected in a mock loop under steady flow conditions for a variety of pump operating points and for various HCT levels. Analysis was performed using three-dimensional polynomial surfaces to fit the collected data for each different HCT level. The polynomial coefficients of the surfaces were then analyzed as a function of HCT. Linear correlations between estimated and measured pump flow over a flow range from 1.0 to 7.5 L/min resulted in a slope of 1.024 (R² = 0.9805). Early patient data tested against the estimator have shown promising consistency, suggesting that consideration of HCT can improve the accuracy of existing flow estimation algorithms.

  12. Estimation of daily average downward shortwave radiation from MODIS data using principal components regression method: Fars province case study

    NASA Astrophysics Data System (ADS)

    Barzin, Razieh; Shirvani, Amin; Lotfi, Hossein

    2017-01-01

    Downward shortwave radiation is a key quantity in the land-atmosphere interaction. Since the moderate resolution imaging spectroradiometer data has a coarse temporal resolution, which is not suitable for estimating daily average radiation, many efforts have been undertaken to estimate instantaneous solar radiation using moderate resolution imaging spectroradiometer data. In this study, the principal components analysis technique was applied to capture the information of moderate resolution imaging spectroradiometer bands, extraterrestrial radiation, aerosol optical depth, and atmospheric water vapour. A regression model based on the principal components was used to estimate daily average shortwave radiation for ten synoptic stations in the Fars province, Iran, for the period 2009-2012. The Durbin-Watson statistic and autocorrelation function of the residuals of the fitted principal components regression model indicated that the residuals were serially independent. The results indicated that the fitted principal components regression models accounted for about 86-96% of the total variance of the observed shortwave radiation values, and the root mean square error was about 0.9-2.04 MJ m⁻² d⁻¹. The results also indicated that model accuracy decreased as the aerosol optical depth increased, and that extraterrestrial radiation was the most important predictor variable among all predictors.
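
    A minimal principal-components-regression sketch of the kind described, with a random stand-in for the predictor matrix (which in the study comprises the MODIS bands, extraterrestrial radiation, aerosol optical depth, and water vapour):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.random((500, 10))                                     # stand-in predictors
      y = 15 + X @ rng.random(10) + 0.5 * rng.standard_normal(500)  # stand-in radiation

      # Standardize, keep the leading principal components, regress on them.
      pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
      pcr.fit(X, y)
      print("R^2:", pcr.score(X, y))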

  13. Enhancement of accuracy and reproducibility of parametric modeling for estimating abnormal intra-QRS potentials in signal-averaged electrocardiograms.

    PubMed

    Lin, Chun-Cheng

    2008-09-01

    This work analyzes and attempts to enhance the accuracy and reproducibility of parametric modeling in the discrete cosine transform (DCT) domain for the estimation of abnormal intra-QRS potentials (AIQP) in signal-averaged electrocardiograms. One hundred sets of white noise with a flat frequency response were introduced to simulate the unpredictable, broadband AIQP when quantitatively analyzing estimation error. Further, a high-frequency AIQP parameter was defined to minimize estimation error caused by the overlap between normal QRS and AIQP in low-frequency DCT coefficients. Seventy-two patients from Taiwan were recruited for the study, comprising 30 patients with ventricular tachycardia (VT) and 42 without VT. Analytical results showed that VT patients had a significant decrease in the estimated AIQP. The global diagnostic performance (area under the receiver operating characteristic curve) of AIQP rose from 73.0% to 84.2% in lead Y, and from 58.3% to 79.1% in lead Z, when the high-frequency range fell from 100% to 80%. The combination of AIQP and ventricular late potentials further enhanced performance to 92.9% (specificity=90.5%, sensitivity=90%). Therefore, the significantly reduced AIQP in VT patients, possibly also including dominant unpredictable potentials within the normal QRS complex, may be new promising evidence of ventricular arrhythmias.

  14. Estimation of the monthly average daily solar radiation using geographic information system and advanced case-based reasoning.

    PubMed

    Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon

    2013-05-07

    The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). It is revealed that the MADSR distribution in South Korea has very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, which is a hybrid methodology combining CBR with artificial neural network, multiregression analysis, and genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of determining whether or not to introduce the PV system and where to install it. It would also benefit contractors in a competitive bidding process, allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life-cycle perspective.

  15. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-averaged precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. The method will be further verified using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to the further validation of the technique in semi-arid catchments; it has already been verified as effective in humid settings.
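
    In outline, the "hydrology backwards" idea assumes discharge is a single-valued function of storage, Q = f(S), so that dQ/dt = g(Q)(P − ET − Q) with sensitivity g(Q) = dQ/dS, and g can be identified from rainless recessions where g(Q) = −(dQ/dt)/Q. A minimal sketch under those assumptions, with a power-law g(Q) = aQ^b whose parameters a and b are hypothetical and assumed pre-fitted to recession data:

      import numpy as np

      def infer_p_minus_et(q, a, b, dt_hours=1.0):
          """q: streamflow series (mm/h); g(Q) = dQ/dS is approximated as a * Q**b,
          with a, b fitted beforehand to rainless recession periods."""
          q = np.asarray(q, dtype=float)
          dq_dt = np.gradient(q, dt_hours)
          g = a * q**b
          # Invert dQ/dt = g(Q) * (P - ET - Q) for the watershed-averaged flux:
          return q + dq_dt / g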

  16. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. Because these variables are potentially highly correlated, one can either create an overall deprivation index or use the individual characteristics, and the choice can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how pollution concentrations are estimated and how spatial autocorrelation is modelled. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
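
    The averaging step itself can be sketched with BIC-based approximate posterior model weights; ordinary least squares stands in here for the Poisson and spatial autocorrelation models actually used, and the function name and interface are hypothetical:

      import numpy as np
      import statsmodels.api as sm

      def bma_pollution_effect(y, no2, covariate_sets):
          """covariate_sets: list of (n, k) candidate confounder matrices."""
          betas, bics = [], []
          for Z in covariate_sets:
              X = sm.add_constant(np.column_stack([no2, Z]))
              fit = sm.OLS(y, X).fit()
              betas.append(fit.params[1])   # the NO2 coefficient in this model
              bics.append(fit.bic)
          bics = np.asarray(bics)
          w = np.exp(-0.5 * (bics - bics.min()))
          w /= w.sum()                      # approximate posterior model probabilities
          return float(np.dot(w, betas)), w # model-averaged effect and the weights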

  17. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  18. Is the Whole Really More than the Sum of Its Parts? Estimates of Average Size and Orientation Are Susceptible to Object Substitution Masking

    ERIC Educational Resources Information Center

    Jacoby, Oscar; Kamke, Marc R.; Mattingley, Jason B.

    2013-01-01

    We have a remarkable ability to accurately estimate average featural information across groups of objects, such as their average size or orientation. It has been suggested that, unlike individual object processing, this process of "feature averaging" occurs automatically and relatively early in the course of perceptual processing,…

  19. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data.
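
    The angle-synchronous averaging stage can be sketched as follows, assuming the instantaneous shaft angle has already been obtained (e.g., from an encoder); the wavelet de-noising and envelope order-tracking stages of the full four-stage method are omitted here:

      import numpy as np

      def angle_synchronous_average(x, theta, samples_per_rev=1024):
          """x: vibration samples; theta: shaft angle (in revolutions, increasing)
          at each sample. Resample onto a uniform angle grid, average over revs."""
          n_rev = int(np.floor(theta[-1]))
          grid = np.arange(n_rev * samples_per_rev) / samples_per_rev
          x_angle = np.interp(grid, theta, x)   # time domain -> angle domain
          return x_angle.reshape(n_rev, samples_per_rev).mean(axis=0)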

  20. Sample Size Requirements for Estimating Pearson, Spearman and Kendall Correlations.

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Wright, Thomas A.

    2000-01-01

    Reviews interval estimates of the Pearson, Kendall tau-alpha, and Spearman correlations and proposes an improved standard error for the Spearman correlation. Examines the sample size required to yield a confidence interval having the desired width. Findings show accurate results from a two-stage approximation to the sample size. (SLD)
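
    For intuition, the Pearson case can be planned with a small iterative search on the Fisher z scale; this is a stand-in illustration of the idea, not the paper's two-stage approximation:

      import numpy as np
      from scipy.stats import norm

      def n_for_ci_width(r, width, conf=0.95):
          """Smallest n whose Fisher-z confidence interval for r has the given width."""
          z_crit = norm.ppf(0.5 + conf / 2.0)
          n = 10
          while True:
              half = z_crit / np.sqrt(n - 3)   # CI half-width on the z scale
              lo, hi = np.tanh(np.arctanh(r) - half), np.tanh(np.arctanh(r) + half)
              if hi - lo <= width:
                  return n
              n += 1

      print(n_for_ci_width(r=0.3, width=0.2))  # n for a 95% CI no wider than 0.2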

  1. [Level of stress during pregnancy estimated by mothers and average weight of newborns and frequency of low birth weight].

    PubMed

    Steplewski, Z; Buczyńska, G; Rogoszewski, M; Kuban, T; Steplewska-Mazur, K; Jaskólecki, H; Kasperczyk, J

    1998-02-01

    Low birth weight remains an important health problem in many countries. Low birth weight increases mortality, injures the central nervous system, and interferes with somatic, intellectual, and emotional development. Low birth weight occurs frequently in Poland, in 7-9% of live births. There are many risk factors, among them behavioural and environmental ones. In Poland, attention has focused on chemical and physical environmental factors, while behavioural factors (stress) have been disregarded. In the present paper we examine the relationship between stress during pregnancy (as estimated by the pregnant women themselves), birth weight, and the frequency of low birth weight. The research was carried out with a questionnaire in a case-control design. The study involved 450 mothers of newborn children (cases: premature delivery or birth weight below 2500 g) and 450 mothers of newborn children (controls: physiological deliveries). Mothers were asked about their attitudes toward the pregnancy, and professional and personal stress during pregnancy was assessed. The results were analysed by computing risk ratios (RR) and correlation coefficients. The research showed no relation between acceptance of pregnancy or stress and the frequency of low birth weight or average birth weight. The study did not demonstrate an unfavourable influence of stress reactions caused by professional and personal stressors on intrauterine fetal development.

  2. Using seed purity data to estimate an average pollen mediated gene flow from crops to wild relatives.

    PubMed

    Lavigne, C; Klein, E K; Couvet, D

    2002-01-01

    Gene flow from crops to wild related species has been recently under focus in risk-assessment studies of the ecological consequences of growing transgenic crops. However, experimental studies addressing this question are usually temporally or spatially limited. Indirect population-structure approaches can provide more global estimates of gene flow, but their assumptions appear inappropriate in an agricultural context. In an attempt to help the committees providing advice on the release of transgenic crops, we present a new method to estimate the quantity of genes migrating from crops to populations of related wild plants by way of pollen dispersal. This method provides an average estimate at a landscape level. Its originality is based on the measure of the inverse gene flow, i.e. gene flow from the wild plants to the crop. Such gene flow results in an observed level of impurities from wild plants in crop seeds. This level of impurity is usually known by the seed producers and, in any case, its measure is easier than a direct screen of wild populations because crop seeds are abundant and their genetic profile is known. By assuming that wild and cultivated plants have a similar individual pollen dispersal function, we infer the level of pollen-mediated gene flow from a crop to the surrounding wild populations from this observed level of impurity. We present an example for sugar beet data. Results suggest that under conditions of seed production in France (isolation distance of 1,000 m) wild beets produce high numbers of seeds fathered by cultivated plants.

  3. Origins for the estimations of water requirements in adults.

    PubMed

    Vivanti, A P

    2012-12-01

    Water homeostasis generally occurs without conscious effort; however, estimating requirements can be necessary in settings such as health care. This review investigates the derivation of equations for estimating water requirements. Published literature was reviewed for water estimation equations and the original papers were sought. Equation origins were difficult to ascertain and original references were often not cited. One equation (% of body weight) was based on just two human subjects, and another equation (ml water/kcal) was reported for mammals and not specifically for humans. Other findings include that some equations were developed for children but subsequently applied to adults; were modified without explicit explanation; adjusted for the water derived from metabolism or food; or were converted to simplify application. The primary sources for equations are rarely mentioned or, when located, lack details conventionally considered important. The sources of water requirement equations are rarely made explicit, and historical studies do not satisfy more rigorous modern scientific methods. Equations are often applied without appreciating their derivation, or without adjusting for the water from food or metabolism as acknowledged by the original authors. Water requirement equations should be used as a guide only, while employing additional means (such as monitoring short-term weight changes, physical or biochemical parameters, and urine output volumes) to ensure the adequacy of water provision in clinical or health-care settings.

  4. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the modeled carbon and water cycles under both equilibrium conditions (balance between vegetation and climate) and non-equilibrium conditions (water added through irrigation). We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is taken as the irrigation requirement. For July, results show that spray irrigation supplied an additional 1.3 mm of water per application at a frequency of one application every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of the amount simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  5. Estimated water requirements for gold heap-leach operations

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (which requires potable water and water for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface water and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.

  6. Estimation of temporal variations in path-averaged atmospheric refractive index gradient from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven; Przelomski, Jared

    2016-09-01

    The sea level vertical refractive index gradient in the U.S. Standard Atmosphere model is −2.7×10⁻⁸ m⁻¹ at 500 nm. At any particular location, the actual refractive index gradient varies due to turbulence and local weather conditions. An imaging experiment was conducted to measure the temporal variability of this gradient. A tripod-mounted digital camera captured images of a distant building every minute. Atmospheric turbulence caused the images to wander quickly, randomly, and statistically isotropically, while changes in the average refractive index gradient along the path caused the images to move vertically and more slowly. The temporal variations of the refractive index gradient were estimated from the slow, vertical motion of the building over a period of several days. Comparisons with observational data showed that the gradient variations derived from the time-lapse imagery correlated well with solar heating and other weather conditions. The time-lapse imaging approach has the potential to be used as a validation tool for numerical weather models. These validations will benefit directed energy simulation tools and applications.
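
    As a rough illustration of the geometry (not the authors' processing chain): for a near-horizontal path of length L through a constant gradient dn/dz, the ray curvature is approximately dn/dz, so the target appears displaced by an angle of roughly (L/2)·dn/dz, and the path-averaged gradient can be back-computed from the measured image motion:

      def gradient_from_shift(pixel_shift, rad_per_pixel, path_length_m):
          """Back out the path-averaged dn/dz (1/m) from apparent vertical motion."""
          theta = pixel_shift * rad_per_pixel   # apparent angular shift (rad)
          return 2.0 * theta / path_length_m

      # e.g. a 3-pixel shift at 10 microradians/pixel over a 2 km path:
      print(gradient_from_shift(3, 10e-6, 2000.0))  # 3e-8 1/m, near the standard value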

  7. Estimation of Annual Average Soil Loss, Based on RUSLE Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in the Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds, and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (the Revised Universal Soil Loss Equation, RUSLE), can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C), and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha⁻¹ y⁻¹. Based on this result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high, and critical. Further, the RUSLE factors were broken into two categories, and both soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP) were computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
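
    The RUSLE overlay itself is a cell-by-cell product of the factor rasters; a minimal sketch with stand-in grids (all values illustrative, not from the study):

      import numpy as np

      rng = np.random.default_rng(0)
      shape = (100, 100)                 # stand-in raster grid
      R = np.full(shape, 450.0)          # rainfall erosivity
      K = np.full(shape, 0.3)            # soil erodibility
      LS = rng.uniform(0.1, 8.0, shape)  # slope length-steepness
      C = rng.uniform(0.01, 0.6, shape)  # cover management
      P = rng.uniform(0.5, 1.0, shape)   # conservation practice

      A = R * K * LS * C * P             # annual soil loss per cell
      print("max annual soil loss:", A.max())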

  8. Occurrence of aflatoxin M1 in human milk samples in Vojvodina, Serbia: Estimation of average daily intake by babies.

    PubMed

    Radonić, Jelena R; Kocić Tanackov, Sunčica D; Mihajlović, Ivana J; Grujić, Zorica S; Vojinović Miloradov, Mirjana B; Škrinjar, Marija M; Turk Sekulić, Maja M

    2017-01-02

    The objectives of the study were to determine the aflatoxin M1 (AFM1) content in human milk samples in Vojvodina, Serbia, and to assess the risk of infants' exposure to aflatoxin contamination of food. The growth of Aspergillus flavus and production of aflatoxin B1 in corn samples resulted in higher concentrations of AFM1 in milk and dairy products in 2013, indicating higher concentrations of AFM1 in human milk samples in 2013 and 2014 in Serbia. A total of 60 samples of human milk (colostrum and breast milk collected 4-8 months after delivery) were analyzed for the presence of AFM1 using the Enzyme-Linked Immunosorbent Assay method. The estimated daily intake of AFM1 through breastfeeding was calculated for the colostrum samples using an average intake of 60 mL/kg body weight (b.w.)/day on the third day of lactation. All breast milk collected 4-8 months after delivery and 36.4% of colostrum samples were contaminated with AFM1. The greatest percentage of contaminated colostrum (85%) and all samples of breast milk collected 4-8 months after delivery had AFM1 concentrations above the maximum allowable concentration according to the Regulation on health safety of dietetic products. The mean daily intake of AFM1 in colostrum was 2.65 ng/kg b.w./day. The results of our study indicate a high exposure risk for infants, who are at an early stage of development and vulnerable to toxic contaminants.
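
    The intake arithmetic is a one-liner: estimated daily intake equals the AFM1 concentration times the milk volume consumed per kilogram of body weight. The 44 ng/L below is back-calculated for illustration only, not a concentration reported by the study.

      def edi_ng_per_kg_bw(afm1_ng_per_L, intake_mL_per_kg_day=60.0):
          """Estimated daily intake: concentration x milk volume per kg body weight."""
          return afm1_ng_per_L * intake_mL_per_kg_day / 1000.0  # mL -> L

      # ~44 ng/L of AFM1 in colostrum reproduces the reported 2.65 ng/kg b.w./day:
      print(edi_ng_per_kg_bw(44.0))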

  9. Stochastic physical ecohydrologic-based model for estimating irrigation requirement

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods that can take this kind of uncertainty into account are of primary importance for the management of ecosystems, especially agricultural ecosystems. One of the well-known problems in these ecosystems is crop water requirement estimation under climatic uncertainty. Both deterministic physically-based methods and stochastic time series modeling have been utilized in the literature. As in other fields of the hydroclimatic sciences, there is broad scope in irrigation modeling for approaches that integrate the physics of the process with its statistical aspects. This study derives closed-form expressions for the probability density function (p.d.f.) of irrigation water requirement using a stochastic physically-based model, which considers important aspects of plant, soil, atmosphere, and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics at the root zone, is employed as a basis for deriving the expressions, considering the temporal stochasticity of rainfall. Because the stochastic processes of micro- and traditional irrigation applications differ in nature, two different methodologies have been used. Micro-irrigation application has been modeled as a dichotomic process. The Chapman-Kolmogorov equation of the time integral of the dichotomic process for the transient condition has been solved to derive analytical expressions for the probability density function of seasonal irrigation requirement. For traditional irrigation, irrigation application during the growing season has been modeled using a marked point process. Using renewal theory, the probability mass function of seasonal irrigation requirement, which is a discrete-valued quantity, has been analytically derived. The methodology deals with estimation of the statistical properties of the total water requirement in a growing season.

  10. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish this task, which translates directly into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanisms to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In the context of multicore jobs in particular, this presents a major problem, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimates given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm is developed for history-based prediction. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
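
    A minimal sketch of the idea, with hypothetical job features standing in for the production metadata (application version, event counts, and so on) that the study actually uses:

      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor

      # Past jobs with hypothetical features and their observed runtimes.
      history = pd.DataFrame({
          "sw_version": [1, 1, 2, 2, 3, 3],
          "n_events":   [1e5, 2e5, 1e5, 3e5, 2e5, 4e5],
          "n_cores":    [1, 1, 4, 4, 4, 8],
          "runtime_s":  [3600, 7100, 2000, 5600, 3100, 3900],
      })
      model = GradientBoostingRegressor().fit(
          history[["sw_version", "n_events", "n_cores"]], history["runtime_s"])

      # Predict requirements for a new job instead of requesting a fixed default.
      new_jobs = pd.DataFrame({"sw_version": [3], "n_events": [3e5], "n_cores": [4]})
      print(model.predict(new_jobs))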

  11. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    SciTech Connect

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3rd central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the four peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and the film dose values at the location of each of the dose bores. Results: The planar average doses obtained using the red and green pixel color calibration curves were each within 10% of the planar average dose estimated using the Dw method on film dose values at the bore locations. Additionally, the average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of estimating the planar average dose at the central plane of a C-arm CBCT non-360° rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach. Research Grant, Siemens AG.
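
    The Dw weighting described above is simply one third central plus two thirds the mean of the four peripheral readings; a sketch with illustrative mGy values:

      def planar_dw(center, peripherals):
          """Weighted average: one third central, two thirds mean of peripherals."""
          return center / 3.0 + 2.0 * sum(peripherals) / (3.0 * len(peripherals))

      print(planar_dw(10.0, [14.0, 15.0, 13.5, 14.5]))  # illustrative chamber readings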

  12. SU-E-T-364: Estimating the Minimum Number of Patients Required to Estimate the Required Planning Target Volume Margins for Prostate Glands

    SciTech Connect

    Bakhtiari, M; Schmitt, J; Sarfaraz, M; Osik, C

    2015-06-15

    Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined from CBCT matching of the prostate gland. The van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean of the group-average margins converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
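
    Margins under the van Herk formalism are commonly computed with the recipe M = 2.5Σ + 0.7σ per axis, where Σ is the standard deviation of the per-patient mean (systematic) errors and σ the pooled standard deviation of the daily (random) errors; the abstract does not state the exact recipe used, so the sketch below is a generic illustration:

      import numpy as np

      def van_herk_margin(per_patient_shifts):
          """per_patient_shifts: list of 1-D arrays, one patient's daily shifts (cm)."""
          means = np.array([s.mean() for s in per_patient_shifts])
          Sigma = means.std(ddof=1)   # SD of per-patient means: systematic error
          sigma = np.sqrt(np.mean([s.var(ddof=1) for s in per_patient_shifts]))
          return 2.5 * Sigma + 0.7 * sigma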

  13. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy.
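
    In code, the filter is just an N/6-point boxcar, where N is the sampling rate; a minimal sketch of that step alone (the calibration to cuff systolic and diastolic pressures is a separate step):

      import numpy as np

      def npma(waveform, fs_hz, denominator=6):
          """Boxcar average with window N/denominator, N = sampling frequency (Hz)."""
          window = max(1, int(round(fs_hz / denominator)))
          kernel = np.ones(window) / window
          return np.convolve(waveform, kernel, mode="same")

      # e.g. a 500 Hz brachial waveform is smoothed with an ~83-point window; the
      # systolic peak of the filtered wave approximates central aortic pressure.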

  14. Estimating equilibrium ensemble averages using multiple time slices from driven nonequilibrium processes: theory and application to free energies, moments, and thermodynamic length in single-molecule pulling experiments.

    PubMed

    Minh, David D L; Chodera, John D

    2011-01-14

    Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.

  15. Accelerated multiple-pass moving average: a novel algorithm for baseline estimation in CE and its application to baseline correction on real-time bases.

    PubMed

    Solis, Alejandro; Rex, Mathew; Campiglia, Andres D; Sojo, Pedro

    2007-04-01

    We present a novel algorithm for baseline estimation in CE. The new algorithm, which we have named accelerated multiple-pass moving average (AMPMA), is combined with three preexisting low-pass filters (spike removal, moving average, and multi-pass moving average) to achieve real-time baseline correction with commercial instrumentation. The successful performance of AMPMA is demonstrated with simulated and experimental data. A straightforward comparison of experimental data clearly shows the improvement AMPMA provides to the linear fitting, LOD, and accuracy (absolute error) of CE analysis.

  16. Estimation of L-threonine requirements for Longyan laying ducks

    PubMed Central

    Fouad, A. M.; Zhang, H. X.; Chen, W.; Xia, W. G.; Ruan, D.; Wang, S.; Zheng, C. T.

    2017-01-01

    Objective A study was conducted to test six threonine (Thr) levels (0.39%, 0.44%, 0.49%, 0.54%, 0.59%, and 0.64%) to estimate the optimal dietary Thr requirements for Longyan laying ducks from 17 to 45 wk of age. Methods Nine hundred Longyan ducks aged 17 wk were assigned randomly to the six dietary treatments, where each treatment comprised six replicate pens with 25 ducks per pen. Results Increasing the Thr level enhanced egg production, egg weight, egg mass, and the feed conversion ratio (FCR) (linearly or quadratically; p<0.05). The Haugh unit score, yolk color, albumen height, and the weight, percentage, thickness, and breaking strength of the eggshell did not respond to increases in the Thr level, but the albumen weight and its proportion increased significantly (p<0.05), whereas the yolk weight and its proportion decreased significantly as the Thr level increased. Conclusion According to a regression model, the optimal Thr requirement for egg production, egg mass, and FCR in Longyan ducks is 0.57%, while 0.58% is the optimal level for egg weight, from 17 to 45 wk of age. PMID:27282968
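
    A regression model of this kind locates the requirement at the vertex of a fitted quadratic response; a sketch with hypothetical response values (the paper's data are not reproduced here):

      import numpy as np

      thr = np.array([0.39, 0.44, 0.49, 0.54, 0.59, 0.64])        # dietary Thr, %
      egg_mass = np.array([46.0, 48.5, 50.2, 51.1, 51.3, 50.9])   # hypothetical g/d
      c2, c1, c0 = np.polyfit(thr, egg_mass, 2)    # fit y = c2*x^2 + c1*x + c0
      print("optimal Thr (%):", -c1 / (2.0 * c2))  # vertex of the fitted parabola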

  17. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5 billion year old rocks, and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the interval between such impacts, an interval which increased with geologic time. Therefore, the maximum time required for the origination of life is the time between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on Earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 × 10³⁴ and 2 × 10³⁵ were able to vaporize the oceans, then, using the most probable impact flux, the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life originated more than once.

  18. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  19. FFT averaging of multichannel BCG signals from bed mattress sensor to improve estimation of heart beat interval.

    PubMed

    Kortelainen, Juha M; Virkkala, Jussi

    2007-01-01

    A multichannel pressure-sensing Emfit foil was integrated into a bed mattress for measuring ballistocardiograph (BCG) signals during sleep. We calculated the heart beat interval with the cepstrum method, applying the FFT to short time windows containing pairs of consecutive heart beats. We decreased the variance of the FFT by averaging the multichannel data in the frequency domain. The relative error of our method, with respect to the electrocardiograph RR interval, was only 0.35% for 15 night recordings from six normal subjects, when 12% of the data was automatically removed due to movement artifacts. Background motivation for this work comes from studies applying heart rate variability to sleep staging.
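
    A minimal sketch of the frequency-domain averaging and cepstral interval pick, assuming the windows have already passed movement-artifact rejection (which the paper handles separately):

      import numpy as np

      def beat_interval(windows, fs):
          """windows: (n_channels, n_samples) BCG segment spanning ~2 beats; fs in Hz."""
          # Average per-channel power spectra to reduce the variance of the FFT.
          spec = np.mean(np.abs(np.fft.rfft(windows, axis=1)) ** 2, axis=0)
          # Real cepstrum: periodicity at lag T appears as a peak at quefrency T.
          ceps = np.fft.irfft(np.log(spec + 1e-12))
          lo, hi = int(0.4 * fs), int(1.5 * fs)   # search 40-150 beats per minute
          lag = lo + np.argmax(ceps[lo:hi])
          return lag / fs                         # beat-to-beat interval in seconds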

  20. 48 CFR 252.215-7002 - Cost estimating system requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the estimating methods and rationale used in developing cost estimates and budgets; (v) Provide for... management systems; and (4) Is subject to applicable financial control systems. Estimating system means the Contractor's policies, procedures, and practices for budgeting and planning controls, and...

  1. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is the part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of the cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
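
    Per gage and calendar year, the budget arithmetic reduces to simple differences (base flow itself comes from the PART/HYSEP/BFI separation programs); a sketch with illustrative depths:

      def annual_budget(precip_mm, streamflow_mm, baseflow_mm):
          """Per gage and calendar year, in depths of water (mm)."""
          return {
              "et_mm": precip_mm - streamflow_mm,        # ET as the budget residual
              "runoff_mm": streamflow_mm - baseflow_mm,  # non-base-flow runoff
              "recharge_proxy_mm": baseflow_mm,          # base flow ~ annual recharge
              "bfi": baseflow_mm / streamflow_mm,        # base-flow index
          }

      print(annual_budget(1100.0, 420.0, 260.0))  # illustrative values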

  2. How Well Can We Estimate Areal-Averaged Spectral Surface Albedo from Ground-Based Transmission in an Atlantic Coastal Area?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.

    2015-10-15

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  3. How well can we estimate areal-averaged spectral surface albedo from ground-based transmission in the Atlantic coastal area?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina

    2015-10-01

    Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.

  4. Estimates of Average Glandular Dose with Auto-modes of X-ray Exposures in Digital Breast Tomosynthesis

    PubMed Central

    Kamal, Izdihar; Chelliah, Kanaga K.; Mustafa, Nawal

    2015-01-01

    Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter, and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milligray (mGy) for two-dimensional (2D) and 2.48 mGy for three-dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower at higher kilovolt peak settings, and the auto-filter mode was more practical for quick acquisition while limiting the probability of operator error. PMID:26052465

  5. Global Estimates of Average Ground-Level Fine Particulate Matter Concentrations from Satellite-Based Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.

    2010-01-01

    Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve the amount of aerosols only summed over the entire atmospheric column, rather than focusing just on the near-surface component, in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization Air Quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
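
    As a rough illustration of the scaling step described above, the sketch below converts a satellite column AOD to a near-surface PM2.5 estimate using the model-derived ratio of surface concentration to column AOD. All numbers and the function name pm25_from_aod are hypothetical placeholders; in the study the model fields come from a chemical transport simulation.

        import numpy as np

        def pm25_from_aod(aod_satellite, pm25_model, aod_model):
            """Scale satellite total-column AOD to near-surface PM2.5.

            eta = PM2.5_model / AOD_model captures how much of the modelled
            column aerosol sits near the surface; applying eta to the
            satellite AOD yields the estimated ground-level concentration.
            """
            eta = np.where(aod_model > 0, pm25_model / aod_model, np.nan)
            return eta * aod_satellite

        # Toy example for a single grid cell (units: ug/m^3 and unitless AOD).
        print(pm25_from_aod(np.array([0.35]), np.array([18.0]), np.array([0.30])))
        # -> [21.] ug/m^3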

  6. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation

    PubMed Central

    Lee, Eugenia E.; Stewart, Barclay; Zha, Yuanting A.; Groen, Thomas A.; Burkle, Frederick M.; Kushner, Adam L.

    2016-01-01

    Background: Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. Methods: The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research of the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Results: Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People’s Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. Conclusion: As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands given critical human and physical resource deficiencies. PMID:27617165
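
    The baseline figure follows directly from the stated minimum rate, as the short check below shows (Python; the variable names are ours):

        affected_per_year = 140_000_000      # people affected annually, 2004-2014 average
        procedures_per_100k = 5_000          # stated minimum surgical rate
        print(affected_per_year * procedures_per_100k / 100_000)
        # -> 7000000.0, i.e. the 7.0 million procedures quoted above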

  7. Estimated daily average per capita water ingestion by child and adult age categories based on USDA's 1994-1996 and 1998 continuing survey of food intakes by individuals.

    PubMed

    Kahn, Henry D; Stralka, Kathleen

    2009-05-01

    Water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for a number of age range groups. The age ranges, based on guidance from the US EPA's Risk Assessment Forum, are narrow for younger ages when development is rapid and wider for older ages when the rate of development decreases. Estimates are based on data from the United States Department of Agriculture's (USDA's) 1994-1996 and 1998 Continuing Survey of Food Intake by Individuals (CSFII). Water ingestion estimates include water ingested directly as a beverage and water added to foods and beverages during preparation at home or in local establishments. Water occurring naturally in foods or added by manufacturers to commercial products (beverage or food) is not included. Estimates are reported in milliliters (ml/person/day) and milliliters per kilogram of body weight (ml/kg/day). As a by-product of constructing estimates in terms of body weight of respondents, distributions of self-reported body weights based on the CSFII were estimated and are also reported here.

  8. Application of random regression model to estimate genetic parameters for average daily gains in Lori-Bakhtiari sheep breed of Iran.

    PubMed

    Farhangfar, H; Naeemipour, H; Zinvand, B

    2007-07-15

    A random regression model was applied to estimate (co)variances, heritabilities and additive genetic correlations among average daily gains. The data comprised a total of 10876 records belonging to 1828 lambs (progenies of 123 sires and 743 dams) born between 1995 and 2001 in a single large flock of the Lori-Bakhtiari sheep breed in Iran. In the model, fixed environmental effects of year-season of birth, sex, birth type, age of dam and random effects of direct and maternal additive genetic and permanent environment were included. Orthogonal polynomial regression (on the Legendre scale) of third order (cubic) was utilized to model the genetic and permanent environmental (co)variance structure throughout the growth trajectory. Direct and maternal heritability estimates of average daily gains ranged from 0.011 to 0.131 and 0.008 to 0.181, respectively, in which pre-weaning average daily gain (0-3 months) had the lowest direct and highest maternal heritability estimates among the age groups. The highest and lowest positive direct additive genetic correlations were found to be 0.993 and 0.118, between ADG (0-9) and ADG (0-12) and between ADG (0-3) and ADG (0-12), respectively. The direct additive genetic correlations were stronger between adjacent age groups than between remote age groups.
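
    To make the random regression machinery concrete, the sketch below evaluates a third-order (cubic) Legendre basis at standardized ages and turns a coefficient covariance matrix into (co)variances along the growth trajectory. The matrix K, the age range and all values are illustrative assumptions, not the estimates from this study.

        import numpy as np
        from numpy.polynomial import legendre

        def legendre_basis(age, age_min=0.0, age_max=12.0, order=3):
            """Legendre polynomials P_0..P_order at an age mapped to [-1, 1]."""
            x = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
            return np.array([legendre.legval(x, np.eye(order + 1)[k])
                             for k in range(order + 1)])

        # Hypothetical 4x4 additive-genetic coefficient covariance matrix
        # (the quantity a cubic random regression model actually estimates).
        K = np.array([[2.0, 0.3, 0.1, 0.0],
                      [0.3, 0.8, 0.1, 0.0],
                      [0.1, 0.1, 0.2, 0.0],
                      [0.0, 0.0, 0.0, 0.1]])

        phi3, phi9 = legendre_basis(3.0), legendre_basis(9.0)   # ages in months
        var3, var9 = phi3 @ K @ phi3, phi9 @ K @ phi9
        print((phi3 @ K @ phi9) / np.sqrt(var3 * var9))  # genetic correlation between ages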

  9. Using cone-beam CT projection images to estimate the average and complete trajectory of a fiducial marker moving with respiration

    NASA Astrophysics Data System (ADS)

    Becker, N.; Smith, W. L.; Quirk, S.; Kay, I.

    2010-12-01

    Stereotactic body radiotherapy of lung cancer often makes use of a static cone-beam CT (CBCT) image to localize a tumor that moves during the respiratory cycle. In this work, we developed an algorithm to estimate the average and complete trajectory of an implanted fiducial marker from the raw CBCT projection data. After labeling the CBCT projection images based on the breathing phase of the fiducial marker, the average trajectory was determined by backprojecting the fiducial position from images of similar phase. To approximate the complete trajectory, a 3D fiducial position is estimated from its position in each CBCT project image as the point on the source-image ray closest to the average position at the same phase. The algorithm was tested with computer simulations as well as phantom experiments using a gold seed implanted in a programmable phantom capable of variable motion. Simulation testing was done on 120 realistic breathing patterns, half of which contained hysteresis. The average trajectory was reconstructed with an average root mean square (rms) error of less than 0.1 mm in all three directions, and a maximum error of 0.5 mm. The complete trajectory reconstruction had a mean rms error of less than 0.2 mm, with a maximum error of 4.07 mm. The phantom study was conducted using five different respiratory patterns with the amplitudes of 1.3 and 2.6 cm programmed into the motion phantom. These complete trajectories were reconstructed with an average rms error of 0.4 mm. There is motion information present in the raw CBCT dataset that can be exploited with the use of an implanted fiducial marker to sub-millimeter accuracy. This algorithm could ultimately supply the internal motion of a lung tumor at the treatment unit from the same dataset currently used for patient setup.
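
    The geometric core of the complete-trajectory step is finding the point on each source-to-detector ray that lies closest to the phase-matched average position. Below is a minimal sketch of that computation with simplified, made-up geometry; a real implementation must apply the scanner's per-angle projection geometry.

        import numpy as np

        def complete_trajectory_point(source, ray_point, avg_position):
            """Closest point to avg_position on the ray from source through
            ray_point (the back-projected fiducial location on the detector)."""
            d = ray_point - source                    # ray direction
            t = np.dot(avg_position - source, d) / np.dot(d, d)
            return source + t * d

        # Toy numbers in mm: source position, a point on the ray, and the
        # average trajectory position at the same breathing phase.
        src = np.array([-1000.0, 0.0, 0.0])
        ray = np.array([500.0, 12.0, -4.0])
        avg = np.array([0.0, 7.5, -2.0])
        print(complete_trajectory_point(src, ray, avg))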

  10. Data concurrency is required for estimating urban heat island intensity.

    PubMed

    Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang

    2016-01-01

    The urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban areas and their surroundings, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. The specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps.

  11. Areal Average Albedo (AREALAVEALB)

    DOE Data Explorer

    Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni

    2008-01-01

    The Areal Averaged Albedo VAP yields areal-averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008) that links cloud optical depth, normalized cloud transmittance, the asymmetry parameter, and areal-averaged surface albedo.
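
    The exact one-line equation of Barnard et al. (2008) is not reproduced here, but the sketch below shows the inversion pattern: given a measured normalized transmittance under overcast skies and cloud properties, solve for the areal-averaged albedo. The transmittance and reflectance parameterizations are simplified placeholders, assuming cloud-surface multiple reflection of the form T(A) = T_cld / (1 - A * R_cld).

        from scipy.optimize import brentq

        def cloud_transmittance(tau, g=0.86):
            """Toy overcast transmittance from cloud optical depth tau and
            asymmetry parameter g (illustrative two-stream-like form)."""
            return 1.0 / (1.0 + 0.75 * (1.0 - g) * tau)

        def retrieve_albedo(t_measured, tau, g=0.86):
            """Invert T(A) = t_measured for surface albedo A on [0, 1) by root finding."""
            t_cld = cloud_transmittance(tau, g)
            r_cld = 1.0 - t_cld
            f = lambda a: t_cld / (1.0 - a * r_cld) - t_measured
            return brentq(f, 0.0, 0.999)

        print(retrieve_albedo(t_measured=0.40, tau=20.0))   # -> ~0.29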

  12. Space Station core resupply and return requirements estimation

    NASA Technical Reports Server (NTRS)

    Wissinger, D. B.

    1988-01-01

    A modular methodology has been developed to model both NASA Space Station onboard resupply/return requirements and Space Shuttle delivery/return capabilities. This approach divides nonpayload Space Station logistics into seven independent categories, each of which is a function of several rates multiplied by user-definable onboard usage scenarios and Shuttle resupply profiles. The categories are summed to arrive at an overall resupply or return requirement. Unused Shuttle resupply and return capacities are also evaluated. The method allows an engineer to evaluate the transportation requirements for a candidate Space Station operational scenario.

  13. A Study on Estimation of Average Power Output Fluctuation of Clustered Photovoltaic Power Generation Systems in Urban District of a Few km2

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Suzuoki, Yasuo

    The fluctuation of the total power output of clustered PV systems would be smaller than that of a single PV system because of the time difference in the power output fluctuation among PV systems at different locations. This effect, the so-called smoothing effect, must be taken into account properly when the impact of clustered PV systems on the electric power system is assessed. If the average power output of clustered PV systems can be estimated from the power output of a single PV system, this is very useful for the impact assessment. In this study, we propose a simple method to estimate the total power output fluctuation of clustered PV systems. In the proposed method, the smoothing effect is assumed to arise from two factors, i.e. the time difference in overhead cloud passage among PV systems and random changes in the size and/or shape of clouds. The first factor is formulated as a low-pass filter, assuming that the output fluctuation is transmitted in the wind direction at a constant speed. The second is taken into account by using Fourier transform surrogate data. The parameters in the proposed method were selected so that the estimated fluctuation matches the ensemble-average fluctuation of data observed at five points, used as a training data set. Then, using the selected parameters, the fluctuation property was estimated for other data sets. The results show that the proposed method is useful for estimating the total power output fluctuation of clustered PV systems.
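
    A schematic of the two factors (our reading, with all parameter values assumed, not the authors' calibration): a phase-randomized Fourier surrogate stands in for random changes in cloud size/shape, and a first-order low-pass filter stands in for the advection time lag between sites.

        import numpy as np

        rng = np.random.default_rng(0)

        def fourier_surrogate(x, rng):
            """Surrogate series with the same amplitude spectrum as x but random phases."""
            spec = np.fft.rfft(x - x.mean())
            phase = rng.uniform(0.0, 2.0 * np.pi, spec.size)
            phase[0] = 0.0                          # keep the mean term real
            return np.fft.irfft(np.abs(spec) * np.exp(1j * phase), n=x.size) + x.mean()

        def low_pass(x, alpha=0.2):
            """First-order low-pass filter; alpha is an assumed smoothing constant."""
            y = np.empty_like(x)
            y[0] = x[0]
            for i in range(1, x.size):
                y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
            return y

        # Toy single-site PV output: clear-sky arc plus cloud-induced noise.
        t = np.arange(600)
        single = np.clip(np.sin(np.pi * t / 600), 0.0, None) + 0.1 * rng.standard_normal(t.size)

        # Estimated cluster average built from 25 surrogate "sites".
        cluster = np.mean([low_pass(fourier_surrogate(single, rng))
                           for _ in range(25)], axis=0)
        print(round(single.std(), 3), round(cluster.std(), 3))  # cluster fluctuates less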

  14. A Bayesian Model Averaging Approach for Estimating the Relative Risk of Mortality Associated with Heat Waves in 105 U.S. Cities

    PubMed Central

    Bobb, Jennifer F.; Dominici, Francesca; Peng, Roger D.

    2011-01-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this paper we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987–2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat wave risk estimation is sensitive to model choice. While model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. PMID:21447046
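
    The mechanics of model averaging can be sketched with BIC-based approximate posterior weights; this is a generic stand-in for the paper's fully Bayesian computation, and all numbers below are hypothetical.

        import numpy as np

        def bma_weights(bics):
            """Approximate posterior model probabilities,
            p(M_k | data) proportional to exp(-BIC_k / 2)."""
            b = np.asarray(bics, dtype=float)
            w = np.exp(-0.5 * (b - b.min()))        # shift for numerical stability
            return w / w.sum()

        # Hypothetical per-model heat wave relative risks and BICs for one city.
        rr = np.array([1.042, 1.057, 1.038, 1.066])
        bic = np.array([2012.3, 2010.1, 2013.8, 2011.5])
        w = bma_weights(bic)
        print("weights:", np.round(w, 3))
        print("BMA relative risk:", round(float(w @ rr), 4))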

  15. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
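
    The brute-force enumeration used above as a benchmark can be sketched in a few lines with statsmodels, whose ARIMA estimator also evaluates the Gaussian likelihood via Kalman filter recursions. The data are synthetic and the maximum orders are our assumptions.

        import itertools
        import warnings
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        e = rng.standard_normal(500)
        y = np.zeros(500)
        for t in range(2, 500):                     # simulate an ARMA(2,1) process
            y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

        best = None
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            for p, q in itertools.product(range(4), range(4)):
                try:
                    fit = ARIMA(y, order=(p, 0, q)).fit()
                except Exception:
                    continue                        # skip failed/non-invertible fits
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, p, q)
        print("best (AIC, p, q):", best)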

  16. Estimation of the leucine and histidine requirements for piglets fed a low-protein diet.

    PubMed

    Wessels, A G; Kluge, H; Mielenz, N; Corrent, E; Bartelt, J; Stangl, G I

    2016-11-01

    Reduction of the CP content in the diets of piglets requires supplementation with crystalline essential amino acids (AA). Data on the leucine (Leu) and histidine (His) requirements of young pigs fed low-CP diets are limited and have primarily been obtained from nonlinear models. However, these models do not consider the possible decline in appetite and growth that can occur when pigs are fed excessive amounts of AA such as Leu. Therefore, two dose-response studies were conducted to estimate the standardised ileal digestible (SID) Leu : lysine (Lys) and His : Lys required to optimise the growth performance of young pigs. In both studies, the average daily gain (ADG), average daily feed intake (ADFI) and gain-to-feed ratio (G : F) were determined during a 6-week period. To ensure that the diets had sub-limiting Lys levels, a preliminary Lys dose-response study was conducted. In the Leu study, 60 35-day-old piglets of both sexes were randomly assigned to one of five treatments and fed a low-CP diet (15%) with SID Leu : Lys levels of 83%, 94%, 104%, 115% or 125%. The His study used 120 31-day-old piglets of both sexes, which were allotted to one of five treatments and fed a low-CP diet (14%) with SID His : Lys levels of 22%, 26%, 30%, 34% or 38%. Linear broken-line, curvilinear-plateau and quadratic-function models were used for estimations of SID Leu : Lys and SID His : Lys. The minimum SID Leu : Lys level needed to maximise ADG, ADFI and G : F was, on average, 101% based on the linear broken-line and curvilinear-plateau models. Using the quadratic-function model, the minimum SID Leu : Lys level needed to maximise ADG, ADFI and G : F was 108%. Data obtained from the quadratic-function analysis further showed that a ±10% deviation from the identified Leu requirement was accompanied by a small decline in the ADG (-3%). The minimum SID His : Lys level needed to maximise ADG, ADFI and G : F was 27% and 28% using the linear broken-line and curvilinear-plateau models
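
    The linear broken-line (linear-plateau) estimation used above can be sketched with scipy. The response data here are hypothetical, chosen only to show how the breakpoint is read off as the estimated requirement.

        import numpy as np
        from scipy.optimize import curve_fit

        def linear_plateau(x, plateau, breakpoint, slope):
            """Response rises linearly up to the breakpoint, then levels off."""
            return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

        leu_lys = np.array([83.0, 94.0, 104.0, 115.0, 125.0])   # SID Leu:Lys, %
        adg = np.array([385.0, 428.0, 452.0, 450.0, 449.0])     # hypothetical ADG, g/d

        (plateau, breakpoint, slope), _ = curve_fit(
            linear_plateau, leu_lys, adg, p0=[450.0, 100.0, 3.0])
        print(f"estimated requirement (breakpoint): {breakpoint:.1f}% SID Leu:Lys")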

  17. Average slip rate at the transition zone on the plate interface in the Nankai subduction zone, Japan, estimated from short-term SSE catalog

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Kimura, T.

    2013-12-01

    Short-term slow slip events (S-SSEs) in the Nankai subduction zone, Japan, have been monitored mainly by borehole strainmeters and borehole accelerometers (tiltmeters). The scale of S-SSEs in this region is small (Mw 5-6), and therefore there were two problems in S-SSE identification and estimation of the fault model. (1) There were few observatories that could detect the crustal deformation associated with S-SSEs, so the reliability of the estimated fault model was low. (2) The signal associated with an S-SSE is relatively small, so it was difficult to detect S-SSEs from strainmeter and tiltmeter data alone. The former problem has become resolvable to some extent by integrating the borehole strainmeter, tiltmeter and groundwater (pore pressure) data of the National Institute of Advanced Industrial Science and Technology, tiltmeter data of the National Research Institute for Earth Science and Disaster Prevention and borehole strainmeter data of the Japan Meteorological Agency. For the latter, by using the redundant horizontal components of a multi-component strainmeter, which generally consists of four horizontal extensometers, it has become possible to extract tectonic deformation efficiently and detect an S-SSE using strainmeter data alone. Using the integrated data and this newly developed technique, we started to compile a catalog of S-SSEs in the Nankai subduction zone. For example, in central Mie Prefecture, we detected and estimated fault models for eight S-SSEs from January 2010 to September 2012. According to our estimates, the average slip rate of S-SSEs is 2.7 cm/yr. Ishida et al. [2013] estimated the slip rate as 2.6-3.0 cm/yr from deep low-frequency tremors, and this value is consistent with our estimation. Furthermore, the slip deficit rate in this region evaluated by the analysis of GPS data from 2001 to 2004 is 1.0 - 2.6 cm/yr [Kobayashi et al., 2006], and the convergence rate of the Philippine Sea plate in this region is estimated as 5.0 - 7.0 cm/yr. The difference

  18. Estimation of spatial soil moisture averages in a large gully of the Loess Plateau of China through statistical and modeling solutions

    NASA Astrophysics Data System (ADS)

    Gao, Xiaodong; Wu, Pute; Zhao, Xining; Wang, Jiawen; Shi, Yinguang; Zhang, Baoqing; Tian, Lei; Li, Hongbing

    2013-04-01

    Characterizing root-zone soil moisture patterns in large gullies is challenging, as relevant datasets are scarce and difficult to collect. Therefore, we explored several statistical and modeling approaches, mainly focusing on time stability analysis, for estimating spatial soil moisture averages from point observations and precipitation time series, using 3-year root-zone (0-20, 20-40, 40-60 and 60-80 cm) soil moisture datasets for a large gully in the Loess Plateau, China. We also developed a new metric, the root mean square error (RMSE) of estimated mean soil moisture, to identify time-stable locations. The time stability analysis revealed that different time-stable locations were identified at various depths. These locations were shown to be temporally robust, by cross-validation, and more likely to be located on ridges than in pipes or on plane surfaces. However, we found that MRD (mean relative difference) operators, used to predict spatial soil moisture averages by applying a constant offset, could not be transferred across root-zone layers for most time-stable locations. Random combination analysis revealed that at most four randomly selected locations were needed for accurate estimation of the mean soil moisture time series. Finally, a simple empirical model was developed to predict root-zone soil moisture dynamics in large gullies from precipitation time series. The results showed that the model reproduced root-zone soil moisture well in dry seasons, whereas relatively large estimation errors were observed during wet seasons. This implies that precipitation observations alone might not be enough to accurately predict root-zone soil moisture dynamics in large gullies, and that a time series of the soil moisture loss coefficient should be modeled and included.
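
    The backbone of the time stability analysis is the mean relative difference (MRD) per location and an error metric for single-location estimates of the spatial mean. The sketch below implements one common formulation on synthetic data; this is our reading of the approach, and the authors' exact RMSE metric may differ in detail.

        import numpy as np

        def time_stability(theta):
            """theta: (n_times, n_locations) soil moisture matrix.
            Returns the MRD per location and the RMSE of the spatial-mean
            estimate obtained from each location after offset correction."""
            spatial_mean = theta.mean(axis=1, keepdims=True)
            mrd = ((theta - spatial_mean) / spatial_mean).mean(axis=0)
            estimate = theta / (1.0 + mrd)          # constant-offset correction
            rmse = np.sqrt(((estimate - spatial_mean) ** 2).mean(axis=0))
            return mrd, rmse

        rng = np.random.default_rng(2)
        theta = 0.20 + 0.05 * rng.random((60, 12)) + 0.03 * rng.random(12)  # toy data
        mrd, rmse = time_stability(theta)
        print("most time-stable location:", int(np.argmin(rmse)))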

  19. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  20. Ground-water pumpage and artificial recharge estimates for calendar year 2000 and average annual natural recharge and interbasin flow by hydrographic area, Nevada

    USGS Publications Warehouse

    Lopes, Thomas J.; Evetts, David M.

    2004-01-01

    Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The highest ground-water pumpage in individual HAs was due to mining, in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) was the fourth and fifth highest in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth

  1. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  2. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  3. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  4. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...

  5. Estimated water requirements for the conventional flotation of copper ores

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2012-01-01

    This report provides a perspective on the amount of water used by a conventional copper flotation plant. Water is required for many activities at a mine-mill site, including ore production and beneficiation, dust and fire suppression, drinking and sanitation, and minesite reclamation. The water required to operate a flotation plant may outweigh all of the other uses of water at a mine site, and the need to maintain a water balance is critical for the plant to operate efficiently. Process water may be irretrievably lost or not immediately available for reuse in the beneficiation plant because it has been used in the production of backfill slurry from tailings to provide underground mine support; because it has been entrapped in the tailings stored in the tailings storage facility (TSF), evaporated from the TSF, or leaked from pipes and (or) the TSF; and because it has been retained as moisture in the concentrate. Water retained in the interstices of the tailings and the evaporation of water from the surface of the TSF are the two most significant contributors to water loss at a conventional flotation circuit facility.

  6. EURRECA-Estimating zinc requirements for deriving dietary reference values.

    PubMed

    Lowe, Nicola M; Dykes, Fiona C; Skinner, Anna-Louise; Patel, Sujata; Warthon-Medina, Marisol; Decsi, Tamás; Fekete, Katalin; Souverein, Olga W; Dullemeijer, Carla; Cavelaars, Adriënne E; Serra-Majem, Lluis; Nissensohn, Mariela; Bel, Silvia; Moreno, Luis A; Hermoso, Maria; Vollhardt, Christiane; Berti, Cristiana; Cetin, Irene; Gurinovic, Mirjana; Novakovic, Romana; Harvey, Linda J; Collings, Rachel; Hall-Moran, Victoria

    2013-01-01

    Zinc was selected as a priority micronutrient for EURRECA, because there is significant heterogeneity in the Dietary Reference Values (DRVs) across Europe. In addition, the prevalence of inadequate zinc intakes was thought to be high among all population groups worldwide, and the public health concern is considerable. In accordance with the EURRECA consortium principles and protocols, a series of literature reviews were undertaken in order to develop best practice guidelines for assessing dietary zinc intake and zinc status. These were incorporated into subsequent literature search strategies and protocols for studies investigating the relationships between zinc intake, status and health, as well as studies relating to the factorial approach (including bioavailability) for setting dietary recommendations. EMBASE (Ovid), Cochrane Library CENTRAL, and MEDLINE (Ovid) databases were searched for studies published up to February 2010 and collated into a series of Endnote databases that are available for the use of future DRV panels. Meta-analyses of data extracted from these publications were performed where possible in order to address specific questions relating to factors affecting dietary recommendations. This review has highlighted the need for more high quality studies to address gaps in current knowledge, in particular the continued search for a reliable biomarker of zinc status and the influence of genetic polymorphisms on individual dietary requirements. In addition, there is a need to further develop models of the effect of dietary inhibitors of zinc absorption and their impact on population dietary zinc requirements.

  7. Feasibility of non-invasive temperature estimation by the assessment of the average gray-level content of B-mode images.

    PubMed

    Teixeira, C A; Alvarenga, A V; Cortela, G; von Krüger, M A; Pereira, W C A

    2014-08-01

    This paper assesses the potential of the average gray level (AVGL) of ultrasonographic (B-mode) images for estimating temperature changes in time and space in a non-invasive way. Experiments were conducted involving a homogeneous bovine muscle sample, and temperature variations were induced by an automatic temperature-regulated water bath and by therapeutic ultrasound. B-mode images and temperatures were recorded simultaneously. After data collection, regions of interest (ROIs) were defined, and the average gray-level variation computed. For the selected ROIs, the AVGL-temperature relations were determined and studied. Based on uniformly distributed image partitions, two-dimensional temperature maps were developed for homogeneous regions. The color-coded temperature estimates were first obtained from an AVGL-temperature relation extracted from a specific partition (where temperature was independently measured by a thermocouple), and then extended to the other partitions. This procedure aimed to analyze the AVGL sensitivity to changes not only in time but also in space. Linear and quadratic relations were obtained depending on the heating modality. We found that the AVGL-temperature relation is reproducible over successive heating and cooling cycles. One important result was that the AVGL-temperature relations extracted from one region could be used to estimate temperature in other regions (errors below 0.5 °C) when therapeutic ultrasound was applied as the heating source. Based on this result, two-dimensional temperature maps were developed for samples heated in the water bath and also by therapeutic ultrasound. The maps were obtained from a linear relation for water bath heating, and from a quadratic model for therapeutic ultrasound heating. The maps for the water bath experiment reproduce an acceptable heating/cooling pattern, and for the therapeutic ultrasound heating experiment, the maps seem to reproduce temperature profiles

  8. 19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... withdrawal for consumption in the following situations may be made without depositing the estimated Customs... or manufacturer may enter or withdraw for consumption cigars, cigarettes, and cigarette papers and... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of...

  9. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the ¹³⁴Cs/¹³⁷Cs ratio method

    SciTech Connect

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method for measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured ¹³⁴Cs/¹³⁷Cs ratio from the contaminated soil is 0.996±0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of damaged fuels is approximately 17.2±1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluated values of the ¹³⁴Cs/¹³⁷Cs ratio with the same evaluated nuclear data library (ENDF-B/VII.0). The void fraction effect in the depletion calculation has a major impact on the ¹³⁴Cs/¹³⁷Cs ratio compared with the differences between JENDL-4.0 and ENDF-B/VII.0. (authors)
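
    The ratio method reduces to two steps that are easy to sketch: decay-correct the measured activity ratio back to the shutdown date, then interpolate burnup from a depletion-code calibration curve. The half-lives are physical constants; the calibration table below is an illustrative placeholder for SRAC/SCALE/MVP-BURN output, not values from the study.

        import numpy as np

        T_HALF_CS134 = 2.0652    # years
        T_HALF_CS137 = 30.08     # years

        def ratio_at_shutdown(ratio_measured, dt_years):
            """Correct a 134Cs/137Cs activity ratio measured dt_years after
            shutdown back to the shutdown date (134Cs decays faster)."""
            lam134 = np.log(2.0) / T_HALF_CS134
            lam137 = np.log(2.0) / T_HALF_CS137
            return ratio_measured * np.exp((lam134 - lam137) * dt_years)

        # Hypothetical calibration: ratio at shutdown vs burnup (GWd/tHM).
        burnup_grid = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
        ratio_grid = np.array([0.35, 0.62, 0.88, 1.12, 1.33])

        r0 = ratio_at_shutdown(0.90, dt_years=0.5)   # measured 6 months later
        print(round(float(np.interp(r0, ratio_grid, burnup_grid)), 1), "GWd/tHM")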

  10. An ensemble average method to estimate absolute TEC using radio beacon-based differential phase measurements: Applicability to regions of large latitudinal gradients in plasma density

    NASA Astrophysics Data System (ADS)

    Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.

    2014-12-01

    A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. This system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble average method for converting relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with large spatial gradients. This method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC during different local times using GRBR measurements are also presented, which demonstrate the potential of radio beacon measurements in capturing the large-scale plasma transport processes in the low-latitude ionosphere.

  11. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation.

  12. Grid-point requirements for large eddy simulation: Chapman's estimates revisited

    NASA Astrophysics Data System (ADS)

    Choi, Haecheon; Moin, Parviz

    2012-01-01

    Resolution requirements for large eddy simulation (LES), estimated by Chapman [AIAA J. 17, 1293 (1979)], are modified using accurate formulae for high Reynolds number boundary layer flow. The new estimates indicate that the number of grid points (N) required for wall-modeled LES is proportional to Re_Lx, but a wall-resolving LES requires N ~ Re_Lx^(13/7), where Lx is the flat-plate length in the streamwise direction and Re_Lx is the Reynolds number based on it. On the other hand, direct numerical simulation, resolving the Kolmogorov length scale, requires N ~ Re_Lx^(37/14).
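
    The practical consequence of the revised scalings is easiest to see numerically. The loop below evaluates the three power laws with unit prefactors, so only the growth with Re_Lx, not the absolute counts, is meaningful:

        for re_lx in (1e6, 1e7, 1e8):
            wm = re_lx                 # wall-modeled LES: N ~ Re_Lx
            wr = re_lx ** (13 / 7)     # wall-resolving LES: N ~ Re_Lx^(13/7)
            dns = re_lx ** (37 / 14)   # DNS: N ~ Re_Lx^(37/14)
            print(f"Re_Lx={re_lx:.0e}: WM~{wm:.1e}, WR~{wr:.1e}, DNS~{dns:.1e}")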

  13. Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users

    ERIC Educational Resources Information Center

    Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu

    2004-01-01

    Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…

  14. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission rates from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do so, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each), and a set of sensors placed at the centre of each field at several heights as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like dispersion model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
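
    Stripped of the dispersion modelling itself, the multi-source inversion is a linear least-squares problem: time-averaged concentrations are linear in the source strengths, C = D Q, with D the matrix of dispersion-model transfer coefficients. A minimal sketch with random placeholders standing in for WindTrax/FIDES output:

        import numpy as np

        rng = np.random.default_rng(3)
        n_sensors, n_sources = 13, 9        # e.g. 9 field sensors + 4 distant ones

        D = rng.uniform(0.01, 0.2, size=(n_sensors, n_sources))  # transfer coefficients
        q_true = rng.uniform(0.5, 3.0, size=n_sources)           # "true" NH3 emissions
        c_obs = D @ q_true + 0.01 * rng.standard_normal(n_sensors)  # noisy averaged concentrations

        q_hat, *_ = np.linalg.lstsq(D, c_obs, rcond=None)
        print("true:     ", np.round(q_true, 2))
        print("recovered:", np.round(q_hat, 2))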

  15. An estimation of the protein requirements of Iberian x Duroc 50:50 crossbred growing pigs.

    PubMed

    Rojas-Cano, M L; Ruiz-Guerrero, V; Lara, L; Nieto, R; Aguilera, J F

    2014-04-01

    The effects of dietary protein content on the rates of gain and protein deposition were studied in Iberian (IB) × Duroc (DU) 50:50 barrows at 2 stages of growth [10.6 ± 0.2 (n = 28) and 60.0 ± 0.4 (n = 24) kg initial BW]. Two feeding, digestibility, and N-balance trials were performed. At each stage of growth, pigs were allocated to individual pens and given restrictedly (at 0.9 × ad libitum intake) one of 4 pelleted diets of similar energy concentration (13.8 to 14.5 MJ ME/kg DM), formulated to provide 4 different (ideal) CP contents (236, 223, 208, and 184 g CP/kg DM in the first trial, and 204, 180, 143, and 114 g CP/kg DM in the second trial). The feed allowance was offered in 2 equal daily meals. The average concentration of Lys was 6.59 ± 0.13 g/100 g CP for all diets. At both stages of growth, average daily BW gain and gain-to-feed ratio were unchanged by increases in dietary CP content (477 ± 7 and 1,088 ± 20 g, and 0.475 ± 0.027 and 0.340 ± 0.113, respectively, in the first and second trial). In pigs growing from 10 to 27 kg BW, the average rate of N retention increased linearly (P < 0.01) with increasing protein content in the diet up to a break point, so a linear-plateau dose response was observed. Pigs fed diets providing 208 to 236 g/kg DM did not differ in rate of protein deposition (PD). A maximum value of 87 (13.93 g N retained × 6.25) g PD/d was obtained when the diet supplied at least 208 g CP/kg DM. The broken-line regression analysis estimated dietary CP requirements at 211 g ideal CP (15.2 g total Lys)/kg DM. In the fattening pigs, there was a quadratic response (P < 0.01) in the rate of N retention as dietary CP content increased. Maximum N retention (18.7 g/d) was estimated from the first derivative of the function relating observed N retention (g/d) to dietary CP content (g/kg DM). This maximum value would be obtained by feeding a diet containing 185 g ideal CP (13.3 g total Lys)/kg DM and represents the maximum capacity

  16. Estimating crop water requirements of a command area using multispectral video imagery and geographic information systems

    NASA Astrophysics Data System (ADS)

    Ahmed, Rashid Hassan

    This research focused on the potential use of multispectral video remote sensing for irrigation water management. Two methods for estimating crop evapotranspiration were investigated, the energy balance estimation from multispectral video imagery and use of reflectance-based crop coefficients from multitemporal multispectral video imagery. The energy balance method was based on estimating net radiation, and soil and sensible heat fluxes, using input from the multispectral video imagery. The latent heat flux was estimated as a residual. The results were compared to surface heat fluxes measured on the ground. The net radiation was estimated within 5% of the measured values. However, the estimates of sensible and soil heat fluxes were not consistent with the measured values. This discrepancy was attributed to the methods for estimating the two fluxes. The degree of uncertainty in the parameters used in the methods made their application too limited for extrapolation to large agricultural areas. The second method used reflectance-based crop coefficients developed from the multispectral video imagery using alfalfa as a reference crop. The daily evapotranspiration from alfalfa was estimated using a nearby weather station. With the crop coefficients known for a canal command area, irrigation scheduling was simulated using the soil moisture balance method. The estimated soil moisture matched the actual soil moisture measured using the neutron probe method. Also, the overall water requirement estimated by this method was found to be in close agreement with the canal water deliveries. The crop coefficient method has great potential for irrigation management of large agricultural areas.

  17. ESTIMATED DAILY AVERAGE PER CAPITA WATER INGESTION BY CHILD AND ADULT AGE CATEGORIES BASED ON USDA'S 1994-96 AND 1998 CONTINUING SURVEY OF FOOD INTAKES BY INDIVIDUALS (JOURNAL ARTICLE)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...

  18. [Estimation model for water requirement of greenhouse tomato under drip irrigation].

    PubMed

    Liu, Hao; Sun, Jing-Sheng; Liang, Yuan-Yuan; Wang, Cong-Cong; Duan, Ai-Wang

    2011-05-01

    Based on the modified Penman-Monteith equation, and through analysis of the relationship between the crop coefficient and cumulative temperature, a new model for estimating the water requirement of greenhouse tomato under drip irrigation was built. The model was validated with measured data of plant transpiration and soil evaporation from May 2-13 (flowering and fruit-developing stage) and June 9-20 (fruit-maturing stage), 2009. The modified Penman-Monteith equation was suitable for estimating reference evapotranspiration (ET0) in the greenhouse. The crop coefficient of greenhouse tomato was well described by a quadratic function of cumulative temperature. The mean relative error between measured and estimated values was less than 10%, indicating that the model can estimate the water requirement of greenhouse tomato under drip irrigation.
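
    In code, the model described above is compact: a quadratic crop coefficient driven by cumulative temperature, multiplied by the greenhouse reference evapotranspiration. The coefficients below are illustrative placeholders, not the fitted values from the study.

        import numpy as np

        def crop_coefficient(gdd, a=-1.2e-7, b=4.5e-4, c=0.35):
            """Quadratic Kc as a function of cumulative temperature (degree days);
            a, b, c are assumed coefficients for illustration only."""
            return a * gdd**2 + b * gdd + c

        def tomato_water_requirement(et0, gdd):
            """ETc = Kc(GDD) * ET0; ET0 (mm/day) comes from the modified
            Penman-Monteith equation, computed upstream."""
            return crop_coefficient(np.asarray(gdd)) * np.asarray(et0)

        print(tomato_water_requirement(et0=[3.2, 3.5], gdd=[800.0, 900.0]))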

  19. Method of Estimating the Principal Characteristics of an Infantry Fighting Vehicle from Basic Performance Requirements

    DTIC Science & Technology

    2013-08-01

    IDA Paper P-5032, Institute for Defense Analyses, by David R. Gillingham, with Prashant R. Patel as project leader; technical review by Paul M. Kodzwa, David A. Sparrow, and Jeremy A. Teichman. The paper presents a method of estimating the principal characteristics of an infantry fighting vehicle from basic performance requirements.

  20. [Estimating the impacts of future climate change on water requirement and water deficit of winter wheat in Henan Province, China].

    PubMed

    Ji, Xing-jie; Cheng, Lin; Fang, Wen-song

    2015-09-01

    Based on an analysis of the water requirement and water deficit during the development stages of winter wheat in the recent 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation Service method, and the water requirement (ETc) was estimated using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic factors affecting ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentages of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average values from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the variation of ETc and WD in the future under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were spatially heterogeneous because of differences in geographical and climatic environments. A possible water resource deficiency may exist in Henan Province in the future.

  1. Maintenance nitrogen requirement of the turkey breeder hen with an estimate of associated essential amino acid needs.

    PubMed

    Moran, E T; Ferket, P R; Blackman, J R

    1983-09-01

    Nonproducing, small-type breeder hens in excess of 65 weeks of age were used to represent the maintenance state. All birds had been in laying cages since 30 weeks and accustomed to 16 hr of 70 lx lighting at 16 °C. Nitrogen (N) balance was performed in metabolism cages under the same conditions. Ad libitum intake of a common breeder ration led to an intake of ca. 47 kcal metabolizable energy (ME)/kg body weight (BW)/day, which was considered to represent the maintenance energy requirement. Nitrogen retained while consuming this feed averaged 172 mg/kg BW/day. Force-feeding a N-free diet to satisfy the maintenance energy requirement resulted in an 85 mg N/kg BW/day endogenous loss. The total maintenance nitrogen requirement was considered to approximate 257 mg/kg BW/day. Nitrogen retention after force-feeding corn-soybean meal rations having a progressive protein content indicated that the associated amino acids were more efficient in satisfying the endogenous than the total N requirement. A model that estimated maintenance amino acid requirements was assembled by combining the relative concentrations found in muscle and feathers to represent endogenous and retained N, respectively. For the most part, model values agreed with published results for the rooster; however, verification in balance studies was less than successful, which was believed to be attributable to hen variation in feather cover and protein reserves.

  2. Determining storm sampling requirements for improving precision of annual load estimates of nutrients from a small forested watershed.

    PubMed

    Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi

    2012-08-01

    This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals over 24 h in 20 events, as well as weekly in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solids (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variance (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The differences in the number of storm events required for precise load estimates between dissolved and particulate nutrients were attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling resulted in fewer storm events than the simulation based on random sampling and showed that only three storm events were required to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design considering discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
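
    The bootstrap logic is straightforward to sketch: resample small sets of storm events, refit the cumulative load-discharge regression, and track the spread of the resulting annual load estimates. The data and the annual discharge value below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)
        sum_q = rng.uniform(5.0, 120.0, size=20)                 # per-event cumulative discharge
        sum_l = 0.012 * sum_q + rng.normal(0.0, 0.05, size=20)   # per-event cumulative DN load
        annual_q = 900.0                                         # assumed annual discharge

        def annual_load(q, l):
            """Annual load from the slope of the cumulative L-Q regression."""
            return np.polyfit(q, l, 1)[0] * annual_q

        for n in (3, 5, 10):                     # number of storm events per dataset
            est = []
            for _ in range(100):
                idx = rng.choice(sum_q.size, size=n, replace=False)
                est.append(annual_load(sum_q[idx], sum_l[idx]))
            print(f"{n} events: CV = {100 * np.std(est) / np.mean(est):.1f}%")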

  3. Use Of Crop Canopy Size To Estimate Water Requirements Of Vegetable Crops

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Planting time, plant density, variety, and cultural practices vary widely for horticultural crops. It is difficult to estimate crop water requirements for crops with these variations. Canopy size, or fractional ground cover, as an indicator of intercepted sunlight, is related to crop water use. We...

  4. Estimation of Managerial and Technical Personnel Requirements in Selected Industries. Training for Industry Series, No. 2.

    ERIC Educational Resources Information Center

    United Nations Industrial Development Organization, Vienna (Austria).

    The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…

  5. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    ERIC Educational Resources Information Center

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (ρ²); (2) the population increase in ρ²; and (3) the…

  6. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Internal Revenue Service, Department of the Treasury; Temporary Income Tax Regulations under the Economic Recovery Tax Act of 1981; § 5c.1305-1, special income averaging rules.

  7. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost-estimate data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated with the proprietary parametric cost model (PCM), with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
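
    As an illustration of the parametric approach described above, a cost-estimating relationship (CER) typically expresses subsystem cost as a power law in a design parameter and rolls the results up by WBS element. The sketch below uses purely hypothetical masses and coefficients; the study's proprietary PCM is not public.

```python
# Sketch of a parametric cost-estimating relationship (CER): subsystem cost as
# a power law in a design parameter, rolled up by WBS element. All masses and
# coefficients are hypothetical; the study's proprietary PCM is not public.
subsystems = {                  # WBS element: (mass_kg, a, b), cost = a * mass^b
    "aerobrake":   (3200.0, 0.9, 0.72),
    "crew module": (5100.0, 1.6, 0.75),
    "tank module": (7400.0, 0.5, 0.70),
    "lander":      (4300.0, 1.2, 0.74),
}

total = sum(a * mass**b for mass, a, b in subsystems.values())
for name, (mass, a, b) in subsystems.items():
    print(f"{name:12s} {a * mass**b:8.1f}")
print(f"{'total':12s} {total:8.1f}  (arbitrary cost units)")
```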

  8. The effects of the variations in sea surface temperature and atmospheric stability in the estimation of average wind speed by SEASAT-SASS

    NASA Technical Reports Server (NTRS)

    Liu, W. T.

    1984-01-01

    The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.

  9. The Balance Super Learner: A robust adaptation of the Super Learner to improve estimation of the average treatment effect in the treated based on propensity score matching.

    PubMed

    Pirracchio, Romain; Carone, Marco

    2016-01-01

    Consistency of propensity score estimators relies on correct specification of the propensity score model. The propensity score is frequently estimated using a main effect logistic regression. It has recently been shown that the use of ensemble machine learning algorithms, such as the Super Learner, could improve covariate balance and reduce bias in a meaningful manner in the case of serious model misspecification for treatment assignment. However, the loss functions normally used by the Super Learner may not be appropriate for propensity score estimation since the goal in this problem is not to optimize propensity score prediction but rather to achieve the best possible balance in the covariate distribution between treatment groups. In a simulation study, we evaluated the benefit of a modification of the Super Learner for propensity score estimation, geared toward achieving covariate balance between the treated and untreated after matching on the propensity score. Our simulation study included six different scenarios characterized by various degrees of deviation from the usual main term logistic model for the true propensity score and outcome as well as the presence (or not) of instrumental variables. Our results suggest that the use of this adapted Super Learner to estimate the propensity score can further improve the robustness of propensity score matching estimators.
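
    A minimal sketch of the idea, not the authors' exact algorithm: rather than scoring candidate propensity-score models by prediction loss, score each convex combination of their predictions by the covariate imbalance left after 1:1 nearest-neighbour matching, and keep the best-balancing weights.

```python
# Sketch (not the authors' exact algorithm): score convex combinations of
# candidate propensity-score predictions by the covariate imbalance left after
# 1:1 nearest-neighbour matching, and keep the best-balancing weights.
import numpy as np
from itertools import product

def smd(x_t, x_c):
    """Absolute standardized mean difference for one covariate."""
    s = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
    return 0.0 if s == 0 else abs(x_t.mean() - x_c.mean()) / s

def matched_imbalance(ps, treat, X):
    """Mean |SMD| across covariates after nearest-neighbour PS matching."""
    t_idx, c_idx = np.where(treat == 1)[0], np.where(treat == 0)[0]
    nn = np.array([c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx])
    X_t, X_c = X[t_idx], X[nn]
    return np.mean([smd(X_t[:, j], X_c[:, j]) for j in range(X.shape[1])])

def balance_weights(cand_ps, treat, X, step=0.25):
    """Grid-search convex weights over the candidate PS columns (cand_ps)."""
    best_w, best_val = None, np.inf
    for w in product(np.arange(0.0, 1.0 + 1e-9, step), repeat=cand_ps.shape[1]):
        if abs(sum(w) - 1.0) > 1e-9:
            continue                      # keep only convex combinations
        val = matched_imbalance(cand_ps @ np.array(w), treat, X)
        if val < best_val:
            best_w, best_val = np.array(w), val
    return best_w, best_val

# Toy demo: random covariates, treatment, and three stand-in candidate models
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
treat = (rng.random(200) < 0.4).astype(int)
cand_ps = rng.random((200, 3))            # stand-ins for fitted PS predictions
print(balance_weights(cand_ps, treat, X))
```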

  10. Estimates of the energy deficit required to reverse the trend in childhood obesity in Australian schoolchildren

    PubMed Central

    Davey, Rachel; de Castella, F. Robert

    2015-01-01

    Abstract Objectives: To estimate: 1) daily energy deficit required to reduce the weight of overweight children to within normal range; 2) time required to reach normal weight for a proposed achievable (small) target energy deficit of 0.42 MJ/day; 3) impact that such an effect may have on prevalence of childhood overweight. Methods: Body mass index and fitness were measured in 31,424 Australian school children aged between 4.5 and 15 years. The daily energy deficit required to reduce weight to within normal range for the 7,747 (24.7%) overweight children was estimated. Further, for a proposed achievable target energy deficit of 0.42 MJ/day, the time required to reach normal weight was estimated. Results: About 18% of children were overweight and 6.6% obese; 69% were either sedentary or lightly active. If an energy deficit of 0.42 MJ/day could be achieved, 60% of overweight children would reach normal weight and the current prevalence of overweight of 24.7% (24.2%–25.1%) would be reduced to 9.2% (8.9%–9.6%) within about 15 months. Conclusions: The prevalence of overweight in Australian school children could be reduced significantly within one year if even a small daily energy deficit could be achieved by children currently classified as overweight or obese. PMID:26561382
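
    The time-to-normal-weight arithmetic can be checked with a back-of-envelope calculation. The excess weight and the energy density of weight loss used below are modelling assumptions for illustration, not values taken from the paper.

```python
# Back-of-envelope check of the time-to-normal-weight logic. The excess weight
# and the energy density of weight loss (~32.2 MJ/kg, a common modelling value)
# are assumptions for illustration, not values taken from the paper.
EXCESS_KG = 3.8            # hypothetical excess weight of one overweight child
DEFICIT_MJ_PER_DAY = 0.42  # the paper's proposed achievable daily deficit
ENERGY_PER_KG_MJ = 32.2    # assumed energy density of adipose-dominated loss

days = EXCESS_KG * ENERGY_PER_KG_MJ / DEFICIT_MJ_PER_DAY
print(f"~{days:.0f} days (~{days / 30.4:.1f} months) to lose {EXCESS_KG} kg")
```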

  11. Spent fuel disassembly hardware and other non-fuel bearing components: characterization, disposal cost estimates, and proposed repository acceptance requirements

    SciTech Connect

    Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.

    1986-10-01

    There are two categories of waste considered in this report. The first is the spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly. This includes end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per average metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, with most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste. The relatively large contribution of waste hardware from BWR fuel consists mostly of non-fuel-bearing components, primarily the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.

  12. Technical Note: On the Matt-Shuttleworth approach to estimate crop water requirements

    NASA Astrophysics Data System (ADS)

    Lhomme, J. P.; Boudhina, N.; Masmoudi, M. M.

    2014-11-01

    The Matt-Shuttleworth method provides a way to make a one-step estimate of crop water requirements with the Penman-Monteith equation by translating the crop coefficients, commonly available in United Nations Food and Agriculture Organization (FAO) publications, into equivalent surface resistances. The methodology is based upon the theoretical relationship linking crop surface resistance to a crop coefficient and involves the simplifying assumption that the reference crop evapotranspiration (ET0) is equal to the Priestley-Taylor estimate with a fixed coefficient of 1.26. This assumption, used to eliminate the dependence of surface resistance on certain weather variables, is questionable; numerical simulations show that it can lead to substantial differences between the true value of surface resistance and its estimate. Consequently, the basic relationship between surface resistance and crop coefficient, without any assumption, appears to be more appropriate for inferring crop surface resistance, despite the interference of weather variables.
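
    The basic relationship the authors favour can be sketched by inverting the Penman-Monteith equation for the surface resistance that reproduces ET = Kc x ET0. The one-step inversion below uses illustrative weather inputs and is not the full Matt-Shuttleworth procedure.

```python
# Sketch: invert Penman-Monteith for the surface resistance that reproduces
# ET = Kc * ET0. One-step inversion with illustrative inputs; not the full
# Matt-Shuttleworth procedure.
RHO_CP = 1.2 * 1013.0        # air density (kg/m3) x specific heat (J/kg/K)
MM_DAY_TO_W_M2 = 28.4        # latent heat flux equivalent of 1 mm/day

def surface_resistance(kc, et0_mm_day, rn_minus_g, vpd_pa, delta_pa_k,
                       gamma_pa_k, ra_s_m):
    """Surface resistance (s/m) such that Penman-Monteith gives Kc * ET0.

    Solves lam*ET = (D*(Rn-G) + rho*cp*VPD/ra) / (D + g*(1 + rs/ra)) for rs,
    with fluxes in W/m2 and pressures in Pa.
    """
    lam_et = kc * et0_mm_day * MM_DAY_TO_W_M2              # target flux, W/m2
    num = delta_pa_k * rn_minus_g + RHO_CP * vpd_pa / ra_s_m
    return ra_s_m * (num / lam_et - delta_pa_k - gamma_pa_k) / gamma_pa_k

# e.g. mid-season Kc = 1.15, ET0 = 6 mm/day, Rn - G = 180 W/m2, VPD = 1.5 kPa,
# Delta = 190 Pa/K, gamma = 66 Pa/K, ra = 40 s/m
rs = surface_resistance(1.15, 6.0, 180.0, 1500.0, 190.0, 66.0, 40.0)
print(f"equivalent surface resistance: {rs:.0f} s/m")
```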

  13. An approach to estimating human resource requirements to achieve the Millennium Development Goals.

    PubMed

    Dreesch, Norbert; Dolea, Carmen; Dal Poz, Mario R; Goubarev, Alexandre; Adams, Orvill; Aregawi, Maru; Bergstrom, Karin; Fogstad, Helga; Sheratt, Della; Linkins, Jennifer; Scherpbier, Robert; Youssef-Fox, Mayada

    2005-09-01

    In the context of the Millennium Development Goals, human resources represent the most critical constraint in achieving the targets. Therefore, it is important for health planners and decision-makers to identify the human resources required to meet those targets. Planning the human resources for health is a complex process. It needs to consider both the technical aspects related to estimating the number, skills and distribution of health personnel for meeting population health needs, and the political implications, values and choices that health policy- and decision-makers need to make within given resource limitations. After presenting an overview of the various methods for planning human resources for health, with their advantages and limitations, this paper proposes a methodological approach to estimating the requirements of human resources to achieve the goals set forth by the Millennium Declaration. The method builds on the service-target approach and functional job analysis.

  14. Technical note: A procedure to estimate glucose requirements of an activated immune system in steers.

    PubMed

    Kvidera, S K; Horst, E A; Abuajamieh, M; Mayorga, E J; Sanz Fernandez, M V; Baumgard, L H

    2016-11-01

    Infection and inflammation impede efficient animal productivity. The activated immune system ostensibly requires large amounts of energy and nutrients otherwise destined for synthesis of agriculturally relevant products. Accurately determining the immune system's in vivo energy needs is difficult, but a better understanding may facilitate developing nutritional strategies to maximize productivity. The study objective was to estimate immune system glucose requirements following an i.v. lipopolysaccharide (LPS) challenge. Holstein steers (148 ± 9 kg; n = 15) were jugular catheterized bilaterally and assigned to 1 of 3 i.v.

  15. An examination of population exposure to traffic related air pollution: Comparing spatially and temporally resolved estimates against long-term average exposures at the home location.

    PubMed

    Shekarrizfard, Maryam; Faghih-Imani, Ahmadreza; Hatzopoulou, Marianne

    2016-05-01

    Air pollution in metropolitan areas is mainly caused by traffic emissions. This study presents the development of a model chain consisting of a transportation model, an emissions model, and an atmospheric dispersion model, applied to dynamically evaluate individuals' exposure to air pollution by intersecting daily trajectories of individuals and hourly spatial variations of air pollution across the study domain. This dynamic approach is implemented in Montreal, Canada to highlight the advantages of the method for exposure analysis. The results for nitrogen dioxide (NO2), a marker of traffic related air pollution, reveal significant differences when relying on spatially and temporally resolved concentrations combined with individuals' daily trajectories compared to a long-term average NO2 concentration at the home location. We observe that NO2 exposures based on trips and activity locations visited throughout the day were often higher than daily NO2 concentrations at the home location. The percentage of all individuals with a lower 24-hour daily average at home compared to their 24-hour mobility exposure is 89.6%, and 31% of individuals increased their exposure by more than 10% by leaving home. On average, individuals increased their exposure by 23-44% while commuting and conducting activities out of home (compared to the daily concentration at home), regardless of air quality at their home location. We conclude that our proposed dynamic modelling approach significantly improves the results of traditional methods that rely on a long-term average concentration at the home location, and we shed light on the importance of using individual daily trajectories to understand exposure.
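
    The contrast between home-based and mobility-based exposure reduces to a time-weighted average over visited locations. The sketch below uses invented hours and NO2 concentrations purely to illustrate the calculation.

```python
# Sketch: 24-h exposure as a time-weighted average over visited locations,
# compared with the daily mean at home (hours and concentrations invented).
import numpy as np

hours    = np.array([10.0, 2.0, 8.0, 4.0])    # home, commute, work, other
no2_ppb  = np.array([18.0, 35.0, 27.0, 22.0]) # mean NO2 at each location
home_24h = 18.0                               # 24-h mean at the home location

mobility = (hours * no2_ppb).sum() / hours.sum()
print(f"mobility-based: {mobility:.1f} ppb "
      f"(+{100 * (mobility / home_24h - 1):.0f}% vs. home-based)")
```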

  16. Bioenergetics model for estimating food requirements of female Pacific walruses (Odobenus rosmarus divergens)

    USGS Publications Warehouse

    Noren, S.R.; Udevitz, M.S.; Jay, C.V.

    2012-01-01

    Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–19% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.

  17. Model requirements for estimating and reporting soil C stock changes in national greenhouse gas inventories

    NASA Astrophysics Data System (ADS)

    Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent

    2016-04-01

    Globally, soils are the largest terrestrial store of carbon (C) and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at large scales are important as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC and also under the Kyoto Protocol, C balances must be reported annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representative at national scales. The use of models to obtain relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed but few models are suitable for a consistent application across larger scales. Consistency is often limited by the lack of input data for models, which can result in biased estimates and, thus, the reporting criteria of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet criteria established for GHG reporting under the UNFCCC including accuracy, consistency, comparability, completeness, and transparency, we identified the suitability of commonly used simulation models for estimating annual C stock changes in mineral soil in European forests. Among six discussed simulation models we found a clear trend toward models that provide quantitatively precise site-specific estimates but may lead to biased estimates across space. To meet reporting needs for national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests

  18. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  19. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  20. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  1. 26 CFR 5c.1305-1 - Special income averaging rules for taxpayers otherwise required to compute tax in accordance with...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) TEMPORARY INCOME TAX REGULATIONS UNDER THE ECONOMIC RECOVERY TAX ACT OF 1981 § 5c.1305-1 Special income averaging..., 26 U.S.C. 1305; 68A Stat. 917, 26 U.S.C. 7805); secs. 508(c) and 509 of the Economic Recovery Tax...

  2. Using Complier Average Causal Effect Estimation to Determine the Impacts of the Good Behavior Game Preventive Intervention on Teacher Implementers.

    PubMed

    Berg, Juliette K; Bradshaw, Catherine P; Jo, Booil; Ialongo, Nicholas S

    2016-05-20

    Complier average causal effect (CACE) analysis is a causal inference approach that accounts for levels of teacher implementation compliance. In the current study, CACE was used to examine one-year impacts of the PAX Good Behavior Game (PAX GBG) and Promoting Alternative Thinking Strategies (PATHS) on teacher efficacy and burnout. Teachers in 27 elementary schools were randomized to PAX GBG, an integration of PAX GBG and PATHS, or a control condition. There were positive overall effects on teachers' efficacy beliefs, but high-implementing teachers also reported increases in burnout across the school year. The CACE approach may offer new information not captured using a traditional intent-to-treat approach.

  3. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
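
    A compact statement of the distinction this article draws: for two legs of equal distance d driven at speeds v1 and v2, the average rate is total distance over total time, i.e. the harmonic mean of the speeds rather than their arithmetic mean.

```latex
% Average rate over two equal-distance legs: total distance over total time,
% i.e. the harmonic mean of the leg speeds, not their arithmetic mean.
\bar{v} = \frac{2d}{d/v_1 + d/v_2} = \frac{2 v_1 v_2}{v_1 + v_2},
\qquad \text{e.g. } v_1 = 30,\ v_2 = 60 \;\Rightarrow\; \bar{v} = 40 \neq 45.
```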

  4. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
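
    The quoted exceedance probabilities follow directly from the Poisson model P(N >= 1) = 1 - e^(-lambda). The per-decade rates below are back-solved from the stated probabilities for illustration; the paper derives them from the eruption record.

```python
# Sketch reproducing the quoted exceedance probabilities from the Poisson
# model. The per-decade rates are back-solved from the stated probabilities
# for illustration; the paper derives them from the eruption record.
from math import exp

def p_at_least_one(lam):
    """P(N >= 1) for N ~ Poisson(lam): 1 - P(N = 0)."""
    return 1.0 - exp(-lam)

for vei, lam in [(">=4", 7.0), (">=5", 0.67), (">=6", 0.20)]:
    print(f"VEI {vei}: P(>=1 eruption per decade) = {p_at_least_one(lam):.2f}")
```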

  5. Studies on the tryptophan requirement of lactating sows. Part 2: Estimation of the tryptophan requirement by physiological criteria.

    PubMed

    Pampuch, F G; Paulicks, B R; Roth-Maier, D A

    2006-12-01

    Mature sows were fed for a total of 72 lactations with diets which provided an adequate supply of energy and nutrients except for tryptophan (Trp). By supplementing a basal diet [native 1.2 g Trp/kg, equivalent to 0.8 g apparent ileal digestible (AID) Trp or 0.9 g true ileal digestible (TID) Trp] with L-Trp, five further diets (2-6) containing 1.5-4.2 g Trp/kg were formulated. The dietary Trp content had no effect on amino acid contents in milk on days 20 and 21 of lactation, but Trp in blood plasma on day 28 of lactation reflected the alimentary Trp supply with an increase from 2.74 +/- 1.14 mg/l (diet 1) to 23.91 +/- 7.53 mg/l (diet 6; p < 0.001). There were no directional differences between the diets with regard to the other amino acids. Concentrations of urea in milk and blood were higher with diet 1 (211 and 272 mg/l, respectively) than with diets 3-6 (183 and 227 mg/l, respectively). Serotonin levels in the blood serum were lower with diet 1 (304 ng/ml) than the average of diets 4-6 (540 ng/ml). This study confirms previously given recommendations for the Trp content in the diet of lactating sows, estimated by means of performance, of 1.9 g AID Trp (equivalent to 2.0 g TID Trp; approximately 2.6 g gross Trp) per kg diet.

  6. Irrigation Requirement Estimation using MODIS Vegetation Indices and Inverse Biophysical Modeling; A Case Study for Oran, Algeria

    NASA Technical Reports Server (NTRS)

    Bounoua, L.; Imhoff, M.L.; Franks, S.

    2008-01-01

    At the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence with an average frequency of occurrence of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, drip irrigation resulted in less frequent irrigation events with an average water requirement about 57% less than that simulated during the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of the canopy interception loss compared to the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country level estimates downward to 17% or less

  7. Dietary energy requirements in relatively healthy maintenance hemodialysis patients estimated from long-term metabolic studies

    PubMed Central

    Shah, Anuja; Bross, Rachelle; Shapiro, Bryan B; Morrison, Gillian; Kopple, Joel D

    2016-01-01

    Background: Studies that examined dietary energy requirements (DERs) of patients undergoing maintenance hemodialysis (MHD) have shown mixed results. Many studies reported normal DERs, but some described increased energy needs. DERs in MHD patients have been estimated primarily from indirect calorimetry and from nitrogen balance studies. The present study measured DERs in MHD patients on the basis of their dietary energy intake and changes in body composition. Objective: This study assessed DERs in MHD patients who received a constant energy intake while changes in their body composition were measured. Design: Seven male and 6 female sedentary, clinically stable MHD patients received a constant mean (±SD) energy intake for 92.2 ± 7.9 d while residing in a metabolic research ward. Changes in fat and fat-free mass, measured by dual-energy X-ray absorptiometry, were converted to calorie equivalents and added to energy intake to calculate energy requirements. Results: The average DER was 31 ± 3 kcal · kg−1 · d−1 calculated from energy intake and change in fat and fat-free calories, which averaged 28 ± 197 kcal/d over the 92 d of the study. DERs of MHD patients correlated strongly with their body weight (r = 0.81, P = 0.002) and less closely with their measured resting energy expenditure expressed as kcal/d (r = 0.69, P = 0.01). Although the average observed DER in MHD patients was similar to published estimated values for normal sedentary individuals of similar age and sex, there was wide variability in DER among individual patients (range: 26–36 kcal · kg−1 · d−1). Conclusions: Average DERs of sedentary, clinically stable patients receiving MHD are similar to those of sedentary normal individuals. Our data do not support the theory that MHD patients have increased DERs. Due to the high variability in DERs, careful monitoring of the nutritional status of individual MHD patients is essential. This trial was registered at clinicaltrials.gov as NCT02194114

  8. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  9. Conservative estimation of whole-body-averaged SARs in infants with a homogeneous and simple-shaped phantom in the GHz region

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Ito, Naoki; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi

    2008-12-01

    We calculated the whole-body-averaged specific absorption rates (WBSARs) in a Japanese 9-month-old infant model and its corresponding homogeneous spheroidal and ellipsoidal models with 2/3 muscle tissue for 1-6 GHz far-field exposure. As a result, we found that in comparison with the WBSAR in the infant model, the ellipsoidal model with the same frontally projected area as that of the infant model provides an underestimate, whereas the ellipsoidal model with the same surface area yields an overestimate. In addition, the WBSARs in the homogeneous infant models were found to be strongly affected by the electrical constants of tissue, and to be larger in the order of 2/3 muscle, skin and muscle tissues, regardless of the model shapes or polarization of incident waves. These findings suggest that the ellipsoidal model having the same surface area as that of the infant model and electrical constants of muscle tissue provides a conservative WBSAR over wide frequency bands. To confirm this idea, based on the Kaup index for Japanese 9-month-old infants, which is often used to represent the obesity of infants, we developed linearly reduced 9-month-old infant models and the corresponding muscle ellipsoids and re-calculated their whole-body-averaged SARs with respect to body shapes. Our results reveal that the ellipsoidal model with the same surface area as that of a 9-month-old infant model gives a conservative WBSAR for different infant models, whose variability due to the model shape reaches 15%.

  10. Continuous series of catchment-averaged sensible heat flux from a Large Aperture Scintillometer: efficient estimation of stability conditions and importance of fluxes under stable conditions

    NASA Astrophysics Data System (ADS)

    De Lathauwer, E.; Samain, B.; Defloor, W.; Pauwels, V. R.

    2011-12-01

    A Large Aperture Scintillometer (LAS) observes the intensity of the atmospheric turbulence across large distances, which is related to the path averaged sensible heat flux, H. This sensible heat flux can then easily be inverted into evapotranspiration rates using the surface energy balance. In this presentation, two problems in the derivation of continuous series of H from LAS-data are investigated and the importance of nighttime H -fluxes is assessed. Firstly, as a LAS is unable to determine the sign of H, the transition from unstable to stable conditions is evaluated in order to make continuous H-series. Therefore, different algorithms to judge the atmospheric stability for a LAS installed over a distance of 9.5km have been tested. The algorithm based on the diurnal cycle of the refractive index structure parameter, CN2, has been found to be very suitable and operationally the most appropriate. A second issue is the humidity correction for LAS-data, which is performed by using the Bowen ratio (β). As β is taken from ground-based measurements with data gaps, the number of resulting H -values is reduced. Not including this humidity correction results in a marginal error in H, but increases the completeness of the resulting H -series. Applying these conclusions to the two-year time series of the LAS, results in an almost continuous H -time series. As the majority of the time steps has been found to be under stable conditions, there is a clear impact of Hstable on H24h, the 24h average of H. For stable conditions, Hstable -values are mostly negative, and hence lower than the H = 0 assumption as is mostly adopted. For months where stable conditions prevail (Winter), H24h is overestimated using this assumption, and calculation of Hstable is recommended.

  11. Continuous series of catchment-averaged sensible heat flux from a Large Aperture Scintillometer: efficient estimation of stability conditions and importance of fluxes under stable conditions.

    NASA Astrophysics Data System (ADS)

    Samain, B.; Defloor, W.; Pauwels, V. R. N.

    2012-04-01

    A Large Aperture Scintillometer (LAS) observes the intensity of the atmospheric turbulence across large distances, which is related to the path averaged sensible heat flux, H. Two problems in the derivation of continuous series of H from LAS-data are investigated and the importance of nighttime H -fluxes is assessed. Firstly, as a LAS is unable to determine the sign of H, the transition from unstable to stable conditions is evaluated in order to make continuous H -series. Therefore, different algorithms to judge the atmospheric stability for a LAS installed over a distance of 9.5 km have been tested. The algorithm based on the diurnal cycle of the refractive index structure parameter, CN2, proves the most suitable for operational use. A second issue is the humidity correction for LAS-data, which is performed by using the Bowen ratio (β). As β is taken from ground-based measurements with data gaps, the number of resulting H -values is reduced. Not including this humidity correction results in a marginal error in H, but increases the completeness of the resulting H -series. Applying these conclusions to the two-year time series of the LAS, results in an almost continuous H -time series. As the majority of the time steps has been found to be under stable conditions, there is a clear impact of Hstable on H24h, the 24h average of H. For stable conditions, Hstable -values are mostly negative, and hence lower than the H = 0 W/m2 assumption as is mostly adopted. For months where stable conditions prevail (Winter), H24h is overestimated using this assumption, and calculation of Hstable is recommended.

  12. Estimating sugarcane water requirements for biofuel feedstock production in Maui, Hawaii using satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Anderson, R. G.; Wang, D.

    2011-12-01

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop evapotranspiration (ETc). Generic Kc values are available for many crop types but not for sugarcane in Maui, Hawaii, which grows on a relatively unstudied biennial cycle. In this study, an algorithm is being developed to estimate sugarcane Kc using the normalized difference vegetation index (NDVI) derived from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A series of ASTER NDVI maps were used to depict canopy development over time, i.e., fractional canopy cover (fc), which was measured with a handheld multispectral camera in the fields on satellite overpass days. Canopy cover was correlated with NDVI values. The NDVI-based canopy cover was then used to estimate Kc curves for sugarcane plants. The remotely estimated Kc and ETc values were compared and validated with ground-truth ETc measurements. The approach is a promising tool for large-scale estimation of evapotranspiration of sugarcane or other biofuel crops.
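
    The chain from NDVI to crop water use can be sketched as a linear NDVI-to-Kc relation followed by ETc = Kc x ETo. The calibration coefficients and input series below are hypothetical; the study derives its relation from field-measured canopy cover.

```python
# Sketch: crop ET from an NDVI series via a linear NDVI-to-Kc relation and
# ETc = Kc * ETo. Calibration coefficients and inputs are hypothetical; the
# study calibrates against field-measured fractional canopy cover.
import numpy as np

ndvi = np.array([0.25, 0.45, 0.70, 0.82, 0.80])   # ASTER-style NDVI samples
eto  = np.array([5.1, 5.6, 6.2, 6.0, 5.8])        # reference ET (mm/day)

a, b = 1.25, -0.10                                # hypothetical calibration
kc   = np.clip(a * ndvi + b, 0.1, 1.25)           # crop coefficient curve
etc  = kc * eto                                   # crop ET (mm/day)
print(np.round(etc, 2))
```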

  13. Estimation of fan pressure ratio requirements and operating performance for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Gloss, B. B.; Nystrom, D.

    1981-01-01

    The National Transonic Facility (NTF), a fan-driven, transonic, pressurized, cryogenic wind tunnel, will operate over the Mach number range of 0.10 to 1.20 with stagnation pressures varying from 1.00 to about 8.8 atm and stagnation temperatures varying from 77 to 340 K. The NTF is cooled to cryogenic temperatures by the injection of liquid nitrogen into the tunnel stream with gaseous nitrogen as the test gas. The NTF can also operate at ambient temperatures using a conventional chilled water heat exchanger with air or nitrogen as the test gas. The methods used in estimating the fan pressure ratio requirements are described. The estimated NTF operating envelopes at Mach numbers from 0.10 to 1.20 are presented.

  14. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions

    PubMed Central

    Trites, Andrew W.; Rosen, David A. S.; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion—an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric—Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal—and they should
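
    The APBA definition quoted above is straightforward to compute once per-stroke speed gains and stroke-cycle durations have been extracted from the tag data. The numbers in this sketch are invented for illustration.

```python
# Sketch of the APBA definition quoted above: average per-stroke gain in speed
# divided by mean stroke-cycle duration. Values are invented for illustration;
# in practice both series come from the gyroscope/accelerometer processing.
import numpy as np

speed_gain_m_s = np.array([0.31, 0.28, 0.35, 0.30])  # gain per flipper stroke
cycle_dur_s    = np.array([0.82, 0.85, 0.80, 0.84])  # stroke-cycle durations

apba = speed_gain_m_s.mean() / cycle_dur_s.mean()    # m/s^2
print(f"APBA = {apba:.2f} m/s^2")
```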

  15. Averaged Propulsive Body Acceleration (APBA) Can Be Calculated from Biologging Tags That Incorporate Gyroscopes and Accelerometers to Estimate Swimming Speed, Hydrodynamic Drag and Energy Expenditure for Steller Sea Lions.

    PubMed

    Ware, Colin; Trites, Andrew W; Rosen, David A S; Potvin, Jean

    2016-01-01

    Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion-an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric-Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal-and they should also

  16. Updated estimates of long-term average dissolved-solids loading in streams and rivers of the Upper Colorado River Basin

    USGS Publications Warehouse

    Tillman, Fred D; Anning, David W.

    2014-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigating over 4.5 million acres of farmland, and annually generating about 12 billion kilowatt hours of hydroelectric power. The Upper Colorado River Basin, part of the Colorado River Basin, encompasses more than 110,000 mi2 and is the source of much of the more than 9 million tons of dissolved solids that flow past Hoover Dam annually. High dissolved-solids concentrations in the river are the cause of substantial economic damages to users, primarily in reduced agricultural crop yields and corrosion, with damages estimated to be greater than 300 million dollars annually. In 1974, the Colorado River Basin Salinity Control Act created the Colorado River Basin Salinity Control Program to investigate and implement a broad range of salinity control measures. A 2009 study by the U.S. Geological Survey, supported by the Salinity Control Program, used the Spatially Referenced Regressions on Watershed Attributes surface-water quality model to examine dissolved-solids supply and transport within the Upper Colorado River Basin. Dissolved-solids loads developed for 218 monitoring sites were used to calibrate the 2009 Upper Colorado River Basin Spatially Referenced Regressions on Watershed Attributes dissolved-solids model. This study updates and develops new dissolved-solids loading estimates for 323 Upper Colorado River Basin monitoring sites using streamflow and dissolved-solids concentration data through 2012, to support a planned Spatially Referenced Regressions on Watershed Attributes modeling effort that will investigate the contributions to dissolved-solids loads from irrigation and rangeland practices.

  17. Refugee issues. Summary of WFP / UNHCR guidelines for estimating food and nutritional requirements.

    PubMed

    1997-12-01

    In line with recent recommendations by WHO and the Committee on International Nutrition, WFP and UNHCR will now use 2100 kcal/person/day as the initial energy requirement for designing food aid rations in emergencies. In an emergency situation, it is essential to establish such a value to allow for rapid planning and response to the food and nutrition requirements of an affected population. An in-depth assessment is often not possible in the early days of an emergency, and an estimated value is needed to make decisions about the immediate procurement and shipment of food. The initial level is applicable only in the early stages of an emergency. As soon as demographic, health, nutritional and food security information is available, the estimated per capita energy requirements should be adjusted accordingly. Food rations should complement any food that the affected population is able to obtain on its own through activities such as agricultural production, trade, labor, and small business. An understanding of the various mechanisms used by the population to gain access to food is essential to give an accurate estimate of food needs. Therefore, a prerequisite for the design of a longer-term ration is a thorough assessment of the degree of self-reliance and level of household food security. Frequent assessments are necessary to adequately determine food aid needs on an ongoing basis. The importance of ensuring a culturally acceptable, adequate basic ration for the affected population at the onset of an emergency is considered to be one of the basic principles in ration design. The quality of the ration provided, particularly in terms of micronutrients, is stressed in the guidelines, and levels provided will aim to conform with standards set by other technical agencies.

  18. Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.

  19. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    PubMed Central

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may possibly be applied to larger population sizes and other parkinsonian syndromes in the near future. PMID

  20. Competing conservation objectives for predators and prey: estimating killer whale prey requirements for Chinook salmon.

    PubMed

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  1. Competing Conservation Objectives for Predators and Prey: Estimating Killer Whale Prey Requirements for Chinook Salmon

    PubMed Central

    Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A.; Clark, Steve; Hammond, Philip S.; Hoyt, Erich; Noren, Dawn P.; Rosen, David; Winship, Arliss

    2011-01-01

    Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada–US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict

  2. Estimation of nitrogen maintenance requirements and potential for nitrogen deposition in fast-growing chickens depending on age and sex.

    PubMed

    Samadi, F; Liebert, F

    2006-08-01

    Experiments were conducted to estimate daily N maintenance requirements (NMR) and the genetic potential for daily N deposition (ND(max)T) in fast-growing chickens depending on age and sex. In N-balance studies, 144 male and 144 female chickens (Cobb 500) were utilized in 4 consecutive age periods (I: 10 to 25 d; II: 30 to 45 d; III: 50 to 65 d; and IV: 70 to 85 d). The experimental diets contained high-protein soybean meal and crystalline amino acids as protein sources and 6 graded levels of protein supply (N1 = 6.6%; N2 = 13.0%; N3 = 19.6%; N4 = 25.1%; N5 = 31.8%; and N6 = 37.6% CP in DM). For NMR determination, the relation between N intake and total N excretion was fitted with an exponential function. The average NMR value (252 mg of N/BW(kg)0.67 per d) was applied for further calculation of ND(max)T as the threshold value of the function between N intake and daily N balance. For estimating the threshold value, the principle of the Levenberg-Marquardt algorithm within the SPSS program (Version 11.5) was applied. As a theoretical maximum for ND(max)T, 3,592, 2,723, 1,702, and 1,386 mg of N/BW(kg)0.67 per d for male and 3,452, 2,604, 1,501, and 1,286 mg of N/BW(kg)0.67 per d for female fast-growing chickens (corresponding to age periods I to IV) were obtained. The determined model parameters are the precondition for modeling amino acid requirements with an exponential N-utilization model, depending on performance and dietary amino acid efficiency. This procedure will be further developed and applied in the subsequent paper.
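
    The fitting step can be sketched with a standard Levenberg-Marquardt least-squares routine, mirroring the SPSS procedure the authors describe. The balance data below are invented, and NMR is fixed at the reported average of 252 mg N/BW(kg)0.67 per day.

```python
# Sketch of the fitting step with SciPy's Levenberg-Marquardt least squares,
# mirroring the SPSS procedure described above. Balance data are invented;
# NMR is fixed at the reported average of 252 mg N/BW_kg^0.67 per day.
import numpy as np
from scipy.optimize import curve_fit

ni = np.array([200.0, 600.0, 1200.0, 1800.0, 2500.0, 3200.0])  # N intake
nb = np.array([-25.0, 370.0, 830.0, 1170.0, 1460.0, 1660.0])   # N balance

def n_balance(ni, nd_max, b, nmr=252.0):
    """ND = ND_max * (1 - exp(-b * NI)) - NMR; ND_max is the asymptotic
    deposition potential (the threshold value sought in the paper)."""
    return nd_max * (1.0 - np.exp(-b * ni)) - nmr

(nd_max, b), _ = curve_fit(n_balance, ni, nb, p0=(2500.0, 1e-3), method="lm")
print(f"ND_max ~ {nd_max:.0f} mg N/BW_kg^0.67 per day, b ~ {b:.2e}")
```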

  3. A synergistic approach using optical and SAR data to estimate crop's irrigation requirements

    NASA Astrophysics Data System (ADS)

    Rolim, João.; Navarro Ferreira, Ana; Saraiva, Cátia; Catalão, João.

    2016-10-01

    A study conducted in the scope of the Alcantara initiative in Angola showed that optical and SAR images allow the estimation of crop irrigation requirements (CIR) based on a soil water balance model (IrrigRotation). The methodology was applied to east central Portugal to evaluate its transferability under different climatic conditions and crop types. SPOT-5 Take-5 and Sentinel-1A data from April to September 2015 are used to generate NDVI and backscattering maize crop time series. Both time series are then correlated and a linear regression equation is computed for some maize parcels identified in the test area. Next, basal crop coefficients (Kcb) are determined empirically from the Kcb-NDVI relationships applied within the PLEIADeS project and also from the Kcb-SAR relationships retrieved from the linear fit of both EO data for other maize parcels. These Kcb make it possible to overcome a major drawback of the FAO tabulated Kcb, which are only available for the initial, mid and late season of a given crop type. More frequent Kcb values also allow better identification of the lengths of the crop's phenological stages. CIR estimated from EO data are comparable to those obtained with tabulated FAO 56 Kcb values for crops produced under standard conditions, while for crops produced under suboptimal conditions, EO data improve the estimation of the CIR. Although the CIR results are promising, further research is required to improve the Kcb initial and Kcb end values and avoid overestimation of the CIR.

  4. An Estimate of Diesel High-Efficiency Clean Combustion Impacts on FTP-75 Aftertreatment Requirements (SAE Paper Number 2006-01-3311)

    SciTech Connect

    Sluder, Scott; Wagner, Robert M

    2006-01-01

    A modified Mercedes 1.7-liter, direct-injection diesel engine was operated in both normal and high-efficiency clean combustion (HECC) modes. Four steady-state engine operating points that were previously identified by the Ad-hoc fuels working group were used as test points to allow estimation of the hot-start FTP-75 emissions levels in both normal and HECC combustion modes. The results indicate that operation in HECC modes generally produces reductions in NOX and PM emissions at the expense of CO, NMHC, and H2CO emissions. The FTP emissions estimates indicate that aftertreatment requirements for NOX are reduced, while those for PM may not be impacted. Cycle-average aftertreatment requirements for CO, NMHC, and H2CO may be challenging, especially at the lowest temperature conditions.

  5. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change in soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When irrigation water is used with low efficiency, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season moisture) or to runoff and/or deep percolation beyond the root zone. In the present contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the 2012 growing season, from April till September. All the operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web

  6. Laboratory estimation of degree-day developmental requirements of Phlebotomus papatasi (Diptera: Psychodidae).

    PubMed

    Kasap, Ozge Erisoz; Alten, Bulent

    2005-12-01

    Cutaneous leishmaniasis is one of the most important vector-borne endemic diseases in Turkey. The main objective of this study was to evaluate the influence of temperature on the developmental rates of one important vector of leishmaniasis, Phlebotomus papatasi (Scopoli, 1786) (Diptera: Psychodidae). Eggs from laboratory-reared colonies of Phlebotomus papatasi were exposed to six constant temperature regimes from 15 to 32 °C with a daylength of 14 h and relative humidity of 65-75%. No adult emergence was observed at 15 °C. Complete egg-to-adult development ranged from 27.89 ± 1.88 days at 32 °C to 246.43 ± 13.83 days at 18 °C. The developmental zero values were estimated to vary from 11.6 °C to 20.25 °C depending on life stage, and egg-to-adult development required 440.55 DD above 20.25 °C.
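
    Degree-day constants like these are conventionally obtained by regressing development rate on temperature. A minimal sketch with invented rearing data (not the study's values):

    ```python
    import numpy as np

    # Hypothetical constant-temperature results: temperature (°C) and mean
    # egg-to-adult development time (days); values are illustrative only.
    temps = np.array([22.0, 25.0, 28.0, 32.0])
    dev_days = np.array([150.0, 80.0, 50.0, 30.0])

    # Linear degree-day model: 1/D = a + b*T, so the developmental zero is
    # T0 = -a/b and the thermal constant is K = 1/b degree-days.
    rate = 1.0 / dev_days
    b, a = np.polyfit(temps, rate, 1)   # polyfit returns slope, intercept
    t0 = -a / b
    k = 1.0 / b
    print(f"Developmental zero: {t0:.2f} °C; thermal constant: {k:.1f} DD")
    ```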

  7. Estimating the Residency Expansion Required to Avoid Projected Primary Care Physician Shortages by 2035

    PubMed Central

    Petterson, Stephen M.; Liaw, Winston R.; Tran, Carol; Bazemore, Andrew W.

    2015-01-01

    PURPOSE The purpose of this study was to calculate the projected primary care physician shortage, determine the amount and composition of residency growth needed, and estimate the impact of retirement age and panel size changes. METHODS We used the 2010 National Ambulatory Medical Care Survey to calculate utilization of ambulatory primary care services and the US Census Bureau to project demographic changes. To determine the baseline number of primary care physicians and the number retiring at 66 years, we used the 2014 American Medical Association Masterfile. Using specialty board and American Osteopathic Association figures, we estimated the annual production of primary care residents. To calculate shortages, we subtracted the accumulated primary care physician production from the accumulated number of primary care physicians needed for each year from 2015 to 2035. RESULTS More than 44,000 primary care physicians will be needed by 2035. Current primary care production rates will be unable to meet demand, resulting in a shortage in excess of 33,000 primary care physicians. Given current production, an additional 1,700 primary care residency slots will be necessary by 2035. A 10% reduction in the ratio of population per primary care physician would require more than 3,000 additional slots by 2035, whereas changing the expected retirement age from 66 years to 64 years would require more than 2,400 additional slots. CONCLUSIONS To eliminate projected shortages in 2035, primary care residency production must increase by 21% compared with current production. Delivery models that shift toward smaller ratios of population to primary care physicians may substantially increase the shortage. PMID:25755031

  8. Estimating Irrigation Water Requirements using MODIS Vegetation Indices and Inverse Biophysical Modeling

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah

    2007-01-01

    An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S.-Mexico border and in Central Asia and compared to estimates of irrigation water used.

  9. Estimates of the wind speeds required for particle motion on Mars

    NASA Technical Reports Server (NTRS)

    Pollack, J. B.; Haberle, R.; Greeley, R.; Iversen, J.

    1976-01-01

    Threshold wind speeds for setting particles into motion on Mars are estimated by evaluating experimentally observed threshold friction velocities and determining the ratio of this velocity to the threshold wind speed at the top of earth's atmospheric boundary layer (ABL). Turning angles between the direction of the wind at the top of the ABL and the wind stress at the surface are also estimated. Detailed consideration is given to the dependence of the threshold wind speed at the top of the ABL on particle diameter, surface pressure, air temperature, atmospheric stability and composition, surface roughness, and interparticle cohesion. The results are applied to interpret a number of phenomena that have been observed on Mars and are attributable to aeolian processes. It is shown that: (1) minimum threshold wind speeds of about 50 to 100 m/sec are required to cause particle motion on Mars under 'favorable' conditions; (2) particle motion should be infrequent and strongly correlated with proximity to small topographical features; (3) in general, particle motion occurs more readily at night than during the day, in winter polar areas than equatorial areas around noon, and for H2O or CO2 ice particles than for silicate particles; and (4) the boundary between saltating and suspendible particles is located at a particle diameter of about 100 microns.

  10. Estimation of the minimum food requirement using the respiration rate of medusa of Aurelia aurita in Sihwa Lake

    NASA Astrophysics Data System (ADS)

    Han, Chang-hoon; Chae, Jinho; Jin, Jonghyeok; Yoon, Wonduk

    2012-06-01

    We examined the respiration rate of Aurelia aurita medusae at 20 °C and 28 °C to evaluate the minimum metabolic demands of the medusa population in Sihwa Lake, Korea, during summer. Weight-specific respiration rates of medusae were constant irrespective of wet weight (8-220 g) but varied significantly with temperature (p < 0.001; on average 0.11 ± 0.03 mg C g^-1 of medusa d^-1 at 20 °C and 0.28 ± 0.11 mg C g^-1 of medusa d^-1 at 28 °C, giving a Q10 value of 2.62). The respiration rate of medusae was expressed as a function of temperature (T, °C) and body weight (W, g) according to the equation R = 0.13 × 2.62^((T-20)/10) × W^0.93. The population minimum food requirement (PMFR) was estimated from the respiration rate as 15.06 and 4.86 mg C m^-3 d^-1 in June and July, respectively. During this period, increases in bell diameter and wet weight were not significant (p = 1 in both), suggesting that the estimated PMFR closely represented the actual food consumption in the field. From July to August, medusae grew significantly at 0.052 d^-1; thus the amount of food ingested by the medusa population in situ was likely to exceed the PMFR (1.27 mg C m^-3 d^-1) during that period. In conclusion, the denser medusa population of June and July had a limited amount of food, while the sparser population of July and August ingested enough food for growth.
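
    The fitted equation can be applied directly to estimate population food demand. A minimal sketch using the published equation, with invented medusa weights:

    ```python
    import numpy as np

    def respiration(T, W):
        """Respiration of one A. aurita medusa (mg C per day) from the fitted
        equation R = 0.13 * 2.62**((T - 20)/10) * W**0.93 (T in °C, W in g)."""
        return 0.13 * 2.62 ** ((T - 20.0) / 10.0) * W ** 0.93

    # Hypothetical wet weights of the medusae found in one cubic metre.
    weights = np.array([50.0, 120.0, 200.0])
    pmfr = respiration(28.0, weights).sum()   # mg C m^-3 d^-1
    print(f"Population minimum food requirement: {pmfr:.2f} mg C m^-3 d^-1")
    ```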

  11. A new remote sensing procedure for the estimation of crop water requirements

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, M.; Loukas, A.; Mylopoulos, N.

    2015-06-01

    The objective of this work is the development of a new approach for the estimation of water requirements for the most important crops located in the Karla watershed, central Greece. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used as the basis for the derivation of actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat ETM+ imagery. MODIS imagery has also been used, and a spatial downscaling procedure between the two sensors yields a new NDVI product with a spatial resolution of 30 m x 30 m. GER 1500 spectro-radiometric measurements were additionally conducted during the 2012 growing season. Cotton, alfalfa, corn, and sugar beet fields are utilized, based on land use maps derived from previous Landsat 7 ETM+ images. A filtering process is then applied to derive NDVI values after acquiring Landsat ETM+-equivalent reflectance values from the GER 1500 device. ETrF vs NDVI relationships are produced and then applied to the downscaled satellite product in order to derive a 30 m x 30 m daily ETrF map for the study area. The CropWat model (FAO) is then applied, taking as input the new crop coefficient values, available for every crop at a spatial resolution of 30 m x 30 m. CropWat finally returns daily crop water requirements (mm) for every crop, and the results are analyzed and discussed.
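
    A minimal sketch of the core step — fitting the ETrF-NDVI relation on calibration parcels and applying it to a downscaled NDVI map — with invented numbers throughout:

    ```python
    import numpy as np

    # Hypothetical paired samples of NDVI and METRIC-derived ETrF.
    ndvi = np.array([0.25, 0.40, 0.55, 0.70, 0.82])
    etrf = np.array([0.30, 0.52, 0.71, 0.92, 1.05])
    slope, intercept = np.polyfit(ndvi, etrf, 1)

    # Apply the relation to a (toy) downscaled 30 m x 30 m NDVI map, then
    # multiply by reference ET to obtain actual daily ET in mm.
    ndvi_map = np.random.default_rng(7).uniform(0.2, 0.85, size=(4, 4))
    etrf_map = slope * ndvi_map + intercept
    et_ref = 6.2                      # assumed daily reference ET, mm
    et_map = etrf_map * et_ref
    print(np.round(et_map, 2))
    ```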

  12. The Weighted Average Method 'WAM' for dental age estimation: a simpler method for children at the 10 year threshold: "it is vain to do with more when less will suffice" (William of Ockham, 1288-1358).

    PubMed

    Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S

    2014-08-01

    The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies, but it has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-tds, x-tds, sd-tds, se-tds, 1/sd-tds, and 1/se-tds was prepared in an Excel worksheet, and the different weighted mean values were reported. In addition, the fixed-effects and random-effects models for meta-analysis were applied to the same data sets. In conclusion, it was shown that the most accurate age estimation method is to use the random-effects model for the mathematical procedures.
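
    The weighting principle at the heart of the method is inverse-variance weighting, which is also the fixed-effects meta-analysis estimator. A minimal sketch with invented per-tooth age estimates:

    ```python
    import numpy as np

    # Hypothetical age estimates (years) and their standard errors.
    ages = np.array([9.4, 10.1, 9.8, 10.6])
    ses = np.array([0.50, 0.30, 0.40, 0.60])

    # Inverse-variance weights give more influence to precise estimates.
    w = 1.0 / ses ** 2
    weighted_mean = np.sum(w * ages) / np.sum(w)
    se_weighted = np.sqrt(1.0 / np.sum(w))
    print(f"Unweighted mean: {ages.mean():.2f} y")
    print(f"Weighted mean: {weighted_mean:.2f} ± {se_weighted:.2f} y")
    ```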

  13. Assessment of radar resolution requirements for soil moisture estimation from simulated satellite imagery. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.

    1982-01-01

    Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions, while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.

  14. Constraints on LISA Pathfinder’s self-gravity: design requirements, estimates and testing procedures

    NASA Astrophysics Data System (ADS)

    Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Bursi, A.; Caleno, M.; Cavalleri, A.; Cesarini, A.; Cruise, M.; Danzmann, K.; de Deus Silva, M.; Desiderio, D.; Piersanti, E.; Diepholz, I.; Dolesi, R.; Dunbar, N.; Ferraioli, L.; Ferroni, V.; Fitzsimons, E.; Flatscher, R.; Freschi, M.; Gallegos, J.; García Marirrodriga, C.; Gerndt, R.; Gesa, L.; Gibert, F.; Giardini, D.; Giusteri, R.; Grimani, C.; Grzymisch, J.; Harrison, I.; Heinzel, G.; Hewitson, M.; Hollington, D.; Hueller, M.; Huesler, J.; Inchauspé, H.; Jennrich, O.; Jetzer, P.; Johlander, B.; Karnesis, N.; Kaune, B.; Korsakova, N.; Killow, C.; Lloro, I.; Liu, L.; López-Zaragoza, J. P.; Maarschalkerweerd, R.; Madden, S.; Mance, D.; Martín, V.; Martin-Polo, L.; Martino, J.; Martin-Porqueras, F.; Mateos, I.; McNamara, P. W.; Mendes, J.; Mendes, L.; Moroni, A.; Nofrarias, M.; Paczkowski, S.; Perreur-Lloyd, M.; Petiteau, A.; Pivato, P.; Plagnol, E.; Prat, P.; Ragnit, U.; Ramos-Castro, J.; Reiche, J.; Romera Perez, J. A.; Robertson, D.; Rozemeijer, H.; Rivas, F.; Russano, G.; Sarra, P.; Schleicher, A.; Slutsky, J.; Sopuerta, C. F.; Sumner, T.; Texier, D.; Thorpe, J. I.; Tomlinson, R.; Trenkel, C.; Vetrugno, D.; Vitale, S.; Wanner, G.; Ward, H.; Warren, C.; Wass, P. J.; Wealthy, D.; Weber, W. J.; Wittchen, A.; Zanoni, C.; Ziegler, T.; Zweifel, P.

    2016-12-01

    The LISA Pathfinder satellite was launched on 3 December 2015 toward the first Sun-Earth Lagrangian point (L1), where the LISA Technology Package (LTP), the main science payload, will be tested. LTP measures the differential acceleration of free-falling test masses (TMs) with a sensitivity below 3 × 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite-element calculation tool estimates. In this paper, we introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.

  15. Electrofishing effort required to estimate biotic condition in Southern Idaho Rivers

    USGS Publications Warehouse

    Maret, T.R.; Ott, D.S.; Herlihy, A.T.

    2007-01-01

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions. © Copyright by the American Fisheries Society 2007.

  16. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  17. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  18. Estimating the Reliability of Dynamic Variables Requiring Rater Judgment: A Generalizability Paradigm.

    ERIC Educational Resources Information Center

    Webber, Larry; And Others

    Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool.…

  19. Estimation of the left ventricular relaxation time constant tau requires consideration of the pressure asymptote.

    PubMed

    Langer, S F J; Habazettl, H; Kuebler, W M; Pries, A R

    2005-01-01

    The left ventricular isovolumic pressure decay, obtained by cardiac catheterization, is widely characterized by the time constant tau of the exponential regression p(t) = Pω + (P0 - Pω)·exp(-t/tau). However, several authors prefer to fix Pω = 0 instead of coestimating the pressure asymptote empirically; others present tau values estimated by both methods, which often leads to discordant results and interpretations of lusitropic changes. The present study aims to clarify the relations between the tau estimates from the two methods and to determine which estimate is more reliable. The effect of presetting a zero asymptote on the tau estimate was investigated mathematically and empirically, based on left ventricular pressure decay data from isolated ejecting rat and guinea pig hearts at different preloads and during spontaneous decrease of cardiac function. Estimating tau with preset Pω = 0 always yields smaller values than the regression with an empirically estimated asymptote if the latter is negative, and vice versa. The sequences of tau estimates from the two methods can therefore proceed in reverse directions if tau and Pω change in opposite directions between the measurements. This is exemplified by data obtained during increasing preload in spontaneously depressed isolated hearts. The estimation of the time constant of isovolumic pressure fall with a preset zero asymptote is heavily biased and cannot be used for comparing the lusitropic state of the heart in hemodynamic conditions with considerably altered pressure asymptotes.
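
    The bias is easy to reproduce numerically. A minimal sketch with synthetic pressure data and a negative true asymptote (all values invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 0.12, 60)                       # time, s
    p_true = -8.0 + (90.0 + 8.0) * np.exp(-t / 0.040)    # true tau = 40 ms
    p = p_true + np.random.default_rng(1).normal(0.0, 0.5, t.size)

    def decay_free(t, p_inf, p_0, tau):                  # asymptote estimated
        return p_inf + (p_0 - p_inf) * np.exp(-t / tau)

    def decay_zero(t, p_0, tau):                         # asymptote preset to 0
        return p_0 * np.exp(-t / tau)

    (p_inf, _, tau_free), _ = curve_fit(decay_free, t, p, p0=(0.0, 90.0, 0.05))
    (_, tau_zero), _ = curve_fit(decay_zero, t, p, p0=(90.0, 0.05))

    # With a negative true asymptote, the zero-asymptote fit yields a
    # smaller tau, exactly as the abstract states.
    print(f"tau, free asymptote: {tau_free * 1000:.1f} ms")
    print(f"tau, preset Pω = 0:  {tau_zero * 1000:.1f} ms")
    ```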

  20. Number of trials required to estimate a free-energy difference, using fluctuation relations

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Jarzynski, Christopher

    2016-05-01

    The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006), 10.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations.
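
    For context, the underlying estimator is the Jarzynski equality, ⟨exp(-W/kT)⟩ = exp(-ΔF/kT). A minimal sketch showing how rare low-work trials dominate the estimate (Gaussian toy work distribution; all numbers invented):

    ```python
    import numpy as np

    kT = 1.0
    rng = np.random.default_rng(0)

    # Gaussian work distribution with mean 5 kT and std 2 kT; for a Gaussian,
    # the exact answer is dF = mean - var/(2 kT) = 3 kT.
    work = rng.normal(5.0, 2.0, size=100_000)

    for n in (100, 1_000, 100_000):
        dF = -kT * np.log(np.mean(np.exp(-work[:n] / kT)))
        print(f"n = {n:>7}: dF estimate = {dF:.2f} kT (true 3.00 kT)")
    ```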

  1. Physically-based Methods for the Estimation of Crop Water Requirements from E.O. Optical Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The estimation of evapotranspiration (ET) represents the basic information for the evaluation of crop water requirements. A widely used method to compute ET is based on the so-called "crop coefficient" (Kc), defined as the ratio of total evapotranspiration to reference evapotranspiration (ET0). The val...

  2. Utility of multi temporal satellite images for crop water requirements estimation and irrigation management in the Jordan Valley

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Identifying the spatial and temporal distribution of crop water requirements is key to successful management of water resources in dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ET0) in the Jordan Valley according to the...

  3. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  4. A Decision Tool to Evaluate Budgeting Methodologies for Estimating Facility Recapitalization Requirements

    DTIC Science & Technology

    2008-03-01

    THESIS presented to the Faculty of the Department of Systems and Engineering Management, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in partial fulfillment of the requirements for the degree of Master of Science in Engineering Management. Krista M. Hickman, BS, Captain, USAF, March 2008. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION

  5. Estimated quantitative amino acid requirements for Florida pompano reared in low-salinity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As with most marine carnivores, Florida pompano require relatively high crude protein diets to obtain optimal growth. Precision formulations to match the dietary indispensable amino acid (IAA) pattern to a species' requirements can be used to lower the overall dietary protein. However, IAA requirem...

  6. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  7. Minimizing instrumentation requirement for estimating crop water stress index and transpiration of maize

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...

  8. EVALUATION OF SAMPLING FREQUENCIES REQUIRED TO ESTIMATE NUTRIENT AND SUSPENDED SEDIMENT LOADS IN LARGE RIVERS

    EPA Science Inventory

    Nutrients and suspended sediments in streams and large rivers are two major issues facing state and federal agencies. Accurate estimates of nutrient and sediment loads are needed to assess a variety of important water-quality issues including total maximum daily loads, aquatic ec...

  9. Aggregation and Averaging.

    ERIC Educational Resources Information Center

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  10. Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho

    USGS Publications Warehouse

    Watkins, Carson J.; Quist, Michael; Shepard, Bradley B.; Ireland, Susan C.

    2016-01-01

    This study was conducted on the Kootenai River, Idaho, to provide insight into sampling requirements and to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness, and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Side-channel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between the two habitat types; in general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring efforts in the Kootenai River and provide insight for sampling designs in other large western river systems where electrofishing is used to assess fish assemblages.

  11. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte Carlo simulations of MEG data, in which we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratios (SNR), and coupling strengths. We then searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and the SNR, but not the extent of coupling, were the main parameters affecting the best choice of lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
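
    A minimal sketch of the Tikhonov-regularized minimum norm inverse, with toy dimensions, illustrating how one would simply re-run the inverse with a smaller lambda for coupling analyses:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sens, n_src = 30, 200                     # invented problem size
    G = rng.normal(size=(n_sens, n_src))        # lead-field (gain) matrix
    m = rng.normal(size=n_sens)                 # one sample of sensor data

    def mne_inverse(G, m, lam):
        """Tikhonov-regularized MNE: j = G.T (G G.T + lam I)^-1 m."""
        return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(G.shape[0]), m)

    j_power = mne_inverse(G, m, lam=1.0)        # lambda tuned for power
    j_coupling = mne_inverse(G, m, lam=0.01)    # ~2 orders of magnitude less
    ```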

  12. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  13. Estimating Sugarcane Water Requirements for Biofuel Feedstock Production in Maui, Hawaii Using Satellite Imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo) calculated from meteorological parameters by a crop coefficient (Kc) to obtain actual crop eva...

  14. Estimation of F-15 Peacetime Maintenance Manpower Requirements Using the Logistics Composite Model.

    DTIC Science & Technology

    1976-12-01

    [Table-of-contents residue: Sequence of Simulation Runs; Determination of Manpower Requirements; Model Validation; LCOM F-15 TFTW Maintenance Organization Structure; Type I...] constraints to simulate a sequence of maintenance activities. When the flying schedule calls for aircraft to start mission preparation, LCOM designates

  15. Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers

    EPA Science Inventory

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...

  16. The average enzyme principle

    PubMed Central

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-01-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
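
    The principle can be checked numerically: two enzyme profiles with the same time average deplete the same amount of substrate. A minimal sketch with arbitrary constants:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    kcat, Km, S0, T = 1.0, 0.5, 10.0, 20.0   # arbitrary illustrative values

    def final_substrate(E_of_t):
        """Integrate dS/dt = -kcat * E(t) * S / (Km + S) over [0, T]."""
        sol = solve_ivp(lambda t, S: -kcat * E_of_t(t) * S / (Km + S),
                        (0.0, T), [S0], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1]

    # Constant vs. oscillating enzyme, both with time average 1.0 over [0, T].
    S_const = final_substrate(lambda t: 1.0)
    S_osc = final_substrate(lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / 5.0))
    print(S_const, S_osc)   # nearly identical final substrate levels
    ```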

  17. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    PubMed Central

    Valleriani, Angelo

    2016-01-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct. PMID:27527811

  18. A conditional likelihood is required to estimate the selection coefficient in ancient DNA

    NASA Astrophysics Data System (ADS)

    Valleriani, Angelo

    2016-08-01

    Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available, and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct.

  19. Averaging of TNTC counts.

    PubMed Central

    Haas, C N; Heller, B

    1988-01-01

    When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211

  20. [In vitro estimation using radioactive phosphorus of the phosphorus requirements of rumen microorganisms].

    PubMed

    Durand, M; Beaumatin, P; Dumay, C

    1983-01-01

    Microbial requirements for P were assumed to be a function of the amount of microbial protein synthesis (microbial growth) and of the quantity of organic matter (OM) fermented in the rumen. The relationships among P incorporation into microbial matter and protein synthesis, ammonia utilization, volatile fatty acid (VFA) production and organic matter fermented (OMF) were studied in short-term incubations (3 h) using 32P-labelled phosphate. The amount of P incorporated was calculated from extracellular phosphate pool specific activity and the radioactivity incorporated into the microbial sediment during incubation (table 1). The inocula came from sheep fed a protein-free purified diet. In order to vary the intensity of fermentation, carbohydrates with a wide range of degrees of enzymatic susceptibility were used as substrates and the medium was either provided or was deficient in S and trace elements (table 4). Nitrogen was supplied as ammonium salts. Linear regression analyses showed that P incorporation was positively correlated with the criteria of protein synthesis and OM fermentation (figs. 1, 2, 3, 4). However, there was significant phosphorus incorporation when the value for nitrogen incorporation was zero (equation A: Pi (mg) = 0.162 NH3-N (mg) + 0.376; r = 0.9). This was assumed to result either from energetic uncoupling (fermentation without concomitant bacterial growth) or from the lysis of cold microbial cells only. Equation A would reflect total P incorporation, and equation A': Pi (mg) = 0.162 NH3-N (mg), net P incorporation. It was assumed that in vitro microbial requirements for P were in the range of 30-70 mg of P/liter of medium for 3-hour incubation, depending on the intensity of fermentation. From a mean value of microbial N yield of 30 g/kg of DOMR (organic matter apparently digested in the rumen), it was calculated that the total and net P requirements in vivo were 6 and 4.9 g/kg of DOMR, respectively, corresponding to 3.9 and 3.2 g/kg of DOM

  1. CNS tumor induction by radiotherapy: A report of four new cases and estimate of dose required

    SciTech Connect

    Cavin, L.W.; Dalrymple, G.V.; McGuire, E.L.; Maners, A.W.; Broadwater, J.R. )

    1990-02-01

    We have analyzed 60 cases of intra-axial brain tumors associated with antecedent radiation therapy. These include four new cases. The patients had originally received radiation therapy for three reasons: (a) cranial irradiation for acute lymphoblastic leukemia (ALL), (b) definitive treatment of CNS neoplasia, and (c) treatment of benign disease (mostly cutaneous infections). The number of cases reported during the past decade has greatly increased as compared to previous years. Forty-six of the 60 intra-axial tumors have been reported since 1978. The relative risk of induction of an intra-axial brain tumor by radiation therapy is estimated to be more than 100, as compared to individuals who have not had head irradiation.

  2. Using a generalized version of the Titius-Bode relation to extrapolate the patterns seen in Kepler multi-exoplanet systems, and estimate the average number of planets in circumstellar habitable zones

    NASA Astrophysics Data System (ADS)

    Lineweaver, Charles H.

    2015-08-01

    The Titius-Bode (TB) relation’s successful prediction of the period of Uranus was the main motivation that led to the search for another planet between Mars and Jupiter. This search led to the discovery of the asteroid Ceres and the rest of the asteroid belt. The TB relation can also provide useful hints about the periods of as-yet-undetected planets around other stars. In Bovaird & Lineweaver (2013) [1], we used a generalized TB relation to analyze 68 multi-planet systems with four or more detected exoplanets. We found that the majority of exoplanet systems in our sample adhered to the TB relation to a greater extent than the Solar System does. Thus, the TB relation can make useful predictions about the existence of as-yet-undetected planets in Kepler multi-planet systems. These predictions are one way to correct for the main obstacle preventing us from estimating the number of Earth-like planets in the universe. That obstacle is the incomplete sampling of planets of Earth-mass and smaller [2-5]. In [6], we use a generalized Titius-Bode relation to predict the periods of 228 additional planets in 151 of these Kepler multiples. These Titius-Bode-based predictions suggest that there are, on average, 2±1 planets in the habitable zone of each star. We also estimate the inclination of the invariable plane for each system and prioritize our planet predictions by their geometric probability to transit. We highlight a short list of 77 predicted planets in 40 systems with a high geometric probability to transit, resulting in an expected detection rate of ~15 per cent, ~3 times higher than the detection rate of our previous Titius-Bode-based predictions.References: [1] Bovaird, T. & Lineweaver, C.H (2013) MNRAS, 435, 1126-1138. [2] Dong S. & Zhu Z. (2013) ApJ, 778, 53 [3] Fressin F. et al. (2013) ApJ, 766, 81 [4] Petigura E. A. et al. (2013) PNAS, 110, 19273 [5] Silburt A. et al. (2014), ApJ (arXiv:1406.6048v2) [6] Bovaird, T., Lineweaver, C.H. & Jacobsen, S.K. (2015, in

  3. Calculation of the number of bits required for the estimation of the bit error ratio

    NASA Astrophysics Data System (ADS)

    Almeida, Álvaro J.; Silva, Nuno A.; Muga, Nelson J.; André, Paulo S.; Pinto, Armando N.

    2014-08-01

    We present a calculation of the required number of bits to be received in a communication system in order to achieve a given level of confidence in the estimated bit error ratio. The calculation assumes a binomial distribution function for the errors. The function is numerically evaluated and the results are compared with the ones obtained from Poissonian and Gaussian approximations. The performance in terms of the signal-to-noise ratio is also studied. We conclude that for higher numbers of detected errors the use of approximations allows faster and more efficient calculations, without loss of accuracy.
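
    A minimal sketch of the binomial calculation, together with the familiar Poisson shortcut for the zero-error case (n ≈ -ln(1 - CL)/BER):

    ```python
    import math

    def bits_required(ber, confidence=0.95, max_errors=0):
        """Smallest n such that observing <= max_errors errors among n bits
        has probability <= 1 - confidence when the true BER is `ber`."""
        def p_le(n):   # binomial P(k <= max_errors)
            return sum(math.comb(n, k) * ber**k * (1.0 - ber)**(n - k)
                       for k in range(max_errors + 1))
        lo, hi = 1, 1
        while p_le(hi) > 1.0 - confidence:   # grow until confidence reached
            lo, hi = hi, hi * 2
        while lo < hi:                       # bisect for the smallest n
            mid = (lo + hi) // 2
            if p_le(mid) <= 1.0 - confidence:
                hi = mid
            else:
                lo = mid + 1
        return lo

    print(bits_required(1e-3))               # ~3.0e3 bits for BER = 1e-3
    print(-math.log(1.0 - 0.95) / 1e-3)      # Poisson approximation, ~2996
    ```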

  4. Estimation of the lead thickness required to shield scattered radiation from synchrotron radiation experiments

    NASA Astrophysics Data System (ADS)

    Wroblewski, Thomas

    2015-03-01

    In the enclosure of a synchrotron radiation experiment using a monochromatic beam, secondary radiation arises from two effects, namely fluorescence and scattering. While fluorescence can be regarded as isotropic, the angular dependence of Compton scattering has to be taken into account if the shielding is not to become unreasonably thick. The scope of this paper is to clarify how the different factors, from the spectral properties of the source and the attenuation coefficient of the shielding, through the spectral and angular distribution of the scattered radiation, to the geometry of the experiment, influence the thickness of lead required to keep the dose rate outside the enclosure below the desired threshold.
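
    For orientation only, a crude narrow-beam sketch (monoenergetic photons, no buildup factor, assumed attenuation coefficient) of the thickness calculation the paper refines:

    ```python
    import math

    mu_pb = 60.0           # assumed linear attenuation of lead, 1/cm
                           # (order of magnitude for ~100 keV photons)
    dose_at_wall = 200.0   # hypothetical unshielded dose rate, uSv/h
    dose_limit = 0.5       # target outside the enclosure, uSv/h

    # Narrow-beam exponential attenuation: D = D0 * exp(-mu * x).
    x_cm = math.log(dose_at_wall / dose_limit) / mu_pb
    print(f"Required lead thickness: {x_cm * 10:.2f} mm")
    ```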

  5. Estimation of the dietary riboflavin required to maximize tissue riboflavin concentration in juvenile shrimp (Penaeus monodon).

    PubMed

    Chen, H Y; Hwang, G

    1992-12-01

    The riboflavin requirements of marine shrimp (Penaeus monodon) were evaluated in a 15-wk feeding trial. Juvenile shrimp (initial mean weight, 0.13 ± 0.05 g) were fed purified diets containing seven levels (0, 8, 12, 16, 20, 40 and 80 mg/kg diet) of supplemental riboflavin. There were no significant differences in weight gains, feed efficiency ratios and survival of shrimp over the dietary riboflavin range. The riboflavin concentrations in shrimp bodies increased with the increasing vitamin supplementation. Hemolymph (blood) glutathione reductase activity coefficient was not a sensitive and specific indicator of riboflavin status of the shrimp. The dietary riboflavin level required for P. monodon was found to be 22.3 mg/kg diet, based on the broken-line model analysis of body riboflavin concentrations. Shrimp fed unsupplemented diet (riboflavin concentration of 0.48 mg/kg diet) for 15 wk showed signs of deficiency: light coloration, irritability, protuberant cuticle at intersomites and short-head dwarfism.
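
    A minimal sketch of the broken-line (one-breakpoint) fit used to read off the requirement, with invented dose-response data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical body riboflavin concentration vs. dietary riboflavin.
    dose = np.array([0.5, 8.0, 12.0, 16.0, 20.0, 40.0, 80.0])    # mg/kg diet
    tissue = np.array([6.0, 14.0, 19.0, 23.0, 26.0, 27.0, 26.5])

    def broken_line(x, plateau, slope, breakpoint):
        """Linear rise up to the breakpoint, flat plateau afterwards;
        the breakpoint is interpreted as the dietary requirement."""
        return np.where(x < breakpoint,
                        plateau - slope * (breakpoint - x),
                        plateau)

    (plateau, slope, bp), _ = curve_fit(broken_line, dose, tissue,
                                        p0=(26.0, 1.0, 20.0))
    print(f"Estimated requirement: {bp:.1f} mg riboflavin/kg diet")
    ```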

  6. Dietary Protein Intake in Young Children in Selected Low-Income Countries Is Generally Adequate in Relation to Estimated Requirements for Healthy Children, Except When Complementary Food Intake Is Low.

    PubMed

    Arsenault, Joanne E; Brown, Kenneth H

    2017-02-15

    Background: Previous research indicates that young children in low-income countries (LICs) generally consume greater amounts of protein than published estimates of protein requirements, but this research did not account for protein quality based on the mix of amino acids and the digestibility of ingested protein.Objective: Our objective was to estimate the prevalence of inadequate protein and amino acid intake by young children in LICs, accounting for protein quality.Methods: Seven data sets with information on dietary intake for children (6-35 mo of age) from 6 LICs (Peru, Guatemala, Ecuador, Bangladesh, Uganda, and Zambia) were reanalyzed to estimate protein and amino acid intake and assess adequacy. The protein digestibility-corrected amino acid score of each child's diet was calculated and multiplied by the original (crude) protein intake to obtain an estimate of available protein intake. Distributions of usual intake were obtained to estimate the prevalence of inadequate protein and amino acid intake for each cohort according to Estimated Average Requirements.Results: The prevalence of inadequate protein intake was highest in breastfeeding children aged 6-8 mo: 24% of Bangladeshi and 16% of Peruvian children. With the exception of Bangladesh, the prevalence of inadequate available protein intake decreased by age 9-12 mo and was very low in all sites (0-2%) after 12 mo of age. Inadequate protein intake in children <12 mo of age was due primarily to low energy intake from complementary foods, not inadequate protein density.Conclusions: Overall, most children consumed protein amounts greater than requirements, except for the younger breastfeeding children, who were consuming low amounts of complementary foods. These findings reinforce previous evidence that dietary protein is not generally limiting for children in LICs compared with estimated requirements for healthy children, even after accounting for protein quality. However, unmeasured effects of infection and
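
    A minimal sketch of the adequacy assessment for one cohort — crude protein scaled by diet quality (PDCAAS), then compared against an EAR cut-point; every number here is an invented placeholder:

    ```python
    import numpy as np

    crude_protein = np.array([14.0, 9.5, 18.2, 11.0, 7.8])   # g/d usual intake
    pdcaas = np.array([0.80, 0.65, 0.90, 0.72, 0.60])        # per-child score
    weight_kg = np.array([8.5, 7.9, 10.2, 9.0, 8.1])

    # Available protein = crude protein x protein quality (PDCAAS).
    available = crude_protein * pdcaas

    # EAR cut-point method: prevalence of inadequacy = share of children
    # whose usual available intake falls below their requirement.
    ear_g_per_kg = 1.0      # assumed EAR for this age band, g/kg/d
    inadequate = available < ear_g_per_kg * weight_kg
    print(f"Prevalence of inadequate intake: {inadequate.mean():.0%}")
    ```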

  7. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Technical Reports Server (NTRS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.

  8. Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.

  9. Estimates of power requirements for a manned Mars rover powered by a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey

    1991-01-01

    This paper assesses the power requirement for a manned Mars rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kWe net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine (FPSE) yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle (CBC) using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kWe to 190 kg/kWe, and the total mass of the Rover vehicle varied depending upon the cruising speed.

  10. Estimation of irrigation requirement for wheat in southern Spain using a remote sensing-driven soil water balance

    NASA Astrophysics Data System (ADS)

    González, Laura; Bodas, Vicente; Espósito, Gabriel; Campos, Isidro; Aliaga, Jerónimo; Calera, Alfonso

    2013-04-01

    This paper aims to evaluate the use of a remote sensing-driven soil water balance to estimate the irrigation water requirements of wheat. The applied methodology is based on the dual crop coefficient approach proposed in the FAO-56 manual (Allen et al., 1998), where the basal crop coefficient is derived from a time series of remote sensing multispectral imagery that describes the growing cycle of wheat. This approach allows the estimation of evapotranspiration (ET) and irrigation water requirements by means of a soil water balance in the root layer. The assimilation of satellite data into the FAO-56 soil water balance is based on the relationship between spectral vegetation indices (VI) and the transpiration coefficient (Campos et al., 2010; Sánchez et al., 2010). Two approaches to plant transpiration estimation were analyzed: the basal crop coefficient methodology and the transpiration coefficient approach, described in the FAO-56 (Allen et al., 1998) and FAO-66 (Steduto et al., 2012) manuals, respectively. The model is computed at a daily time step, and the results analyzed in this work are the net irrigation water requirements and water stress estimates. Analysis of the results has been done by comparison with irrigation data (irrigation dates and volumes applied) provided by farmers for 28 wheat plots over the period 2004-2012 in the Spanish region of La Mancha, southern Spain, under different meteorological conditions. Total irrigation doses during the growing season vary from 200 mm to 700 mm. In some of the plots, soil moisture sensor data are available, which allowed comparison with modeled soil moisture. Net irrigation water requirements estimated by the proposed model show good agreement with the data, taking into account the efficiency of the different irrigation systems. Although the irrigation doses are generally greater than the irrigation water requirements, the crops could suffer water stress periods during the campaign, because real irrigation timing and
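
    A minimal sketch of a daily FAO-56-style root-zone balance driven by a VI-based basal crop coefficient; the forcing series, the linear VI-Kcb coefficients, and all soil parameters are invented for illustration:

    ```python
    import numpy as np

    TAW, RAW = 140.0, 70.0             # total / readily available water, mm
    Dr = 30.0                          # initial root-zone depletion, mm
    net_irrigation = 0.0

    et0 = np.full(120, 5.0)            # daily reference ET, mm
    rain = np.zeros(120)
    rain[::20] = 12.0                  # sparse rain events
    ndvi = np.linspace(0.2, 0.8, 120)  # toy crop development curve
    kcb = 1.36 * ndvi - 0.03           # assumed linear VI-Kcb relation

    for ET0, P, Kcb in zip(et0, rain, kcb):
        # FAO-56 water stress coefficient from current depletion.
        Ks = 1.0 if Dr <= RAW else max(0.0, (TAW - Dr) / (TAW - RAW))
        ETc = Ks * Kcb * ET0           # transpiration (soil evap. omitted)
        Dr = min(max(Dr + ETc - P, 0.0), TAW)
        if Dr > RAW:                   # irrigate back to field capacity
            net_irrigation += Dr
            Dr = 0.0

    print(f"Seasonal net irrigation requirement: {net_irrigation:.0f} mm")
    ```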

  11. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  12. Temperature averaging thermal probe

    NASA Technical Reports Server (NTRS)

    Kalil, L. F.; Reinhardt, V. (Inventor)

    1985-01-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  13. Temperature averaging thermal probe

    NASA Astrophysics Data System (ADS)

    Kalil, L. F.; Reinhardt, V.

    1985-12-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  14. A RAPID NON-DESTRUCTIVE METHOD FOR ESTIMATING ABOVEGROUND BIOMASS OF SALT MARSH GRASSES

    EPA Science Inventory

    Understanding the primary productivity of salt marshes requires accurate estimates of biomass. Unfortunately, these estimates vary enough within and among salt marshes to require large numbers of replicates if the averages are to be statistically meaningful. Large numbers of repl...

  15. Barriers against required nurse estimation models applying in Iran hospitals from health system experts’ point of view

    PubMed Central

    Tabatabaee, Seyed Saeed; Nekoie-Moghadam, Mahmood; Vafaee-Najar, Ali; Amiresmaili, Mohammad Reza

    2016-01-01

    Introduction One of the strategies for achieving effective nursing care is to design and implement a nursing staff estimation model. The purpose of this research was to determine barriers to applying models or norms for estimating the size of a hospital's nursing team. Methods This study was conducted from November 2015 to March 2016 among three levels of managers at the Ministry of Health, medical universities, and hospitals in Iran. We carried out a qualitative study using the Colaizzi method. We used semistructured and in-depth interviews with purposive, quota, and snowball sampling of 32 participants (10 informed experts in the area of human resources policymaking at the Ministry of Health, 10 decision makers in the employment and distribution of human resources in the treatment and administrative chancellors of medical universities, and 12 nursing managers in hospitals). The data were analyzed with Atlas.ti software version 6.0.15. Results The following 14 subthemes emerged from the data analysis: lack of a specific steward, weakness in attracting stakeholder contributions, lack of authorities' trust in the models, lack of mutual interests between stakeholders, shortage of nurses, financial deficit, non-native models, design of models by people unfamiliar with the nursing process, lack of attention to the nature of work in each ward, lack of attention to hospital classification, lack of transparency in defining models, reduced available nursing time, increased indirect activity of nurses, and outdated norms. The main themes were inappropriate planning and policymaking at high levels, resource constraints, poor design of models, and lack of model updating. Conclusion The results of the present study indicate that many barriers exist to applying models for estimating the size of a hospital's nursing team. Therefore, for designing an appropriate nursing staff estimation model and implementing it, in addition to considering the present barriers, identifying the norm required features

  16. Estimating shallow groundwater availability in small catchments using streamflow recession and instream flow requirements of rivers in South Africa

    NASA Astrophysics Data System (ADS)

    Ebrahim, Girma Y.; Villholth, Karen G.

    2016-10-01

    Groundwater is an important resource for multiple uses in South Africa. Hence, setting limits to its sustainable abstraction while assuring basic human needs is required. Because of prevalent data scarcity related to groundwater replenishment, which is the traditional basis for estimating groundwater availability, the present article presents a novel method for determining allocatable groundwater in quaternary (fourth-order) catchments through information on streamflow. Building on established methodologies for assessing baseflow, recession flow, and instream ecological flow requirements, the method determines, in a stepwise fashion, the annual available groundwater storage volume using linear reservoir theory, essentially linking low flows proportionally to upstream groundwater storages. The approach was trialled for twenty-one perennial and relatively undisturbed catchments with long-term and reliable streamflow records. Using the Desktop Reserve Model, instream flow requirements necessary to meet the present ecological state of the streams were determined, and baseflows in excess of these flows were converted into conservative estimates of allocatable groundwater storage on an annual basis. Results show that groundwater development potential exists in fourteen of the catchments, with upper limits to allocatable groundwater volumes (including present uses) ranging from 0.02 to 3.54 × 106 m3 a-1 (0.10-11.83 mm a-1) per catchment. With these volumes secured in 75% of years, variability between years is assumed to be manageable. A significant (R2 = 0.88) correlation between the baseflow index and the drainage time scale for the catchments underscores the physical basis of the methodology and also enables the procedure to be shortened by one step, omitting the recession flow analysis. The method serves as an important complementary tool for the assessment of the groundwater part of the Reserve and the Groundwater Resource Directed Measures in
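    The linear-reservoir link between baseflow and storage that the method relies on is compact enough to sketch; in the Python below, the recession fit, the ecological flow requirement efr, and all data are assumptions of the sketch, not the study's procedure.

        import numpy as np

        def recession_timescale(q):
            """Drainage time scale tau [days] from a strictly decreasing
            recession limb q [m3/day], via Q(t) = Q0 * exp(-t / tau)."""
            t = np.arange(len(q))
            slope, _ = np.polyfit(t, np.log(q), 1)   # log-linear recession fit
            return -1.0 / slope

        def allocatable_storage(baseflow, efr, tau):
            """Groundwater storage [m3] linked to baseflow in excess of the
            instream flow requirement efr, via the linear reservoir S = tau * Q."""
            excess = np.clip(baseflow - efr, 0.0, None)  # flow above the Reserve
            return tau * excess.mean()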

  17. Achronal averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2007-09-15

    The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
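    In standard notation, ANEC as described here is the requirement that, along every complete null geodesic with affine parameter lambda and tangent vector k^mu,

        \int_{-\infty}^{\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\, d\lambda \;\ge\; 0,

    and the achronal version discussed above demands this only on achronal null geodesics of self-consistent semiclassical spacetimes.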

  18. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than that from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (the uncertainty associated with processing any individual element) and sample size (the effective number of gaze directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head rotation and cone rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation than for gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
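    The equivalent noise fit referred to above conventionally assumes that the squared discrimination threshold grows with external stimulus noise as

        \sigma_{\mathrm{obs}}^{2} \;=\; \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n},

    so that the internal noise sigma_int is read off the low-noise asymptote and the effective sample size n off the high-noise asymptote. This is the standard form of the model, stated here for orientation; it is not quoted in the abstract itself.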

  19. The Future of the Army’s Civilian Workforce: Comparing Projected Inventory with Anticipated Requirements and Estimating Cost Under Different Personnel Policies

    DTIC Science & Technology

    2014-01-01

    in percentage terms) include Medical and Veterinary, Transport Equipment Operations, and Human Resources Management. Similar to commands, under... YORE –5 to –1, YORE 0 to 4, and YORE 5 and above. We then applied OPM's formula to estimate the average, per-employee cost of a RIF for an... Multiplying the number of employees involved in a RIF by the per-employee cost of a RIF produced a total estimated RIF cost. The basic formula

  20. The Spectral Form Factor Is Not Self-Averaging

    SciTech Connect

    Prange, R.

    1997-03-01

    The form factor, k(t), is the spectral statistic which best displays nonuniversal quasiclassical deviations from random matrix theory. Recent estimations of k(t) for a single spectrum found interesting new effects of this type. It was supposed that k(t) is self-averaging and thus did not require an ensemble average. We here argue that this supposition sometimes fails and that for many important systems an ensemble average is essential to see detailed properties of k(t). In other systems, notably the nontrivial zeros of the Riemann zeta function, it will be possible to see the nonuniversal properties by an analysis of a single spectrum. © 1997 The American Physical Society
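    For orientation, one common convention writes the form factor of N unfolded levels \epsilon_n as

        K(\tau) \;=\; \frac{1}{N} \left\langle \left| \sum_{n=1}^{N} e^{2\pi i\, \epsilon_n \tau} \right|^{2} \right\rangle,

    where the angle brackets denote precisely the ensemble (or running spectral) average whose necessity is at issue above; normalization and unfolding conventions vary between authors.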

  1. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function, or both, but their performance can be poor at practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood, and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems, with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression function. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
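    In the exponential-tilting special case named above, the weights solve a small convex dual problem; the Python sketch below calibrates weighted covariate means to a target moment vector (the data, the target, and the solver settings are assumptions of the sketch, not the authors' code).

        import numpy as np
        from scipy.optimize import minimize

        def exponential_tilting_weights(X, target):
            """Weights w_i proportional to exp(lam' x_i) on the rows of X
            such that the weighted covariate mean equals `target`."""
            def dual(lam):
                # Convex dual of the entropy objective; its stationarity
                # condition is exactly the moment-balance constraint.
                return np.log(np.exp(X @ lam).sum()) - lam @ target
            lam = minimize(dual, np.zeros(X.shape[1])).x
            w = np.exp(X @ lam)
            return w / w.sum()

    Calibrating the treated rows and the control rows separately to the combined-group covariate moments reproduces the three-way balance described in the abstract.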

  2. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
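    The GA computation named above reduces to a fixed-point iteration over sign-aligned observations; the Python below is a simplified, unweighted sketch (initialization, tolerance, and the absence of trimming are assumptions of the sketch).

        import numpy as np

        def grassmann_average(X, n_iter=100, tol=1e-10):
            """Average one-dimensional subspace of zero-mean data X (rows
            are observations); returns a unit vector spanning it."""
            q = X[0] / np.linalg.norm(X[0])          # initial subspace guess
            for _ in range(n_iter):
                signs = np.sign(X @ q)               # align each span +/- with q
                signs[signs == 0] = 1.0
                q_new = (signs[:, None] * X).mean(axis=0)
                q_new /= np.linalg.norm(q_new)
                if abs(q_new @ q) > 1.0 - tol:       # iterates have converged
                    return q_new
                q = q_new
            return q

    Replacing the mean in the update with a trimmed mean gives the TGA variant's robustness to pixel outliers.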

  3. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.

  4. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; averaging them therefore makes no sense. The associated sums of AIC model weights recommended for assessing the relative importance of individual predictors are really a measure of the relative importance of models, carrying little information about contributions by individual predictors compared with measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations of their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  5. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  6. Thermal requirements and estimate number of generations of Palmistichus elaeisis (Hymenoptera: Eulophidae) in different Eucalyptus plantations regions.

    PubMed

    Pereira, F F; Zanuncio, J C; Oliveira, H N; Grance, E L V; Pastori, P L; Gava-Oliveira, M D

    2011-05-01

    To use Palmistichus elaeisis Delvare and LaSalle, 1993 (Hymenoptera: Eulophidae) in a biological control programme against Thyrinteina arnobia (Stoll, 1782) (Lepidoptera: Geometridae), it is necessary to study its thermal requirements, because temperature affects its metabolism and bioecology. The objective was to determine the thermal requirements and estimate the number of generations of P. elaeisis in different Eucalyptus plantation regions. After 24 hours in contact with the parasitoid, parasitized pupae were held at 16, 19, 22, 25, 28 and 31 °C, 70 ± 10% relative humidity and a 14-hour photophase. The duration of the life cycle of P. elaeisis decreased with increasing temperature. At 31 °C the parasitoid could not complete its cycle in T. arnobia pupae. The emergence of P. elaeisis was not affected by temperature, except at 31 °C. The number of individuals ranged between six and 1238 per pupa, being highest at 16 °C. The lower thermal threshold of development (Tb) and the thermal constant (K) of this parasitoid were 3.92 °C and 478.85 degree-days, respectively, allowing for the completion of 14.98 generations per year in Linhares, Espírito Santo State, 13.87 in Pompéu and 11.75 in Viçosa, Minas Gerais State, and 14.10 in Dourados, Mato Grosso do Sul State.
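    The generation counts follow from simple degree-day arithmetic, as the Python check below shows; Tb and K are the reported values, while the mean annual temperature is an illustrative assumption chosen to roughly reproduce the Linhares figure.

        TB = 3.92    # lower thermal threshold [deg C], from the abstract
        K = 478.85   # thermal constant [degree-days per generation]

        def generations_per_year(mean_annual_temp_c):
            degree_days = (mean_annual_temp_c - TB) * 365.0
            return degree_days / K

        # An assumed mean annual temperature near 23.6 deg C gives ~15
        # generations per year, close to the 14.98 reported for Linhares:
        print(round(generations_per_year(23.6), 1))   # -> 15.0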

  7. Phytoplankton Productivity in an Arctic Fjord (West Greenland): Estimating Electron Requirements for Carbon Fixation and Oxygen Production

    PubMed Central

    Hancke, Kasper; Dalsgaard, Tage; Sejr, Mikael Kristian; Markager, Stiig; Glud, Ronnie Nøhr

    2015-01-01

    Accurate quantification of pelagic primary production is essential for quantifying the marine carbon turnover and the energy supply to the food web. If the electron requirement (Κ) for carbon (C) fixation (ΚC) and oxygen (O2) production (ΚO2) is known, variable fluorescence can be used to quantify primary production in microalgae, thereby increasing the spatial and temporal resolution of measurements compared with traditional methods. Here we quantify ΚC and ΚO2 through measurements of pulse amplitude modulated (PAM) fluorometry, C fixation and O2 production in an Arctic fjord (Godthåbsfjorden, W Greenland). Through short-term (2 h) and long-term (24 h) experiments, rates of electron transfer (ETRPSII), C fixation and/or O2 production were quantified and compared. Absolute rates of ETR were derived by accounting for Photosystem II light absorption and spectral light composition. Two-hour incubations revealed a linear relationship between ETRPSII and gross 14C fixation (R2 = 0.81) during light-limited photosynthesis, giving a ΚC of 7.6 ± 0.6 (mean ± S.E.) mol é (mol C)−1. Diel net rates also demonstrated a linear relationship between ETRPSII and C fixation, giving a ΚC of 11.2 ± 1.3 mol é (mol C)−1 (R2 = 0.86). For net O2 production the electron requirement was lower than for net C fixation, at 6.5 ± 0.9 mol é (mol O2)−1 (R2 = 0.94). This, however, is still an electron requirement 1.6 times higher than the theoretical minimum for O2 production [i.e. 4 mol é (mol O2)−1]. The discrepancy is explained by respiratory activity and non-photochemical electron requirements, and the variability is discussed. In conclusion, the bio-optical method and the derived electron requirements support conversion of ETR to units of C or O2, paving the road for improved spatial and temporal resolution of primary production estimates. PMID:26218096
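    In plain notation, the electron requirements estimated above are ratios of the absolute PSII electron transport rate to the measured production rates,

        K_{C} \;=\; \frac{\mathrm{ETR}_{\mathrm{PSII}}}{P_{C}}, \qquad K_{O_2} \;=\; \frac{\mathrm{ETR}_{\mathrm{PSII}}}{P_{O_2}},

    with the theoretical minimum of 4 mol electrons (mol O2)−1 set by the four-electron oxidation of water at Photosystem II.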

  8. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  9. Estimation of protein requirement for maintenance in adult parrots (Amazona spp.) by determining inevitable N losses in excreta.

    PubMed

    Westfahl, C; Wolf, P; Kamphues, J

    2008-06-01

    Especially in older pet birds, unnecessary overconsumption of protein, which presumably occurs in human custody, should be avoided in view of a potential decrease in the efficiency of the excretory organs (liver, kidney). Inevitable nitrogen (N) losses enable the estimation of the protein requirement for maintenance, because these losses have at least to be replaced to maintain N equilibrium. To determine the inevitable N losses in excreta of amazons (Amazona spp.), a frugivorous-granivorous avian species from South America, adult birds (n = 8) were fed a synthetic, nearly N-free diet (in dry matter, DM: 37.8% starch, 26.6% sugar, 11.0% fat) for 9 days. Throughout the trial, feed and water intake were recorded, and the amounts of excreta were measured and analysed for DM and ash content, N (Dumas analysis) and uric acid (enzymatic-photometric analysis) content. Effects of the N-free diet on body weight (BW) and protein-related blood parameters were quantified and compared with data collected during a previous 4-day period in which a commercial seed mixture was offered to the birds. After feeding an almost N-free diet for 9 days, under the conditions of a DM intake (20.1 g DM/bird/day) as with seeds and a digestibility of organic matter comparable to that when fed seeds (82% and 76%, respectively), it was possible to quantify the inevitable N losses via excrement as 87.2 mg/bird/day or 172.5 mg/kg BW(0.75)/day. Assuming a utilization coefficient of 0.57, this leads to an estimated protein requirement of approximately 1.9 g/kg BW(0.75)/day (this value does not consider further N losses via feathers and desquamated cells, and presupposes a balanced amino acid pattern).
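    The step from N losses to the protein estimate uses the conventional factor of 6.25 g crude protein per g nitrogen; the factor is an assumption here, consistent with but not stated in the abstract.

        n_loss = 172.5        # inevitable N losses [mg/kg BW^0.75/day]
        n_to_protein = 6.25   # g protein per g N (protein is ~16% N); assumed
        utilization = 0.57    # utilization coefficient from the abstract

        protein_mg = n_loss * n_to_protein / utilization
        print(round(protein_mg / 1000, 1))   # -> 1.9 [g/kg BW^0.75/day]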

  10. The paired deuterated retinol dilution technique can be used to estimate the daily vitamin A intake required to maintain a targeted whole body vitamin A pool size in men.

    PubMed

    Haskell, Marjorie J; Jamil, Kazi M; Peerson, Janet M; Wahed, Mohammed A; Brown, Kenneth H

    2011-03-01

    The estimated average requirement (EAR) for vitamin A (VA) of adult males is based on the amount of dietary VA required to maintain adequate function and provide a modest liver VA reserve (0.07 μmol/g). In the present study, the paired-deuterated retinol dilution technique was used to estimate changes in VA pool size in Bangladeshi men from low-income, urban neighborhoods who had small initial VA pool sizes (0.059 ± 0.032 mmol, or 0.047 ± 0.025 μmol/g liver; n = 16). The men were supplemented for 60 d with 1 of 8 different levels of dietary VA, ranging from 100 to 2300 μg/d (2 men/dietary VA level). VA pool size was estimated before and after the supplementation period. The mean change (plus or minus) in VA pool size in the men was plotted against their corresponding levels of daily VA intake and a regression line was fit to the data. The level of intake at which the regression line crossed the x-axis (where estimates of VA pool size remained unchanged) was used as an estimate of the EAR. A VA intake of 254-400 μg/d was sufficient to maintain a small VA pool size (0.059 ± 0.032 mmol) in the Bangladeshi men, corresponding to a VA intake of 362-571 μg/d for a 70-kg U.S. man, which is lower than their current EAR of 625 μg/d. The data suggest that the paired-deuterated retinol dilution technique could be used for estimating the EAR for VA for population subgroups for which there are currently no direct estimates.

  11. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  12. Indicator Amino Acid-Derived Estimate of Dietary Protein Requirement for Male Bodybuilders on a Nontraining Day Is Several-Fold Greater than the Current Recommended Dietary Allowance.

    PubMed

    Bandegan, Arash; Courtney-Martin, Glenda; Rafii, Mahroukh; Pencharz, Paul B; Lemon, Peter Wr

    2017-02-08

    Background: Despite a number of studies indicating increased dietary protein needs in bodybuilders with the use of the nitrogen balance technique, the Institute of Medicine (2005) has concluded, based in part on methodologic concerns, that "no additional dietary protein is suggested for healthy adults undertaking resistance or endurance exercise." Objective: The aim of the study was to assess the dietary protein requirement of healthy young male bodybuilders (with ≥3 y of training experience) on a nontraining day by measuring the oxidation of ingested l-[1-(13)C]phenylalanine to (13)CO2 in response to graded intakes of protein [indicator amino acid oxidation (IAAO) technique]. Methods: Eight men (means ± SDs: age, 22.5 ± 1.7 y; weight, 83.9 ± 11.6 kg; 13.0% ± 6.3% body fat) were studied at rest on a nontraining day, on several occasions (4-8 times) each, with protein intakes ranging from 0.1 to 3.5 g ⋅ kg(-1) ⋅ d(-1), for a total of 42 experiments. The diets provided energy at 1.5 times each individual's measured resting energy expenditure and were isoenergetic across all treatments. Protein was fed as an amino acid mixture based on the protein pattern in egg, except for phenylalanine and tyrosine, which were maintained at constant amounts across all protein intakes. For 2 d before the study, all participants consumed 1.5 g protein ⋅ kg(-1) ⋅ d(-1). On the study day, the protein requirement was determined by identifying the breakpoint in the F(13)CO2 with graded amounts of dietary protein [mixed-effects change-point regression analysis of F(13)CO2 (labeled tracer oxidation in breath)]. Results: The Estimated Average Requirement (EAR) of protein and the upper 95% CI RDA for these young male bodybuilders were 1.7 and 2.2 g ⋅ kg(-1) ⋅ d(-1), respectively. Conclusion: These IAAO data suggest that the protein EAR and recommended intake for male bodybuilders at rest on a nontraining day exceed the current recommendations of the Institute of Medicine by ∼2
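    The breakpoint identification referred to above can be sketched as a biphasic least-squares fit: F(13)CO2 declines with intake up to the breakpoint (the EAR estimate) and is flat beyond it. The Python below is a simplified fixed-effects stand-in for the mixed-effects change-point regression actually used; the data, bounds, and model form are assumptions of the sketch.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fit_breakpoint(intake, f13co2, lo=0.5, hi=3.0):
            """Return the protein intake [g/kg/d] at which f13co2 plateaus."""
            intake = np.asarray(intake, dtype=float)
            f13co2 = np.asarray(f13co2, dtype=float)
            def sse(bp):
                x = np.minimum(intake, bp)   # sloped below bp, flat above
                A = np.column_stack([np.ones_like(x), x])
                beta, *_ = np.linalg.lstsq(A, f13co2, rcond=None)
                resid = f13co2 - A @ beta
                return resid @ resid
            return minimize_scalar(sse, bounds=(lo, hi), method="bounded").x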

  13. Rotational averaging of multiphoton absorption cross sections

    NASA Astrophysics Data System (ADS)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  14. Rotational averaging of multiphoton absorption cross sections.

    PubMed

    Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  15. Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction

    PubMed Central

    Ahlfors, Seppo P.; Hinrichs, Hermann

    2016-01-01

    Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
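    The core of the idea can be sketched for a single MEG channel: resample each cardiac cycle onto a common time base, average into a template, and subtract the template resampled back to each cycle's length. In the Python below, R-peak detection is assumed done, the template is a plain mean rather than the weighted moving average of the published method, and all settings are assumptions of the sketch.

        import numpy as np

        def rmas_single_channel(meg, r_peaks, n_avg=50):
            """Subtract a resampled cardiac template from one MEG channel,
            given ECG R-peak sample indices r_peaks (ascending ints)."""
            clean = meg.astype(float).copy()
            cycles = list(zip(r_peaks[:-1], r_peaks[1:]))
            med_len = int(np.median([b - a for a, b in cycles]))
            common = np.linspace(0.0, 1.0, med_len)
            # Resample every cycle onto the common base and average a
            # number of them into the cardiac template.
            epochs = [np.interp(common, np.linspace(0.0, 1.0, b - a), meg[a:b])
                      for a, b in cycles]
            template = np.mean(epochs[:n_avg], axis=0)
            for a, b in cycles:
                # Resample the template to this cycle's length and subtract.
                fitted = np.interp(np.linspace(0.0, 1.0, b - a), common, template)
                clean[a:b] -= fitted
            return clean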

  16. Scaling registration of multiview range scans via motion averaging

    NASA Astrophysics Data System (ADS)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of a scene or object requires registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, it presents a method to estimate the overlap percentages of all scan pairs involved in multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for those scan pairs with high overlap percentages. Subsequently, the proposed motion averaging algorithm transforms these relative motions into global motions of multiview registration. In addition, parallel computation is introduced to increase the efficiency of multiview registration. Furthermore, an error criterion is presented for accuracy evaluation of multiview registration results, which makes it easy to compare the results of different multiview registration approaches. Experimental results carried out with publicly available datasets demonstrate its superiority over related approaches.

  17. A Site-sPecific Agricultural water Requirement and footprint Estimator (SPARE:WATER 1.0)

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.

    2013-07-01

    The agricultural water footprint addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). By considering site-specific properties when calculating the crop water footprint, this methodology can be used to support decision making in the agricultural sector on local to regional scale. We therefore developed the spatial decision support system SPARE:WATER that allows us to quantify green, blue and grey water footprints on regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation methods rather than optimal conditions to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water is defined as the water needed to leach out salt from the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept where we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km3 yr-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional estimation of crop water footprints.

  18. Capital Requirements Estimating Model (CREMOD) for electric utilities. Volume I. Methodology description, model, description, and guide to model applications. [For each year up to 1990

    SciTech Connect

    Collins, D E; Gammon, J; Shaw, M L

    1980-01-01

    The Capital Requirements Estimating Model for the Electric Utilities (CREMOD) is a system of programs and data files used to estimate the capital requirements of the electric utility industry for each year between the current one and 1990. CREMOD disaggregates new electric plant capacity levels from the Mid-term Energy Forecasting System (MEFS) Integrating Model solution over time using actual projected commissioning dates. It computes the effect of dispersing new plant and capital expenditures over relatively long construction lead times on aggregate capital requirements for each year. Finally, it incorporates the effects of real escalation in the electric utility construction industry on these requirements and computes the necessary transmission and distribution expenditures. This model was used in estimating the capital requirements of the electric utility sector. These results were used in compiling the aggregate capital requirements for the financing of energy development as published in the 1978 Annual Report to Congress. This volume, Vol. I, explains CREMOD's methodology, functions, and applications.

  19. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring the averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variability within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method for averaging EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We additionally provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
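    A minimal version of the meta-analysis strategy is inverse-variance pooling of per-experiment estimates on the log scale; in the Python sketch below, the log transform, the fixed-effect weighting, and the variable names are assumptions of the sketch (a random-effects variant would add a between-experiment variance component).

        import numpy as np

        def pool_ec50(ec50, se_log):
            """ec50: per-experiment EC50 estimates; se_log: standard errors
            of log(EC50). Returns the pooled EC50 and its log-scale SE."""
            y = np.log(np.asarray(ec50, dtype=float))
            w = 1.0 / np.asarray(se_log, dtype=float) ** 2   # inverse variance
            pooled = (w * y).sum() / w.sum()
            se = np.sqrt(1.0 / w.sum())
            return np.exp(pooled), se

        # e.g. pool_ec50([1.2, 0.9, 1.5], [0.10, 0.15, 0.20])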

  20. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    SciTech Connect

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same

  1. Differences in concentration lengths computed using band-averaged mass extinction coefficients and band-averaged transmittance

    NASA Astrophysics Data System (ADS)

    Farmer, W. Michael

    1990-09-01

    An understanding of how broad-band transmittance is affected by the atmosphere is crucial to accurately predicting how broad-band sensors such as FLIRs will perform. This is particularly true for sensors required to function in an environment where countermeasures such as smokes/obscurants have been used to limit sensor performance. A common method of estimating the attenuation capabilities of smokes/obscurants released in the atmosphere to defeat broad-band sensors is to use a band-averaged extinction coefficient with concentration-length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, and can lead to band averages of the relative transmittance that differ significantly from those obtained using the source spectra, sensor response, and normal atmospheric transmission. In this paper we discuss the differences that occur in predicting relative transmittance as a function of concentration length when using band-averaged mass extinction coefficients versus computing the band-averaged transmittance as a function of source spectra. Two examples are provided to illustrate the differences in results. The first example is applicable to 8- to 14-μm band transmission through natural fogs. The second example considers 3- to 5-μm transmission through phosphorus smoke produced at 17% and 90% relative humidity. The results show major differences in the prediction of concentration-length values by the two methods when the relative transmittance falls below about 20%.
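    The two computations contrasted above differ only in where the spectral averaging happens; in the Python sketch below, `weights` stands for the product of source spectrum, sensor response, and ambient atmospheric transmittance on a spectral grid, and all inputs are assumptions of the sketch.

        import numpy as np

        def t_from_band_averaged_alpha(alpha, weights, cl):
            """Beer-Bouguer law applied with a single band-averaged mass
            extinction coefficient (the shortcut criticized above)."""
            alpha_bar = (weights * alpha).sum() / weights.sum()
            return np.exp(-alpha_bar * cl)

        def t_band_averaged(alpha, weights, cl):
            """Properly band-averaged transmittance: average the spectral
            transmittance over the weighted band."""
            return (weights * np.exp(-alpha * cl)).sum() / weights.sum()

    Because exp(-alpha * CL) is convex in alpha, the properly band-averaged transmittance always exceeds the value computed from the band-averaged coefficient, and the gap grows with concentration length, consistent with the divergence below roughly 20% transmittance reported here.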

  2. Radial averages of astigmatic TEM images.

    PubMed

    Fernando, K Vince

    2008-10-01

    The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough, then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200), but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth-order Bessel function of the first kind, and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the difference of the defoci along the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth-order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.

  3. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  4. First Order Estimates of Energy Requirements for Pollution Control. Interagency Energy-Environment Research and Development Program Report.

    ERIC Educational Resources Information Center

    Barker, James L.; And Others

    This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…

  5. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the
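    A classical starting point for demand-storage relations of this kind is the sequent-peak computation sketched below in Python; the report's nonsequential-mass-curve technique and its frequency analysis differ in detail, so this is an illustration of the underlying idea only, with all inputs assumed.

        import numpy as np

        def required_storage(inflow, demand):
            """Reservoir storage needed to meet `demand` from `inflow`
            without failure (volumes per period, equal-length arrays)."""
            deficit = 0.0   # running cumulative shortfall
            worst = 0.0     # largest shortfall seen = required storage
            for q, d in zip(inflow, demand):
                deficit = max(0.0, deficit + d - q)
                worst = max(worst, deficit)
            return worst

        # e.g. required_storage(monthly_runoff_mm, np.full(120, 25.0))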

  6. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  7. Estimating Temperature Retrieval Accuracy Associated With Thermal Band Spatial Resolution Requirements for Center Pivot Irrigation Monitoring and Management

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Irons, James; Spruce, Joseph P.; Underwood, Lauren W.; Pagnutti, Mary

    2006-01-01

    This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for thermal remote sensed data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, and direct measures of spatial resolution are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.

  8. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.

  9. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  10. Estimates for ELF effects: noise-based thresholds and the number of experimental conditions required for empirical searches.

    PubMed

    Weaver, J C; Astumian, R D

    1992-01-01

    Interactions between physical fields and biological systems present difficult conceptual problems. Complete biological systems, even isolated cells, are exceedingly complex. This argues against the pursuit of theoretical models, with the possible consequence that only experimental studies should be considered. In contrast, electromagnetic fields are well understood. Further, some subsystems of cells (viz. cell membranes) can be reasonably represented by physical models. This argues for the pursuit of theoretical models which quantitatively describe interactions of electromagnetic fields with that subsystem. Here we consider the hypothesis that electric fields, not magnetic fields, are the source of interactions. From this it follows that the cell membrane is a relevant subsystem, as the membrane is much more resistive than the intra- or extracellular regions. A general class of interactions is considered: electroconformational changes associated with the membrane. Expected results of such an approach include the dependence of the interaction on key parameters (e.g., cell size, field magnitude, frequency, and exposure time), constraints on threshold exposure conditions, and insight into how experiments might be designed. Further, because it is well established that strong and moderate electric fields interact significantly with cells, estimates of the extrapolated interaction for weaker fields can be sought. By employing signal-to-noise (S/N) ratio criteria, theoretical models can also be used to estimate threshold magnitudes. These estimates are particularly relevant to in vitro conditions, for which most biologically generated background fields are absent. Finally, we argue that if theoretical model predictions are unavailable to guide the selection of experimental conditions, an overwhelmingly large number of different conditions will be needed to find, establish, and characterize bioelectromagnetic effects in an empirical search. This is contrasted with well

  11. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  12. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  13. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  14. Requirements for estimation of doses from contaminants dispersed by a 'dirty bomb' explosion in an urban area.

    PubMed

    Andersson, K G; Mikkelsen, T; Astrup, P; Thykier-Nielsen, S; Jacobsen, L H; Hoe, S C; Nielsen, S P

    2009-12-01

    The ARGOS decision support system is currently being extended to enable estimation of the consequences of terror attacks involving chemical, biological, nuclear and radiological substances. This paper presents elements of the framework that will be applied in ARGOS to calculate the dose contributions from contaminants dispersed in the atmosphere after a 'dirty bomb' explosion. Conceptual methodologies are presented which describe the various dose components on the basis of knowledge of time-integrated contaminant air concentrations. Also the aerosolisation and atmospheric dispersion in a city of different types of conceivable contaminants from a 'dirty bomb' are discussed.
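    For the inhalation component, for example, such methodologies typically combine the time-integrated air concentration with a breathing rate and a nuclide-specific dose coefficient,

        D_{\mathrm{inh}} \;=\; \chi \cdot B \cdot e_{\mathrm{inh}},

    where \chi is the time-integrated concentration [Bq s m^{-3}], B the breathing rate [m^3 s^{-1}], and e_{\mathrm{inh}} the inhalation dose coefficient [Sv Bq^{-1}]. This is the generic health-physics form; the specific formulation implemented in ARGOS is not given in the abstract.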

  15. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
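
    The underlying construction is short enough to state here. Given a finite symmetry group G acting on the sphere and any base function f, averaging f over the group produces a function invariant under every element of G (a standard construction; the notation is ours, not the authors'):

        \bar{f}(x) = \frac{1}{|G|} \sum_{g \in G} f(g^{-1} x), \qquad \bar{f}(hx) = \bar{f}(x) \quad \text{for all } h \in G,

    since left-multiplication by h merely permutes the terms of the sum. Coloring the sphere by \bar{f} then yields the desired polyhedral symmetry.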

  16. Prediction of broadband attenuation computed using band-averaged mass extinction coefficients and band-averaged transmittance

    NASA Astrophysics Data System (ADS)

    Farmer, W. M.

    1991-09-01

    A common method of estimating the attenuation capabilities of military smokes/obscurants is to use a band-averaged mass-extinction coefficient with concentration-length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation on broadband transmittance characteristics, which can significantly affect broadband transmittance. The differences that can occur in predicting relative transmittance as a function of concentration length by using band-averaged mass-extinction coefficients as opposed to more properly computing the band-averaged transmittance are discussed in this paper. Two examples are provided to illustrate the differences in results. The first example considers 3- to 5-micron and 8- to 14-micron band transmission through natural fogs. The second example considers 3- to 5-micron and 8- to 12-micron transmission through phosphorus-derived smoke (a common military obscurant) produced at 17 percent and at 90 percent relative humidity. Major differences are found in the values of concentration lengths predicted by the two methods when the transmittance relative to an unobscured atmosphere falls below about 20 percent. These results can affect conclusions concerning the detection of targets in smoke screens, smoke concentration lengths required to obscure a target, and radiative transport through polluted atmospheres.
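
    The distinction the paper draws is easy to demonstrate numerically. The sketch below contrasts the two computations for a single band; the spectral shapes, extinction curve and concentration-length are invented for illustration, not taken from the paper:

        import numpy as np

        # Hypothetical spectral grid over an 8- to 14-micron band.
        wl = np.linspace(8.0, 14.0, 601)               # wavelength, microns
        alpha = 0.5 + 0.3 * np.sin(wl)                 # mass extinction coefficient, m^2/g (invented)
        S = np.exp(-0.5 * ((wl - 10.0) / 1.5) ** 2)    # source spectrum x sensor response (invented)
        CL = 2.0                                       # concentration-length, g/m^2

        # Method 1: band-averaged mass extinction coefficient inside Beer-Bouguer.
        alpha_bar = np.trapz(S * alpha, wl) / np.trapz(S, wl)
        T_coeff = np.exp(-alpha_bar * CL)

        # Method 2: band-averaged transmittance (average the transmittance itself).
        T_band = np.trapz(S * np.exp(-alpha * CL), wl) / np.trapz(S, wl)

        print(T_coeff, T_band)

    By Jensen's inequality the band-averaged transmittance always exceeds the coefficient-based value, and the gap widens as CL grows, consistent with the paper finding the largest discrepancies at low transmittance.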

  17. Estimates of electricity requirements for the recovery of mineral commodities, with examples applied to sub-Saharan Africa

    USGS Publications Warehouse

    Bleiwas, Donald I.

    2011-01-01

    To produce materials from mine to market it is necessary to overcome obstacles that include the force of gravity, the strength of molecular bonds, and technological inefficiencies. These challenges are met by the application of energy to accomplish the work that includes the direct use of electricity, fossil fuel, and manual labor. The tables and analyses presented in this study contain estimates of electricity consumption for the mining and processing of ores, concentrates, intermediate products, and industrial and refined metallic commodities on a kilowatt-hour per unit basis, primarily the metric ton or troy ounce. Data contained in tables pertaining to specific currently operating facilities are static, as the amount of electricity consumed to process or produce a unit of material changes over time for a great number of reasons. Estimates were developed from diverse sources that included feasibility studies, company-produced annual and sustainability reports, conference proceedings, discussions with government and industry experts, journal articles, reference texts, and studies by nongovernmental organizations.

  18. Effects of contact network structure on epidemic transmission trees: implications for data required to estimate network structure.

    PubMed

    Carnegie, Nicole Bohme

    2017-02-13

    Understanding the dynamics of disease spread is key to developing effective interventions to control or prevent an epidemic. The structure of the network of contacts over which the disease spreads has been shown to have a strong influence on the outcome of the epidemic, but an open question remains as to whether it is possible to estimate contact network features from data collected in an epidemic. The approach taken in this paper is to examine the distributions of epidemic outcomes arising from epidemics on networks with particular structural features to assess whether that structure could be measured from epidemic data and what other constraints might be needed to make the problem identifiable. To this end, we vary the network size, mean degree, and transmissibility of the pathogen, as well as the network feature of interest: clustering, degree assortativity, or attribute-based preferential mixing. We record several standard measures of the size and spread of the epidemic, as well as measures that describe the shape of the transmission tree in order to ascertain whether there are detectable signals in the final data from the outbreak. The results suggest that there is potential to estimate contact network features from transmission trees or pure epidemic data, particularly for diseases with high transmissibility or for which the relevant contact network is of low mean degree. Copyright © 2017 John Wiley & Sons, Ltd.
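
    The experimental design described above can be reproduced in miniature. The sketch below runs a discrete-time SIR outbreak on random graphs whose rewiring parameter controls clustering; all parameter values are illustrative, and the networkx library is assumed to be available:

        import random
        import networkx as nx

        def outbreak_size(G, beta, seed=0):
            """Discrete-time SIR: each infected node infects each susceptible
            neighbour with probability beta per step, then recovers."""
            rng = random.Random(seed)
            infected = {rng.choice(list(G.nodes))}
            recovered = set()
            while infected:
                new = set()
                for u in infected:
                    for v in G.neighbors(u):
                        if v not in infected and v not in recovered and rng.random() < beta:
                            new.add(v)
                recovered |= infected
                infected = new - recovered
            return len(recovered)

        # Watts-Strogatz graphs share mean degree but differ in clustering as
        # the rewiring probability p rises (high p -> low clustering).
        for p in (0.0, 0.1, 1.0):
            G = nx.watts_strogatz_graph(n=500, k=6, p=p, seed=1)
            sizes = [outbreak_size(G, beta=0.2, seed=s) for s in range(50)]
            print(p, sum(sizes) / len(sizes))

    Recording tree-shape statistics for each simulated outbreak, rather than only the final size, is the additional step the paper takes in its search for identifiable signals.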

  19. (U) Estimating the Photonics Budget, Resolution, and Signal Requirements for a Multi-Monochromatic X-ray Imager

    SciTech Connect

    Tregillis, Ian Lee

    2016-09-22

    This document examines the performance of a generic flat-mirror multi-monochromatic imager (MMI), with special emphasis on existing instruments at NIF and Omega. We begin by deriving the standard equation for the mean number of photons detected per resolution element. The pinhole energy bandwidth is a contributing factor; this is dominated by the finite size of the source and may be considerable. The most common method for estimating the spatial resolution of such a system (quadrature addition) is, technically, mathematically invalid for this case. However, under the proper circumstances it may produce good estimates compared to a rigorous calculation based on the convolution of point-spread functions. Diffraction is an important contribution to the spatial resolution. Common approximations based on Fraunhofer (far-field) diffraction may be inappropriate and misleading, as the instrument may reside in multiple regimes depending upon its configuration or the energy of interest. It is crucial to identify the correct diffraction regime; Fraunhofer and Fresnel (near-field) diffraction profiles are substantially different, the latter being considerably wider. Finally, we combine the photonics and resolution analyses to derive an expression for the minimum signal level such that the resulting images are not dominated by photon statistics. This analysis is consistent with observed performance of the NIF MMI.
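
    The regime check the author emphasizes reduces to computing a Fresnel number. A minimal sketch; the numerical values are illustrative, not taken from the NIF or Omega instruments:

        def fresnel_number(aperture_radius_m, wavelength_m, distance_m):
            """N_F = a^2 / (lambda * L). N_F << 1 suggests Fraunhofer (far-field)
            diffraction; N_F of order 1 or larger suggests Fresnel (near-field),
            where the diffraction profile is considerably wider."""
            return aperture_radius_m ** 2 / (wavelength_m * distance_m)

        # A 5-micron pinhole radius, ~2.5 keV photons (lambda ~ 0.5 nm),
        # pinhole-to-detector distance 0.5 m: N_F ~ 0.1 here, but the regime
        # shifts with photon energy and instrument geometry.
        print(fresnel_number(5e-6, 0.5e-9, 0.5))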

  20. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
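
    The correlation area method itself requires the spatial correlation structure of the field, which the abstract only alludes to. As a simplified stand-in, the sketch below combines a point gauge, a flight-line estimate and an areal satellite estimate by precision weighting; the numbers and the weighting rule are illustrative, not the authors' algorithm:

        import numpy as np

        # (value, error variance) for snow water equivalent in mm from a ground
        # gauge, an airborne gamma flight line, and a satellite areal estimate.
        values = np.array([120.0, 105.0, 98.0])
        variances = np.array([400.0, 225.0, 100.0])

        w = (1.0 / variances) / np.sum(1.0 / variances)    # precision weights
        mean_areal = np.sum(w * values)
        se_areal = np.sqrt(1.0 / np.sum(1.0 / variances))  # accuracy of the mean

        print(mean_areal, se_areal)

    The method in the paper generalizes this idea by letting each measurement's weight reflect the basin area it most accurately represents.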

  1. Ensemble Bayesian model averaging using Markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
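
    For concreteness, the EM iteration for Gaussian-kernel BMA is compact enough to sketch. The data below are synthetic, and a single common kernel variance is assumed for simplicity:

        import numpy as np

        rng = np.random.default_rng(0)
        n, K = 200, 3
        y = rng.normal(20.0, 3.0, n)                          # verifying observations
        F = y[:, None] + rng.normal([0.5, -1.0, 0.0],
                                    [1.0, 2.0, 1.5], (n, K))  # synthetic ensemble forecasts

        w = np.full(K, 1.0 / K)                               # BMA weights
        s2 = 4.0                                              # common kernel variance
        for _ in range(200):                                  # EM iterations
            # E-step: responsibility of member k for observation i.
            dens = np.exp(-0.5 * (y[:, None] - F) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
            z = w * dens
            z /= z.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights and variance.
            w = z.mean(axis=0)
            s2 = np.sum(z * (y[:, None] - F) ** 2) / n

        print(w, s2)

    DREAM replaces this deterministic iteration with MCMC sampling of the weights and variances, which yields posterior uncertainty for them rather than a point estimate.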

  2. Different Location Sampling Frequencies by Satellite Tags Yield Different Estimates of Migration Performance: Pooling Data Requires a Common Protocol

    PubMed Central

    Tanferna, Alessandro; López-Jiménez, Lidia; Blas, Julio; Hiraldo, Fernando; Sergio, Fabrizio

    2012-01-01

    Background Migration research is in rapid expansion and increasingly based on sophisticated satellite-tracking devices subject to constant technological refinement, but is still rife with descriptive studies and in need of meta-analyses looking for emergent generalisations. In particular, the coexistence of studies and devices with different frequencies of location sampling and spatial accuracy generates doubts about data compatibility, potentially preventing meta-analyses. We used satellite-tracking data on a migratory raptor to: (1) test whether data based on different location sampling frequencies and on different position subsampling approaches are compatible, and (2) seek potential solutions that enhance compatibility and enable eventual meta-analyses. Methodology/Principal Findings We used linear mixed models to analyse the differences in the speed and route length of the migration tracks of 36 Black kites (Milvus migrans) satellite-tagged with two different types of devices (Argos vs GPS tags), entailing different regimes of position sampling frequency. We show that different location sampling frequencies and data subsampling approaches generate large (up to 33%) differences in the estimates of route length and migration speed of this migratory bird. Conclusions/Significance Our results show that the abundance of locations available for analysis affects the tortuosity and realism of the estimated migration path. To avoid flaws in future meta-analyses or unnecessary loss of data, we urge researchers to reach an agreement on a common protocol of data presentation, and to recognize that all transmitter-based studies are likely to underestimate the actual distance traveled by the marked animal. As ecological research becomes increasingly technological, new technologies should be matched with improvements in analytical capacity that guarantee data compatibility. PMID:23166742

  3. Estimation of the maintenance energy requirements, methane emissions and nitrogen utilization efficiency of two suckler cow genotypes.

    PubMed

    Zou, C X; Lively, F O; Wylie, A R G; Yan, T

    2016-04-01

    Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization from grass silage diets. Cows were housed in cubicle accommodation for 17 days, and then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaptation, followed by immediate transfer to indirect, open-circuit respiration calorimeters for 3 days, with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P > 0.05) between the genotypes for energy intakes, energy outputs or energy use efficiency, or for methane emission rates (methane emissions per unit of dry matter intake or energy intake), or for N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r^2 = 0.52; P < 0.001) indicated values for net energy requirements for maintenance of 0.386, 0.392 and 0.375 MJ/kg^0.75 for LF+ST, LF and ST, respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r^2 = 0.59; P < 0.001). There were positive linear relationships between N intake and N outputs in manure, and manure N accounted for 0.923 of the N intake. The present results provide approaches to predict maintenance energy requirement, methane emission and manure N output for suckler cows, and further information is required to evaluate their application in a wide range of suckler production systems.
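
    For readers outside animal nutrition, the convention behind the maintenance estimate is worth spelling out (our notation, one common formulation): with energy retention (ER) regressed linearly on metabolizable energy intake (MEI), both expressed per kg^0.75,

        ER = k \cdot MEI + c, \qquad NE_m = -c = -\,ER\big|_{MEI = 0},

    i.e. the net energy requirement for maintenance is read off as the magnitude of body-energy mobilization extrapolated to zero intake.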

  4. Development of an estimation model for the evaluation of the energy requirement of dilute acid pretreatments of biomass

    PubMed Central

    Mafe, Oluwakemi A.T.; Davies, Scott M.; Hancock, John; Du, Chenyu

    2015-01-01

    This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second generation ethanol. A dilute acid pretreatment process reported by National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change of internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model on other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly. PMID:26109752
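
    A minimal version of that energy bookkeeping is sketched below; the property values and loss fraction are invented placeholders, not the NREL flowsheet parameters:

        def pretreatment_energy_mj(slurry_kg, delta_t_k, evap_kg,
                                   cp_kj_per_kg_k=3.9, h_vap_kj_per_kg=2260.0,
                                   reaction_mj=0.0, loss_frac=0.05):
            """Energy (MJ) for a dilute-acid pretreatment batch: sensible heating
            of the slurry + water evaporation + net reaction energy, inflated by
            a fractional allowance for heat losses."""
            sensible = slurry_kg * cp_kj_per_kg_k * delta_t_k / 1000.0
            evaporation = evap_kg * h_vap_kj_per_kg / 1000.0
            return (sensible + evaporation + reaction_mj) * (1.0 + loss_frac)

        # Higher solids loading means less water heated per kg of dry biomass,
        # the dominant effect reported in the sensitivity analysis.
        for solids in (0.1, 0.2, 0.3):
            slurry_per_kg_biomass = 1.0 / solids
            print(solids, pretreatment_energy_mj(slurry_per_kg_biomass, 140.0, 0.1))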

  5. Resting Metabolic Rate Among Old-Old Women With and Without Frailty: Variability and Estimation of Energy Requirements

    PubMed Central

    Weiss, Carlos O.; Cappola, Anne R.; Varadhan, Ravi; Fried, Linda P.

    2012-01-01

    Objectives Resting metabolic rate (RMR) is the largest component of total energy expenditure. It has not been studied in old-old adults living in the community, though abnormalities in RMR may play a critical role in the development of the clinical syndrome of frailty. The objective was to measure RMR and examine the association of measured RMR with frailty status and compare it to expected RMR generated by a predictive equation. Design Physiologic sub-study conducted as a home visit within an observational cohort study. Setting Baltimore City and County, Maryland. Participants 77 women age 83–93 years enrolled in the Women’s Health and Aging Study II. Measurements RMR with indirect calorimetry; frailty status; fat-free mass; ambient and body temperature; expected RMR via the Mifflin-St. Jeor equation. Results Average RMR was 1119 kcal/d (s.d.± 205; range 595–1560). Agreement between observed and expected RMR was biased and very poor (between-subject coefficient of variation 38.0%, 95%CI: 35.1–40.8). Variability of RMR was increased in frail subjects (heteroscedasticity F test P value=0.02). Both low and high RMR were associated with being frail (Odds Ratio 5.4, P value=0.04) and slower self-selected walking speed (P value<0.001) after adjustment for covariates. Conclusion Equations to predict RMR that are not validated in old-old adults appear to correlate poorly with measured RMR. RMR values are highly variable among old-old women, with deviations from the mean predicting clinical frailty. These exploratory findings suggest a pathway to clinical frailty through either high or low RMR. PMID:22985142
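
    The comparison equation is simple to reproduce. The Mifflin-St Jeor formula predicts RMR in kcal/day from weight, height, age and sex (the formula itself is standard; the example inputs are invented):

        def mifflin_st_jeor_rmr(weight_kg, height_cm, age_y, female=True):
            """Expected resting metabolic rate, kcal/day."""
            base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_y
            return base - 161.0 if female else base + 5.0

        # A hypothetical 85-year-old woman, 70 kg, 160 cm: ~1114 kcal/day.
        # The study's point is that measured values in old-old women scatter
        # widely around such predictions.
        print(mifflin_st_jeor_rmr(70.0, 160.0, 85.0))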

  6. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  7. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  8. A new approach to estimating the minimum dietary requirement of phosphorus for large rainbow trout based on nonfecal excretions of phosphorus and nitrogen.

    PubMed

    Sugiura, S H; Dong, F M; Hardy, R W

    2000-04-01

    A new method was developed to estimate the minimum dietary requirement of phosphorus (P) for large fish for which conventional methods are not suitable. The method is based upon nonfecal (mainly urinary) excretion of inorganic P and total nitrogen from fish placed in a metabolic tank. In the first experiment, small and large rainbow trout (body wt 203 and 400 g, respectively) and, in the second experiment, P-sufficient, P-deficient and starved rainbow trout (different in diet history; body wt 349-390 g) were fed a constant amount (standard feeding rate) of semipurified diets with incremental P concentrations once daily at 15 degrees C. In all cases, there was no measurable excretion of P when dietary P concentration was low; however, beyond a specific dietary concentration, excretion of P increased rapidly. The point where the fish started to excrete P was assumed to be the minimum dietary requirement. By d 3 of consuming the experimental diets, the response of the fish to dietary P concentration stabilized, and excretion of P remained constant within dietary treatment groups for the subsequent sampling days (d 6, 9 and 12). The minimum dietary requirement of available P for fish having body wt of 203 and 400 g was estimated to be 6.62 and 5.54 g/kg dry diet, respectively, and that for P-sufficient, P-deficient and starved fish was estimated to be 4.06, 5.83 and 4.72 g/kg dry diet, respectively, when feed efficiency is 1.
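
    The estimation idea, reading the requirement off the break point where nonfecal excretion departs from zero, amounts to a broken-stick fit. A sketch with synthetic data (the fitting details are our illustration, not the paper's exact procedure):

        import numpy as np

        diet_p = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])       # g P/kg dry diet
        excretion = np.array([0.1, 0.0, 0.2, 0.1, 0.3, 9.8, 21.0, 30.5])  # nonfecal P output

        def sse(bp):
            """Model: excretion = 0 below bp, a line through (bp, 0) above it."""
            above = diet_p > bp
            if above.sum() < 2:
                return np.inf
            x, y = diet_p[above] - bp, excretion[above]
            slope = np.sum(x * y) / np.sum(x * x)   # least squares through the origin
            resid = np.concatenate([excretion[~above], y - slope * x])
            return np.sum(resid ** 2)

        candidates = np.linspace(diet_p.min(), diet_p.max(), 200)
        print(min(candidates, key=sse))   # estimated minimum dietary P requirement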

  9. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  10. A site-specific agricultural water requirement and footprint estimator (SPARE:WATER 1.0) for irrigation agriculture

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.

    2013-01-01

    The water footprint accounting method addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). Most current water footprint assessments focus on the global to continental scale. We therefore developed the spatial decision support system SPARE:WATER, which makes it possible to quantify green, blue and grey water footprints at the regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented through the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. Rather than assuming optimal conditions, we account for irrigation methods with efficiencies below 100%. Furthermore, grey water can be defined as the water needed to leach salt out of the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough description of the modelling concept, we provide a proof of concept in which we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km^3 yr^-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional water footprint assessments.

  11. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
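
    For context, conventional TDA with an exact integer period is just segment-and-average; the PCE that FTDA eliminates arises precisely when the true period is not an integer number of samples. A minimal sketch with synthetic data:

        import numpy as np

        def time_domain_average(x, period_samples):
            """Slice the signal into whole periods and average them.
            Valid only when the period is an integer number of samples."""
            n = len(x) // period_samples
            return x[: n * period_samples].reshape(n, period_samples).mean(axis=0)

        fs, f0 = 1000, 20                        # 50-sample period: no cutting error
        t = np.arange(5000) / fs
        x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(t.size)
        avg = time_domain_average(x, fs // f0)   # one period; 100 averages cut the
        print(avg.shape)                         # noise amplitude by ~sqrt(100)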

  12. E.O.-based estimation of transpiration and crop water requirements for vineyards: a case study in southern Italy

    NASA Astrophysics Data System (ADS)

    D'Urso, Guido; Maltese, Antonino; Palladino, Mario

    2014-10-01

    An efficient use of water for irrigation is a challenging task. From an agronomical point of view, it requires establishing the optimal amount of water to be supplied, at the correct time, based on the phenological phase and the spatial distribution of water stress. Indeed, knowledge of the actual water stress is essential for agronomic decisions: vineyards need to be managed to maintain a moderate water stress, which optimizes berry quality and quantity. Methods for quickly quantifying where, when and to what extent vines begin to experience water stress are therefore beneficial. Traditional point-based methodologies, such as those based on the Scholander pressure chamber, although well established, are time-consuming and do not give a comprehensive picture of the vineyard water deficit. Earth Observation (E.O.) based methodologies promise to achieve a synoptic overview of the water stress. Some E.O. data, indeed, sense the territory in the thermal part of the spectrum and, as is well recognized, leaf radiometric temperature is related to plant water status. However, current satellite sensors do not have sufficient spatial resolution to detect pure canopy pixels; thus, the pixel radiometric temperature characterizes the whole soil-vegetation system, and in variable proportions. On the other hand, due to the limits of current crop dusters, there is no need to characterize the water stress distribution at plant scale, and a coarser spatial characterization would be sufficient. The research aims to assess to what extent: 1) E.O.-based canopy radiometric temperature can be used, straightforwardly, to detect plant water status; 2) E.O.-based canopy transpiration would be more suitable (or not) to describe the spatial variability in plant water stress. To these aims: 1) radiometric canopy temperature measured in situ, and derived from a two-source energy balance model applied to airborne data, were compared with in situ leaf water potential from freshly cut leaves; 2) two

  13. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    We examined the allometric relationship between resting metabolic rate (RMR; kJ day^-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (a·Mass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations. © The Cooper Ornithological Society 2006.
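
    Applying the fitted equation is then a one-liner; the coefficients below are the paper's pooled estimates, while the activity multiple in the comment is a generic illustration rather than the authors' value:

        def waterfowl_rmr_kj_per_day(mass_kg, a=422.0, b=0.74):
            """RMR = a * Mass^b, the pooled waterfowl power function."""
            return a * mass_kg ** b

        # A 1.1 kg duck vs an 8 kg swan; multiplying RMR by an activity factor
        # (e.g. ~3x, illustrative) gives a daily energy expenditure estimate
        # for habitat carrying-capacity calculations.
        for m in (1.1, 8.0):
            print(m, round(waterfowl_rmr_kj_per_day(m)))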

  14. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  15. Modeling daily average stream temperature from air temperature and watershed area

    NASA Astrophysics Data System (ADS)

    Butler, N. L.; Hunt, J. R.

    2012-12-01

    Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling are required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km^2 to 2000 km^2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures with smaller upstream watersheds being more dependent on the minimum air temperature than maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km^2 to 0.45 for upstream drainage areas greater than 100 km^2. For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
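
    The fitted structure is a two-parameter blend. The sketch below reproduces it with a log-linear interpolation of the weight between the reported endpoints; the interpolation rule is our assumption, since the abstract gives only the endpoint weights:

        import numpy as np

        def daily_stream_temp(t_min_air, t_max_air, drainage_km2):
            """Daily average stream temperature as an area-dependent blend of the
            preceding minimum and maximum air temperatures. The minimum-temperature
            weight runs from 0.75 (< 2 km^2) to 0.45 (> 100 km^2)."""
            frac = np.clip((np.log10(drainage_km2) - np.log10(2.0))
                           / (np.log10(100.0) - np.log10(2.0)), 0.0, 1.0)
            w_min = 0.75 - 0.30 * frac
            return w_min * t_min_air + (1.0 - w_min) * t_max_air

        print(daily_stream_temp(10.0, 25.0, 5.0))    # small catchment: tracks T_min
        print(daily_stream_temp(10.0, 25.0, 500.0))  # large catchment: tracks T_max more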

  16. Transmitter-receiver system for time average fourier telescopy

    NASA Astrophysics Data System (ADS)

    Pava, Diego Fernando

    Time Average Fourier Telescopy (TAFT) has been proposed as a means for obtaining high-resolution, diffraction-limited images over large distances through ground-level horizontal-path atmospheric turbulence. Image data is collected in the spatial-frequency, or Fourier, domain by means of Fourier Telescopy; an inverse two-dimensional Fourier transform yields the actual image. TAFT requires active illumination of the distant object by moving interference fringe patterns. Light reflected from the object is collected by a "light-bucket" detector, and the resulting electrical signal is digitized and subjected to a series of signal processing operations, including an all-critical averaging of the amplitude and phase of a number of narrow-band signals. This dissertation reports on the formulation and analysis of a transmitter-receiver system appropriate for the illumination, signal detection, and signal processing required for successful application of the TAFT concept. The analysis assumes a Kolmogorov model for the atmospheric turbulence, that the object is rough on the scale of the optical wavelength of the illumination pattern, and that the object is not changing with time during the image-formation interval. An important original contribution of this work is the development of design principles for spatio-temporal non-redundant arrays of active sources for object illumination. Spatial non-redundancy has received considerable attention in connection with the arrays of antennas used in radio astronomy. The work reported here explores different alternatives and suggests the use of two-dimensional cyclic difference sets, which favor low frequencies in the spatial frequency domain. The temporal non-redundancy condition requires that all active sources oscillate at a different optical frequency and that the frequency difference between any two sources be unique. A novel algorithm for generating the array, based on optimized perfect cyclic difference sets, is described.

  17. Model Averaging for Improving Inference from Causal Diagrams.

    PubMed

    Hamra, Ghassan B; Kaufman, Jay S; Vahratian, Anjel

    2015-08-11

    Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as "wish bias". Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives.
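
    Of the three techniques, inverse variance weighting is the quickest to sketch. The estimates below are invented stand-ins for effect estimates from different DAG-supported adjustment sets:

        import numpy as np

        estimates = np.array([0.42, 0.35, 0.51])   # e.g. log odds ratios, hypothetical
        se = np.array([0.10, 0.12, 0.15])          # their standard errors

        w = (1.0 / se ** 2) / np.sum(1.0 / se ** 2)
        averaged = np.sum(w * estimates)
        averaged_se = np.sqrt(1.0 / np.sum(1.0 / se ** 2))

        # Caveat: this SE treats the candidate models as independent, although
        # estimates fitted to the same data are correlated.
        print(averaged, averaged_se)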

  18. More Voodoo correlations: when average-based measures inflate correlations.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2012-01-01

    A Monte-Carlo simulation was conducted to assess the extent to which a correlation estimate can be inflated when an average-based measure is used in a commonly employed correlational design. The results from the simulation reveal that the inflation of the correlation estimate can be substantial, up to 76%. Additionally, data were re-analyzed from two previously published studies to determine the extent to which the correlation estimate was inflated due to the use of an average-based measure. The re-analyses reveal that correlation estimates had been inflated by just over 50% in both studies. Although these findings are disconcerting, we are somewhat comforted by the fact that there is a simple and easy analysis that can be employed to prevent the inflation of the correlation estimate that we have simulated and observed.
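
    One mechanism behind such inflation can be reproduced in a few lines: averaging repeated measurements strips trial-level noise from one variable, so its correlation with another measure rises relative to a single-measurement design. All distributions below are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 100, 20                          # subjects, trials averaged per subject
        true_x = rng.normal(size=n)
        true_y = 0.5 * true_x + rng.normal(scale=np.sqrt(0.75), size=n)  # true r = 0.5

        x_obs = true_x + rng.normal(scale=0.5, size=n)
        y_trials = true_y[:, None] + rng.normal(scale=1.5, size=(n, k))

        r_single = np.corrcoef(x_obs, y_trials[:, 0])[0, 1]
        r_avg = np.corrcoef(x_obs, y_trials.mean(axis=1))[0, 1]
        print(r_single, r_avg)   # the average-based measure yields the larger r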

  19. Protein Requirements during Aging

    PubMed Central

    Courtney-Martin, Glenda; Ball, Ronald O.; Pencharz, Paul B.; Elango, Rajavel

    2016-01-01

    Protein recommendations for elderly, both men and women, are based on nitrogen balance studies. They are set at 0.66 and 0.8 g/kg/day as the estimated average requirement (EAR) and recommended dietary allowance (RDA), respectively, similar to young adults. This recommendation is based on single linear regression of available nitrogen balance data obtained at test protein intakes close to or below zero balance. Using the indicator amino acid oxidation (IAAO) method, we estimated the protein requirement in young adults and in both elderly men and women to be 0.9 and 1.2 g/kg/day as the EAR and RDA, respectively. This suggests that there is no difference in requirement on a gender basis or on a per kg body weight basis between younger and older adults. The requirement estimates however are ~40% higher than the current protein recommendations on a body weight basis. They are also 40% higher than our estimates in young men when calculated on the basis of fat free mass. Thus, current recommendations may need to be re-assessed. Potential rationale for this difference includes a decreased sensitivity to dietary amino acids and increased insulin resistance in the elderly compared with younger individuals. PMID:27529275

  20. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  1. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  2. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  3. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.

  4. Comprehensive time average digital holographic vibrometry

    NASA Astrophysics Data System (ADS)

    Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan

    2016-12-01

    This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.

  5. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...

  6. Method for detection and correction of errors in speech pitch period estimates

    NASA Technical Reports Server (NTRS)

    Bhaskar, Udaya (Inventor)

    1989-01-01

    A method of detecting and correcting received values of a pitch period estimate of a speech signal, for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
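
    The logic is simple enough to sketch directly; the thresholds follow the abstract, while the substitution value and the reset rule are our assumptions about unstated details:

        def track_pitch(estimates, lo=0.75, hi=1.25, max_corrections=3):
            """Accept a nonzero pitch estimate if it lies within [lo, hi] times the
            running average of accepted values; otherwise substitute the average.
            Too many consecutive corrections (e.g. a speaker change) resets the
            average."""
            accepted, output, run = [], [], 0
            for p in estimates:
                if p == 0:
                    output.append(0)
                    continue
                avg = sum(accepted) / len(accepted) if accepted else p
                if lo * avg <= p <= hi * avg:
                    accepted.append(p)
                    run = 0
                    output.append(p)
                else:
                    run += 1
                    output.append(avg)
                    if run > max_corrections:    # assume the speaker changed
                        accepted, run = [p], 0
            return output

        # An isolated outlier (160) is corrected; a sustained shift to ~120 resets.
        print(track_pitch([80, 82, 81, 160, 83, 120, 122, 121, 119, 120]))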

  7. Noise reduction in elastograms using temporal stretching with multicompression averaging.

    PubMed

    Varghese, T; Ophir, J; Céspedes, I

    1996-01-01

    Elastography uses estimates of the time delay (obtained by cross-correlation) to compute strain estimates in tissue due to quasistatic compression. Because the time delay estimates do not generally occur at the sampling intervals, the location of the cross-correlation peak does not give an accurate estimate of the time delay. Sampling errors in the time-delay estimate are reduced using signal interpolation techniques to obtain subsample time-delay estimates. Distortions of the echo signals due to tissue compression introduce correlation artifacts in the elastogram. These artifacts are reduced by a combination of small compressions and temporal stretching of the postcompression signal. Random noise effects in the resulting elastograms are reduced by averaging several elastograms, obtained from successive small compressions (assuming that the errors are uncorrelated). Multicompression averaging with temporal stretching is shown to increase the signal-to-noise ratio in the elastogram by an order of magnitude, without sacrificing sensitivity, resolution or dynamic range. The strain filter concept is extended in this article to theoretically characterize the performance of multicompression averaging with temporal stretching.
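
    The quoted order-of-magnitude gain is consistent with elementary averaging statistics: for N elastograms with uncorrelated, zero-mean noise, the amplitude signal-to-noise ratio grows as

        SNR_N = \sqrt{N} \cdot SNR_1,

    so a ten-fold improvement corresponds to on the order of a hundred small compressions under that idealized assumption; temporal stretching serves to keep correlation artifacts small enough that the uncorrelated-error assumption is reasonable.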

  8. Depth perception in disparity-defined objects: finding the balance between averaging and segregation.

    PubMed

    Cammack, P; Harris, J M

    2016-06-19

    Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes' views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators.This article is part of the themed issue 'Vision in our three-dimensional world'.

  9. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  10. Weighted Average Consensus-Based Unscented Kalman Filtering.

    PubMed

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed for the purpose of estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example.
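
    The consensus building block, stripped of the UKF machinery, can be sketched in a few lines. Metropolis weights (a standard choice used here for illustration; the paper's weighting scheme differs) guarantee convergence to the network average on a connected undirected graph:

        import numpy as np

        # Four sensors in a line graph 0-1-2-3, each holding a local estimate.
        edges = [(0, 1), (1, 2), (2, 3)]
        deg = {0: 1, 1: 2, 2: 2, 3: 1}
        x = np.array([1.0, 4.0, 2.0, 7.0])

        W = np.zeros((4, 4))
        for i, j in edges:
            W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
        np.fill_diagonal(W, 1.0 - W.sum(axis=1))   # rows (and columns) sum to 1

        for _ in range(100):      # repeated local exchanges
            x = W @ x
        print(x)                  # every sensor approaches the average, 3.5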

  11. Spread Spectrum Signal Characteristic Estimation Using Exponential Averaging and an Ad-Hoc Chip Rate Estimator

    DTIC Science & Technology

    2007-03-01

    The proposed estimator has potential for parallel processing, resulting in dramatically decreased computational time, without loss of performance.

  12. Cyclosporine-induced autoimmunity. Conditions for expressing disease, requirement for intact thymus, and potency estimates of autoimmune lymphocytes in drug-treated rats

    PubMed Central

    1986-01-01

    These studies explore the phenomenon of cyclosporine-induced autoimmunity in irradiated Lewis rats. We show that (a) the presence of a thymus is required, and autoimmune precursors develop in and exit from this organ to the peripheral lymphocyte pool within a 2-wk period after the initiation of cyclosporine treatment; (b) adoptive transfers of drug-induced autoimmunity to irradiated secondary recipients can be accomplished with relatively few cells of the Th subset, and these transfers of autoimmunity can be blocked by cotransfer of normal lymphoid cells; and (c) potency estimates, using popliteal lymph node assays in syngeneic and F1 recipients, indicate similar levels of auto- and alloreactivity by cells from drug-induced autoimmune donors. These various findings indicate that this particular animal model may be useful for studies of the onset and control of autoimmunity, and they raise the possibility that the lack of autoimmunity in normal animals and its induction with cyclosporine may involve cellular mechanisms similar to those found to be operative in GVH reactions and in specifically induced immunologic resistance to GVHD. PMID:3490534

  13. [Estimation of the financial reserves required by livestock disease compensation funds for rebates in the course of disease outbreaks using the example of Saxony-Anhalt (Germany)].

    PubMed

    Denzin, Nicolai; Ewert, Benno; Salchert, Falk; Kramer, Matthias

    2014-01-01

    With certain restrictions, the federal states of Germany are obligated to financially compensate livestock owners for animal losses due to livestock diseases. If livestock disease compensation funds demand contributions from livestock owners for certain species in order to pay compensations, the federal states have to pay only one half of the rebate. The remaining 50% has to be financed through reserves of the respective compensation fund built up with the contributions. However, there is no established guidance on how to calculate such financial reserves. Therefore, for the livestock disease compensation fund of Saxony-Anhalt (Germany), an attempt was made to estimate the required reserves. To this end, expert opinions concerning the expected number of affected holdings in potential outbreaks of different diseases were collected. In a conservative approach, assuming these diseases occur in parallel within a single year, overall costs as well as individual costs for altogether 25 categories and subcategories of livestock species were stochastically modeled. The 99.9th percentile of the resulting frequency distribution of the overall costs corresponded to a financial volume of about 23 million euro. Thus, financial reserves of 11.5 million euro were recommended to the livestock disease compensation fund.
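
    The stochastic step can be illustrated with a toy Monte Carlo; every distribution and parameter below is invented, standing in for the expert-elicited outbreak figures used in the study:

        import numpy as np

        rng = np.random.default_rng(42)
        n_sim = 100_000

        def outbreak_cost(mean_holdings, mean_cost_per_holding):
            """Cost of one disease's outbreak: random holdings x random unit cost."""
            holdings = rng.poisson(mean_holdings, n_sim)
            unit = rng.lognormal(np.log(mean_cost_per_holding), 0.5, n_sim)
            return holdings * unit

        # Conservative scenario: several diseases strike in the same year.
        total = (outbreak_cost(5, 200_000.0)
                 + outbreak_cost(30, 15_000.0)
                 + outbreak_cost(2, 1_500_000.0))

        p999 = np.quantile(total, 0.999)
        print(p999, p999 / 2)   # the fund reserves half; the state pays the rest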

  14. Lagrangian averaging, nonlinear waves, and shock regularization

    NASA Astrophysics Data System (ADS)

    Bhat, Harish S.

    In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity

  15. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  16. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  17. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  18. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  19. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  20. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  1. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  2. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  3. Designing Digital Control Systems With Averaged Measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1990-01-01

    Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.

  4. Time domain averaging based on fractional delay filter

    NASA Astrophysics Data System (ADS)

    Wu, Wentao; Lin, Jing; Han, Shaobo; Ding, Xianghui

    2009-07-01

    For rotary machinery, periodic components of measured signals are often extracted to assess the condition of each rotating part. Time domain averaging is a traditional technique for extracting those periodic components. Originally, a phase reference signal is required to ensure that all the averaged segments have the same initial phase. In some cases, however, no phase reference is available, and efficient algorithms are needed to synchronize the segments before averaging. Several algorithms exist for performing time domain averaging without a phase reference signal, but they cannot eliminate the phase error completely. Against this background, a new time domain averaging algorithm that is theoretically free of phase error is proposed. The performance is improved by incorporating a fractional delay filter. The effectiveness of the proposed algorithm is validated by simulations.
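
    A compact sketch of this kind of reference-free synchronous averaging: the shift of each segment relative to the first is estimated from the cross-correlation peak with parabolic (sub-sample) interpolation, and the correction is applied as a linear phase shift in the frequency domain, which acts as an ideal fractional delay filter. This illustrates the general technique, not the authors' exact algorithm.

      import numpy as np

      def synchronized_average(segments):
          """Average equal-length segments after sub-sample alignment to the first."""
          ref = np.asarray(segments[0], dtype=float)
          n = ref.size
          freqs = np.fft.rfftfreq(n)  # cycles per sample
          acc = ref.copy()
          for seg in segments[1:]:
              corr = np.fft.irfft(np.fft.rfft(ref) * np.conj(np.fft.rfft(seg)), n)
              k = int(np.argmax(corr))
              # parabolic interpolation around the peak -> fractional shift
              y0, y1, y2 = corr[k - 1], corr[k], corr[(k + 1) % n]
              denom = y0 - 2.0 * y1 + y2
              shift = k + (0.5 * (y0 - y2) / denom if denom != 0.0 else 0.0)
              if shift > n / 2:
                  shift -= n
              # linear phase shift = ideal fractional delay filter
              acc += np.fft.irfft(np.fft.rfft(seg) * np.exp(-2j * np.pi * freqs * shift), n)
          return acc / len(segments)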

  5. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity of data quantification compared with the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated by measuring the out-of-plane vibration amplitude of a small-scale specimen using a time average microscopic TV holography system.
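
    An illustrative one-dimensional sketch of the HT route to phase (not the authors' exact processing chain): the bias is removed, the analytic signal is formed with scipy.signal.hilbert, and the continuous phase is recovered by unwrapping. The fringe profile is synthetic.

      import numpy as np
      from scipy.signal import hilbert

      x = np.linspace(0.0, 1.0, 2000)
      fringe = 100.0 + 80.0 * np.cos(60.0 * np.pi * x ** 2)  # synthetic fringe profile
      analytic = hilbert(fringe - fringe.mean())   # analytic signal after bias removal
      phase = np.unwrap(np.angle(analytic))        # continuous phase along the line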

  6. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
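
    A schematic of the final integration step common to such average-force methods: given mean forces ⟨F⟩ sampled at discrete values of the coordinate ξ, the free energy profile follows from dA/dξ = -⟨F⟩. The force data below are mock values standing in for simulation output.

      import numpy as np

      xi = np.linspace(0.0, np.pi, 50)      # values of the selected coordinate
      mean_force = -np.sin(xi)              # <F>(xi) from (mock) simulations

      # dA/dxi = -<F>, so integrate -<F> with the trapezoid rule
      dA = -0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
      free_energy = np.concatenate(([0.0], np.cumsum(dA)))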

  7. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-06

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
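
    A small sketch of the elemental-ratio calculation for a species C_c H_h N_n O_o S_s with net charge Z, using the conventional oxidation-state assignments H = +1, N = -3, O = -2, S = -2 (an assumption of this sketch; the paper's exact bookkeeping should be checked against its methods).

      def carbon_oxidation_state(c, h, n, o, s, charge=0):
          # sum of oxidation states equals the net charge:
          # c*Z_C + h*(+1) + n*(-3) + o*(-2) + s*(-2) = charge
          return (charge - h + 3 * n + 2 * o + 2 * s) / c

      # example: glycine C2H5NO2 -> (0 - 5 + 3 + 4) / 2 = 1.0
      print(carbon_oxidation_state(c=2, h=5, n=1, o=2, s=0))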

  8. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594

  9. Accuracy of dietary reference intake predictive equation for estimated energy requirements in female tennis athletes and non-athlete college students: comparison with the doubly labeled water method

    PubMed Central

    Ndahimana, Didace; Lee, Sun-Hee; Kim, Ye-Jin; Son, Hee-Ryoung; Ishikawa-Takata, Kazuko; Park, Jonghoon

    2017-01-01

    BACKGROUND/OBJECTIVES The purpose of this study was to assess the accuracy of a dietary reference intake (DRI) predictive equation for estimated energy requirements (EER) in female college tennis athletes and non-athlete students using doubly labeled water (DLW) as a reference method. MATERIALS/METHODS Fifteen female college students, including eight tennis athletes and seven non-athlete subjects (aged 19 to 24 years), were involved in the study. Subjects' total energy expenditure (TEE) was measured by the DLW method, and EER were calculated using the DRI predictive equation. The accuracy of this equation was assessed by comparing the EER calculated using the DRI predictive equation (EERDRI) and TEE measured by the DLW method (TEEDLW) based on calculation of percentage difference mean and percentage of accurate prediction. The agreement between the two methods was assessed by the Bland-Altman method. RESULTS The percentage difference mean between the methods was -1.1% in athletes and 1.8% in non-athlete subjects, whereas the percentage of accurate prediction was 37.5% and 85.7%, respectively. In the case of athletic subjects, the DRI predictive equation showed a clear bias negatively proportional to the subjects' TEE. CONCLUSIONS The results from this study suggest that the DRI predictive equation could be used to obtain EER in non-athlete female college students at a group level. However, this equation would be difficult to use in the case of athletes at the group and individual levels. The development of a new and more appropriate equation for the prediction of energy expenditure in athletes is proposed. PMID:28194265
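
    A sketch of the agreement metrics named above, computed on placeholder data; the ±10% cutoff defining an "accurate prediction" is a common choice assumed here, not necessarily the paper's exact criterion.

      import numpy as np

      eer = np.array([2100.0, 2300.0, 2250.0, 2400.0])  # predicted (placeholder)
      tee = np.array([2200.0, 2280.0, 2500.0, 2350.0])  # DLW-measured (placeholder)

      pct_diff = 100.0 * (eer - tee) / tee              # percentage difference
      accurate = 100.0 * np.mean(np.abs(pct_diff) <= 10.0)
      print(pct_diff.mean(), accurate)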

  10. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Cinnella, P.; Dwight, R. P.

    2014-10-01

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
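
    A toy illustration of the collation step in BMSA: posterior predictive samples of the QoI from each (model, scenario) calibration are combined into one mixture using posterior model probabilities and similarity-based scenario weights. All numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n_models, n_scen, n_samp = 3, 2, 5000
      # posterior predictive samples of the QoI per (model, scenario) calibration
      loc = (1.0 + 0.1 * np.arange(n_models)[:, None, None]
             + 0.05 * np.arange(n_scen)[None, :, None])
      post = rng.normal(loc, 0.1, size=(n_models, n_scen, n_samp))
      p_model = np.array([0.5, 0.3, 0.2])   # posterior model probabilities
      w_scen = np.array([0.7, 0.3])         # similarity-based scenario weights

      w = np.outer(p_model, w_scen)         # joint weights over (model, scenario)
      mixture_mean = np.sum(w[:, :, None] * post) / n_samp
      mixture_var = np.sum(w[:, :, None] * (post - mixture_mean) ** 2) / n_samp
      print(mixture_mean, np.sqrt(mixture_var))   # stochastic estimate of the QoI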

  11. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  12. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
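
    For reference, the conventional procedure whose accuracy the measurements call into question looks like this: fit the log law u(z) = (u*/κ) ln(z/z0) to a spatially averaged profile and convert the fitted friction velocity to a boundary shear stress τ = ρ u*². Data here are synthetic.

      import numpy as np

      kappa, rho = 0.41, 1000.0                  # von Karman constant; water density
      z = np.linspace(0.01, 0.20, 20)            # heights above the mean bed (m)
      u = (0.05 / kappa) * np.log(z / 1e-4)      # synthetic "averaged" profile

      slope, intercept = np.polyfit(np.log(z), u, 1)
      u_star = kappa * slope                     # friction velocity from the slope
      tau = rho * u_star ** 2                    # inferred boundary shear stress
      print(u_star, tau)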

  13. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM), which is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
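
    A toy version of the scheme: power iteration in which the only global quantity needed per step, the squared norm of the iterate, is obtained by average consensus over a ring graph. The local matrix-vector products are written centrally for brevity; in a truly decentralized setting each node holds its own rows of the covariance.

      import numpy as np

      def consensus_average(x, W, iters=200):
          # repeated neighbor mixing; every entry tends to the global mean
          for _ in range(iters):
              x = W @ x
          return x

      rng = np.random.default_rng(1)
      n = 8
      A = rng.normal(size=(n, n)); R = A @ A.T       # sample covariance matrix
      # doubly stochastic mixing matrix of a ring communication graph
      W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))

      v = rng.normal(size=n)
      for _ in range(100):
          y = R @ v                                   # node i computes row_i(R) . v
          norm2 = n * consensus_average(y * y, W)[0]  # ||y||^2 agreed via consensus
          v = y / np.sqrt(norm2)
      print(v @ R @ v, np.linalg.eigvalsh(R)[-1])     # dominant eigenvalue estimate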

  14. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  15. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
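
    The underlying relation can be sketched numerically: for a stationary signal with autocovariance C(τ), the variance of a T-long moving average is (1/T²) ∫_{-T}^{T} (T - |τ|) C(τ) dτ. The Gaussian autocovariance below is an assumed stand-in for the measured correlation functions.

      import numpy as np

      def moving_average_variance(C, tau, T):
          """Variance of a T-long moving average from the autocovariance C(tau)."""
          dt = tau[1] - tau[0]
          mask = np.abs(tau) <= T
          # triangular weight (T - |tau|) from the double integral over the window
          return np.sum((T - np.abs(tau[mask])) * C[mask]) * dt / T ** 2

      tau = np.linspace(-10.0, 10.0, 4001)
      C = np.exp(-tau ** 2)                 # assumed autocovariance of intensity
      print(moving_average_variance(C, tau, T=2.0))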

  16. Cell averaging Chebyshev methods for hyperbolic problems

    NASA Technical Reports Server (NTRS)

    Wei, Cai; Gottlieb, David; Harten, Ami

    1990-01-01

    A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.

  17. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical

  18. Demonstration of a Model Averaging Capability in FRAMES

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.

  19. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on the Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
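
    The quantity at issue is easy to state numerically: the Reynolds average is ⟨u⟩ while the Favre (density-weighted) average is ⟨ρu⟩/⟨ρ⟩, so their difference is controlled by the ρ'-u' correlation. A synthetic check with invented fluctuation levels:

      import numpy as np

      rng = np.random.default_rng(3)
      rho = 1.0 + 0.05 * rng.standard_normal(100_000)             # density samples
      u = 300.0 + 20.0 * rng.standard_normal(100_000) + 50.0 * (rho - 1.0)

      u_reynolds = u.mean()
      u_favre = (rho * u).mean() / rho.mean()
      print(u_favre - u_reynolds)   # grows with the rho'-u' correlation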

  20. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that will reduce the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

  1. Inversion of the circular averages transform using the Funk transform

    NASA Astrophysics Data System (ADS)

    Evren Yarman, Can; Yazıcı, Birsen

    2011-06-01

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. Circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering.

  2. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  3. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  4. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  5. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  6. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... class or subclass: Credit = (Average Standard − Emission Level) × (Total Annual Production) × (Useful Life) Deficit = (Emission Level − Average Standard) × (Total Annual Production) × (Useful Life) (l....000 Where: FELi = The FEL to which the engine family is certified. ULi = The useful life of the...

  7. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  8. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  9. 34 CFR 668.196 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668... determine that you qualify, we notify you of that determination at the same time that we notify you of your... determine that you meet the requirements for an average rates appeal. (Approved by the Office of...

  10. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  11. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  12. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  13. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  14. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  15. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  16. Analogue Divider by Averaging a Triangular Wave

    NASA Astrophysics Data System (ADS)

    Selvam, Krishnagiri Chinnathambi

    2017-03-01

    A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is described. The reference triangular waveform is shifted from the zero-voltage level up towards the positive supply voltage; its positive portion is obtained by a positive rectifier and its average value by a low pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative supply voltage; its negative portion is obtained by a negative rectifier and its average value by another low pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to an op-amp as the negative input. This op-amp is configured in a closed negative-feedback loop, and its output is the divider output.
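
    The averaging step can be checked numerically: for a triangle wave of peak V_t (whose values are uniformly distributed in [-V_t, V_t]) shifted by a level V_x, the mean of the positively rectified waveform is (V_t + V_x)²/(4 V_t). The sketch below verifies only this averaging identity, not the full op-amp feedback loop that performs the division.

      import numpy as np

      t = np.linspace(0.0, 1.0, 100_001)
      tri = 2.0 * np.abs(2.0 * (t % 1.0) - 1.0) - 1.0   # triangle wave, V_t = 1
      vx = 0.3                                          # shift level, |vx| <= 1
      pos_avg = np.mean(np.clip(tri + vx, 0.0, None))   # rectify, then average
      print(pos_avg, (1.0 + vx) ** 2 / 4.0)             # the two should agree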

  17. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
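
    A hedged sketch of the two estimators on synthetic data: a GEV fit to daily minima for the lower threshold (a stationary fit is shown, whereas the report uses a non-stationary GEV) and a seasonal ARIMA forecast band for the upper threshold. Orders, percentiles, and parameters are all illustrative.

      import numpy as np
      from scipy.stats import genextreme
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(7)

      # lower threshold: GEV fit to daily minimum response times
      daily_min = 50.0 + genextreme.rvs(-0.1, loc=0.0, scale=5.0, size=365,
                                        random_state=rng)
      c, gev_loc, gev_scale = genextreme.fit(daily_min)
      lower = genextreme.ppf(0.01, c, loc=gev_loc, scale=gev_scale)

      # upper threshold: seasonal ARIMA forecast band for the daily average
      day = np.arange(365)
      daily_avg = 120.0 + 10.0 * np.sin(2 * np.pi * day / 7) + rng.standard_normal(365)
      fit = ARIMA(daily_avg, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit()
      upper = fit.get_forecast(steps=7).conf_int(alpha=0.05)[:, 1]
      print(lower, upper)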

  18. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
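
    The core of the synchronous signal average is angular resampling: map the signal onto a uniform shaft-angle grid using the (estimated) rotor position, then average revolution by revolution. A minimal sketch with a synthetic position signal:

      import numpy as np

      fs, f_shaft, revs, spr = 10_000, 25.0, 40, 512   # spr: samples per revolution
      t = np.arange(int(fs * (revs + 1) / f_shaft)) / fs
      # estimated rotor angle (synthetic here, with a little speed wobble)
      angle = 2 * np.pi * f_shaft * t + 0.01 * np.sin(2 * np.pi * 3.0 * t)
      rng = np.random.default_rng(2)
      current = np.cos(8 * angle) + 0.5 * rng.standard_normal(t.size)

      # resample onto a uniform angle grid, then average rotation by rotation
      grid = np.arange(revs * spr) * 2 * np.pi / spr
      resampled = np.interp(grid, angle, current)
      sync_avg = resampled.reshape(revs, spr).mean(axis=0)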

  19. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  20. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  1. Cosmic inhomogeneities and averaged cosmological dynamics.

    PubMed

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  2. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  3. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  5. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  6. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the spacetime average density of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  7. Bimetal sensor averages temperature of nonuniform profile

    NASA Technical Reports Server (NTRS)

    Dittrich, R. T.

    1968-01-01

    Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.

  8. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  9. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter which is expressed as a ±10 volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.

  10. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  11. Approximate average head models for EEG source imaging.

    PubMed

    Valdés-Hernández, Pedro A; von Ellenrieder, Nicolás; Ojeda-Gonzalez, Alejandro; Kochen, Silvia; Alemán-Gómez, Yasser; Muravchik, Carlos; Valdés-Sosa, Pedro A

    2009-12-15

    We examine the performance of approximate models (AM) of the head in solving the EEG inverse problem. The AM are needed when the individual's MRI is not available. We simulate the electric potential distribution generated by cortical sources for a large sample of 305 subjects, and solve the inverse problem with AM. Statistical comparisons are carried out with the distribution of the localization errors. We propose several new AM. These are the average of many individual realistic MRI-based models, such as surface-based models or lead fields. We demonstrate that the lead fields of the AM should be calculated considering source moments not constrained to be normal to the cortex. We also show that the imperfect anatomical correspondence between all cortices is the most important cause of localization errors. Our average models perform better than a random individual model or the usual average model in the MNI space. We also show that a classification based on race and gender or head size before averaging does not significantly improve the results. Our average models are slightly better than an existing AM with shape guided by measured individual electrode positions, and have the advantage of not requiring such measurements. Among the studied models, the Average Lead Field seems the most convenient tool in large and systematic clinical and research studies demanding EEG source localization when MRIs are unavailable. This AM does not need a strict alignment between head models, and can therefore be easily achieved for any type of head modeling approach.

  12. Compact expressions for spherically averaged position and momentum densities

    NASA Astrophysics Data System (ADS)

    Crittenden, Deborah L.; Bernard, Yves A.

    2009-08-01

    Compact expressions for spherically averaged position and momentum density integrals are given in terms of spherical Bessel functions (jn) and modified spherical Bessel functions (in), respectively. All integrals required for ab initio calculations involving s, p, d, and f-type Gaussian functions are tabulated, highlighting a neat isomorphism between position and momentum space formulae. Spherically averaged position and momentum densities are calculated for a set of molecules comprising the ten-electron isoelectronic series (Ne-CH4) and the eighteen-electron series (Ar-SiH4, F2-C2H6).

  13. Average Soil Water Retention Curves Measured by Neutron Radiography

    SciTech Connect

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
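
    A sketch of the curve-fitting step reported above: the van Genuchten model S_e(h) = [1 + (αh)^n]^-(1-1/n) fitted to averaged saturation data by least squares. The data and parameter values below are placeholders, not the radiography measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def van_genuchten(h, alpha, n):
          # effective saturation Se(h) with the Mualem constraint m = 1 - 1/n
          return (1.0 + (alpha * h) ** n) ** (-(1.0 - 1.0 / n))

      h = np.linspace(1.0, 120.0, 15)                 # suction head (cm)
      rng = np.random.default_rng(5)
      Se = van_genuchten(h, 0.05, 4.0) + 0.01 * rng.standard_normal(h.size)
      (alpha, n), _ = curve_fit(van_genuchten, h, Se, p0=(0.1, 2.0))
      print(alpha, n)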

  14. The generic modeling fallacy: Average biomechanical models often produce non-average results!

    PubMed

    Cook, Douglas D; Robertson, Daniel J

    2016-11-07

    Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models' input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications on the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given.
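
    The phenomenon is easy to reproduce with a one-parameter toy model: for any nonlinear response, the output of the average-parameter model differs from the population's average output (Jensen's inequality). The model below is illustrative, not one of the paper's biomechanical models.

      import numpy as np

      rng = np.random.default_rng(11)
      k = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # population parameter
      response = 1.0 / k                                    # nonlinear model output

      print("output of the average-parameter model:", 1.0 / k.mean())
      print("average output of the population:     ", response.mean())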

  15. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  16. Changing perception of average person's risk does not suffice to change perception of comparative risk.

    PubMed

    Aucote, Helen M; Gold, Ron S

    2008-08-01

    The direct method of assessing "unrealistic optimism" employs a question of the form, "Compared with the average person, what is the chance that event X will occur to you?" It has been proposed that when individuals construct their responses to this question (direct-estimates) they focus much more strongly on estimates of their own risk (self-estimates) than on estimates of the average person's risk (other-estimates). A challenge to this proposal comes from findings that interventions that alter other-estimates also change direct-estimates. Employing a novel intervention technique, we tested the possibility that such interventions may indirectly also change self-estimates and that this is what accounts for their effect on direct-estimates. Study 1 (n = 58) showed that an intervention which was designed to (and did) affect other-estimates also affected self-estimates, while Study 2 (n = 101) showed that it affected direct-estimates. Study 3 (n = 79) confirmed that we could modify the intervention so as to maintain the effect on other-estimates, but eliminate that on self-estimates. Study 4 (n = 112) demonstrated that when this was done, there was no longer any effect on direct-estimates. The findings are consistent with the proposal that direct-estimates are constructed largely just out of self-estimates. Implications for health education programs are discussed.

  17. Greenhouse Gas Emissions and the Australian Diet—Comparing Dietary Recommendations with Average Intakes

    PubMed Central

    Hendrie, Gilly A.; Ridoutt, Brad G.; Wiedmann, Thomas O.; Noakes, Manny

    2014-01-01

    Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient poor “non-core” foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe. PMID:24406846

  18. Greenhouse gas emissions and the Australian diet--comparing dietary recommendations with average intakes.

    PubMed

    Hendrie, Gilly A; Ridoutt, Brad G; Wiedmann, Thomas O; Noakes, Manny

    2014-01-08

    Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient poor "non-core" foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe.

  19. Averaged controllability of parameter dependent conservative semigroups

    NASA Astrophysics Data System (ADS)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
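
    In a simplified finite-dimensional notation (chosen here for illustration; the paper works in the more general semigroup setting), the averaged controllability problem reads:

```latex
% Parameter-dependent dynamics with one parameter-independent control u:
\[
  x'(t,\zeta) = A(\zeta)\,x(t,\zeta) + B\,u(t), \qquad x(0,\zeta) = x_0 ,
\]
% Averaged controllability: steer the average of the states, where \mu is
% the averaging probability measure over the unknown parameter \zeta:
\[
  \exists\, u \ \text{independent of } \zeta \ \text{such that} \quad
  \int x(T,\zeta)\,\mathrm{d}\mu(\zeta) = x_T .
\]
```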

  20. Averaged initial Cartesian coordinates for long lifetime satellite studies

    NASA Technical Reports Server (NTRS)

    Pines, S.

    1975-01-01

    A set of initial Cartesian coordinates, which are free of ambiguities and resonance singularities, is developed to study satellite mission requirements and dispersions over long lifetimes. The method outlined herein possesses two distinct advantages over most other averaging procedures. First, the averaging is carried out numerically using Gaussian quadratures, thus avoiding tedious expansions and the resulting resonances for critical inclinations, etc. Secondly, by using the initial rectangular Cartesian coordinates, conventional, existing acceleration perturbation routines can be absorbed into the program without further modifications, thus making the method easily adaptable to the addition of new perturbation effects. The averaged nonlinear differential equations are integrated by means of a Runge-Kutta method. A typical step size of several orbits permits rapid integration of long lifetime orbits in a short computing time.
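
    The numerical-averaging step can be pictured with the sketch below, which replaces the analytic average of a perturbation over the fast orbital angle by Gauss-Legendre quadrature; the perturbation function is a placeholder, not the paper's force model.

```python
import numpy as np

def averaged_rate(perturbation, nodes=16):
    """Average a perturbation over one revolution of the fast angle using
    Gauss-Legendre quadrature instead of an analytic series expansion."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    theta = np.pi * (x + 1.0)      # map nodes from [-1, 1] to [0, 2*pi]
    weights = np.pi * w            # Jacobian of the mapping
    # (1 / 2*pi) * integral of the perturbation over one revolution
    return np.sum(weights * perturbation(theta)) / (2.0 * np.pi)

# Placeholder perturbation as a function of the orbit angle theta
f = lambda th: 1e-3 * np.cos(2.0 * th) ** 2
print(averaged_rate(f))   # -> 5e-4, the secular (averaged) contribution
```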

  1. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  2. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
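
    The core computation, an 11-year trailing moving average and its correlation with a second index, can be sketched as follows on synthetic series (the real misery indices are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
economic_misery = rng.normal(10, 3, size=71)     # placeholder yearly index

# 11-year trailing moving average: value k summarises years k..k+10
decade_avg = np.convolve(economic_misery, np.ones(11) / 11, mode="valid")

# Synthetic 'literary misery' that tracks the decade average plus noise
literary_misery = decade_avg + rng.normal(0, 0.3, size=decade_avg.size)

r = np.corrcoef(literary_misery, decade_avg)[0, 1]
print(f"correlation with the 11-year moving average: {r:.2f}")
```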

  3. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it has been shown to have better detrending capability than the original DMA, removing higher-order polynomial trends. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, although that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
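
    A minimal sketch of the zeroth-order (simple moving average) case shows the cumulative-sum recurrence that makes the cost linear in the data length; higher-order DMA replaces the window mean with a local polynomial fit, which this sketch does not attempt.

```python
import numpy as np

def dma_fluctuation(x, scales):
    """Centered zeroth-order DMA fluctuation F(s) for odd window sizes s,
    using prefix sums so each moving average costs O(N), not O(N*s)."""
    y = np.cumsum(x - np.mean(x))             # integrated profile
    c = np.concatenate(([0.0], np.cumsum(y))) # prefix sums of the profile
    out = []
    for s in scales:
        h = s // 2
        ma = (c[s:] - c[:-s]) / s             # moving average via recurrence
        resid = y[h:len(y) - h] - ma          # deviation from local trend
        out.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(out)

x = np.random.default_rng(0).standard_normal(2 ** 14)   # white noise
s = np.array([5, 9, 17, 33, 65, 129])
F = dma_fluctuation(x, s)
# Scaling exponent alpha from F(s) ~ s^alpha (about 0.5 for white noise)
print(np.polyfit(np.log(s), np.log(F), 1)[0])
```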

  4. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover, these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Hénon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  5. Average power meter for laser radiation

    NASA Astrophysics Data System (ADS)

    Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.

    2016-04-01

    Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In this paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can continuously monitor the laser in pulsed or continuous-wave mode without interrupting its operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, chosen for its fast response, long-term stability of sensitivity, and almost uniform sensitivity across wavelengths.

  6. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
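
    A minimal long-only sketch of such a rule, under assumed parameter values: enter on a price/moving-average cross-over 'buy' signal, and exit when the price falls below a trailing fraction of its peak since entry (the dynamic threshold).

```python
import numpy as np

def long_only_signals(price, window=50, trail=0.95):
    """Boolean position series for a cross-over entry with trailing stop."""
    ma = np.convolve(price, np.ones(window) / window, mode="valid")
    p = price[window - 1:]                     # align price with its MA
    in_pos, peak = False, 0.0
    pos = np.zeros(p.size, dtype=bool)
    for t in range(1, p.size):
        if not in_pos and p[t] > ma[t] and p[t - 1] <= ma[t - 1]:
            in_pos, peak = True, p[t]          # cross-over 'buy' signal
        elif in_pos:
            peak = max(peak, p[t])
            if p[t] < trail * peak:            # dynamic trailing stop
                in_pos = False
        pos[t] = in_pos
    return pos

price = np.cumsum(np.random.default_rng(2).normal(0.05, 1.0, 1000)) + 100
held = long_only_signals(price)
print(f"fraction of days in the market: {held.mean():.2f}")
```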

  7. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  8. Average length of stay in hospitals.

    PubMed

    Egawa, H

    1984-03-01

    The average length of stay is essentially an important and appropriate index for hospital bed administration. However, on the premise that it is not necessarily an appropriate index in Japan, the differences between the health care facility systems of the United States and Japan are analyzed. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that for the average length of stay to become an appropriate index, there is a need to promote regional health, especially facility planning.

  9. The solar UV exposure time required for vitamin D3 synthesis in the human body estimated by numerical simulation and observation in Japan

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideaki; Miyauchi, Masaatsu; Hirai, Chizuko

    2013-04-01

    After the discovery of the Antarctic ozone hole, the negative effects of exposing the human body to harmful solar ultraviolet (UV) radiation became widely known. However, exposure to UV radiation also has a positive effect: vitamin D synthesis. Although the importance of solar UV radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been well determined. This study attempted to identify the solar exposure time required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha, in Japan) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 2.3 min of solar exposure are required to produce 5.5 μg of vitamin D3 per 600 cm2 of skin. This quantity of vitamin D represents the recommended intake for an adult according to the Ministry of Health, Labour and Welfare and the 2010 Japanese Dietary Reference Intakes (DRIs). In contrast, it took 49.5 min to produce the same amount of vitamin D3 at Sapporo, in the northern part of Japan, in December at noon under a cloudless sky. The necessary exposure time varied considerably with the time of day: for Tsukuba in December, 14.5 min were required at noon, but 68.7 min were required at 09:00 and 175.8 min at 15:00 under the same meteorological conditions. Naha receives high levels of UV radiation, allowing vitamin D3 synthesis almost throughout the year. Based on these results, we are further developing an index to quantify the UV exposure time necessary to produce the required amount of vitamin D3 from UV radiation data.

  10. Microchannel cooled heatsinks for high average power laser diode arrays

    SciTech Connect

    Bennett, W.J.; Freitas, B.L.; Ciarlo, D.; Beach, R.; Sutton, S.; Emanuel, M.; Solarz, R.

    1993-01-15

    Detailed performance results for an efficient and low-impedance laser diode array heatsink are presented. High-duty-factor and even cw operation of fully filled laser diode arrays at high stacking densities are enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using an anisotropic chemical etching process. A modular rack-and-stack architecture is adopted for heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel heatsinks is ideally suited to pump array requirements for high-average-power crystalline lasers because of the stringent temperature control required to efficiently couple diode light to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.

  11. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  12. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  13. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  14. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  15. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  16. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  17. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  18. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  19. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  20. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  1. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  2. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  3. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  4. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  5. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  6. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  7. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  8. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  9. Average thermal characteristics of solar wind electrons

    NASA Technical Reports Server (NTRS)

    Montgomery, M. D.

    1972-01-01

    Average solar wind electron properties based on a 1-year Vela 4 data sample, from May 1967 to May 1968, are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.

  10. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  11. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. "Alice sighed wearily. 'I think you might do something better with the time,' she said, 'than waste it asking riddles with no answers'" (Alice in Wonderland, L. Carroll)

  12. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average-power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug-to-optical-power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high-average-power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  13. How Young Is Standard Average European?

    ERIC Educational Resources Information Center

    Haspelmath, Martin

    1998-01-01

    An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, antiaccusative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…

  14. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  15. Averaging cross section data so we can fit it

    SciTech Connect

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach-theory-based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
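
    A minimal sketch of the smoothing step on toy data: each point of the fluctuating cross section is replaced by a Lorentzian-weighted average. The half-width and energy grid are illustrative assumptions, not EMPIRE's actual settings.

```python
import numpy as np

def lorentzian_smooth(e, sigma, gamma=0.05):
    """Average sigma(E) with a Lorentzian kernel of half-width gamma (MeV)."""
    out = np.empty_like(sigma)
    for i, e0 in enumerate(e):
        w = gamma / ((e - e0) ** 2 + gamma ** 2)   # unnormalised Lorentzian
        out[i] = np.sum(w * sigma) / np.sum(w)     # weighted average
    return out

e = np.linspace(0.5, 2.0, 600)             # MeV, roughly the fast region
xs = 1.0 + 0.3 * np.sin(40 * e) ** 2       # fluctuating toy cross section
print(lorentzian_smooth(e, xs)[:3])
```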

  16. Ensemble crowd perception: A viewpoint-invariant mechanism to represent average crowd identity

    PubMed Central

    Yamanashi Leib, Allison; Fischer, Jason; Liu, Yang; Qiu, Sang; Robertson, Lynn; Whitney, David

    2014-01-01

    Individuals can rapidly and precisely judge the average of a set of similar items, including both low-level (Ariely, 2001) and high-level objects (Haberman & Whitney, 2007). However, to date, it is unclear whether ensemble perception is based on viewpoint-invariant object representations. Here, we tested this question by presenting participants with crowds of sequentially presented faces. The number of faces in each crowd and the viewpoint of each face varied from trial to trial. This design required participants to integrate information from multiple viewpoints into one ensemble percept. Participants reported the mean identity of crowds (e.g., family resemblance) using an adjustable, forward-oriented test face. Our results showed that participants accurately perceived the mean crowd identity even when required to incorporate information across multiple face orientations. Control experiments showed that the precision of ensemble coding was not solely dependent on the length of time participants viewed the crowd. Moreover, control analyses demonstrated that observers did not simply sample a subset of faces in the crowd but rather integrated many faces into their estimates of average crowd identity. These results demonstrate that ensemble perception can operate at the highest levels of object recognition after 3-D viewpoint-invariant faces are represented. PMID:25074904

  17. Comparison of mouse brain DTI maps using K-space average, image-space average, or no average approach.

    PubMed

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-11-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in k-space (k-avg) or in image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of the diffusion tensor (TR), axial diffusivity (λ║), and radial diffusivity (λ⊥), derived from the k-avg, m-avg, and no-avg approaches were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging.
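
    The two averaging routes can be contrasted on simulated complex k-space repetitions, a simplified stand-in for the DWI acquisition: k-avg averages complex k-space data before the inverse FFT, while m-avg averages magnitude images after it.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0                    # toy object
k_true = np.fft.fft2(truth)

# Eight noisy repetitions of the same complex k-space acquisition
reps = [k_true + rng.normal(0, 20, k_true.shape)
               + 1j * rng.normal(0, 20, k_true.shape) for _ in range(8)]

k_avg = np.abs(np.fft.ifft2(np.mean(reps, axis=0)))               # k-space
m_avg = np.mean([np.abs(np.fft.ifft2(k)) for k in reps], axis=0)  # image

# Magnitude averaging keeps a positive noise floor (Rician bias) in
# low-signal regions, one reason the two routes can differ downstream.
print(k_avg[:8, :8].mean(), m_avg[:8, :8].mean())
```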

  18. Evolution of the average steepening factor for nonlinearly propagating waves.

    PubMed

    Muhlestein, Michael B; Gee, Kent L; Neilsen, Tracianne B; Thomas, Derek C

    2015-02-01

    Difficulties arise in attempting to discern the effects of nonlinearity in near-field jet-noise measurements due to the complicated source structure of high-velocity jets. This article describes a measure that may be used to help quantify the effects of nonlinearity on waveform propagation. This measure, called the average steepening factor (ASF), is the ratio of the average positive slope in a time waveform to the average negative slope. The ASF is the inverse of the wave steepening factor defined originally by Gallagher [AIAA Paper No. 82-0416 (1982)]. An analytical description of the ASF evolution is given for benchmark cases: initially sinusoidal plane waves propagating through lossless and thermoviscous media. The effects of finite sampling rates and measurement noise on ASF estimation from measured waveforms are discussed. The evolution of initially broadband Gaussian noise and signals propagating in media with realistic absorption are described using numerical and experimental methods. The ASF is found to be relatively sensitive to measurement noise but is a relatively robust measure for limited sampling rates. The ASF is found to increase more slowly for initially Gaussian noise signals than for initially sinusoidal signals of the same level, indicating that the average distortion within noise waveforms occurs more slowly.
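
    From the definition quoted above, the ASF of a sampled waveform can be computed directly, as in this minimal sketch; an undistorted sine gives ASF of about 1, and nonlinear steepening pushes it above 1.

```python
import numpy as np

def average_steepening_factor(waveform, fs):
    """Mean positive slope divided by the mean negative-slope magnitude."""
    slopes = np.diff(waveform) * fs
    up = slopes[slopes > 0]
    down = slopes[slopes < 0]
    return up.mean() / np.abs(down).mean()

fs = 48_000
t = np.arange(0, 0.01, 1.0 / fs)
sine = np.sin(2 * np.pi * 1000 * t)
print(average_steepening_factor(sine, fs))   # ~1 for an undistorted sine
# Steepened (shock-forming) waveforms have sharper rises than falls,
# so their ASF exceeds 1.
```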

  19. Estimating the Manpower, Personnel, and Training Requirements of the Army’s Corps Support Weapon System Using the HARDMAN Methodology. Appendices

    DTIC Science & Technology

    1982-09-01

    explanation of the MOS selection criteria. C2.2 SUMMARY OF MOS ASSIGNMENTS BY EQUIPMENT. Table C2-2 contains all of the MOS required for the CSWS... person trained in one MOS in the space of a different MOS. * Much less opportunity than today for on-the-job training; much more need for school

  20. Genetic associations among average annual productivity, growth traits, and stayability: a parallel between Nelore and composite beef cattle.

    PubMed

    Santana, M L; Eler, J P; Bignardi, A B; Ferraz, J B S

    2013-06-01

    This study was conducted to examine the relationship among average annual productivity of the cow (PRODAM), yearling weight (YW), postweaning BW gain (PWG), scrotal circumference (SC), and stayability in the herd for at least 6 yr (STAY) of Nelore and composite beef cattle. Measurements were taken on animals born between 1980 and 2010 on 70 farms located in 7 Brazilian states. Estimates of heritability and genetic and environmental correlations were obtained by a Bayesian approach with 5-trait animal models. Genetic trends were estimated by regressing means of estimated breeding values on year of birth. The heritability estimates were between 0.14 and 0.47. Estimates of genetic correlation among female traits (PRODAM and STAY) and growth traits ranged from -0.02 to 0.30. Estimates of genetic correlations ranged from 0.23 to 0.94 among growth traits, indicating that selection for these traits could be successful in tropical breeding programs. Genetic correlations among all traits were favorable, and simultaneous selection for growth, productivity, and stayability is therefore possible. The genetic correlation between PRODAM and STAY was 0.99 and 0.85 for Nelore and composite cattle, respectively. Therefore, PRODAM and STAY might be influenced by many of the same genes. The inclusion of PRODAM instead of STAY as a selection criterion seems to be more advantageous for tropical breeding programs because the generation interval required to obtain accurate estimates of genetic merit for PRODAM is shorter. Average annual genetic changes were greater in Nelore than in composite cattle. This was not unexpected because the breeding program of composite cattle included a large number of farms, different production environments, and varying genetic levels of the herds and breeds. Thus, the selection process has become more difficult in this population.

  1. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition, and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
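
    A minimal sketch of the HVA under the stated geometry: invert each local velocity (v → v/|v|²), average the inverted vectors, and invert back. For an unbiased sample of normal components of a single global motion this recovers the global velocity exactly, while the plain vector average underestimates the speed.

```python
import numpy as np

def harmonic_vector_average(v):
    """HVA of an (n, 2) array of local velocity vectors."""
    inv = v / np.sum(v ** 2, axis=1, keepdims=True)  # invert each vector
    m = inv.mean(axis=0)
    return m / np.sum(m ** 2)                        # invert the mean back

# Local normal components of a global velocity V = (1, 0): speed cos(a)
# in direction a, for an unbiased (symmetric) spread of orientations.
angles = np.deg2rad([-60, -30, 0, 30, 60])
v = np.stack([np.cos(angles) * np.cos(angles),
              np.cos(angles) * np.sin(angles)], axis=1)

print(harmonic_vector_average(v))   # -> [1. 0.], the true global velocity
print(v.mean(axis=0))               # plain vector average underestimates
```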

  2. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long-range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  3. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  4. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.

  5. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
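
    The model admits a one-line simulation: if car 0 leads, each car's realized speed is the minimum of the preferred speeds of itself and all cars ahead. The speed range below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)
v_max = rng.uniform(10, 30, size=100)      # preferred speeds; car 0 leads
v_actual = np.minimum.accumulate(v_max)    # forced speeds down the queue
print(v_max.mean(), v_actual.mean())       # queue average is always lower
```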

  6. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.

  7. Averaging schemes for solving fixed point and variational inequality problems

    SciTech Connect

    Magnanti, T.L.; Perakis, G.

    1994-12-31

    In this talk we develop and study averaging schemes for solving fixed point and variational inequality problems. Typically, researchers have established convergence results for methods that solve these problems by establishing contractive estimates for the underlying algorithmic maps. In this talk we establish global convergence results using nonexpansive estimates. After first establishing convergence for a general iterative scheme for computing fixed points, we consider applications to projection and relaxation algorithms for solving variational inequality problems and to a generalized steepest descent method for solving systems of equations. As part of our development, we also establish a new interpretation of a norm condition typically used for establishing convergence of linearization schemes, by associating it with a strong-f-monotonicity condition. We conclude by applying these results to congested transportation networks.
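
    One classical averaging scheme in this spirit is the Krasnoselskii-Mann iteration, sketched below; it converges to a fixed point for nonexpansive maps, exactly the setting where contractive estimates are unavailable. The example map is chosen here for illustration, not taken from the talk.

```python
import numpy as np

def mann_iteration(T, x0, a=0.5, iters=500):
    """Averaged fixed-point iteration x <- (1-a) x + a T(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1 - a) * x + a * T(x)     # averaged (relaxed) step
    return x

# A nonexpansive example map (a reflection composed with a projection);
# plain iteration of T need not converge, but the averaged scheme does.
T = lambda x: np.clip(1.0 - x, 0.0, 1.0)
print(mann_iteration(T, [0.9, 0.1]))   # -> [0.5, 0.5], a fixed point of T
```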

  8. Disk-Averaged Synthetic Spectra of Mars

    NASA Astrophysics Data System (ADS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.

  9. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.

  10. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.

  11. On the ensemble averaging of PIC simulations

    NASA Astrophysics Data System (ADS)

    Codur, R. J. B.; Tsung, F. S.; Mori, W. B.

    2016-10-01

    Particle-in-cell simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov-Fokker-Planck equations in multi-dimensions. However, the PIC method actually models the Klimontovich equation for finite size particles. The Vlasov-Fokker-Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations where we use identical "drivers" but change the random number generator seeds. We show that even for cases where a plasma wave is excited below the noise in a single simulation, the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. A comparison between the results from an ensemble average and the subtraction technique is also presented. In the subtraction technique, two simulations, one with and the other without the "driver", are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
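
    The ensemble-averaging idea is easy to demonstrate outside a full PIC code: in this sketch, a coherent response below the single-run noise floor emerges after averaging O(10) realisations that differ only in their random seed.

```python
import numpy as np

t = np.linspace(0, 1, 1000)
signal = 0.1 * np.sin(2 * np.pi * 5 * t)     # below the single-run noise

# Same 'driver' (signal), different random seeds per realisation
runs = [signal + np.random.default_rng(seed).normal(0, 1, t.size)
        for seed in range(10)]

single = runs[0]
ensemble = np.mean(runs, axis=0)             # noise drops like 1/sqrt(N)
print(np.std(single - signal), np.std(ensemble - signal))
```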

  12. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.

  13. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
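
    The averaging scheme reduces, per grid cell, to pooling the same calendar month across all years while skipping missing data, as in this toy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
sst = rng.normal(15, 5, size=(10, 12))   # 10 years x 12 months (toy cell)
sst[3, 6] = np.nan                       # a month lacking valid data

# One climatological average per calendar month, ignoring missing cells;
# pooling across years suppresses interannual variability.
climatology = np.nanmean(sst, axis=0)
print(climatology.round(1))
```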

  14. A simple algorithm for averaging spike trains.

    PubMed

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
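
    A simplified sketch of the pipeline described above; the Gaussian kernel and the fixed spike count are assumptions made here for brevity, and the published algorithm chooses both more carefully.

```python
import numpy as np

def to_function(spikes, t, tau=0.01):
    """Map a spike train to a smooth function (sum of Gaussian bumps)."""
    return sum(np.exp(-((t - s) ** 2) / (2 * tau ** 2)) for s in spikes)

def central_spike_train(trains, t, n_spikes=3):
    """Average the trains' functions, then greedily place spikes so the
    candidate train's function best matches that average."""
    target = np.mean([to_function(tr, t) for tr in trains], axis=0)
    centre = []
    for _ in range(n_spikes):
        errs = [np.sum((to_function(centre + [s], t) - target) ** 2)
                for s in t]                 # try every candidate time
        centre.append(t[int(np.argmin(errs))])
    return sorted(centre)

t = np.linspace(0, 1, 200)
trials = [[0.11, 0.52, 0.80], [0.09, 0.49, 0.83], [0.12, 0.50, 0.78]]
print(central_spike_train(trials, t))   # spikes near 0.1, 0.5, 0.8
```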

  15. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    Breeding densities and migration periods of Common Snipe in Colorado were investigated in 1974-75. Sites studied were near Fort Collins and in North Park, both in north central Colorado; in the Yampa Valley in northwestern Colorado; and in the San Luis Valley in south central Colorado....Estimated densities of breeding snipe based on censuses conducted during May 1974 and 1975 were, by region: 1.3-1.7 snipe/ha near Fort Collins; 0.6 snipe/ha in North Park; 0.5-0.7 snipe/ha in the Yampa Valley; and 0.5 snipe/ha in the San Luis Valley. Overall mean densities were 0.6 and 0.7 snipe/ha in 1974 and 1975, respectively. On individual study sites, densities of snipe ranged from 0.2 to 2.1 snipe/ha. Areas with shallow, stable, discontinuous water levels, sparse, short vegetation, and soft organic soils had the highest densities.....Twenty-eight nests were located, having a mean clutch size of 3.9 eggs. Estimated onset of incubation ranged from 2 May through 4 July. Most nests were initiated in May.....Spring migration extended from late March through early May. Highest densities of snipe were recorded in all regions during l&23 April. Fall migration was underway by early September and was completed by mid-October, with highest densities occurring about the third week in September. High numbers of snipe noted in early August may have been early migrants or locally produced juveniles concentrating on favorable feeding areas.

  16. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.

  17. Evaluation of spline and weighted average interpolation algorithms

    NASA Astrophysics Data System (ADS)

    Eckstein, Barbara Ann

    Bivariate polynomial and weighted average interpolations were tested on two data sets. One data set consisted of irregularly spaced Bouguer gravity values. Maps derived from automated interpolation were compared to a manually created map to determine the best computer-generated diagram. For this data set, bivariate polynomial interpolation was inadequate, showing many spurious circular anomalies with extrema greatly exceeding the input values. The greatest distortion occurred near roughly colinear observations and steep field gradients. The computerized map from weighted average interpolation matched the manual map when the number of grid points was roughly nine times the number of input points. Groundwater recharge and discharge rates were used for the second example. The discharge zones are two narrow irrigation ditches, and measurements were along linear traverses. Again, polynomial interpolation produced unreasonably large interpolated values near high field gradients. The weighted average method required a higher ratio of grid points to input data (about 64 to 1) because of the long narrow shape of the discharge zones. The weighted average interpolation method was more reliable than the polynomial method because it was less sensitive to the nature of the data distribution and to the field gradients.
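
    A minimal inverse-distance weighted-average gridding sketch illustrates the scheme's key property: interpolated values are convex combinations of the observations, so they can never exceed the input extrema the way the polynomial maps did. The power and grid are illustrative choices.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Inverse-distance weighted average of scattered data on a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.empty(gx.shape)
    for i in np.ndindex(gx.shape):
        d = np.hypot(xy[:, 0] - gx[i], xy[:, 1] - gy[i])
        w = 1.0 / (d ** power + eps)          # nearby observations dominate
        out[i] = np.sum(w * values) / np.sum(w)
    return out

pts = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])
vals = np.array([10.0, 30.0, 20.0])           # e.g. Bouguer gravity values
grid = idw_grid(pts, vals, np.linspace(0, 1, 9), np.linspace(0, 1, 9))
print(grid.round(1))                          # stays within [10, 30]
```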

  18. High Average Power, High Energy Short Pulse Fiber Laser System

    SciTech Connect

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling, and materials processing; front-end systems for high energy pulsed lasers (such as petawatts); and laser-based sources of high-spatial-coherence, high-flux x-rays all require high energy short pulses, and two of the three also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems, as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  19. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
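
    The simplified method can be sketched as follows: once the shape parameter of the lifespan distribution is fixed, the distribution has a single free parameter, so the average lifespan pins it down. The Python fragment below assumes a Weibull form with an arbitrary fixed shape of 2.5; the abstract does not state which distribution family or shape value the authors actually used.

        import math

        K_SHAPE = 2.5  # hypothetical fixed shape parameter

        def mean_lifespan(scale, shape=K_SHAPE):
            """Mean of a Weibull lifespan distribution."""
            return scale * math.gamma(1.0 + 1.0 / shape)

        def scale_from_mean(mean, shape=K_SHAPE):
            """With the shape fixed, the average lifespan alone
            determines the scale parameter."""
            return mean / math.gamma(1.0 + 1.0 / shape)

        def survival(t, scale, shape=K_SHAPE):
            """Fraction of products still in use after t years."""
            return math.exp(-((t / scale) ** shape))

        lam = scale_from_mean(14.0)          # e.g., a 14-year average lifespan
        print(survival(9.0, lam), survival(23.0, lam))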

  20. Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works

    NASA Astrophysics Data System (ADS)

    Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha

    2015-04-01

    Among the many approaches to landslide stability analysis, several studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e., average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: the residual shear strength measured using a torsional ring-shear apparatus has been found to be lower than the average strength calculated by back analysis. One reason why the singular application of residual shear strength in stability analysis causes an underestimation of the safety factor is that the condition of the slip surface of a landslide can be heterogeneous: it may consist of portions that have already reached residual conditions along with other portions that have not. With a view to accommodating such possible differences in the slip surface conditions of a landslide, it is worthwhile to first gain an appropriate perception of the heterogeneous nature of the actual slip surface, to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. In the present study, a procedure for determining the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri mudstone area of Okinawa, Japan. The average strength parameters along the slip surfaces of the landslides have been estimated using the results of laboratory shear tests of the slip surface/zone soils, accompanied by a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the

  1. A Green's function quantum average atom model

    DOE PAGES

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
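
    The numerical trick is easy to demonstrate: evaluating a Green's function a small distance η off the real axis replaces each sharp level or resonance by a Lorentzian of half-width η. The toy Python below uses arbitrary, made-up level positions and has nothing to do with the average atom model itself.

        import numpy as np

        levels = np.array([-2.0, -0.5, -0.49, 1.3])  # hypothetical sharp levels
        eta = 0.05                                   # half-width chosen for convenience

        E = np.linspace(-3.0, 2.0, 2000)
        G = np.sum(1.0 / (E[:, None] + 1j * eta - levels[None, :]), axis=1)
        dos = -G.imag / np.pi   # Lorentzian-broadened density of states;
                                # no root-finding or resonance tracking needed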

  2. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  3. Simulation Framework to Estimate the Performance of CO2 and O2 Sensing from Space and Airborne Platforms for the ASCENDS Mission Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Plitau, Denis; Prasad, Narasimha S.

    2012-01-01

    The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned for inclusion in the framework. A description of the modeling framework, with selected results of the simulation studies for CO2 and O2 sensing, is presented in this paper.
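
    As an illustration of what an interchangeable lineshape routine looks like, the sketch below evaluates a Voigt profile via the Faddeeva function, a standard formulation; the line position and widths are invented, and this is not code from the LaRC framework.

        import numpy as np
        from scipy.special import wofz

        def voigt(nu, nu0, gamma_l, sigma_g):
            """Area-normalized Voigt profile: Lorentzian of HWHM gamma_l
            convolved with a Gaussian of standard deviation sigma_g."""
            z = ((nu - nu0) + 1j * gamma_l) / (sigma_g * np.sqrt(2.0))
            return wofz(z).real / (sigma_g * np.sqrt(2.0 * np.pi))

        # Hypothetical line near 1.57 um, expressed in wavenumbers (cm^-1).
        nu = np.linspace(6360.0, 6362.0, 2000)
        profile = voigt(nu, nu0=6361.0, gamma_l=0.07, sigma_g=0.01)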

  4. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (zcut, Δz) plane, where zcut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying zcut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (zcut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at zcut >= 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
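
    A stripped-down version of the binning step is sketched below: supernovae above the redshift cut are grouped into bins of width Δz, their fluxes (taken here as 10^(-0.4 μ) from the distance modulus μ) are averaged, and each bin is replaced by a single effective point. Real analyses propagate covariances and work with calibrated fluxes; this fragment only shows the averaging logic.

        import numpy as np

        def flux_average(z, mu, z_cut=0.6, dz=0.06):
            """Flux-average SNe with z >= z_cut in bins of width dz;
            lower-redshift SNe are passed through unchanged."""
            keep = z < z_cut
            z_out, mu_out = list(z[keep]), list(mu[keep])
            edges = np.arange(z_cut, z.max() + dz, dz)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (z >= lo) & (z < hi)
                if not sel.any():
                    continue
                flux = 10.0 ** (-0.4 * mu[sel])      # relative fluxes
                z_out.append(z[sel].mean())          # bin's mean redshift
                mu_out.append(-2.5 * np.log10(flux.mean()))
            return np.array(z_out), np.array(mu_out)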

  5. H∞ control of switched delayed systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay

    2013-12-01

    This paper considers the problems of stability analysis and H∞ controller design for time-delay switched systems with average dwell time. In order to obtain less conservative results than those seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
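
    For context, the classical average dwell time condition (due to Hespanha and Morse), which results of this kind tighten, can be stated as follows. If every mode admits a Lyapunov function satisfying

        \dot{V}_i(x) \le -2\lambda_0\, V_i(x), \qquad V_i(x) \le \mu\, V_j(x) \quad \forall\, i, j, \ \mu \ge 1,

    then the switched system is globally exponentially stable for every switching signal whose average dwell time satisfies

        \tau_a > \tau_a^{*} = \frac{\ln \mu}{2\lambda_0}.

    This baseline bound is not the paper's improved criterion; per the abstract, the contribution is to shrink the admissible minimal dwell time for time-delay systems via a new Lyapunov-Krasovskii functional and the scaled small gain theorem.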

  6. Local average height distribution of fluctuating interfaces

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  7. Average neutronic properties of prompt fission products

    SciTech Connect

    Foster, D.G. Jr.; Arthur, E.D.

    1982-02-01

    Calculations of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments, are described. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than that of ²³⁵U fission fragments.

  8. Local average height distribution of fluctuating interfaces.

    PubMed

    Smith, Naftali R; Meerson, Baruch; Sasorov, Pavel V

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  9. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
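
    The eddy-flux quantities listed above follow from a Reynolds-type decomposition of each field into its zonal mean and the deviation from it. A minimal numpy sketch for the standing-eddy part on a single time-mean field is given below; the (lat, lon) array layout and variable names are assumptions for illustration.

        import numpy as np

        def zonal_mean_and_eddy_flux(v, T):
            """Zonal mean [.] over the longitude axis and the northward
            eddy heat flux [v'T'] from gridded (lat, lon) fields."""
            v_bar = v.mean(axis=-1, keepdims=True)   # [v]
            T_bar = T.mean(axis=-1, keepdims=True)   # [T]
            v_p, T_p = v - v_bar, T - T_bar          # deviations v', T'
            return T_bar.squeeze(), (v_p * T_p).mean(axis=-1)

    Transient-eddy fluxes are obtained the same way, but with deviations taken about the time mean before zonal averaging.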

  10. Multiple-level defect species evaluation from average carrier decay

    NASA Astrophysics Data System (ADS)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.

  11. Laser Diode Cooling For High Average Power Applications

    NASA Astrophysics Data System (ADS)

    Mundinger, David C.; Beach, Raymond J.; Benett, William J.; Solarz, Richard W.; Sperry, Verry

    1989-06-01

    Many applications for semiconductor lasers that require high average power are limited by the inability to remove the waste heat generated by the diode lasers. In order to reduce the cost and complexity of these applications, a heat sink package has been developed which is based on water-cooled silicon microstructures. Thermal resistivities of less than 0.025°C/(W/cm²) have been measured, which should be adequate for up to CW operation of diode laser arrays. This concept can easily be scaled to large areas and is ideal for high average power solid state laser pumping. Several packages which illustrate the essential features of this design have been fabricated and tested. The theory of operation will be briefly covered, and several conceptual designs will be described. The fabrication and assembly procedures and measured levels of performance will also be discussed.

  12. Robust myelin water quantification: averaging vs. spatial filtering.

    PubMed

    Jones, Craig K; Whittall, Kenneth P; MacKay, Alex L

    2003-07-01

    The myelin water fraction is calculated, voxel-by-voxel, by fitting decay curves from a multi-echo data acquisition. Curve-fitting algorithms require a high signal-to-noise ratio to separate T(2) components in the T(2) distribution. This work compared the effect of averaging, during acquisition, to data postprocessed with a noise reduction filter. Forty regions, from five volunteers, were analyzed. A consistent decrease in the myelin water fraction variability with no bias in the mean was found for all 40 regions. Images of the myelin water fraction of white matter were more contiguous and had fewer "holes" than images of myelin water fractions from unfiltered echoes. Spatial filtering was effective for decreasing the variability in myelin water fraction calculated from 4-average multi-echo data.
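
    The underlying fitting step can be sketched compactly: the multi-echo decay is expressed as a nonnegative sum of exponentials over a grid of T(2) values, and the myelin water fraction is the share of fitted amplitude at short T(2). The fragment below is a minimal, unregularized version (real analyses regularize the inversion), and the 40 ms cutoff is a commonly quoted convention rather than a value taken from this paper.

        import numpy as np
        from scipy.optimize import nnls

        def myelin_water_fraction(echo_times, decay, t2_grid, t2_cut=0.040):
            """Fit a nonnegative T2 spectrum to a multi-echo decay and
            return the fraction of amplitude with T2 <= t2_cut (seconds)."""
            A = np.exp(-echo_times[:, None] / t2_grid[None, :])  # decay basis
            amps, _ = nnls(A, decay)                             # T2 spectrum
            return amps[t2_grid <= t2_cut].sum() / amps.sum()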

  13. Microchannel heatsinks for high average power laser diode arrays

    SciTech Connect

    Beach, R.; Benett, B.; Freitas, B.; Ciarlo, D.; Sperry, V.; Comaskey, B.; Emanuel, M.; Solarz, R.; Mundinger, D.

    1992-01-01

    Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel-cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.

  14. Estimation of cloud fraction profile in shallow convection using a scanning cloud radar

    NASA Astrophysics Data System (ADS)

    Oue, Mariko; Kollias, Pavlos; North, Kirk W.; Tatarevic, Aleksandra; Endo, Satoshi; Vogelmann, Andrew M.; Gustafson, William I.

    2016-10-01

    Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed by using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of scanning cloud radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. The proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.

  15. Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar

    SciTech Connect

    Oue, Mariko; Kollias, Pavlos; North, Kirk W.; Tatarevic, Aleksandra; Endo, Satoshi; Vogelmann, Andrew M.; Gustafson, Jr., William I.

    2016-10-18

    Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.

  16. Ensemble estimators for multivariate entropy estimation.

    PubMed

    Sricharan, Kumar; Wei, Dennis; Hero, Alfred O

    2013-07-01

    The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster, dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
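
    A bare-bones version of the construction is sketched below: several k-NN (Kozachenko-Leonenko) entropy estimates with different k are combined by weights summing to one. Uniform weights stand in for the optimal weights, which the paper obtains from an offline convex program; the base estimator is one common variant.

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.special import digamma, gammaln

        def kl_entropy(x, k):
            """Kozachenko-Leonenko k-NN entropy estimate (nats), x of shape (n, d)."""
            n, d = x.shape
            eps = cKDTree(x).query(x, k + 1)[0][:, -1]  # k-th neighbor distances
            log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
            return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))

        def ensemble_entropy(x, ks=(3, 5, 8, 13), weights=None):
            """Weighted affine combination of base estimators (weights sum to 1)."""
            h = np.array([kl_entropy(x, k) for k in ks])
            w = np.full(len(ks), 1.0 / len(ks)) if weights is None else np.asarray(weights)
            return float(w @ h)

        x = np.random.default_rng(1).normal(size=(2000, 3))
        print(ensemble_entropy(x))  # compare with 0.5*d*log(2*pi*e) ~ 4.26 nats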

  17. Global average net radiation sensitivity to cloud amount variations

    SciTech Connect

    Karner, O.

    1993-12-01

    Time series analysis performed using an autoregressive model is carried out to study monthly oscillations in the earth radiation budget (ERB) at the top of the atmosphere (TOA) and cloud amount estimates on a global basis. Two independent cloud amount datasets, produced elsewhere by different authors, and the ERB record based on the Nimbus-7 wide field-of-view 8-year (1978-86) observations are used. Autoregressive models are used to eliminate the effects of the earth's orbit eccentricity on the radiation budget and cloud amount series. Nonzero cross correlation between the residual series provides a way of estimating the contribution of the cloudiness variations to the variance in the net radiation. As a result, a new parameter to estimate the net radiation sensitivity at the TOA to changes in cloud amount is introduced. This parameter has a more general character than other estimates because it contains time-lag terms of different length responsible for different cloud-radiation feedback mechanisms in the earth climate system. Time lags of 0, 1, 12, and 13 months are involved. Inclusion of the zero-lag term only shows that the albedo effect of clouds dominates, as is known from other research. Inclusion of all four terms leads to an average quasi-annual insensitivity. Approximately 96% of the ERB variance at the TOA can be explained by the eccentricity factor and 1% by cloudiness variations, provided that the data used are without error. Although the latter assumption is not fully correct, the results presented allow one to estimate the contribution of current cloudiness changes to the net radiation variability. Two independent cloud amount datasets have very similar temporal variability and also approximately equal impact on the net radiation at the TOA.

  18. 78 FR 17648 - Energy Conservation Program for Consumer Products: Representative Average Unit Costs of Energy

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-22

    ... measurement of the estimated annual operating costs or other measures of energy consumption for certain... that the estimated annual operating costs of a covered product be calculated from measurements of...: Representative Average Unit Costs of Energy AGENCY: Office of Energy Efficiency and Renewable Energy,...

  19. 77 FR 24940 - Energy Conservation Program for Consumer Products: Representative Average Unit Costs of Energy

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... measurement of the estimated annual operating costs or other measures of energy consumption for certain... that the estimated annual operating costs of a covered product be calculated from measurements of...: Representative Average Unit Costs of Energy AGENCY: Office of Energy Efficiency and Renewable Energy,...

  20. A Study on the Estimation of the Performance of Sediment Pavement and the Required Function of the Farm Road in a Paddy Area

    NASA Astrophysics Data System (ADS)

    Ogata, Hidehiko; Noda, Tomoyuki; Sakamoto, Yasufumi; Shinotsuka, Masanori; Kamada, Osamu; Nakamura, Kazuaki

    The pavement rate of farm roads, which are important for agricultural production, the distribution of agricultural products, and rural life, is low. On many farm roads, traveling performance, traveling comfort, and protection of agricultural products from damage during transportation are not secured. Maintenance, including improvement of the pavement rate of a farm road, must be carried out economically based on the service environment, the surrounding environment, and the required function according to the kind of farm road. In this research, the problems of farm roads in a paddy area were extracted from a questionnaire given to a land improvement district as an administrator, and the conditions which should be taken into consideration in the maintenance of a farm road were clarified. The main problem of a farm road is deformation of the road surface, and the main request is a long period between repairs. Moreover, the present performance of the ground property and road surface of sediment pavement on farm roads was evaluated. A positive correlation was found between the standard deviation of the modulus of elasticity of the soil and surface roughness, and a negative correlation between the modulus of elasticity of the soil in the rut and rutting depth.

  1. Early Training Estimation System

    DTIC Science & Technology

    1980-08-01

    ... are needed. First, by developing earlier and more accurate estimates of training requirements, the training planning process can begin earlier, and ... this period and these questions require training input data and (2) the early training planning process requires a solid foundation on which to ... development of initial design, task, skill, and training estimates; provision of input into training planning and acquisition documents; provision ...

  2. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdős Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdős numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.

  3. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  4. High average power linear induction accelerator development

    SciTech Connect

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.

  5. Angle-averaged Compton cross sections

    SciTech Connect

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  6. The Average-Value Correspondence Principle

    NASA Astrophysics Data System (ADS)

    Goyal, Philip

    2007-12-01

    In previous work [1], we have presented an attempt to derive the finite-dimensional abstract quantum formalism from a set of physically comprehensible assumptions. In this paper, we continue the derivation of the quantum formalism by formulating a correspondence principle, the Average-Value Correspondence Principle, that allows relations between measurement outcomes which are known to hold in a classical model of a system to be systematically taken over into the quantum model of the system, and by using this principle to derive many of the correspondence rules (such as operator rules, commutation relations, and Dirac's Poisson bracket rule) that are needed to apply the abstract quantum formalism to model particular physical systems.

  7. Average prime-pair counting formula

    NASA Astrophysics Data System (ADS)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p + 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
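
    The conjecture is easy to probe numerically. The Python sketch below counts twin-prime pairs (r = 1) by sieve and compares with 2 C_2 li_2(x), using the twin-prime constant C_2 ≈ 0.66016; the integral is done by a crude midpoint rule.

        import math

        def prime_sieve(n):
            s = bytearray([1]) * (n + 1)
            s[0:2] = b"\x00\x00"
            for p in range(2, int(n ** 0.5) + 1):
                if s[p]:
                    s[p * p::p] = bytearray(len(s[p * p::p]))
            return s

        def pi_2r(x, r=1):
            """Count prime pairs (p, p + 2r) with p <= x."""
            s = prime_sieve(x + 2 * r)
            return sum(1 for p in range(2, x + 1) if s[p] and s[p + 2 * r])

        C2 = 0.6601618158  # twin-prime constant

        def li2(x, steps=200000):
            """li_2(x): integral of dt / (ln t)^2 from 2 to x, midpoint rule."""
            h = (x - 2.0) / steps
            return h * sum(1.0 / math.log(2.0 + (i + 0.5) * h) ** 2 for i in range(steps))

        x = 10 ** 6
        print(pi_2r(x), 2 * C2 * li2(x))  # count and prediction agree to about 1%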

  8. Non-self-averaging in Ising spin glasses and hyperuniversality

    NASA Astrophysics Data System (ADS)

    Lundow, P. H.; Campbell, I. A.

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_22(T,L) for the spin glass susceptibility [and for higher moments U_nn(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_nn(β,L) = [K_d ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_nn(T,L)]_max are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_nn(T,L)]_max peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.
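
    Read as the normalized intersample variance, the parameter is simple to compute from a set of per-sample susceptibilities; the one-liner below reflects that reading of the abstract rather than the paper's exact estimator.

        import numpy as np

        def u22(chi):
            """U_22 = (<chi^2> - <chi>^2) / <chi>^2 over disorder samples."""
            chi = np.asarray(chi, dtype=float)
            return chi.var() / chi.mean() ** 2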

  9. Non-self-averaging in Ising spin glasses and hyperuniversality.

    PubMed

    Lundow, P H; Campbell, I A

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2,3,4,5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L)=[K_{d}ξ(T,L)/L]^{d} and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996)PRLTAO0031-900710.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_{d} values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_{max} are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_{max} peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.

  10. Determination of dextrose equivalent value and number average molecular weight of maltodextrin by osmometry.

    PubMed

    Rong, Y; Sillick, M; Gregson, C M

    2009-01-01

    Dextrose equivalent (DE) value is the most common parameter used to characterize the molecular weight of maltodextrins. Its theoretical value is inversely proportional to number average molecular weight (M(n)), providing a theoretical basis for correlations with physical properties important to food manufacturing, such as: hygroscopicity, the glass transition temperature, and colligative properties. The use of freezing point osmometry to measure DE and M(n) was assessed. Measurements were made on a homologous series of malto-oligomers as well as a variety of commercially available maltodextrin products with DE values ranging from 5 to 18. Results on malto-oligomer samples confirmed that freezing point osmometry provided a linear response with number average molecular weight. However, noncarbohydrate species in some commercial maltodextrin products were found to be in high enough concentration to interfere appreciably with DE measurement. Energy dispersive spectroscopy showed that sodium and chloride were the major ions present in most commercial samples. Osmolality was successfully corrected using conductivity measurements to estimate ion concentrations. The conductivity correction factor appeared to be dependent on the concentration of maltodextrin. Equations were developed to calculate corrected values of DE and M(n) based on measurements of osmolality, conductivity, and maltodextrin concentration. This study builds upon previously reported results through the identification of the major interfering ions and provides an osmolality correction factor that successfully accounts for the influence of maltodextrin concentration on the conductivity measurement. The resulting technique was found to be rapid, robust, and required no reagents.
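
    The arithmetic connecting the measurements is compact, as sketched below: osmolality gives moles of solute per kilogram of solvent, so dividing the maltodextrin mass concentration by the (ion-corrected) osmolality yields M(n), and the standard theoretical relation DE = 100 x 180.16 / M(n) then gives the dextrose equivalent. The glucose check value is illustrative; the paper's conductivity correction is not reproduced here.

        def mn_from_osmometry(solute_g_per_kg, osmolality_mol_per_kg):
            """Number-average molecular weight from mass concentration
            and corrected osmolality (colligative measurement)."""
            return solute_g_per_kg / osmolality_mol_per_kg

        def de_from_mn(mn):
            """Theoretical dextrose equivalent: DE = 100 * 180.16 / Mn."""
            return 100.0 * 180.16 / mn

        # Check: 100 g/kg glucose is 0.555 mol/kg, so Mn ~ 180 and DE ~ 100.
        print(de_from_mn(mn_from_osmometry(100.0, 0.555)))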

  11. The average size and temperature profile of quasar accretion disks

    SciTech Connect

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Motta, V.; Falco, E.

    2014-03-01

    We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrowband photometry, we have been able to remove contamination from the weakly microlensed broad emission lines, extinction, and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling (r_s ∝ λ^p, corresponding to a disk temperature profile of T ∝ r^(−1/p)) of p = 0.75 ± 0.2 and a Bayesian estimate of p = 0.8 ± 0.2, which are significantly smaller than the prediction of the thin disk theory (p = 4/3). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of r_s = 4.5^{+1.5}_{−1.2} lt-day at a rest frame wavelength of λ = 1026 Å for microlenses with a mean mass of M = 1 M_☉, in agreement with previous results, and larger than expected from thin disk theory.

  12. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical time averaging and stroboscopic interferometry are widely used for investigations of the dynamic behavior of MEMS/MOEMS. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the maximum amplitude of a vibrating object at a given load. In this paper, modified data processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value, without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  13. Effects of Coordinate Rotation and Averaging Period on Energy Closure Characteristics of Eddy Covariance Measurements over Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Li, M.; Chen, Y.

    2009-12-01

    Coordinate rotation is typically applied to align measured turbulence data along the streamwise direction before calculating turbulent fluxes. A standard averaging period (30 min) is commonly used when estimating these fluxes. Different rotation approaches and averaging periods can cause systematic bias and significant variation in flux estimates. Thus, measuring surface fluxes over non-flat terrain requires that an appropriate rotation technique and an optimal averaging period be applied. In this study, two coordinate rotation approaches (double rotation and planar-fit rotation) and no rotation, in association with averaging periods of 15-240 min, were applied to compute heat and water vapor fluxes over mountainous terrain using the eddy covariance method. Measurements were conducted in an experimental watershed, the Lien-Hua-Chih (LHC) watershed, located in central Taiwan. This watershed has considerable mesoscale circulation and mountainous terrain. The vegetation is a mixture of natural deciduous forest and shrubs; canopy height is about 17 m. A 22 m tall observation tower was built inside the canopy. The prevailing wind direction at the LHC site is NW during the daytime and SE during the nighttime, in both the dry and wet seasons. Turbulence data above the canopy were measured with an eddy covariance system comprising a 3-D sonic anemometer (Young 81000) and a krypton hygrometer (Campbell KH20). Raw data at 10 Hz were recorded simultaneously with a data logger (CR1000) and a CF card. Air temperature/humidity profiles were measured to calculate the heat/moisture storage inside the canopy layer. Air pressure data were used to correct the effect of air density fluctuations on surface fluxes. The effects of the coordinate rotation approaches with various averaging periods on the average daily energy closure fraction are presented. The criteria of the best energy closure fraction and minimum uncertainty indicate that planar-fit rotation with an averaging period
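
    Of the two rotation approaches, the planar fit is the less familiar; a minimal sketch is given below. Period-mean wind components from many averaging intervals are regressed as w = b0 + b1*u + b2*v (following Wilczak et al., 2001), and the fitted plane's unit normal defines the rotated vertical axis. Array shapes and variable names are illustrative, not taken from the study's processing code.

        import numpy as np

        def planar_fit(u_bar, v_bar, w_bar):
            """Fit the mean-streamline plane w = b0 + b1*u + b2*v over many
            averaging periods; return the offset and the unit normal that
            serves as the rotated vertical axis."""
            X = np.column_stack([np.ones_like(u_bar), u_bar, v_bar])
            (b0, b1, b2), *_ = np.linalg.lstsq(X, w_bar, rcond=None)
            normal = np.array([-b1, -b2, 1.0])       # normal of the fitted plane
            return b0, normal / np.linalg.norm(normal)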

  14. Least Squares Orbit Determination Using Partials of Mean Elements from Generalized Method of Averaging

    NASA Astrophysics Data System (ADS)

    Setty, Srinivas; Cefola, Paul

    Orbital debris is a well-known challenge of the space age. Maintaining a precise catalogue of space objects' ephemerides is required to monitor and actively conduct collision avoidance maneuvers of functioning satellites. Maintaining a catalogue of hundreds of thousands of objects is computationally cumbersome. For this purpose, accurate and fast propagators, along with similarly fast and accurate orbit determination methods to update the catalogue with new tracking data, are required. After investigating a semi-analytical satellite theory for cataloguing, we now present an orbit determination system using partial derivatives of the mean element set used in semi-analytical methods. In this study, the mean elements of semi-analytical satellite theory are combined with well-established estimation procedures for orbit determination. The selected mean elements are in the equinoctial coordinate system and are averaged for a specific theory, the Draper Semi-analytical Satellite Theory (DSST). Forming a state transition matrix for least squares orbit determination from DSST's mean elements involves the following partial derivatives: (1) the partial derivatives of the equinoctial short-periodic variations with respect to the mean equinoctial elements at the same time (within propagation); (2) the partial derivatives of the equinoctial mean elements at an arbitrary time with respect to the epoch-time equinoctial mean elements; (3) the partial derivatives of the equinoctial mean elements at an arbitrary time with respect to the dynamical parameters (atmospheric drag coefficient and solar radiation pressure coefficient); and (4) the partial derivatives of the equinoctial short-periodic variations with respect to the dynamical parameters. The semi-analytical partial derivatives are composed of averaged partial derivatives and short-periodic partial derivatives. Averaged partial derivatives are updated in time using analytical expressions, which includes certain

  15. Adaptive spectral doppler estimation.

    PubMed

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-04-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set of matched filters (one for each velocity component of interest) and filtering the blood process over slow-time and averaging over depth to find the PSD. The methods are tested using various experiments and simulations. First, controlled flow-rig experiments with steady laminar flow are carried out. Simulations in Field II for pulsating flow resembling the femoral artery are also analyzed. The simulations are followed by in vivo measurement on the common carotid artery. In all simulations and experiments it was concluded that the adaptive methods display superior performance for short observation windows compared with the averaged periodogram. Computational costs and implementation details are also discussed.
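
    The baseline against which the adaptive estimators are judged, the averaged periodogram, is a one-call computation; the sketch below applies it to a synthetic slow-time ensemble. The pulse repetition frequency, Doppler tone, and segment lengths are invented for illustration.

        import numpy as np
        from scipy.signal import welch

        prf, n = 5000.0, 32                          # hypothetical PRF and ensemble size
        t = np.arange(n) / prf
        slow_time = np.exp(2j * np.pi * 800.0 * t)   # 800 Hz Doppler tone

        # Welch's method: short overlapping segments, averaged periodograms.
        f, psd = welch(slow_time, fs=prf, nperseg=16, noverlap=8,
                       return_onesided=False)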

  16. An automated methodology for performing time synchronous averaging of a gearbox signal without speed sensor

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2007-08-01

    In this paper, we extend a sensorless algorithm proposed by Bonnardot et al. for angular resampling of the acceleration signal of a gearbox subject to limited speed fluctuation. The previous algorithm estimates the shaft angular position by narrow-band demodulation of one harmonic of the mesh frequency, with the harmonic chosen by trial and error. This paper proposes a solution for automatically selecting the mesh harmonic used for the shaft angular position estimation. To do so, it evaluates the local signal-to-noise ratio associated with the mesh harmonic and deduces the associated low-pass filtering effect on the time synchronous average (TSA) of the signal. Results are compared with the TSA obtained when using a tachometer on an industrial gearbox used for wastewater treatment. The proposed methodology requires only the knowledge of an approximate value of the running speed and the number of teeth of the gears. It forms an automated scheme which can prove useful for real-time diagnostic applications based on TSA where speed measurement is not possible or not advisable due to difficult environmental conditions.
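
    Once an angular reference is available, from a tachometer or from the demodulated mesh harmonic as in this paper, the TSA itself is straightforward: resample each revolution onto a common angle grid and average, as sketched below. The revolution start times and samples-per-revolution count are illustrative.

        import numpy as np

        def tsa(signal, fs, rev_times, samples_per_rev=256):
            """Time synchronous average: interpolate each revolution onto a
            common angle grid, then average across revolutions."""
            t = np.arange(len(signal)) / fs
            revs = []
            for t0, t1 in zip(rev_times[:-1], rev_times[1:]):
                grid = np.linspace(t0, t1, samples_per_rev, endpoint=False)
                revs.append(np.interp(grid, t, signal))  # angular resampling
            return np.mean(revs, axis=0)  # non-synchronous components average out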

  17. An Investigation of Online Homework: Required or Not Required?

    ERIC Educational Resources Information Center

    Wooten, Tommy; Dillard-Eggers, Jane

    2013-01-01

    In our research we investigate the use of online homework in principles of accounting classes where some classes required online homework while other classes did not. Users of online homework, compared to nonusers, had a higher grade point average and earned a higher grade in class. On average, both required and not-required users rated the online…

  18. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    NASA Astrophysics Data System (ADS)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ~25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  19. Assessment of chromium content of feedstuffs, their estimated requirement, and effects of dietary chromium supplementation on nutrient utilization, growth performance, and mineral balance in summer-exposed buffalo calves (Bubalus bubalis).

    PubMed

    Kumar, Muneendra; Kaur, Harjit; Tyagi, Amrish; Mani, Veena; Deka, Rijusmita Sarma; Chandra, Gulab; Sharma, Vijay Kumar

    2013-10-01

    This study was conducted to determine the chromium content of different feedstuffs, the estimated Cr requirement, and the effect of dietary Cr supplementation on nutrient intake, nutrient utilization, growth performance, and mineral balance in buffalo calves during the summer season. Levels of Cr were higher in cultivated fodder and moderate in cakes and cereal grains, while straw, grasses, and non-conventional feeds were poor in Cr content. To test the effect of Cr supplementation in buffalo calves, 0, 0.5, 1.0, and 1.5 ppm of inorganic Cr were fed to 24 buffalo calves. The calves were randomly assigned to four treatments (n = 6) and raised for 120 days. A metabolic trial of 7 days was conducted after 3 months of dietary treatment. Blood samples were collected at fortnightly intervals for plasma mineral estimation. The results suggested that dietary Cr supplementation in summer did not have any effect (P > 0.05) on feed consumption, growth performance, nitrogen balance, or physiological variables. However, dietary Cr supplementation had a significant effect (P < 0.05) on Cr balance and plasma Cr (ppb) levels without affecting (P > 0.05) the balance and plasma levels of other trace minerals. The estimated Cr requirement of buffalo calves during the summer season was calculated to be 0.044 mg/kg body mass and 10.37 ppm per day. In conclusion, dietary Cr supplementation had no discernible effect on feed consumption, mass gain, or nutrient utilization in buffalo calves reared under heat stress conditions. However, supplementation of Cr had a positive effect on its balance and plasma concentration, without interacting with other trace minerals.

  20. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-08-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratory, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off between thermal management, maximizing the overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.