NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They represent the different stages of crop growth by empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of structural versus parametric model uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural uncertainty among the reference evapotranspiration models is far more important than parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
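The REA idea can be sketched numerically: weights are assigned iteratively so that models far from the ensemble consensus are down-weighted. A minimal Python sketch, using illustrative irrigation water requirement values (not from the study) and a simplified reliability criterion based only on model convergence:

```python
import numpy as np

# Hypothetical irrigation water requirements (mm) from six reference-ET
# models; the values are illustrative, not taken from the study.
iwr = np.array([380.0, 410.0, 395.0, 520.0, 400.0, 405.0])

equal_avg = iwr.mean()  # equally weighted ensemble average

# Simplified REA (after Giorgi & Mearns): iterate weights inversely
# proportional to each model's distance from the weighted consensus,
# so outliers contribute less to the final average.
eps = 1.0  # mm; floor on the distance so weights stay finite (assumed)
weights = np.full(len(iwr), 1.0 / len(iwr))
for _ in range(100):
    consensus = np.sum(weights * iwr)
    weights = 1.0 / np.maximum(np.abs(iwr - consensus), eps)
    weights /= weights.sum()
rea_avg = np.sum(weights * iwr)
```

The down-weighting of the outlying model pulls the REA average toward the consensus of the remaining models, which is the mechanism behind the lower exceedance frequency reported in the abstract.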
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Estimating Health Services Requirements
NASA Technical Reports Server (NTRS)
Alexander, H. M.
1985-01-01
In computer program NOROCA, population statistics from the National Center for Health Statistics are used with a computational procedure to estimate health service utilization rates, physician demands (by specialty), and hospital bed demands (by type of service). The computational procedure is applicable to a health service area of any size and can even be used to estimate statewide demands for health services.
Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2007-12-01
This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which accepts only models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA prefer highly uncertain parameterization methods because of the Fisher information matrix term. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
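The core of BMA with information criteria can be sketched briefly. Assuming BIC values and per-method predictions (all numbers illustrative, not from the aquifer study), the posterior model weights and the within- plus between-method variance decomposition are:

```python
import numpy as np

# Hypothetical BIC values for three parameterization methods (assumed).
bic = np.array([120.3, 121.1, 128.9])

# BMA posterior model weights from BIC differences:
#   w_k ∝ exp(-0.5 * (BIC_k - BIC_min))
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

# BMA mean and total variance (within- plus between-method) for a
# hypothetical log-conductivity prediction from each method.
mu = np.array([-4.1, -4.3, -3.8])    # per-method means (illustrative)
var = np.array([0.20, 0.25, 0.15])   # per-method variances (illustrative)
bma_mean = np.sum(w * mu)
bma_var = np.sum(w * var) + np.sum(w * (mu - bma_mean) ** 2)
```

The between-method term is what a single parameterization method omits, which is why a single method always understates the total estimation uncertainty.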
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
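As a sketch of the moment-estimation baseline the abstract compares against, the following simulates the simplest PARMA case, a periodic AR(1), and recovers the seasonal coefficients from the seasonal Yule-Walker equations (parameter values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
S = 4                                 # number of seasons
phi = np.array([0.8, 0.3, 0.5, 0.6])  # periodic AR coefficients (assumed)
x = np.zeros(S * 5000)                # 5000 "years" of seasonal data

# Simulate a periodic AR(1): X_t = phi[t mod S] * X_{t-1} + e_t
for t in range(1, len(x)):
    x[t] = phi[t % S] * x[t - 1] + rng.standard_normal()

# Moment estimation via the seasonal Yule-Walker equations: for each
# season s, regress X_t on X_{t-1} using only the times t in season s.
phi_hat = np.zeros(S)
for s in range(S):
    idx = np.arange(s, len(x), S)
    idx = idx[idx > 0]
    phi_hat[s] = (x[idx] * x[idx - 1]).sum() / (x[idx - 1] ** 2).sum()
```

With long records the seasonal moment estimates recover the periodic coefficients closely; the paper's point is that a likelihood-based fit does better for short seasonal series.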
Urban noise functional stratification for estimating average annual sound level.
Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos
2015-06-01
Road traffic noise causes many health problems and deteriorates the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability, with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(And), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time. PMID:26093410
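The overall indicators mentioned above combine period levels with fixed penalties. A minimal sketch of the standard day-evening-night level L_den (the EU definition; the exact indicator variants used in the paper may differ):

```python
import math

def lden(ld, le, ln):
    """Day-evening-night level L_den: energy average over 12 day hours,
    4 evening hours (+5 dB penalty) and 8 night hours (+10 dB penalty)."""
    return 10 * math.log10(
        (12 * 10 ** (ld / 10)
         + 4 * 10 ** ((le + 5) / 10)
         + 8 * 10 ** ((ln + 10) / 10)) / 24)
```

Because of the evening and night penalties, L_den exceeds a flat 24-hour energy average whenever evening and night levels are comparable to the daytime level.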
Lopez, Michael J; Gutman, Roee
2014-11-28
Propensity score methods are common for estimating a binary treatment effect when treatment assignment is not randomized. When exposure is measured on an ordinal scale (i.e. low-medium-high), however, propensity score inference requires extensions which have received limited attention. Estimands of possible interest with an ordinal exposure are the average treatment effects between each pair of exposure levels. Using these estimands, it is possible to determine an optimal exposure level. Traditional methods, including dichotomization of the exposure or a series of binary propensity score comparisons across exposure pairs, are generally inadequate for identification of optimal levels. We combine subclassification with regression adjustment to estimate transitive, unbiased average causal effects across an ordered exposure, and apply our method on the 2005-2006 National Health and Nutrition Examination Survey to estimate the effects of nutritional label use on body mass index.
Line broadening estimate from averaged energy differences of coupled states
NASA Astrophysics Data System (ADS)
Lavrentieva, Nina N.; Dudaryonok, Anna S.; Ma, Qiancheng
2014-11-01
A method for calculating the rotation-vibrational line half-widths of asymmetric-top molecules is proposed. The method emphasizes the influence of the buffer gas on the internal states of the absorbing molecule. The basic expressions of the present approach are given. The averaged-energy-differences method was used to calculate H2O and HDO line broadening. Comparisons of the calculated line-shape parameters with experimental values in different absorption bands are made.
Estimation of the average visibility in central Europe
NASA Astrophysics Data System (ADS)
Horvath, Helmuth
Visibility has been obtained from spectral extinction coefficients measured with the University of Vienna Telephotometer or from size distributions determined with an Aerosol Spectrometer. By measuring the extinction coefficient in different directions, possible influences of local sources could be identified easily; a region undisturbed by local sources usually showed a variation of the extinction coefficient of less than 10% with direction. Generally good visibility outside population centers in Europe is 40-50 km. These values are found to be independent of location in central Europe, so they represent the average European "clean" air. On rare occasions (normally after a rapid change of air mass) the visibility can reach 100-150 km. In towns, the visibility is lower by a factor of approximately 2. By comparison, the visibility in remote regions of North and South America is larger by a factor of 2-4. The lower visibility in Europe is evidently caused by its higher population density. Since the majority of visibility-reducing particulate emissions come from small sources such as cars or heating, the emissions per unit area can be considered proportional to the population density. Using a simple box model and the visibility measured in central Europe and in Vienna, the difference in visibility inside and outside the town can be explained quantitatively. This confirms that the generally low visibility in central Europe is a consequence of emissions connected with human activities, and that the low visibility (compared, e.g., to North or South America) in remote locations such as the Alps is caused by the average European pollution.
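The link between extinction and visibility that underlies these numbers is Koschmieder's relation; combined with the box-model argument in the abstract (extinction proportional to emissions per unit area, hence to population density), it reproduces the factor-of-two difference between town and countryside. A brief sketch with illustrative numbers:

```python
import math

# Koschmieder's relation links meteorological visibility V to the
# extinction coefficient b_ext through a 2% contrast threshold:
#   V = ln(1/0.02) / b_ext ≈ 3.912 / b_ext
def visibility_km(b_ext_per_km, contrast_threshold=0.02):
    return math.log(1.0 / contrast_threshold) / b_ext_per_km

# Box-model argument from the text: if particulate emissions per unit
# area scale with population density, so does b_ext, and visibility
# scales inversely; doubling the aerosol load halves the visibility.
b_clean = 3.912 / 45.0          # extinction implied by 45 km visibility
v_town = visibility_km(2 * b_clean)   # expected in-town visibility
```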
Analysis of the estimators of the average coefficient of dominance of deleterious mutations.
Fernández, B; García-Dorado, A; Caballero, A
2004-10-01
We investigate the sources of bias that affect the most commonly used methods of estimation of the average degree of dominance (h) of deleterious mutations, focusing on estimates from segregating populations. The main emphasis is on the effect of the finite size of the populations, but other sources of bias are also considered. Using diffusion approximations to the distribution of gene frequencies in finite populations as well as stochastic simulations, we assess the behavior of the estimators obtained from populations at mutation-selection-drift balance under different mutational scenarios and compare averages of h for newly arisen and segregating mutations. Because of genetic drift, the inferences concerning newly arisen mutations based on the mutation-selection balance theory can have substantial upward bias depending upon the distribution of h. In addition, estimates usually refer to h weighted by the homozygous deleterious effect in different ways, so that inferences are complicated when these two variables are negatively correlated. Due to both sources of bias, the widely used regression of heterozygous on homozygous means underestimates the arithmetic mean of h for segregating mutations, in contrast to their repeatedly assumed equality in the literature. We conclude that none of the estimators from segregating populations provides, under general conditions, a useful tool to ascertain the properties of the degree of dominance, either for segregating or for newly arisen deleterious mutations. Direct estimates of the average h from mutation-accumulation experiments are shown to suffer some bias caused by purging selection but, because they do not require assumptions on the causes maintaining segregating variation, they appear to give a more reliable average dominance for newly arisen mutations.
A new estimate of average dipole field strength for the last five million years
NASA Astrophysics Data System (ADS)
Cromwell, G.; Tauxe, L.; Halldorsson, S. A.
2013-12-01
The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD), for which the average field intensity at the poles is twice that at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and to some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier GUI paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters. Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and
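A VADM converts a site paleointensity to an equivalent axial dipole moment through the standard dipole field formula. A minimal sketch (the formula is standard; the field values below are illustrative, not the study's data):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T m/A
R_EARTH = 6.371e6          # Earth radius, m

def vadm(b_tesla, lat_deg):
    """Virtual axial dipole moment (A m^2) from a paleointensity B and
    site latitude, using B = (mu0 m / 4 pi r^3) sqrt(1 + 3 sin^2(lat))."""
    lam = math.radians(lat_deg)
    return (4 * math.pi * R_EARTH ** 3 * b_tesla
            / (MU0 * math.sqrt(1 + 3 * math.sin(lam) ** 2)))
```

An equatorial intensity of about 31 uT gives a moment near 80 ZAm2 (8e22 A m^2), consistent with the present-day value quoted in the abstract; the polar field of a GAD must be doubled to yield the same moment.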
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, formed using approximate signal and measurement-error statistics, are shown to be more accurate than composite averages. They are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
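The contrast between the composite and optimal estimates can be sketched with Gauss-Markov theory: the optimal weights solve a linear system built from the signal and error covariances, and their mean squared error is never worse than that of equal weights. A minimal sketch with an assumed exponential signal covariance (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Irregular observation times within a 30-day averaging window (assumed).
t = np.sort(rng.uniform(0, 30, size=12))
tau = 5.0                   # signal decorrelation time scale, days
sig2, noise2 = 1.0, 0.25    # signal and measurement-error variances

cov = lambda dt: sig2 * np.exp(-np.abs(dt) / tau)

# Data-data covariance: signal plus uncorrelated measurement error.
Cdd = cov(t[:, None] - t[None, :]) + noise2 * np.eye(len(t))

# Data-average covariance: covariance of each sample with the true
# 30-day mean, approximated by averaging the covariance over the window.
grid = np.linspace(0, 30, 601)
cda = np.array([cov(ti - grid).mean() for ti in t])

w_opt = np.linalg.solve(Cdd, cda)        # optimal (Gauss-Markov) weights
w_comp = np.full(len(t), 1.0 / len(t))   # composite (simple) average

# Expected mean-squared error of any linear estimate w^T d.
var_avg = cov(grid[:, None] - grid[None, :]).mean()
mse = lambda w: var_avg - 2 * w @ cda + w @ Cdd @ w
```

Perturbing `tau`, `sig2`, or `noise2` away from their true values yields the "suboptimal" estimates of the abstract, whose MSE lies between the optimal and composite values.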
Experimental estimation of average fidelity of a Clifford gate on a 7-qubit quantum processor.
Lu, Dawei; Li, Hang; Trottier, Denis-Alexandre; Li, Jun; Brodutch, Aharon; Krismanich, Anthony P; Ghavami, Ahmad; Dmitrienko, Gary I; Long, Guilu; Baugh, Jonathan; Laflamme, Raymond
2015-04-10
One of the major experimental achievements in the past decades is the ability to control quantum systems to high levels of precision. To quantify the level of control we need to characterize the dynamical evolution. Full characterization via quantum process tomography is impractical and often unnecessary. For most practical purposes, it is enough to estimate more general quantities such as the average fidelity. Here we use a unitary 2-design and twirling protocol for efficiently estimating the average fidelity of Clifford gates, to certify a 7-qubit entangling gate in a nuclear magnetic resonance quantum processor. Compared with the more than 10^8 experiments required by full process tomography, we conducted 1656 experiments to satisfy a statistical confidence level of 99%. The average fidelity of this Clifford gate in experiment is 55.1%, and rises to at least 87.5% if the signal's decay due to decoherence is taken into account. The entire protocol of certifying Clifford gates is efficient and scalable, and can easily be extended to any general quantum information processor with minor modifications.
Estimates of zonally averaged tropical diabatic heating in AMIP GCM simulations. PCMDI report No. 25
Boyle, J.S.
1995-07-01
An understanding of the processes that generate the atmospheric diabatic heating rates is basic to an understanding of the time-averaged general circulation of the atmosphere as well as of circulation anomalies. Knowledge of the sources and sinks of atmospheric heating enables a fuller understanding of the nature of the atmospheric circulation. An actual assessment of the diabatic heating rates in the atmosphere is a difficult problem that has been approached in a number of ways. One way is to estimate the total diabatic heating by estimating the individual components associated with the radiative fluxes, the latent heat release, and the sensible heat fluxes; an example of this approach is provided by Newell. Another approach is to estimate the net heating rates from the balance required of the mass and wind variables as routinely observed and analyzed. This budget computation has been done using the thermodynamic equation, and more recently using the vorticity and thermodynamic equations together. Schaak and Johnson compute the heating rates through the integration of the isentropic mass continuity equation. The heating estimates arrived at by all these methods are severely handicapped by uncertainties in the observational data and analyses. In addition, the estimates of the individual heating components suffer an additional source of error from the parameterizations used to approximate these quantities.
Estimating monthly averaged air-sea transfers of heat and momentum using the bulk aerodynamic method
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Reynolds, R. W.
1981-01-01
Air-sea transfers of sensible heat, latent heat and momentum are computed from 25 years of middle-latitude and subtropical ocean weather ship data in the North Atlantic and North Pacific using the bulk aerodynamic method. The results show that monthly averaged wind speeds, temperatures and humidities can be used to estimate the monthly averaged sensible and latent heat fluxes from the bulk aerodynamic equations to within a relative error of approximately 10%. The estimates of monthly averaged wind stress under the assumption of neutral stability are shown to be within approximately 5% of the monthly averaged nonneutral values.
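The bulk aerodynamic equations referred to above have a simple form. A sketch with typical neutral-stability transfer coefficients (coefficient and input values assumed, not taken from the paper):

```python
RHO_AIR = 1.2      # air density, kg m^-3
CP = 1004.0        # specific heat of air, J kg^-1 K^-1
LV = 2.5e6         # latent heat of vaporization, J kg^-1
CH = CE = 1.3e-3   # bulk transfer coefficients (typical neutral values)
CD = 1.3e-3        # drag coefficient (typical neutral value)

def bulk_fluxes(u, t_sea, t_air, q_sea, q_air):
    """Bulk aerodynamic estimates of sensible heat H (W m^-2), latent
    heat LE (W m^-2) and wind stress tau (N m^-2) from wind speed u
    (m/s), temperatures (degC) and specific humidities (kg/kg)."""
    h = RHO_AIR * CP * CH * u * (t_sea - t_air)
    le = RHO_AIR * LV * CE * u * (q_sea - q_air)
    tau = RHO_AIR * CD * u ** 2
    return h, le, tau
```

The paper's finding is that feeding monthly averaged u, T and q into these equations (rather than averaging instantaneous fluxes) keeps the heat fluxes within about 10% and the neutral stress within about 5%.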
Double robust estimator of average causal treatment effect for censored medical cost data.
Wang, Xuan; Beste, Lauren A; Maier, Marissa M; Zhou, Xiao-Hua
2016-08-15
In observational studies, estimation of average causal treatment effect on a patient's response should adjust for confounders that are associated with both treatment exposure and response. In addition, the response, such as medical cost, may have incomplete follow-up. In this article, a double robust estimator is proposed for average causal treatment effect for right censored medical cost data. The estimator is double robust in the sense that it remains consistent when either the model for the treatment assignment or the regression model for the response is correctly specified. Double robust estimators increase the likelihood the results will represent a valid inference. Asymptotic normality is obtained for the proposed estimator, and an estimator for the asymptotic variance is also derived. Simulation studies show good finite sample performance of the proposed estimator and a real data analysis using the proposed method is provided as illustration. Copyright © 2016 John Wiley & Sons, Ltd.
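The double robustness property comes from the augmented inverse-probability-weighted (AIPW) form of the estimator. A minimal uncensored sketch on simulated data (the paper's contribution additionally handles right-censored costs, which this sketch omits; all models and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))        # true propensity score P(Z=1|X)
z = rng.binomial(1, e)
tau = 2.0                            # true average treatment effect
y = 1.0 + 0.5 * x + tau * z + rng.normal(size=n)

# Outcome regressions m1(X), m0(X); here the true conditional means.
m1 = 1.0 + 0.5 * x + tau
m0 = 1.0 + 0.5 * x

# Augmented inverse-probability-weighted (doubly robust) estimator:
# consistent if either the propensity model or the outcome model holds.
aipw = (np.mean(z * (y - m1) / e + m1)
        - np.mean((1 - z) * (y - m0) / (1 - e) + m0))
```

Replacing either `e` or `m1`/`m0` with a misspecified model leaves the estimate consistent as long as the other is correct, which is the sense of "double robust" in the abstract.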
A comparison of spatial averaging and Cadzow's method for array wavenumber estimation
Harris, D.B.; Clark, G.A.
1989-10-31
We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.
Elkinton, J S; Cardé, R T; Mason, C J
1984-07-01
The Sutton and more recent Gaussian plume models of atmospheric dispersion were used to estimate downwind concentrations of pheromone in a deciduous forest. Wind measurements from two bivane anemometers were recorded every 12 sec and the pheromone was emitted from a point source 1.6 m above ground level at known rates. The wingfanning response of individually caged male gypsy moths (Lymantria dispar) at 15 sites situated 20 to 80 m downwind was used to monitor when pheromone levels were above threshold over a 15-min interval. Predicted concentrations from these Gaussian-type models at locations where wing fanning occurred were often several orders of magnitude below the known behavioral thresholds determined from wind tunnel tests. Probit analyses of dose-response relationships with these models showed no relationship between predicted dose and actual response. The disparity between the predictions of concentration from these models and the actual response patterns of the male gypsy moth in the field was not unexpected. These time-average models predict concentrations for a fixed position over 3-min or longer intervals, based upon the dispersion coefficients. Thus the models estimate pheromone concentrations for time intervals appreciably longer than required for behavioral response.
Estimation of the exertion requirements of coal mining work
Harber, P.; Tamimie, J.; Emory, J.
1984-02-01
The work requirements of coal mining were estimated by studying a group of 12 underground coal miners. A two-level (rest, 300 kg·m/min) test was performed to estimate the linear relationship between each subject's heart rate and oxygen consumption. Then, heart rates were recorded during coal mining work with a Holter-type recorder. From these data, the distributions of oxygen consumption during work were estimated, allowing characterization of the range of exertion throughout the work day. The average median estimated oxygen consumption was 3.3 METS, the average 70th percentile was 4.3 METS, and the average 90th percentile was 6.3 METS. These results should be considered when assessing an individual's occupational fitness.
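The two-level calibration and percentile summary can be sketched directly; the heart-rate and METS values below are illustrative, not the study's data:

```python
import numpy as np

# Two-level calibration (rest, 300 kg·m/min): heart rate (bpm) paired
# with oxygen consumption (METS). Values are illustrative.
hr_cal = np.array([70.0, 100.0])
mets_cal = np.array([1.0, 4.0])
slope = (mets_cal[1] - mets_cal[0]) / (hr_cal[1] - hr_cal[0])
intercept = mets_cal[0] - slope * hr_cal[0]

# Heart rates recorded over a shift (a stand-in for the Holter trace).
rng = np.random.default_rng(2)
hr_work = rng.normal(95.0, 10.0, size=500)
mets_work = slope * hr_work + intercept   # map HR to estimated METS

median, p70, p90 = np.percentile(mets_work, [50, 70, 90])
```

Averaging these per-subject percentiles across miners yields summary figures of the kind the abstract reports (average median, 70th, and 90th percentiles).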
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some form of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
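For the scalar-weighted case, the optimal average quaternion is known to be the eigenvector associated with the largest eigenvalue of the weighted outer-product matrix of the quaternions. A minimal sketch of that construction (test quaternions are illustrative):

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted average of unit quaternions (rows of `quats`) as the
    eigenvector belonging to the largest eigenvalue of
    M = sum_i w_i q_i q_i^T (the scalar-weighted case)."""
    q = np.asarray(quats)
    m = (weights[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(m)   # eigenvalues in ascending order
    return vecs[:, -1]               # eigenvector of the largest eigenvalue

# Small-angle check: quaternions symmetric about the identity should
# average to (plus or minus) the identity quaternion.
eps = 0.05
quats = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [np.cos(eps), np.sin(eps), 0.0, 0.0],
    [np.cos(eps), -np.sin(eps), 0.0, 0.0],
])
q_avg = average_quaternion(quats, np.array([1.0, 1.0, 1.0]))
```

The eigenvector formulation sidesteps the sign ambiguity of quaternions (q and -q represent the same attitude), which defeats naive componentwise averaging.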
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
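The base-flow-to-recharge conversion described above is a unit conversion once mean annual base flow and drainage area are known. A sketch using the standard cfs-per-square-mile-to-inches factor (the factor follows from seconds per year and square feet per square mile; example inputs are illustrative):

```python
# Base-flow-based recharge: recharge (in/yr) ≈ annual base-flow volume
# divided by drainage area. Inputs: flow in ft^3/s, area in mi^2.
def recharge_inches_per_year(mean_annual_baseflow_cfs, drainage_area_mi2):
    # 1 cfs/mi^2 sustained for a year spreads ~13.57 inches of water
    # over the basin (3.156e7 s/yr * 1 ft^3/s / 2.788e7 ft^2 per mi^2,
    # converted to inches).
    return 13.57 * mean_annual_baseflow_cfs / drainage_area_mi2
```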
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
NASA Astrophysics Data System (ADS)
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging to assess groundwater conceptual model uncertainty, the averaging weights are typically calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), which cannot be justified by the available data and knowledge. It is found in this study that this unrealistic situation is due partly, if not solely, to ignoring residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, data independence is assumed and the correlation is ignored. Treating correlated residuals as independent distorts the distance between the observations and the simulations of alternative models, and may therefore lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
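The effect described above can be sketched with two competing models and an AR(1) residual covariance: ignoring the correlation exaggerates the likelihood distance between the models and drives the averaging weights toward 0/1. All numbers are illustrative, and equal parameter counts are assumed so that the weights reduce to likelihood ratios:

```python
import numpy as np

n, rho = 100, 0.8
lag = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
cov_ar1 = rho ** lag   # AR(1) residual covariance (unit variance)
cov_ind = np.eye(n)    # diagonal counterpart: independence assumed

# Illustrative residual series: model A fits exactly, model B carries a
# small constant bias of 0.3.
res = {"A": np.zeros(n), "B": np.full(n, 0.3)}

def gauss_nll(e, cov):
    """Gaussian negative log-likelihood of a residual vector."""
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (len(e) * np.log(2 * np.pi) + logdet
                  + e @ np.linalg.solve(cov, e))

def model_weights(cov):
    nll = np.array([gauss_nll(res[m], cov) for m in ("A", "B")])
    w = np.exp(-(nll - nll.min()))
    return w / w.sum()

w_ind = model_weights(cov_ind)   # correlation ignored
w_ar1 = model_weights(cov_ar1)   # correlation modeled
```

With independence assumed, model A receives nearly all the weight; modeling the AR(1) correlation discounts the effective information in the 100 correlated residuals and yields far less extreme weights.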
Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory
ERIC Educational Resources Information Center
Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena
2013-01-01
This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…
How ants use quorum sensing to estimate the average quality of a fluctuating resource
Franks, Nigel R.; Stuttard, Jonathan P.; Doran, Carolina; Esposito, Julian C.; Master, Maximillian C.; Sendova-Franks, Ana B.; Masuda, Naoki; Britton, Nicholas F.
2015-01-01
We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures. PMID:26153535
How ants use quorum sensing to estimate the average quality of a fluctuating resource.
Franks, Nigel R; Stuttard, Jonathan P; Doran, Carolina; Esposito, Julian C; Master, Maximillian C; Sendova-Franks, Ana B; Masuda, Naoki; Britton, Nicholas F
2015-07-08
We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures.
Li, Run-Kui; Zhao, Tong; Li, Zhi-Peng; Ding, Wen-Jun; Cui, Xiao-Yong; Xu, Qun; Song, Xian-Feng
2014-04-01
On-road vehicle emissions have become the main source of urban air pollution and have attracted broad attention. The vehicle emission factor is a basic parameter reflecting the status of vehicle emissions, but measured emission factors are difficult to obtain, and simulated emission factors are not localized for China. Based on the synchronized increments of traffic flow and of air pollutant concentration during the morning rush hour, while meteorological conditions and the background air pollution concentration remain relatively stable, a relationship between the increase in traffic and the increase in air pollution concentration close to a road is established. An infinite line source Gaussian dispersion model was transformed for the inversion of average vehicle emission factors. A case study was conducted on a main road in Beijing. Traffic flow, meteorological data and carbon monoxide (CO) concentrations were collected to estimate average vehicle emission factors of CO. The results were compared with emission factors simulated by the COPERT4 model. Results showed that the average emission factors estimated by the proposed approach and by COPERT4 in August were 2.0 g·km⁻¹ and 1.2 g·km⁻¹, respectively, and in December were 5.5 g·km⁻¹ and 5.2 g·km⁻¹, respectively. The emission factors from the proposed approach and COPERT4 showed close values and similar seasonal trends. The proposed method for average emission factor estimation eliminates the disturbance of background concentrations and potentially provides real-time access to vehicle fleet emission factors.
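The kind of inversion described can be sketched as follows, assuming a ground-level infinite line source; the dispersion parameterization and all input numbers are invented for illustration, not the study's calibrated values.

```python
import math

def emission_factor(delta_c, delta_q, u, sigma_z):
    """Invert a ground-level infinite-line-source Gaussian plume for an
    average emission factor.

    delta_c : rush-hour increase in roadside CO concentration (g/m^3)
    delta_q : corresponding increase in traffic flow (vehicles/s)
    u       : wind speed component normal to the road (m/s)
    sigma_z : vertical dispersion parameter at the receptor (m)
    returns : average emission factor (g/km per vehicle)
    """
    # Crosswind-integrated line source: C = sqrt(2/pi) * Q_L / (u * sigma_z)
    q_line = delta_c * u * sigma_z / math.sqrt(2.0 / math.pi)  # g m^-1 s^-1
    return q_line / delta_q * 1000.0                           # g/km per vehicle

# Hypothetical roadside measurements
ef = emission_factor(delta_c=2.3e-4, delta_q=0.5, u=1.5, sigma_z=4.0)
```

With these invented inputs the estimate lands in the few-g·km⁻¹ range reported above, which is the point of the sketch: the increment-based formulation cancels the background concentration out of the inversion.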
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) methods to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than any individual model. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
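The BIC-weighted averaging and the within/between-model variance split can be sketched in a few lines; the BIC values and per-model estimates below are invented stand-ins, not results from the Tasuj plain study.

```python
import numpy as np

bic = np.array([120.0, 121.5, 135.0])   # e.g. TS-FL, ANN, NF (invented values)
mean_k = np.array([2.0, 3.0, 2.4])      # each model's conductivity estimate
var_k = np.array([0.20, 0.30, 0.25])    # within-model variances

# Posterior model weights: w_k proportional to exp(-delta_BIC_k / 2)
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

k_bma = w @ mean_k                       # ensemble-averaged estimate
within = w @ var_k                       # uncertainty propagated from inputs
between = w @ (mean_k - k_bma) ** 2      # uncertainty from model non-uniqueness
total_var = within + between
# The high-BIC model (the stand-in for NF here) gets near-zero weight,
# mirroring how the parsimony principle nearly discards it.
```

Note that the between-model term is exactly the contribution that vanishes when only one model is used, which is why a single-model analysis understates the total variance.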
NASA Astrophysics Data System (ADS)
Tsai, Frank T.-C.; Li, Xiaobao
2008-09-01
This study proposes a Bayesian model averaging (BMA) method to address parameter estimation uncertainty arising from nonuniqueness in parameterization methods. BMA is able to incorporate multiple parameterization methods for prediction through the law of total probability and to obtain an ensemble average of hydraulic conductivity estimates. Two major issues in applying BMA to hydraulic conductivity estimation are discussed. The first problem is using Occam's window in usual BMA applications to measure approximated posterior model probabilities. Occam's window only accepts models in a very narrow range, tending to single out the best method and discard other good methods. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the Kashyap information criterion (KIC) in the approximated posterior model probabilities, which tends to prefer highly uncertain parameterization methods by considering the Fisher information matrix. With sufficient amounts of observation data, the Bayesian information criterion (BIC) is a good approximation and is able to avoid controversial results from using KIC. This study adopts multiple generalized parameterization (GP) methods as the candidate models in BMA to estimate spatially correlated hydraulic conductivity. Numerical examples illustrate the issues of using KIC and Occam's window and show the advantages of using BIC and the variance window in BMA application. Finally, we apply BMA to the hydraulic conductivity estimation of the "1500-foot" sand in East Baton Rouge Parish, Louisiana.
Estimation of annual average daily traffic for off-system roads in Florida. Final report
Shen, L.D.; Zhao, F.; Ospina, D.I.
1999-07-28
Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for the state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination of roadway geometry, congestion management, pavement design, safety considerations, etc. AADT is also used to estimate statewide vehicle miles traveled on all roads, and is used by local governments and the environmental protection agencies to determine compliance with the 1990 Clean Air Act Amendments. Additionally, AADT is reported annually by the Florida Department of Transportation (FDOT) to the Federal Highway Administration. In the past, considerable effort has been made in obtaining traffic counts to estimate AADT on state roads. However, traffic counts are often not available on off-system roads, and less attention has been paid to the estimation of AADT in the absence of counts. Current estimates rely on comparisons with roads that are subjectively considered to be similar. Such comparisons are inherently subject to large errors, and also may not be repeated often enough to remain current. Therefore, a better method is needed for estimating AADT for off-system roads in Florida. This study investigates the possibility of establishing one or more models for estimating AADT for off-system roads in Florida.
Estimating the path-average rainwater content and updraft speed along a microwave link
NASA Technical Reports Server (NTRS)
Jameson, Arthur R.
1993-01-01
There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A(38-25). In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first, temporal sampling, question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, T., 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
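The resampling idea can be illustrated with synthetic data: weight each datum inversely to its local sampling density so that the resampled age distribution approaches a target (here uniform over 0-5 Ma). This is a sketch of the general technique with made-up ages and declinations, not the authors' procedure or the Hawaiian data.

```python
import numpy as np

rng = np.random.default_rng(1)
ages = rng.beta(0.5, 2.0, size=500) * 5.0            # uneven: clustered near 0 Ma
decs = 2.0 + 3.0 * ages + rng.standard_normal(500)   # synthetic age-dependent signal

# Weight each datum inversely to its local sampling density so that the
# resampled set approaches a uniform age distribution over 0-5 Ma.
hist, edges = np.histogram(ages, bins=10, range=(0.0, 5.0))
bin_of = np.clip(np.digitize(ages, edges) - 1, 0, 9)
weights = 1.0 / np.maximum(hist[bin_of], 1)
weights /= weights.sum()

resampled = rng.choice(decs, size=2000, replace=True, p=weights)
biased_mean = decs.mean()          # plain average: dominated by young flows
corrected_mean = resampled.mean()  # density-corrected average
```

Because the synthetic signal drifts with age, the plain average is pulled toward the over-sampled young flows; the density-corrected average recovers a value representative of the whole interval.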
Lesion edge preserved direct average strain estimation for ultrasound elasticity imaging.
Hussain, Mohammad Arafat; Alam, Farzana; Rupa, Sharmin Akhtar; Awwal, Rayhana; Lee, Soo Yeol; Hasan, Md Kamrul
2014-01-01
Elasticity imaging techniques with built-in or regularization-based smoothing feature for ensuring strain continuity are not intelligent enough to prevent distortion or lesion edge blurring while smoothing. This paper proposes a novel approach with built-in lesion edge preservation technique for high quality direct average strain imaging. An edge detection scheme, typically used in diffusion filtering is modified here for lesion edge detection. Based on the extracted edge information, lesion edges are preserved by modifying the strain determining cost function in the direct-average-strain-estimation (DASE) method. The proposed algorithm demonstrates approximately 3.42-4.25 dB improvement in terms of edge-mean-square-error (EMSE) than the other reported regularized or average strain estimation techniques in finite-element-modeling (FEM) simulation with almost no sacrifice in elastographic-signal-to-noise-ratio (SNRe) and elastographic-contrast-to-noise-ratio (CNRe) metrics. The efficacy of the proposed algorithm is also tested for the experimental phantom data and in vivo breast data. The results reveal that the proposed method can generate a high quality strain image delineating the lesion edge more clearly than the other reported strain estimation techniques that have been designed to ensure strain continuity. The computational cost, however, is little higher for the proposed method than the simpler DASE and considerably higher than that of the 2D analytic minimization (AM2D) method.
Coherent radar estimates of average high-latitude ionospheric Joule heating
Kosch, M.J.; Nielsen, E.
1995-07-01
The Scandinavian Twin Auroral Radar Experiment (STARE) and Sweden and Britain Radar Experiment (SABRE) bistatic coherent radar systems have been employed to estimate the spatial and temporal variation of the ionospheric Joule heating in the combined geographic latitude range 63.8°-72.6° (corrected geomagnetic latitude 61.5°-69.3°) over Scandinavia. The 173 days of good observations with all four radars have been analyzed during the period 1982 to 1986 to estimate the average ionospheric electric field versus time and latitude. The AE-dependent empirical model of ionospheric Pedersen conductivity of Spiro et al. (1982) has been used to calculate the Joule heating. The latitudinal and diurnal variation of Joule heating as well as the estimated mean hemispherical heating of 1.7 × 10^11 W are in good agreement with earlier results. Average Joule heating was found to vary linearly with the AE, AU, and AL indices and as a second-order power law with Kp. The average Joule heating was also examined as a function of the direction and magnitude of the interplanetary magnetic field. It has been shown for the first time that the ionospheric electric field magnitude as well as the Joule heating increase with increasingly negative (southward) Bz.
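For reference, the height-integrated relation behind estimates of this kind is q_J = Σ_P E², with Σ_P the Pedersen conductance and E the ionospheric electric field; the values below are illustrative, not the study's.

```python
def joule_heating(sigma_p, e_field):
    """Height-integrated Joule heating rate q = Sigma_P * E^2.

    sigma_p : Pedersen conductance (S)
    e_field : electric field magnitude (V/m)
    returns : heating rate per unit area (W/m^2)
    """
    return sigma_p * e_field ** 2

# e.g. a 10 S conductance and a 30 mV/m field
q = joule_heating(10.0, 30e-3)
```

The quadratic dependence on E is why the electric-field increase with southward Bz noted above translates directly into enhanced Joule heating.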
Chow, R.; Doss, F.W.; Taylor, J.R.; Wong, J.N.
1999-07-02
Optical components needed for high-average-power lasers, such as those developed for Atomic Vapor Laser Isotope Separation (AVLIS), require high levels of performance and reliability. Over the past two decades, optical component requirements for this purpose have been optimized and performance and reliability have been demonstrated. Many of the optical components that are exposed to the high power laser light affect the quality of the beam as it is transported through the system. The specifications for these optics are described including a few parameters not previously reported and some component manufacturing and testing experience. Key words: High-average-power laser, coating efficiency, absorption, optical components
An Estimate of the Average Number of Recessive Lethal Mutations Carried by Humans
Gao, Ziyue; Waggoner, Darrel; Stephens, Matthew; Ober, Carole; Przeworski, Molly
2015-01-01
The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and nonconsanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To overcome this limitation, we focused on a founder population that practices a communal lifestyle, for which there is almost complete Mendelian disease ascertainment and a known pedigree. Focusing on recessive lethal diseases and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.84]) recessive alleles that lead to complete sterility or death by reproductive age when homozygous. Comparison to existing estimates in humans suggests that a substantial fraction of the total burden imposed by recessive deleterious variants is due to single mutations that lead to sterility or death between birth and reproductive age. In turn, comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes. PMID:25697177
Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.
Kearney, Michael R; Isaac, Andrew P; Porter, Warren P
2014-01-01
The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms.
microclim: Global estimates of hourly microclimate based on long-term monthly climate averages
Kearney, Michael R; Isaac, Andrew P; Porter, Warren P
2014-01-01
The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764
A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China
Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin
2014-01-01
Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
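The Hargreaves-Samani form that this family of models builds on estimates global radiation from the diurnal temperature range and extraterrestrial radiation. A minimal sketch follows; the coefficient k_rs is a commonly quoted generic value, not the paper's calibrated coefficient, and the inputs are invented.

```python
import math

def hs_radiation(t_max, t_min, ra, k_rs=0.16):
    """Hargreaves-Samani estimate of global solar radiation.

    t_max, t_min : monthly average daily max/min air temperature (deg C)
    ra           : extraterrestrial radiation (MJ m^-2 day^-1)
    k_rs         : empirical coefficient (~0.16 interior, ~0.19 coastal; assumed)
    returns      : estimated global solar radiation (MJ m^-2 day^-1)
    """
    return k_rs * math.sqrt(t_max - t_min) * ra

# Hypothetical warm-month values for an interior station
rs = hs_radiation(t_max=28.0, t_min=16.0, ra=38.0)
```

The modifications compared in the paper (Samani, Chen, and the new model) adjust how the temperature range enters this relation; the square-root form above is the original HS baseline.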
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
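One ingredient of such a treatment, fitting a second-order autoregressive model to calibration residuals and building a full covariance of total errors from it, can be sketched as follows. The residuals are synthetic and the Yule-Walker fit is a generic choice, not the study's exact implementation.

```python
import numpy as np

# Synthetic AR(2) "total error" series standing in for calibration residuals
rng = np.random.default_rng(2)
n = 200
e = np.zeros(n)
for t in range(2, n):
    e[t] = 0.6 * e[t - 1] + 0.2 * e[t - 2] + rng.standard_normal()

def acov(x, lag):
    """Biased sample autocovariance at the given lag."""
    return np.dot(x[:len(x) - lag] - x.mean(), x[lag:] - x.mean()) / len(x)

# Yule-Walker estimates of the AR(2) coefficients phi1, phi2
r0, r1, r2 = (acov(e, k) for k in (0, 1, 2))
phi = np.linalg.solve([[r0, r1], [r1, r0]], [r1, r2])

# Autocorrelation function from the fitted AR(2) recursion, then the full
# covariance matrix of total errors (used in place of a diagonal one)
rho = np.empty(n)
rho[0] = 1.0
rho[1] = r1 / r0
for k in range(2, n):
    rho[k] = phi[0] * rho[k - 1] + phi[1] * rho[k - 2]
lag = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C_ek = r0 * rho[lag]
```

A matrix like `C_ek` would then replace the diagonal measurement-error covariance in the negative log-likelihood, which is what redistributes the model averaging weights away from 100%/0%.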
NASA Astrophysics Data System (ADS)
Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.
2014-02-01
Engineering systems such as aircraft, ships and automobiles are considered built-up structures. Dynamically, they are thought of as being fabricated from many components that are classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of the DS is deterministic in nature and analysed using deterministic modelling methods such as the finite element (FE) method. The response of the Non-DS is statistical in nature and estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power balance equation, in which any external input to a subsystem must be represented in terms of power. Often, the input force is taken as a point force, and the ensemble average power delivered by a point force is already well established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. In order to apply the SEA method to input moments, a mathematical representation of the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply supported plate with an attached PZT patch is taken as a benchmark model. An analytical solution to estimate the average power is derived using a mobility approach. The ensemble average of the power given by the PZT patch actuator to the benchmark model when subjected to structural uncertainties is also simulated using a Lagrangian method and FEA software. The analytical estimation is compared with the Lagrangian model and the FE method for validation. The effects of the size and location of the PZT actuators on the power delivered to the plate are later investigated.
Unmanned Aerial Vehicles unique cost estimating requirements
NASA Astrophysics Data System (ADS)
Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.
Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There is a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, and civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g. pipelines, power lines and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration), or that are "too dangerous, dull or dirty" for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV system total ownership cost, including hardware components, software design, and operations. The challenges of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element are discussed. The autonomous operation of UAVs is especially challenging from a software perspective.
Estimates of average annual tributary inflow to the lower Colorado River, Hoover Dam to Mexico
Owen-Joyce, Sandra J.
1987-01-01
Estimates of tributary inflow by basin or area and by surface water or groundwater are presented in this report and itemized by subreaches in tabular form. Total estimated average annual tributary inflow to the Colorado River between Hoover Dam and Mexico, excluding the measured tributaries, is 96,000 acre-ft, or about 1% of the 7.5 million acre-ft/yr of Colorado River water apportioned to the States in the lower Colorado River basin. About 62% of the tributary inflow originates in Arizona, 30% in California, and 8% in Nevada. Tributary inflow is a small component in the water budget for the river. Most of the quantities of unmeasured tributary inflow were estimated in previous studies and were based on mean annual precipitation for 1931-60. Because mean annual precipitation for 1951-80 did not differ significantly from that of 1931-60, these tributary inflow estimates are assumed to be valid for use in 1984. Measured average annual runoff per unit drainage area on the Bill Williams River has remained the same. Surface water inflow from unmeasured tributaries is infrequent and is not captured in surface reservoirs in any of the States; it flows to the Colorado River gaging stations. Average annual runoff can be used in a water budget, although in wet years runoff may be large enough to affect the calculation of consumptive use and to be estimated from hydrographs. Estimates of groundwater inflow to the Colorado River valley are based on groundwater recharge estimates in the bordering areas, which have not significantly changed through time. In most areas adjacent to the Colorado River valley, groundwater pumpage is small and pumping has not significantly affected the quantity of groundwater discharged to the Colorado River valley. In some areas where groundwater pumpage exceeds the quantity of groundwater discharge and water levels have declined, the quantity of discharge probably has decreased and groundwater inflow to the Colorado
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
NASA Technical Reports Server (NTRS)
Smith, G. L.; Bess, T. D.; Minnis, P.
1983-01-01
The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
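The weighted-averages calculation can be sketched for a single chemical gradient. The abundances and BOD values below are invented, and the linear 0-10 rescaling shown is one simple choice, not necessarily the authors' exact scaling across all six constituents.

```python
import numpy as np

bod = np.array([1.0, 2.5, 4.0, 6.0, 8.0])    # BOD (mg/L) at five sites
abund = np.array([
    [40.0, 25.0, 10.0, 3.0, 0.0],   # sensitive taxon: abundant at clean sites
    [2.0, 5.0, 12.0, 30.0, 45.0],   # tolerant taxon: abundant at enriched sites
])

# Abundance-weighted average of the gradient values where each taxon occurs
optima = (abund @ bod) / abund.sum(axis=1)

# Rescale the optima linearly onto the 0-10 tolerance-value range
tv = 10.0 * (optima - optima.min()) / (optima.max() - optima.min())
```

The sensitive taxon ends up near 0 and the tolerant taxon near 10, matching the convention stated above that 0 marks sensitivity to organic pollution and 10 marks high tolerance.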
[Estimation of the Average Glandular Dose Using the Mammary Gland Image Analysis in Mammography].
Otsuka, Tomoko; Teramoto, Atsushi; Asada, Yasuki; Suzuki, Shoichi; Fujita, Hiroshi; Kamiya, Satoru; Anno, Hirofumi
2016-05-01
Currently, the average glandular dose is evaluated quantitatively on the basis of data measured using a phantom, not on the basis of the mammary gland structure of an individual patient. However, mammary gland structures differ from patient to patient, and the mammary gland dose of an individual patient cannot be obtained by the existing methods. In this study, we present an automated method for estimating mammary gland dose by means of the mammary gland structure, which is measured automatically from the mammogram. In this method, the mammary gland structure is extracted by a Gabor filter, and the mammary region is segmented by automated thresholding. For the evaluation, mammograms of 100 patients classified as category 1 were collected. Using these mammograms, we compared the mammary gland ratio measured by the proposed method with visual evaluation. As a result, 78% of the total cases matched. Furthermore, the mammary gland ratio and average glandular dose agreed well among patients with the same breast thickness. These results show that the proposed method may be useful for the estimation of the average glandular dose for individual patients.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
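The minimum-length solution of Am = d described above can be sketched numerically. In the toy example below, the averaging matrix A is a hypothetical uniform moving-average kernel standing in for the temporal/spatial averaging of amelogenesis and sampling that the paper derives; the Moore-Penrose pseudoinverse then gives the minimum-length input vector m consistent with the measured profile d:

```python
import numpy as np

def averaging_matrix(n, width):
    """Uniform moving-average forward operator A: a hypothetical stand-in
    for the amelogenesis/sampling averaging described in the text."""
    A = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)
    return A

# True seasonal input signal and its time-averaged "measured" profile d
n = 24
m_true = np.sin(2 * np.pi * np.arange(n) / 12.0)
A = averaging_matrix(n, width=3)
d = A @ m_true

# Minimum-length solution of A m = d via the Moore-Penrose pseudoinverse
m_est = np.linalg.pinv(A) @ d

# The estimated input reproduces the measured (averaged) profile
print(np.linalg.norm(A @ m_est - d))
```

In practice d carries measurement error, so regularization or truncation of small singular values controls the amplification of noise, which is the accuracy limitation the abstract notes.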
Planning and Estimation of Operations Support Requirements
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon
2010-01-01
Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long-duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D
Nonlinear models for estimating GSFC travel requirements
NASA Technical Reports Server (NTRS)
Buffalano, C.; Hagan, F. J.
1974-01-01
A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of six types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.
NASA Astrophysics Data System (ADS)
Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.
2015-05-01
A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over a period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which moves the images randomly and quickly, and changes in the average refractive-index gradient along the path, which move the images vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged Cn² from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong-turbulence paths where traditional irradiance-based techniques suffer from saturation effects.
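The correlation step for measuring inter-image shifts can be sketched as an FFT-based cross-correlation; the random patch and the known displacement below are synthetic illustrations, not data from the experiment:

```python
import numpy as np

def image_shift(ref, img):
    """Estimate the (row, col) shift of img relative to ref by locating
    the peak of their FFT-based circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts larger than half the patch size into negative offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))

# Synthetic "building" patch and a copy displaced by a known motion
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -2), axis=(0, 1))

print(image_shift(ref, img))  # → (3, -2)
```

Tracking such shifts frame to frame gives the motion time series whose random component feeds the Cn² retrieval via the weighting functions mentioned above.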
Taylor, Brian A; Hwang, Ken-Pin; Hazle, John D; Stafford, R Jason
2009-03-01
The authors investigated the performance of the iterative Steiglitz-McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer-Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥5 for echo train lengths (ETLs) ≥4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo
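The Steiglitz-McBride/ARMA fit in the abstract is more general, but for a noise-free one-peak system the idea of recovering chemical shift and T2* from a short echo train reduces to one-pole linear prediction, sketched here with hypothetical parameter values (the 3.3 ms echo spacing is from the abstract; the frequency and T2* are invented):

```python
import numpy as np

# Synthetic one-peak echo train: s[n] = a * exp((-1/T2* + 2*pi*i*f) * n * dt)
dt = 3.3e-3      # echo spacing (s), as in the abstract
f = 150.0        # chemical-shift frequency (Hz), hypothetical value
t2star = 20e-3   # apparent T2* (s), hypothetical value
n = np.arange(16)
s = 2.0 * np.exp((-1.0 / t2star + 2j * np.pi * f) * n * dt)

# One-pole linear prediction: s[n+1] = z * s[n], solved in least squares
z = np.vdot(s[:-1], s[1:]) / np.vdot(s[:-1], s[:-1])

f_est = np.angle(z) / (2 * np.pi * dt)   # recovered chemical shift (Hz)
t2_est = -dt / np.log(np.abs(z))         # recovered T2* (s)
print(f_est, t2_est)
```

With noise and multiple peaks, iterative refinement such as Steiglitz-McBride is needed, which is exactly the regime the CRLB analysis in the paper characterizes.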
Mesin, Luca; Damiano, Luisa; Farina, Dario
2007-03-15
The aim of this simulation study was to assess the bias in estimating muscle fiber conduction velocity (CV) from surface electromyographic (EMG) signals in muscles with one and two pinnation angles. The volume conductor was a layered medium simulating anisotropic muscle tissue and isotropic homogeneous subcutaneous tissue. The muscle tissue was homogeneous for one pinnation angle and inhomogeneous for bipinnate muscles (two fiber directions). Interference EMG signals were obtained by simulating recruitment thresholds and discharge patterns of a set of 100 and 200 motor units for the pinnate and bipinnate muscle, respectively (15 degrees pinnation angle in both cases). Without a subcutaneous layer and with muscle fiber CV of 4 m/s, average CV estimates from the pinnate (bipinnate) muscle were 4.81+/-0.18 m/s (4.80+/-0.18 m/s) for bipolar, 4.71+/-0.19 m/s (4.71+/-0.12 m/s) for double differential, and 4.78+/-0.16 m/s (4.79+/-0.15 m/s) for Laplacian recordings. When a subcutaneous layer was added (thickness 1 mm) in the same conditions, estimated CV values were 4.93+/-0.25 m/s (5.16+/-0.41 m/s), 4.70+/-0.21 m/s (4.83+/-0.33 m/s), and 4.89+/-0.21 m/s (4.99+/-0.39 m/s) for the three recording systems, respectively. The main factor biasing CV estimates was the propagation of action potentials in the two directions, which influenced the recording due to the scatter of the projection of end-plate and tendon locations along the fiber direction as a consequence of pinnation. The same problem arises in muscles with the line of innervation zone locations not perpendicular to the fiber direction. These results indicate an important limitation in the reliability of CV estimates from the interference EMG when the innervation zone and tendon locations are not distributed perpendicular to the fiber direction.
Estimated average annual alkalinity of six streams entering Deep Creek Lake Garrett County, Maryland
Hodges, A.L.
1986-01-01
There is concern that acid rain combined with acid mine drainage from coal mining in the basin will exceed the capacity of the lake to buffer the acid input from these sources. This study was done during 1983 to determine the sources of alkalinity to the lake, and to make a rough estimate of the amount of alkalinity that enters the lake from six streams that drain carbonate and noncarbonate bedrock formations. The Mississippian Greenbrier Formation, which crops out in 5% of the basin, is the only calcareous rock unit. Four streams draining the Greenbrier and two streams draining noncarbonate formations were sampled to assess the contribution of alkalinity to Deep Creek Lake. The average annual alkalinity of the six sampled streams ranged from 7.6 to 36.8 tons/yr/sq mi of drainage area. The average total alkalinity contributed to Deep Creek Lake by these streams is 161 tons/yr as calcium carbonate. Mass-balance calculations based on very limited data indicate that this alkalinity is derived both from carbonate rocks (Greenbrier Formation) and from weathering and hydrolysis of silicate minerals. Other sources may contribute alkalinity to Deep Creek Lake, but could not be quantified within the scope of this study. No changes in stream-water quality were found that could be directly attributed to a stream having crossed the boundary from one noncarbonate bedrock formation to another. Inflow to streams from adjacent or underlying carbonate bedrock was apparent in several streams from increased values of pH and conductance. 20 refs., 5 figs., 7 tabs.
Estimating monthly-averaged air-sea transfers of heat and momentum using the bulk aerodynamic method
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Reynolds, R. W.
1980-01-01
Air-sea transfers of sensible heat, latent heat, and momentum are computed from twenty-five years of middle-latitude and subtropical ocean weather ship data in the North Atlantic and North Pacific using the bulk aerodynamic method. The results show that monthly-averaged wind speeds, temperatures, and humidities can be used to estimate the monthly-averaged sensible and latent heat fluxes computed from the bulk aerodynamic equations to within a relative error of approximately 10%. The estimate of monthly-averaged wind stress under the assumption of neutral stability are shown to be within approximately 5% of the monthly-averaged non-neutral values.
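A minimal sketch of the bulk aerodynamic equations referred to above; the exchange coefficients and monthly-mean values below are typical textbook numbers for illustration, not those of the study:

```python
# Bulk aerodynamic estimates of air-sea fluxes from monthly-mean inputs.
# All coefficients are illustrative, not taken from the paper.
rho = 1.2        # air density (kg/m^3)
cp = 1004.0      # specific heat of air at constant pressure (J/kg/K)
Lv = 2.5e6       # latent heat of vaporization (J/kg)
Ch = Ce = Cd = 1.3e-3   # bulk exchange coefficients (dimensionless)

U = 8.0          # monthly-averaged wind speed (m/s)
dT = 1.5         # sea-air temperature difference (K)
dq = 2.0e-3      # sea-air specific humidity difference (kg/kg)

sensible = rho * cp * Ch * U * dT   # sensible heat flux (W/m^2)
latent = rho * Lv * Ce * U * dq     # latent heat flux (W/m^2)
stress = rho * Cd * U ** 2          # neutral wind stress (N/m^2)

print(sensible, latent, stress)
```

The study's finding is essentially that evaluating these products with monthly-mean U, dT, and dq reproduces the mean of the instantaneous products to within about 10% (5% for stress).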
Nayfach, Stephen; Pollard, Katherine S
2015-03-25
Average genome size is an important, yet often overlooked, property of microbial communities. We developed MicrobeCensus to rapidly and accurately estimate average genome size from shotgun metagenomic data and applied our tool to 1,352 human microbiome samples. We found that average genome size differs significantly within and between body sites and tracks with major functional and taxonomic differences. In the gut, average genome size is positively correlated with the abundance of Bacteroides and genes related to carbohydrate metabolism. Importantly, we found that average genome size variation can bias comparative analyses, and that normalization improves detection of differentially abundant genes.
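The core idea behind estimating average genome size (AGS) from universal single-copy marker genes can be sketched as follows. All numbers are illustrative and the real MicrobeCensus model is more sophisticated (it corrects for read length, quality, and alignment sensitivity):

```python
# Reads hit universal single-copy genes at a rate inversely proportional
# to average genome size, because every genome carries exactly one copy.
total_bp = 1.0e9      # total base pairs sequenced in the sample
marker_bp = 5.0e5     # bp aligned to universal single-copy marker genes
marker_len = 5.0e3    # combined length of the marker genes (bp)

# Each genome contributes marker_len bp of marker sequence, so the
# number of "genome equivalents" sequenced is marker_bp / marker_len.
genome_equivalents = marker_bp / marker_len
avg_genome_size = total_bp / genome_equivalents
print(avg_genome_size)

# Normalizing a gene's read count by genome equivalents (rather than by
# total reads) removes the AGS bias in comparative analyses.
gene_reads = 2000.0
normalized_abundance = gene_reads / genome_equivalents
```

This is why AGS variation biases naive per-read normalization: samples dominated by large genomes yield fewer genome equivalents per sequenced base.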
31 CFR 205.23 - What requirements apply to estimates?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What requirements apply to estimates... Treasury-State Agreement § 205.23 What requirements apply to estimates? The following requirements apply when we and a State negotiate a mutually agreed upon funds transfer procedure based on an estimate...
Estimation Of The Average Velocity Of Ship(s) Using Multi Sensor Data
NASA Astrophysics Data System (ADS)
Srinivasa Rao, N.; Ali, M. M.; Rao, M. V.; Ramana, I. V.
IRS_P4 (OCM) and NOAA-AVHRR are ocean sensors. Besides being used for retrieval of ocean parameters such as chlorophyll, suspended matter, dissolved organic matter, sea surface temperature (SST), and identification of potential fishing zones (PFZ), these sensors can be used to track ships and estimate their velocity. Keeping track of ship movement and velocity is important for surveillance of oil tankers and large vessels plying the shipping routes, for possible assessment of oil spills, and for defense applications. Moving ships burn fossil fuel and release exhaust containing sulfur dioxide (SO2). When this sulfur dioxide comes in contact with water vapour, it forms sulfate aerosols. These aerosols act as nuclei, or 'seeds', called cloud condensation nuclei (CCN), around which cloud droplets take shape; together these droplets form clouds. Narrow lines of perturbed regions in marine stratiform clouds, caused by moving ships, appear brighter in satellite imagery. They can also appear as narrow lines of clouds in an otherwise cloud-free sky. Ship tracks are long-lived cloud lines formed from ship exhaust. The track of a large ship is sometimes visualized as a trail, known as a ship track. These are typically very long, sometimes even 500 km, i.e. long enough to be seen in satellite imagery. Such tracks form in marine boundary layers between 300 and 750 m in height, with high relative humidity, a small air-sea temperature difference (0.5°C), and moderate winds (average of 7 m/s) (Durkee et al. 2000). The tracks directly scatter solar radiation back to space and increase cloud reflectivity through increased droplet concentrations; these clouds reflect much more solar energy than the ocean surface. In the infrared (IR) region the reflectance of the cloud is higher than in the visual bands, and thus we get ship track information
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter
2016-04-01
Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
48 CFR 252.215-7002 - Cost estimating system requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Cost estimating system... of Provisions And Clauses 252.215-7002 Cost estimating system requirements. As prescribed in 215.408(2), use the following clause: Cost Estimating System Requirements (DEC 2006) (a)...
Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes
Chapman, E. G.; Shaw, W. J.; Easter, R. C.; Bian, X.; Ghan, S. J.
2002-12-03
The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emissions fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
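The averaging effect is easy to reproduce: because the mass transfer velocity is nonlinear in wind speed (a quadratic Wanninkhof-type form is assumed below purely for illustration), computing it from time-averaged winds biases the resulting flux low relative to averaging the instantaneous values:

```python
import numpy as np

def k_transfer(u):
    """Quadratic Wanninkhof-type mass transfer velocity (illustrative)."""
    return 0.31 * u ** 2

# Synthetic "instantaneous" winds with realistic variability
rng = np.random.default_rng(1)
u = rng.rayleigh(scale=6.0, size=20000)

k_inst = k_transfer(u).mean()   # mean of instantaneous transfer velocities
k_avg = k_transfer(u.mean())    # transfer velocity of the time-averaged wind

# Ratio > 1: averaging the winds first underestimates the emission flux
print(k_inst / k_avg)
```

For Rayleigh-distributed winds and a quadratic k, the ratio approaches 4/π (about 1.27), i.e. a ~27% underestimate, which is of the same order as the 10-60% differences reported above.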
Average fetal depth in utero: data for estimation of fetal absorbed radiation dose
Ragozzino, M.W.; Breckle, R.; Hill, L.M.; Gray, J.E.
1986-02-01
To estimate fetal absorbed dose from radiographic examinations, the depth from the anterior maternal surface to the midline of the fetal skull and abdomen was measured by ultrasound in 97 pregnant women. The relationships between fetal depth, fetal presentation, and maternal parameters of height, weight, anteroposterior (AP) thickness, gestational age, placental location, and bladder volume were analyzed. Maternal AP thickness (MAP) can be estimated from gestational age, maternal height, and maternal weight. Fetal midskull and abdominal depths were nearly equal. Fetal depth normalized to MAP was independent or nearly independent of maternal parameters and fetal presentation. These data enable a reasonable estimation of absorbed dose to fetal brain, abdomen, and whole body.
DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN
Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...
Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2011-01-01
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
NASA Astrophysics Data System (ADS)
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2016-08-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
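The constant-property efficiency referred to above follows the standard closed form, with ZT evaluated at the average temperature of operation as the abstract recommends; the ZT value and temperatures below are hypothetical inputs, not data from the paper:

```python
import math

def efficiency_constant_property(zT, t_hot, t_cold):
    """Maximum conversion efficiency in the constant-property model:
    eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th),
    with ZT evaluated at the average operating temperature."""
    m = math.sqrt(1.0 + zT)
    return (1.0 - t_cold / t_hot) * (m - 1.0) / (m + t_cold / t_hot)

# Hypothetical material with ZT = 1 operating between 300 K and 500 K
eta = efficiency_constant_property(1.0, 500.0, 300.0)
print(eta)
```

The first factor is the Carnot limit; the second is the material penalty, which is where the constant-property and cumulative/average-property models diverge for strongly temperature-dependent properties.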
Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.
Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu
2015-05-01
Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average method applied to the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively using a catheter. C-SBP was calculated using an N/6 moving average method from the scaled right brachial artery pressure waveforms acquired with a VaSera VS-1500. This estimated C-SBP was compared with the invasively measured C-SBP within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), invasively measured C-SBP was significantly lower than right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated using the N/6 moving average method from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not significantly differ (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method using the non-invasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP was an accurate, convenient, and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which is highly feasible in practical medicine.
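A sketch of the N-point moving-average idea, assuming a synthetic brachial-like waveform and N taken as one sixth of the samples in one beat; the real method calibrates the waveform against the cuff-based SBP, which is omitted here:

```python
import numpy as np

def n_point_moving_average(waveform, n_points):
    """Smooth a pressure waveform with a simple N-point moving average
    (the filtering idea behind the N/6 method; illustrative only)."""
    kernel = np.ones(n_points) / n_points
    return np.convolve(waveform, kernel, mode="same")

# Synthetic brachial-like pulse sampled at 1 kHz over one 0.8 s beat
fs, beat = 1000, 0.8
t = np.arange(0, beat, 1.0 / fs)
waveform = 80 + 60 * np.maximum(np.sin(2 * np.pi * t / beat), 0.0) ** 2

n = int(round(len(waveform) / 6))           # N = samples per beat / 6
smoothed = n_point_moving_average(waveform, n)
estimated_c_sbp = smoothed.max()            # peak of smoothed wave ~ C-SBP

print(waveform.max(), estimated_c_sbp)
```

The smoothed peak falls below the brachial peak, mirroring the physiological fact that central SBP is lower than peripheral SBP due to pulse wave amplification.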
NASA Astrophysics Data System (ADS)
Yamaguchi, Makoto; Midorikawa, Saburoh
The empirical equation for estimating the site amplification factor of ground motion from the average shear-wave velocity of the ground (AVS) is examined. In existing equations, the coefficient describing the dependence of the amplification factor on the AVS was treated as constant. The analysis showed that this coefficient varies with the AVS at short periods. A new estimation equation is proposed that accounts for this dependence on the AVS. The new equation can represent the soil characteristic that softer soil has a longer predominant period, and it makes better estimations at short periods than the existing method.
Zhang, Wenlu; Lin, Zhihong
2013-10-15
Using the canonical perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions and verifies the widely accepted notion that orbit averaging effects greatly reduce the microturbulent transport of energetic particles in a tokamak. Therefore, a recent claim [Hauff and Jenko, Phys. Rev. Lett. 102, 075004 (2009); Jenko et al., ibid. 107, 239502 (2011)] stating that the orbit-averaged theory requires a scale separation between equilibrium orbit size and perturbation correlation length is erroneous.
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...
NASA Astrophysics Data System (ADS)
Liu, Lin; Schaefer, Kevin; Zhang, Tingjun; Wahr, John
2012-01-01
The measurement of temporal changes in active layer thickness (ALT) is crucial to monitoring permafrost degradation in the Arctic. We develop a retrieval algorithm to estimate long-term average ALT using thaw-season surface subsidence derived from spaceborne interferometric synthetic aperture radar (InSAR) measurements. Our algorithm uses a model of vertical distribution of water content within the active layer accounting for soil texture, organic matter, and moisture. We determine the 1992-2000 average ALT for an 80 × 100 km study area of continuous permafrost on the North Slope of Alaska near Prudhoe Bay. We obtain an ALT of 30-50 cm over moist tundra areas, and a larger ALT of 50-80 cm over wet tundra areas. Our estimated ALT values match in situ measurements at Circumpolar Active Layer Monitoring (CALM) sites within uncertainties. Our results demonstrate that InSAR can provide ALT estimates over large areas at high spatial resolution.
Areally averaged estimates of surface heat flux from ARM field studies
Coulter, R.L.; Martin, T.J.; Cook, D.R.
1993-08-01
The determination of areally averaged surface fluxes is a problem of fundamental interest to the Atmospheric Radiation Measurement (ARM) program. The Cloud And Radiation Testbed (CART) sites central to the ARM program will provide high-quality data for input to and verification of General Circulation Models (GCMs). The extension of several point measurements of surface fluxes within the heterogeneous CART sites to an accurate representation of the areally averaged surface fluxes is not straightforward. Two field studies designed to investigate these problems, implemented by ARM science team members, took place near Boardman, Oregon, during June of 1991 and 1992. The site was chosen to provide strong contrasts in surface moisture while minimizing the differences in topography. The region consists of a substantial dry steppe (desert) upwind of an extensive area of heavily irrigated farm land, 15 km in width and divided into 800-m-diameter circular fields in a close-packed array, in which wheat, alfalfa, corn, or potatoes were grown. This region provides marked contrasts, not only on the scale of farm-desert (10-20 km) but also within the farm (0.1-1 km), because different crops transpire at different rates, and the pivoting irrigation arms provide an ever-changing pattern of heavy surface moisture throughout the farm area. This paper primarily discusses results from the 1992 field study.
A data-driven model for estimating industry average numbers of hospital security staff.
Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M
2015-01-01
In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital. PMID:26647500
Heo, Seo Weon; Kim, Hyungsuk
2010-05-01
An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results, performed at 5 MHz with a one-dimensional 128-element array using tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered.
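The two ideas named in this abstract, aligning each RF segment to a reference via the peak of the cross-correlation before transforming (phase compensation) and averaging spectra with SNR-derived weights, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the reference choice, the energy-based weight, and the coherent (complex) averaging step are my assumptions.

```python
import numpy as np

def phase_compensated_spectrum(segments, ref):
    """Sketch of a phase-compensated, weighted block power spectrum.

    Each segment is aligned to `ref` at the lag maximizing their
    cross-correlation, its complex spectrum is weighted by segment
    energy (a crude SNR proxy), and the weighted complex spectra are
    averaged before taking the power."""
    ref = np.asarray(ref, float)
    n = ref.size
    acc, wsum = 0.0, 0.0
    for s in segments:
        s = np.asarray(s, float)
        xc = np.correlate(s, ref, mode="full")
        lag = int(xc.argmax()) - (n - 1)   # relative delay of s vs. ref
        s = np.roll(s, -lag)               # compensate the delay
        w = s.dot(s)                       # energy weight
        acc = acc + w * np.fft.rfft(s)     # coherent (complex) accumulation
        wsum += w
    return np.abs(acc / wsum) ** 2         # block power spectrum
```

With segments that are shifted copies of a common pulse, the aligned coherent average recovers the reference spectrum; without alignment, phase variations would partially cancel the complex sum.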
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Approximate sample sizes required to estimate length distributions
Miranda, L.E.
2007-01-01
The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright by the American Fisheries Society 2007.
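The bootstrap procedure described in this abstract can be sketched in a few lines: resample the reference distribution at increasing sample sizes until the statistic lands within the target relative error in the required fraction of replicates. This is a generic illustration, not the paper's code; the sample-size grid, replicate count, and the mean-length statistic are illustrative choices.

```python
import random
import statistics

def bootstrap_n_for_mean(lengths, rel_err=0.10, conf=0.80,
                         n_grid=range(25, 500, 25), reps=500, seed=1):
    """Return the smallest n in `n_grid` for which, in at least `conf`
    of `reps` bootstrap resamples of size n, the sample mean falls
    within `rel_err` of the reference mean; None if the grid is too small."""
    rng = random.Random(seed)
    target = statistics.fmean(lengths)
    for n in n_grid:
        hits = sum(
            abs(statistics.fmean(rng.choices(lengths, k=n)) - target)
            <= rel_err * target
            for _ in range(reps)
        )
        if hits / reps >= conf:
            return n
    return None
```

Tighter precision targets (smaller `rel_err`) or higher confidence push the required n up, mirroring the pattern in the reported ranges.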
Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew
2015-09-01
Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087
Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest
2009-12-01
Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions
Balzer, Laura B; Petersen, Maya L; van der Laan, Mark J
2016-09-20
In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. Copyright © 2016 John Wiley & Sons, Ltd.
DeLong, L.L.; Wells, D.K.
1988-01-01
A method was developed to determine the average dissolved-solids yield contributed by small basins characterized by ephemeral and intermittent streams in the Green River basin in Wyoming. The method is different from that commonly used for perennial streams. Estimates of dissolved-solids discharge at eight water quality sampling stations operated by the U.S. Geological Survey in cooperation with the U.S. Bureau of Land Management range from less than 2 to 95 tons/day. The dissolved-solids yield upstream from the sampling stations ranges from 0.023 to 0.107 tons/day/sq mi. However, estimates of dissolved solids yield contributed by drainage areas between paired stations on Bitter, Salt Wells, Little Muddy, and Muddy creeks, based on dissolved-solids discharge versus drainage area, range only from 0.081 to 0.092 tons/day/sq mi. (USGS)
NASA Astrophysics Data System (ADS)
Aloysius, N. R.
2005-12-01
Doubling of atmospheric CO2 concentrations is likely to increase the average global temperature by two to five degrees Celsius and would cause significant changes in climate. Changes in temperature will directly impact the water requirements of plants. Since agriculture is the major water consumer, accurate estimates of agricultural crop water requirements are needed for countries to better plan their future water allocations to different sectors, especially the agriculture sector. Changes in crop water requirements are indicated by changes in reference evapotranspiration (ET). Procedures for estimating ET should be applicable to all climatic conditions, accurate for both short- and long-term periods, and should not require data or information that is usually of very limited availability. This is of greater importance when it comes to planning regional and country-level agricultural water requirements. Three methods to estimate monthly ET, namely the Hargreaves method (ETH), the Samani method (ETS) and the Food and Agriculture Organization's (FAO) modified Penman-Monteith (FAOETP) method, are compared with data obtained for the Indian Sub-continent. While ETH and ETS require minimal data (minimum and maximum temperatures), FAOETP requires the estimation of net solar radiation and wind velocity in addition to the above two and is physically based. The results are compared with Penman-Monteith ET (ETP) computed from field data. Regression analyses were performed considering ETP as the dependent variable and the other three ETs as independent variables. Results indicate that FAOETP has a very high correlation with ETP (average monthly R Square = 0.794, CV = 0.157) compared to ETH (average monthly R Square = 0.709, CV = 0.163) and ETS (average monthly R Square = 0.458, CV = 0.495). While the FAOETP method gives better results and has the capability of incorporating more variables to improve its performance, the ETH method is also comparable and provides a simplified procedure. Both
NASA Astrophysics Data System (ADS)
Herrington, C.; Gonzalez-Pinzon, R.
2014-12-01
Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM. This method will be further verified by using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validate this technique as being successful in humid and semi-arid catchments as the method has already been verified as effective in the former setting.
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
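The model-averaging step described in this abstract amounts to pooling per-model effect estimates with weights reflecting each model's support from the data. A minimal sketch, with weights approximated from an information criterion; the BIC-based weighting as a stand-in for posterior model probabilities is my assumption, and the paper's actual weighting scheme may differ.

```python
import math

def bma_combine(estimates, bics):
    """Pool per-model estimates with BIC-approximated posterior model
    probabilities: w_m proportional to exp(-(BIC_m - min BIC) / 2).
    Returns (pooled_estimate, weights)."""
    base = min(bics)
    w = [math.exp(-(b - base) / 2.0) for b in bics]
    tot = sum(w)
    w = [x / tot for x in w]
    pooled = sum(wi * e for wi, e in zip(w, estimates))
    return pooled, w
```

Models with markedly worse BIC contribute almost nothing, so the pooled pollution-health effect is dominated by the better-supported deprivation and autocorrelation specifications.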
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances; such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLS) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada), which continuously measured instantaneous ammonia concentrations over a 225-m path. This particular laser is utilized in agricultural settings and, in combination with a bLS model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first
Estimation of the tryptophan requirement in piglets by meta-analysis.
Simongiovanni, A; Corrent, E; Le Floc'h, N; van Milgen, J
2012-04-01
There is no consensus concerning the Trp requirement for piglets expressed relative to Lys on a standardized ileal digestible basis (SID Trp : Lys). A meta-analysis was performed to estimate the SID Trp : Lys ratio that maximizes performance of weaned piglets between 7 and 25 kg of BW. A database comprising 130 experiments on the Trp requirement in piglets was established. The nutritional values of the diets were calculated from the composition of feed ingredients. Among all experiments, 37 experiments were selected to be used in the meta-analysis because they were designed to express the Trp requirement relative to Lys (e.g. Lys was the second-limiting amino acid in the diet) while testing at least three levels of Trp. The linear-plateau (LP), curvilinear-plateau (CLP) and asymptotic (ASY) models were tested to estimate the SID Trp : Lys requirement using average daily gain (ADG), average daily feed intake (ADFI) and gain-to-feed ratio (G : F) as response criteria. A multiplicative trial effect was included in the models on the plateau value, assuming that the experimental conditions affected only this parameter and not the requirement or the shape of the response to Trp. Model choice appeared to have an important impact on the estimated requirement. Using ADG and ADFI as response criteria, the SID Trp : Lys requirement was estimated at 17% with the LP model, at 22% with the CLP model and at 26% with the ASY model. Requirement estimates were slightly lower when G : F was used as response criterion. The Trp requirement was not affected by the composition of the diet (corn v. a mixture of cereals). The CLP model appeared to be the best-adapted model to describe the response curve of a population. This model predicted that increasing the SID Trp : Lys ratio from 17% to 22% resulted in an increase in ADG by 8%.
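The abstract's finding that model choice drives the requirement estimate (17% for LP vs. 22% for CLP) is easier to see with the response curves written out. The exact functional forms used in the paper are not given here, so the parameterizations below are standard textbook forms with illustrative parameter names, not the authors' specification.

```python
def linear_plateau(x, req, plateau, slope):
    """LP model: linear rise up to the requirement `req`, constant beyond."""
    return plateau if x >= req else plateau - slope * (req - x)

def curvilinear_plateau(x, req, plateau, curv):
    """CLP model: quadratic approach to the plateau, zero slope at `req`."""
    return plateau if x >= req else plateau - curv * (req - x) ** 2
```

Because the CLP curve flattens gradually as it nears the plateau, fitting it to the same response data typically places the estimated requirement higher than the sharp-kneed LP form, consistent with the 17% vs. 22% SID Trp : Lys estimates reported above.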
ERIC Educational Resources Information Center
Jacoby, Oscar; Kamke, Marc R.; Mattingley, Jason B.
2013-01-01
We have a remarkable ability to accurately estimate average featural information across groups of objects, such as their average size or orientation. It has been suggested that, unlike individual object processing, this process of "feature averaging" occurs automatically and relatively early in the course of perceptual processing, without the need…
Ahrén, Bo; Foley, James E
2016-01-01
We hypothesized that the relative contribution of fasting plasma glucose (FPG) versus postprandial plasma glucose (PPG) to glycated haemoglobin (HbA1c) could be calculated using an algorithm developed by the A1c-Derived Average Glucose (ADAG) study group to make HbA1c values more clinically relevant to patients. The algorithm estimates average glucose (eAG) exposure, which can be used to calculate apparent PPG (aPPG) by subtracting FPG. The hypothesis was tested in a large dataset (comprising 17 studies) from the vildagliptin clinical trial programme. We found that 24 weeks of treatment with vildagliptin monotherapy (n = 2523) reduced the relative contribution of aPPG to eAG from 8.12% to 2.95% (by 64%, p < 0.001). In contrast, when vildagliptin was added to metformin (n = 2752), the relative contribution of aPPG to eAG insignificantly increased from 1.59% to 2.56%. In conclusion, glucose peaks, which are often prominent in patients with type 2 diabetes, provide a small contribution to the total glucose exposure assessed by HbA1c, and the ADAG algorithm is not robust enough to assess this small relative contribution in patients receiving combination therapy. PMID:27635135
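The eAG/aPPG arithmetic in this abstract follows directly from the ADAG regression. The regression constants below are the published ADAG relation (eAG in mg/dL as a linear function of HbA1c in %); the subtraction is the abstract's own definition of aPPG, while the mg/dL units and function names are my choices for illustration.

```python
def estimated_average_glucose(hba1c_pct):
    """ADAG regression: eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7."""
    return 28.7 * hba1c_pct - 46.7

def apparent_ppg(hba1c_pct, fpg_mgdl):
    """Apparent postprandial glucose contribution, per the abstract:
    aPPG = eAG - FPG (both in mg/dL)."""
    return estimated_average_glucose(hba1c_pct) - fpg_mgdl
```

For example, HbA1c of 7% maps to an eAG of about 154 mg/dL; with a fasting glucose of 140 mg/dL, the apparent postprandial contribution is only about 14 mg/dL, illustrating why the PPG share of total exposure can be small.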
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-05-01
Rolling element bearings are widely used in rotating machines, and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low-speed shaft rotation, the available fault diagnosis methods cannot be applied with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through the usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow-speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data
Wilkes, C R; Koontz, M D; Billick, I H
1996-09-01
Range gas consumption in households tends to follow an annual cycle resembling a sinusoid, with peak consumption during the winter. When outdoor NO2 concentrations have a constant or small impact, the resulting indoor NO2 concentrations also tend to resemble an annual sinusoid. Optimal monitoring strategies can be designed to take advantage of this knowledge to obtain a better estimate of the true annual average gas consumption or indoor NO2 concentration. Gas consumption data, together with measured outdoor concentrations, house volumes, sampled emission rates, air exchange rates, and NO2 decay rates, are used to model weekly indoor NO2 concentrations throughout the year. Based on the modeling results, various monitoring strategies are evaluated for their accuracy in estimating the annual mean. Analysis of the results indicates that greater accuracy is attained using samples equally spaced throughout the year. In addition, the expected error for various monitoring strategies and various numbers of equally spaced samples is quantified, and their ability to classify homes into correct concentration categories is assessed.
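The sampling-strategy result above can be illustrated with a toy model: when a quantity follows an annual sinusoid, spot samples spaced evenly over the cycle cancel the seasonal term, whereas clustered samples do not. The sketch below uses illustrative mean, amplitude, and sample weeks; none of these values are from the study.

```python
import math

def weekly_concentration(week, annual_mean=40.0, amplitude=20.0):
    """Hypothetical weekly indoor NO2 concentration (ppb): an annual
    sinusoid peaking in winter (week 0 = early January)."""
    return annual_mean + amplitude * math.cos(2 * math.pi * week / 52)

def estimate_annual_mean(sample_weeks):
    """Average of spot samples taken in the given weeks."""
    samples = [weekly_concentration(w) for w in sample_weeks]
    return sum(samples) / len(samples)

true_mean = sum(weekly_concentration(w) for w in range(52)) / 52

# Four samples equally spaced through the year vs. four clustered in winter:
equally_spaced = estimate_annual_mean([0, 13, 26, 39])
clustered = estimate_annual_mean([0, 1, 2, 3])
print(abs(equally_spaced - true_mean) < abs(clustered - true_mean))  # True
```

Equally spaced samples hit the sinusoid at phases that sum to zero, so the seasonal term cancels exactly in this noise-free toy; clustered winter samples are biased toward the peak.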
Origins for the estimations of water requirements in adults.
Vivanti, A P
2012-12-01
Water homeostasis generally occurs without conscious effort; however, estimating requirements can be necessary in settings such as health care. This review investigates the derivation of equations for estimating water requirements. Published literature was reviewed for water estimation equations and the original papers sought. Equation origins were difficult to ascertain and the original references were often not cited. One equation (% of body weight) was based on just two human subjects, and another (ml water/kcal) was reported for mammals generally, not specifically for humans. Other findings include that some equations were developed for children but subsequently applied to adults; had undergone modifications without explicit explanation; had adjusted for the water from metabolism or food; or had been converted to simplify application. The primary sources for equations are rarely mentioned or, when located, lack details conventionally considered important. The sources of water requirement equations are rarely made explicit, and the historical studies do not satisfy the more rigorous modern scientific method. Equations are often applied without appreciating their derivation, or without adjusting for the water from food or metabolism as acknowledged by the original authors. Water requirement equations should be used as a guide only, while employing additional means (such as monitoring short-term weight changes, physical or biochemical parameters and urine output volumes) to ensure the adequacy of water provision in clinical or health-care settings.
Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling
NASA Technical Reports Server (NTRS)
Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon
2010-01-01
We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands depart from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation added 1.3 mm of water per application, with applications every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.
Estimated water requirements for gold heap-leach operations
Bleiwas, Donald I.
2012-01-01
This report provides a perspective on the amount of water necessary for conventional gold heap-leach operations. Water is required for drilling and dust suppression during mining, for agglomeration and as leachate during ore processing, to support the workforce (requires water in potable form and for sanitation), for minesite reclamation, and to compensate for water lost to evaporation and leakage. Maintaining an adequate water balance is especially critical in areas where surface and groundwater are difficult to acquire because of unfavorable climatic conditions [arid conditions and (or) a high evaporation rate]; where there is competition with other uses, such as for agriculture, industry, and use by municipalities; and where compliance with regulatory requirements may restrict water usage. Estimating the water consumption of heap-leach operations requires an understanding of the heap-leach process itself. The task is fairly complex because, although they all share some common features, each gold heap-leach operation is unique. Also, estimating the water consumption requires a synthesis of several fields of science, including chemistry, ecology, geology, hydrology, and meteorology, as well as consideration of economic factors.
Estimates of galactic cosmic ray shielding requirements during solar minimum
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.
1990-01-01
Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.
NASA Astrophysics Data System (ADS)
Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul
2015-10-01
Soil erosion is a widespread environmental challenge faced in the Kallar watershed today. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (the Revised Universal Soil Loss Equation, RUSLE), can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha-1 y-1. Based on this result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors were separated into two categories, and soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP) were computed. C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
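The RUSLE calculation described above is a simple product of the five factor layers, applied cell by cell in a GIS. A minimal sketch, with illustrative (not site-specific) factor values, is:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t/ha/yr) from the Revised Universal
    Soil Loss Equation: A = R * K * LS * C * P."""
    return R * K * LS * C * P

def erosion_susceptibility(R, K, LS):
    """Susceptibility omits the controllable factors C and P (A = R*K*LS)."""
    return R * K * LS

# Illustrative factor values for a single grid cell:
A = rusle_soil_loss(R=650, K=0.3, LS=2.5, C=0.2, P=0.5)
print(A)
```

In a raster workflow the same product is evaluated over every cell of the co-registered R, K, LS, C and P layers; susceptibility and hazard maps differ only in whether C and P enter the product.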
Stochastic physical ecohydrologic-based model for estimating irrigation requirement
NASA Astrophysics Data System (ADS)
Alizadeh, H.; Mousavi, S. J.
2012-04-01
Climate uncertainty affects both natural and managed hydrological systems. Therefore, methods that can take this kind of uncertainty into account are of primary importance for the management of ecosystems, especially agricultural ecosystems. A well-known problem in these ecosystems is crop water requirement estimation under climatic uncertainty. Both deterministic physically-based methods and stochastic time series modeling have been utilized in the literature. As in other fields of hydroclimatic science, there is broad scope in irrigation process modeling for approaches that integrate the physics of the process with its statistical aspects. This study derives closed-form expressions for the probability density function (p.d.f.) of irrigation water requirement using a stochastic physically-based model that considers important aspects of plant, soil, atmosphere and irrigation technique and policy in a coherent framework. An ecohydrologic stochastic model, building upon the stochastic differential equation of soil moisture dynamics at the root zone, is employed as a basis for deriving the expressions, considering the temporal stochasticity of rainfall. Owing to the distinct nature of the stochastic processes of micro- and traditional irrigation applications, two different methodologies have been used. Micro-irrigation application has been modeled as a dichotomic process. The Chapman-Kolmogorov equation for the time integral of the dichotomic process under transient conditions has been solved to derive analytical expressions for the probability density function of seasonal irrigation requirement. For traditional irrigation, application during the growing season has been modeled using a marked point process. Using renewal theory, the probability mass function of seasonal irrigation requirement, which is a discrete-valued quantity, has been analytically derived. The methodology deals with estimation of statistical properties of the total water requirement in a growing season that
NASA Astrophysics Data System (ADS)
Riegels, Niels; Jensen, Roar; Bensasson, Lisa; Banou, Stella; Møller, Flemming; Bauer-Gottwein, Peter
2011-01-01
Summary: Resource costs of meeting EU WFD ecological status requirements at the river basin scale are estimated by comparing net benefits of water use given ecological status constraints to baseline water use values. Resource costs are interpreted as opportunity costs of water use arising from water scarcity. An optimization approach is used to identify economically efficient ways to meet WFD requirements. The approach is implemented using a river basin simulation model coupled to an economic post-processor; the simulation model and post-processor are run from a central controller that iterates until an allocation is found that maximizes net benefits given WFD requirements. Water use values are estimated for urban/domestic, agricultural, industrial, livestock, and tourism water users. Ecological status is estimated using metrics that relate average monthly river flow volumes to the natural hydrologic regime. Ecological status is only estimated with respect to hydrologic regime; other indicators are ignored in this analysis. The decision variable in the optimization is the price of water, which is used to vary demands using consumer and producer water demand functions. The price-based optimization approach minimizes the number of decision variables in the optimization problem and provides guidance for pricing policies that meet WFD objectives. Results from a real-world application in northern Greece show the suitability of the approach for use in complex, water-stressed basins. The impact of uncertain input values on model outcomes is estimated using the Info-Gap decision analysis framework.
A History-based Estimation for LHCb job requirements
NASA Astrophysics Data System (ADS)
Rauschmayr, Nathalie
2015-12-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanisms to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. This presents a major problem particularly in the context of multicore jobs, since single- and multicore jobs share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm using history-based prediction is developed. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
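A history-based estimator of the kind described can be sketched as a running mean keyed on job features. The feature names and the fallback value below are hypothetical, not LHCb's actual schema; the real system learns richer models over more features.

```python
from collections import defaultdict

class HistoryBasedEstimator:
    """Minimal sketch of a history-based job-requirement estimator:
    predict a new job's runtime as the running mean of past jobs that
    share the same feature key, e.g. (software version, experiment
    conditions). Falls back to a fixed default when no history exists,
    mirroring the 'fixed arbitrary values' baseline."""

    def __init__(self, default=3600.0):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)
        self.default = default  # fallback runtime in seconds (assumed)

    def record(self, features, runtime):
        """Add one finished job's observed runtime to the history."""
        self.sums[features] += runtime
        self.counts[features] += 1

    def predict(self, features):
        """Mean runtime of matching past jobs, or the default."""
        if self.counts[features] == 0:
            return self.default
        return self.sums[features] / self.counts[features]

est = HistoryBasedEstimator()
est.record(("v42", "2012-conditions"), 7000.0)
est.record(("v42", "2012-conditions"), 9000.0)
print(est.predict(("v42", "2012-conditions")))  # 8000.0
```

Keying the history on experiment conditions and software version is what lets the estimate track the drift the abstract describes; an unseen key simply falls back to the default.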
Lessons Learned for Planning and Estimating Operations Support Requirements
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn
2011-01-01
Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead projects to focus on hardware development schedules and costs, de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations, and any LCC growth can directly impact the programs ability to fund new missions. The D&NF Program Office at Marshall Space Flight Center recently studied cost overruns for 7 D&NF missions related to phase C/D development of operational capabilities and phase E mission operations. The goal was to identify the underlying causes for the overruns and develop practical mitigations to assist the D&NF projects in identifying potential operations risks and controlling the associated impacts to operations development and execution costs. The study found that the drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This presentation summarizes the study and the results, providing a set of lessons NASA can use to improve early estimation and validation of operations costs.
NASA Astrophysics Data System (ADS)
Cacchione, David A.; Thorne, Peter D.; Agrawal, Yogesh; Nidzieko, Nicholas J.
2008-02-01
Profiles of suspended sediment concentration and velocity were measured over a 15-day period at a near-shore site off Santa Cruz, CA in Monterey Bay. The concentration and velocity data were collected with an Acoustic Backscattering System (ABS) and Acoustic Current Profiler (ACP) that were mounted on a bottom tripod. High-resolution bottom scanning sonar was also attached to the tripod to provide images of bed features during the experiment. Hourly time-averaged near-bed concentrations of suspended sediment were calculated from three models and compared with the measurements. Surface waves and currents that were generated by a storm of moderate intensity caused bed stresses that exceeded threshold stress for D50=0.02 cm, the median size of the moderately well-sorted bottom sediment, over a period of about 7 days. Estimates of the concentration at 1 cm above the bottom, Ca1, were obtained using the ABS measurements. These observations have been compared with predictions for the concentration at 1 cm above the bottom, C1. Nielsen's models for reference concentration Co [Nielsen, P., 1986. Suspended sediment concentrations under waves. Coastal Engineering 10, 32-31; Nielsen, P., 1992. Coastal Bottom Boundary Layers and Sediment Transport, Advanced Series on Ocean Engineering. World Scientific, Hackensack, NJ.] are purely wave-based and do not include effects of bottom currents on bed stress and bedform scales. C1 calculated from this model compared well with measured Ca1 when currents were weak and small oscillatory ripples were observed in the sonar images. However, during the 3-day period of highest bottom stresses modeled C1 did not compare well to Ca1. The other two models for C1, Glenn and Grant [Glenn, S.M., Grant, W.D., 1987. A suspended sediment stratification correction for combined wave and current flows. Journal of Geophysical Research 92(C8), 8244-8264.] and van Rijn and Walstra [Van Rijn, L.C., Walstra, D.J.R., 2004. Description of TRANSPOR2004 and
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents and analyzes a new, simple instant-estimation method for time-average quantities such as rms values of voltage and current, active and reactive powers, and power factor for single-phase power whose fundamental component has constant or nearly constant frequency, by measuring instantaneous values of voltage and current. According to the analyses, the method can instantly estimate time-average values with accuracy determined by the fundamental frequency, and the estimation accuracy for power factor is about two times better than that for voltage, current, and powers. The instant-estimation method is simple and can be easily applied to single-phase power control systems that are expected to control, instantly and continuously, the power factor on a single-phase grid by inverter. Based on the proposed instant-estimation method, two methods for such power control systems are also proposed and their usefulness is verified through simulations.
SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition
Supanich, MP
2015-06-15
Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition had been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16cm CTDI phantoms using a 0.6cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average dose using both the red and green pixel color calibration curves were within 10% agreement of the planar average dose estimated using the Dw method of film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
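The Dw weighting described above (one third central, two thirds mean peripheral) is a one-line computation over the five bore measurements. A minimal sketch with illustrative dose readings (not the study's measurements) is:

```python
def weighted_dose(central, peripherals):
    """Weighted dose estimate Dw from CTDI-phantom bore measurements:
    one third of the central reading plus two thirds of the mean of
    the peripheral readings."""
    peripheral_avg = sum(peripherals) / len(peripherals)
    return central / 3.0 + 2.0 * peripheral_avg / 3.0

# Illustrative dose readings in mGy (four peripheral bores, one central):
print(weighted_dose(central=4.5, peripherals=[6.0, 6.3, 5.7, 6.0]))
```

The study's finding is that this weighted combination, computed from either ionization-chamber or film dose values at the bore locations, tracks the film-measured planar average dose to within roughly 10%.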
Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan
2014-04-01
The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy.
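The NPMA is a plain moving-average low-pass filter. One way to sketch the estimation step, assuming the maximum of the smoothed waveform is taken as the central systolic estimate and using a synthetic waveform rather than real pressure data, is:

```python
def npma_central_sbp(waveform, sampling_hz, denominator=6):
    """N-point moving average sketch: smooth a peripheral pressure
    waveform with a window of N/denominator samples, where N is the
    acquisition sampling frequency, then take the maximum of the
    smoothed wave as the central aortic systolic pressure estimate.
    denominator=6 follows the brachial result reported above; N/4 is
    the radial value."""
    window = max(1, round(sampling_hz / denominator))
    smoothed = [
        sum(waveform[i:i + window]) / window
        for i in range(len(waveform) - window + 1)
    ]
    return max(smoothed)

# Synthetic waveform: flat baseline with a narrow systolic spike (mm Hg).
wave = [80.0] * 30 + [130.0] * 2 + [80.0] * 30
print(npma_central_sbp(wave, sampling_hz=60) < max(wave))  # True
```

Smoothing attenuates the narrow peripheral peak, which is the mechanism by which the filter maps an augmented brachial waveform onto the lower central systolic value.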
NASA Astrophysics Data System (ADS)
Chen, Zengbao; Chen, Xiaohong; Wang, Yanghua; Li, Jingye
2014-02-01
Reliable Q estimation is desirable for model-based inverse Q filtering to improve seismic resolution. On the one hand, conventional methods estimate Q from the amplitude spectra or frequency variations of individual wavelets at different depth (or time) levels, which is vulnerable to the effects of spectral interference and ambient noise. On the other hand, most inverse Q filtering algorithms are sensitive to noise and must avoid boosting it, sometimes at the expense of a degraded compensation effect. In this paper, average-Q values are obtained from reflection seismic data based on the Gabor transform spectrum of a seismic trace. We transform the 2-D time-variant frequency spectrum into a 1-D spectrum, and then estimate the average-Q values based on the amplitude attenuation and compensation functions, respectively. Driven by the estimated average-Q model, we also develop a modified inverse Q filtering algorithm by incorporating a time-variant bandpass filter (TVBF), whose high cut-off frequency follows a hyperbola along the traveltime from a specified time. Finally, we test this modified inverse Q filtering algorithm on synthetic data, and perform the Q estimation procedure on real reflection seismic data followed by the modified inverse Q filtering. The synthetic test and the real data example demonstrate that the algorithm driven by the average-Q model may enhance seismic resolution without degrading the signal-to-noise ratio.
Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA.
Wagner, Todd H; Chen, Shuo; Barnett, Paul G
2003-09-01
The U.S. Department of Veterans Affairs (VA) maintains discharge abstracts, but these do not include cost information. This article describes the methods the authors used to estimate the costs of VA medical-surgical hospitalizations in fiscal years 1998 to 2000. They estimated a cost regression with 1996 Medicare data restricted to veterans receiving VA care in an earlier year. The regression accounted for approximately 74 percent of the variance in cost-adjusted charges, and it proved to be robust to outliers and the year of input data. The beta coefficients from the cost regression were used to impute costs of VA medical-surgical hospital discharges. The estimated aggregate costs were reconciled with VA budget allocations. In addition to the direct medical costs, their cost estimates include indirect costs and physician services; both of these were allocated in proportion to direct costs. They discuss the method's limitations and application in other health care systems. PMID:15095543
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley
2009-01-01
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…
48 CFR 252.215-7002 - Cost estimating system requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Contractor's policies, procedures, and practices for budgeting and planning controls, and generating...) Flow of work, coordination, and communication; and (5) Budgeting, planning, estimating methods... personnel have sufficient training, experience, and guidance to perform estimating and budgeting tasks...
Cai, Quan-Cai; Yu, En-Da; Xiao, Yi; Bai, Wen-Yuan; Chen, Xing; He, Li-Ping; Yang, Yu-Xiu; Zhou, Ping-Hong; Jiang, Xue-Liang; Xu, Hui-Min; Fan, Hong; Ge, Zhi-Zheng; Lv, Nong-Hua; Huang, Zhi-Gang; Li, You-Ming; Ma, Shu-Ren; Chen, Jie; Li, Yan-Qing; Xu, Jian-Ming; Xiang, Ping; Yang, Li; Lin, Fu-Lin; Li, Zhao-Shen
2012-03-15
No prediction rule is currently available for advanced colorectal neoplasms, defined as invasive cancer, an adenoma of 10 mm or more, a villous adenoma, or an adenoma with high-grade dysplasia, in average-risk Chinese. In this study between 2006 and 2008, a total of 7,541 average-risk Chinese persons aged 40 years or older who had complete colonoscopy were included. The derivation and validation cohorts consisted of 5,229 and 2,312 persons, respectively. A prediction rule was developed from a logistic regression model and then internally and externally validated. The prediction rule comprised 8 variables (age, sex, smoking, diabetes mellitus, green vegetables, pickled food, fried food, and white meat), with scores ranging from 0 to 14. Among the participants with low-risk (≤3) or high-risk (>3) scores in the validation cohort, the risks of advanced neoplasms were 2.6% and 10.0% (P < 0.001), respectively. If colonoscopy was used only for persons with high risk, 80.3% of persons with advanced neoplasms would be detected while the number of colonoscopies would be reduced by 49.2%. The prediction rule had good discrimination (area under the receiver operating characteristic curve = 0.74, 95% confidence interval: 0.70, 0.78) and calibration (P = 0.77) and, thus, provides accurate risk stratification for advanced neoplasms in average-risk Chinese. PMID:22328705
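The risk stratification above dichotomizes the 0-14 score at a cutoff of 3. A minimal sketch of that step follows; the 8 variables and their point values are not reproduced in the abstract, so the scoring itself is out of scope and only the cutoff logic is shown.

```python
def risk_category(score, cutoff=3):
    """Dichotomize a prediction-rule score: <= cutoff is low risk,
    > cutoff is high risk (cutoff 3 follows the abstract; scores
    range from 0 to 14)."""
    if not 0 <= score <= 14:
        raise ValueError("score must be between 0 and 14")
    return "high" if score > cutoff else "low"

print(risk_category(2))  # low
print(risk_category(7))  # high
```

Under the reported validation figures, restricting colonoscopy to the "high" stratum would detect about 80% of advanced neoplasms while roughly halving the number of colonoscopies.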
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…
Ishida, Hideshi
2014-06-15
In this study, a family of local quantities defined on each partition, and their averages over a macroscopically small region (a site), are defined on a multibaker chain system. For the averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to obtain the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its independence of the partitioning. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.
NASA Astrophysics Data System (ADS)
Berthon, Lucie; Biancamaria, Sylvain; Goutal, Nicole; Ricci, Sophie; Durand, Michael
2014-05-01
The future NASA-CNES-CSA Surface Water and Ocean Topography (SWOT) satellite mission will be launched in 2020 and will deliver maps of water surface elevation, slope and extent with an unprecedented resolution of 100 m. A river discharge algorithm was proposed by Durand et al. 2013, based on Manning's equation, to estimate reach-averaged discharge from SWOT data. In the present study, this algorithm was applied to a 50-km reach of the Garonne River between Tonneins and La Reole, with an average slope of 2.8 m per 10,000 m and an average width of 180 m. The dynamics of this reach are satisfactorily represented by the 1D model MASCARET and validated against in-situ water level observations at Marmande. Major assumptions of steady and uniform flow underlie the choice of Manning's equation. Here, we aim at highlighting the limits of validity of these assumptions for the Garonne River during a typical flood event in order to assess the applicability of the reach-averaged discharge algorithm. Manning-estimated and MASCARET discharges are compared for unsteady and steady flow over different reach-averaging lengths (100 m to 10 km). It was shown that the Manning equation increasingly over-estimates the MASCARET discharge as the reach-averaging length increases, and that this overestimate is due to the effect of the sub-reach parameter covariances. In order to further explain these results, the comparison was repeated for a simplified case study with a parametric bathymetry described either by a flat bottom, a constant slope, or local slope variations.
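The Manning-based discharge estimate referred to above can be sketched as follows. The slope and width are taken from the abstract; the roughness coefficient n and the flow depth are hypothetical, and a wide rectangular channel is assumed so the hydraulic radius is approximately the flow depth.

```python
import math

def manning_discharge(n, width, depth, slope):
    """Reach-averaged discharge from Manning's equation,
    Q = (1/n) * A * R**(2/3) * sqrt(S),
    assuming a wide rectangular channel so R ~ depth."""
    area = width * depth            # cross-sectional flow area (m^2)
    radius = depth                  # wide-channel approximation for R (m)
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

# Slope 2.8 m per 10,000 m and width 180 m from the abstract;
# n = 0.030 and depth = 3 m are illustrative values only.
q = manning_discharge(n=0.030, width=180.0, depth=3.0, slope=2.8e-4)   # m^3/s
```

Because the equation assumes steady, uniform flow, applying it over long averaging reaches during a flood is exactly where the abstract finds it breaks down.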
48 CFR 252.215-7002 - Cost estimating system requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Contractor shall— (i) Comply with its disclosed estimating system; and (ii) Disclose significant changes to... detection and timely correction of errors. (viii) Protect against cost duplication and omissions....
Nelms, David L.; Messinger, Terence; McCoy, Kurt J.
2015-01-01
As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
Nelms, David L.; Messinger, Terence; McCoy, Kurt J.
2015-07-14
As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
Chola, Lumbwe; Robberstad, Bjarne
2009-01-01
Background Millions of children die every year in developing countries, from preventable diseases such as pneumonia and diarrhoea, owing to low levels of investment in child health. Investment efforts are hampered by a general lack of adequate information that is necessary for priority setting in this sector. This paper measures the health system costs of providing inpatient and outpatient services, and also the costs associated with treating pneumonia and diarrhoea in under-five children at a health centre in Zambia. Methods Annual economic and financial cost data were collected in 2005-2006. Data were summarized in a Microsoft Excel spreadsheet to obtain total department costs and average disease treatment costs. Results The total annual cost of operating the health centre was US$1,731,661, of which US$1,284,306 and US$447,355 were patient care and overhead department costs, respectively. The average cost of providing out-patient services was US$3 per visit, while the cost of in-patient treatment was US$18 per bed day. The cost of providing dental services was highest at US$20 per visit, and the cost of VCT services was lowest at US$1 per visit. The cost per out-patient visit for under-five pneumonia was US$48, while the cost per bed day was US$215. The cost per outpatient visit attributed to under-five diarrhoea was US$26, and the cost per bed day was US$78. Conclusion In the face of insufficient data, a cost analysis exercise is a difficult but feasible undertaking. The study findings are useful and applicable in similar settings, and can be used in cost-effectiveness analyses of health interventions. PMID:19845966
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.
2015-10-15
Areal-averaged albedos are particularly difficult to measure in coastal regions because the surface is not homogeneous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site on Graciosa Island, Azores, supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made over a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina
2015-10-01
Areal-averaged albedos are particularly difficult to measure in coastal regions because the surface is not homogeneous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site on Graciosa Island, Azores, supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made over a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
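The composite-based weighted-average approach described above can be sketched in a few lines: the areal albedo is the sum of per-surface-type albedos weighted by their area fractions. The fractions and per-type albedos below are hypothetical placeholders, not values from the study.

```python
# Area fractions of major surface types around a coastal site (hypothetical;
# they must sum to 1) and their spectral albedos at a single wavelength.
fractions = {"water": 0.6, "vegetation": 0.3, "rock": 0.1}
albedo_500nm = {"water": 0.06, "vegetation": 0.12, "rock": 0.25}

# Areal-averaged albedo as the area-weighted mean over surface types.
areal_albedo = sum(f * albedo_500nm[s] for s, f in fractions.items())
```

Repeating this per wavelength yields a composite spectral albedo that can be compared against the MFRSR retrieval and MODIS white-sky product.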
Buxton, H.T.
1985-01-01
Base flows of the 29 major streams in southeast Nassau and southwest Suffolk Counties, New York, were statistically analyzed to discern the correlation among flows of adjacent streams. Concurrent base-flow data from a partial-record and a nearby continuous-record station were related; the data were from 1968-75, a period near hydrologic equilibrium on Long Island. The average base flow at each partial-record station was estimated from a regression equation and the average measured base flow for the period at the continuous-record stations. Regression analyses are presented for the 20 streams with partial-record stations. Average base flow of the nine streams with a continuous record totaled 90 cu ft/sec; the predicted average base flow for the 20 streams with a partial record was 73 cu ft/sec (95% confidence interval: 63 to 84 cu ft/sec). Results indicate that this method provides reliable estimates of average low flow for streams such as those on Long Island, which consist mostly of base flow and are geomorphically similar. (USGS)
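The record-extension technique above can be sketched as a simple linear regression: relate concurrent base flows at the partial-record station to those at the index (continuous-record) station, then apply the fitted line to the index station's long-term average. All flow values here are hypothetical, in cubic feet per second.

```python
import numpy as np

# Concurrent base-flow measurements (hypothetical):
continuous = np.array([8.0, 10.0, 12.0, 15.0, 18.0])  # index-station flows
partial = np.array([5.1, 6.2, 7.4, 9.0, 10.9])        # partial-record station

# Fit partial-record flow as a linear function of index-station flow.
slope, intercept = np.polyfit(continuous, partial, 1)

# Apply the regression at the index station's known long-term average flow.
long_term_avg_continuous = 13.0
estimated_avg_partial = slope * long_term_avg_continuous + intercept
```

The regression's prediction interval is what yields a confidence range like the 63 to 84 cu ft/sec quoted above.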
Kamal, Izdihar; Chelliah, Kanaga K.; Mustafa, Nawal
2015-01-01
Objectives: The aim of this research was to examine the average glandular dose (AGD) of radiation among different breast compositions of glandular and adipose tissue with auto-modes of exposure factor selection in digital breast tomosynthesis. Methods: This experimental study was carried out in the National Cancer Society, Kuala Lumpur, Malaysia, between February 2012 and February 2013 using a tomosynthesis digital mammography X-ray machine. The entrance surface air kerma and the half-value layer were determined using a 100H thermoluminescent dosimeter on 50% glandular and 50% adipose tissue (50/50) and 20% glandular and 80% adipose tissue (20/80) commercially available breast phantoms (Computerized Imaging Reference Systems, Inc., Norfolk, Virginia, USA) with auto-time, auto-filter and auto-kilovolt modes. Results: The lowest AGD for the 20/80 phantom with auto-time was 2.28 milliGray (mGy) for two-dimensional (2D) and 2.48 mGy for three-dimensional (3D) images. The lowest AGD for the 50/50 phantom with auto-time was 0.97 mGy for 2D and 1.0 mGy for 3D. Conclusion: The AGD values for both phantoms were lower at higher kilovolt peak settings, and the use of the auto-filter mode was more practical for quick acquisition while limiting the probability of operator error. PMID:26052465
Lee, Eugenia E.; Stewart, Barclay; Zha, Yuanting A.; Groen, Thomas A.; Burkle, Frederick M.; Kushner, Adam L.
2016-01-01
Background: Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. Methods: The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Results: Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People’s Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disasters, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. Conclusion: As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands given critical human and physical resource deficiencies. PMID:27617165
Data concurrency is required for estimating urban heat island intensity.
Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang
2016-01-01
Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps. PMID:26243476
NASA Technical Reports Server (NTRS)
Van Donkelaar, A.; Martin, R. V.; Brauer, M.; Kahn, R.; Levy, R.; Verduzco, C.; Villeneuve, P.
2010-01-01
Exposure to airborne particles can cause acute or chronic respiratory disease and can exacerbate heart disease, some cancers, and other conditions in susceptible populations. Ground stations that monitor fine particulate matter in the air (smaller than 2.5 microns, called PM2.5) are positioned primarily to observe severe pollution events in areas of high population density; coverage is very limited, even in developed countries, and is not well designed to capture long-term, lower-level exposure that is increasingly linked to chronic health effects. In many parts of the developing world, air quality observation is absent entirely. Instruments aboard NASA Earth Observing System satellites, such as the MODerate resolution Imaging Spectroradiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR), monitor aerosols from space, providing once-daily and about once-weekly coverage, respectively. However, these data are only rarely used for health applications, in part because they can retrieve only the total aerosol amount summed over the entire atmospheric column, rather than the near-surface component in the airspace humans actually breathe. In addition, air quality monitoring often includes detailed analysis of particle chemical composition, impossible from space. In this paper, near-surface aerosol concentrations are derived globally from the total-column aerosol amounts retrieved by MODIS and MISR. Here a computer aerosol simulation is used to determine how much of the satellite-retrieved total-column aerosol amount is near the surface. The five-year average (2001-2006) global near-surface aerosol concentration shows that World Health Organization air quality standards are exceeded over parts of central and eastern Asia for nearly half the year.
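The column-to-surface conversion described above amounts to scaling the satellite total-column retrieval by a ratio taken from an aerosol simulation. The sketch below uses hypothetical numbers; the actual method derives this ratio from a chemical-transport model for each location and time.

```python
# Model-derived quantities for one scene (hypothetical values):
model_surface_pm25 = 18.0   # simulated PM2.5 at the surface (ug/m^3)
model_column_aod = 0.30     # simulated total-column aerosol optical depth

# Scaling factor: how much surface concentration per unit column AOD
# the simulation says this scene should have.
eta = model_surface_pm25 / model_column_aod

# Apply it to the satellite retrieval of the same scene.
satellite_aod = 0.25        # MODIS/MISR total-column AOD
estimated_pm25 = eta * satellite_aod   # near-surface estimate (ug/m^3)
```

The quality of the estimate therefore hinges on how well the model captures the aerosol's vertical distribution, which is why the approach is validated against ground monitors where they exist.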
NASA Technical Reports Server (NTRS)
Kong, Maiying; Bhattacharya, Rabi N.; James, Christina; Basu, Abhijit
2003-01-01
Size distributions of chondrules, volcanic fire-fountain or impact glass spherules, or of immiscible globules in silicate melts (e.g., in basaltic mesostasis, agglutinitic glass, impact melt sheets) are imperfectly known because the spherical objects are usually so strongly embedded in the bulk samples that they are nearly impossible to separate. Hence, measurements are confined to two-dimensional sections, e.g. polished thin sections that are commonly examined under reflected-light optical or backscattered-electron microscopy. Three kinds of approaches exist in the geologic literature for estimating the mean real diameter of a population of 3D spheres from 2D observations: (1) a stereological approach with complicated calculations; (2) an empirical approach in which independent 3D size measurements of a population of spheres separated from their parent sample, together with their 2D cross-sectional diameters in thin sections, have produced an array of somewhat contested conversion equations; and (3) measuring pairs of 2D diameters of the upper and lower surfaces of the cross section of each sphere in thin sections using transmitted-light microscopy. We describe an entirely probabilistic approach and propose a simple factor of 4/π (approximately 1.27) to convert the 2D mean size to the 3D mean size.
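The 4/π factor can be checked numerically: for a sphere cut by planes whose offsets from the centre are uniformly distributed, the mean section diameter is (π/4) times the true diameter, so multiplying the observed 2D mean by 4/π recovers the 3D size. A minimal Monte Carlo sketch:

```python
import math
import random

# Slice a sphere of radius R with random parallel planes (offset uniform in
# [-R, R]) and record the resulting circular section diameters.
random.seed(42)
R = 1.0
sections = []
for _ in range(200_000):
    h = random.uniform(-R, R)                        # plane offset from centre
    sections.append(2.0 * math.sqrt(R * R - h * h))  # section diameter

mean_2d = sum(sections) / len(sections)   # should approach (pi/4) * 2R
estimated_3d = (4.0 / math.pi) * mean_2d  # should approach the true 2R
```

This single-sphere calculation is only the kernel of the probabilistic argument; applying it to a population additionally assumes every sphere is equally likely to be cut.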
48 CFR 2452.216-77 - Estimated quantities-requirements contract.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Estimated quantities... Provisions and Clauses 2452.216-77 Estimated quantities—requirements contract. As prescribed in 2416.506-70(c), insert the following provision: Estimated Quantities—Requirements Contract (FEB 2006) In accordance...
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Suzuoki, Yasuo
The fluctuation of the total power output of clustered PV systems is smaller than that of a single PV system because of the time differences in power output fluctuation among PV systems at different locations. This effect, the so-called smoothing effect, must be taken into account properly when the impact of clustered PV systems on the electric power system is assessed. If the average power output of clustered PV systems can be estimated from the power output of a single PV system, this is very useful for such impact assessments. In this study, we propose a simple method to estimate the total power output fluctuation of clustered PV systems. In the proposed method, the smoothing effect is assumed to arise from two factors, i.e. the time difference of overhead clouds passing among PV systems and random changes in the size and/or shape of clouds. The first factor is formulated as a low-pass filter, assuming that the output fluctuation propagates in the wind direction at a constant speed. The second is taken into account by using Fourier transform surrogate data. The parameters in the proposed method were selected so that the estimated fluctuation is similar to the ensemble-average fluctuation of data observed at 5 points used as a training data set. Then, using the selected parameters, the fluctuation property was estimated for other data sets. The results show that the proposed method is useful for estimating the total power output fluctuation of clustered PV systems.
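A Fourier transform surrogate, as used above for the random cloud-shape factor, keeps the amplitude spectrum of the original series but randomizes the phases, producing a new series with the same second-order statistics. A minimal sketch, with a random stand-in for the measured output series:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)       # stand-in for a measured PV output series

# Phase-randomized surrogate: same |FFT|, scrambled phases.
spectrum = np.fft.rfft(x)
phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
phases[0] = 0.0                # keep the DC component real
phases[-1] = 0.0               # even-length input: Nyquist bin must stay real
surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)
```

In the proposed method, a surrogate like this would be combined with the wind-advection low-pass filter to synthesize the fluctuation of PV systems at other locations.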
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh adaptive direct search and real-coded genetic algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
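The brute-force enumeration baseline mentioned above can be illustrated on a simplified problem: enumerate candidate autoregressive orders, fit each by conditional least squares, and pick the order minimizing AIC. This is a stand-in for the full ARMA/Kalman-filter likelihood, which the article handles with MINLP solvers.

```python
import numpy as np

# Synthetic AR(2) test series (hypothetical data).
rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def ar_aic(x, p):
    """AIC of an AR(p) model fitted by conditional least squares."""
    if p == 0:
        resid = x - x.mean()
    else:
        # Lag matrix: column k holds x shifted by (k+1) steps.
        X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
    sigma2 = resid.var()
    return len(resid) * np.log(sigma2) + 2 * (p + 1)

# Brute-force enumeration over orders 0..5.
best_p = min(range(6), key=lambda p: ar_aic(x, p))
```

Enumeration is tractable here because the order space is tiny; for full ARMA(p,q) structures with exact likelihoods it grows quickly, which motivates treating the search as an MINLP.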
Estimated water requirements for the conventional flotation of copper ores
Bleiwas, Donald I.
2012-01-01
This report provides a perspective on the amount of water used by a conventional copper flotation plant. Water is required for many activities at a mine-mill site, including ore production and beneficiation, dust and fire suppression, drinking and sanitation, and minesite reclamation. The water required to operate a flotation plant may outweigh all of the other uses of water at a mine site, and the need to maintain a water balance is critical for the plant to operate efficiently. Process water may be irretrievably lost or not immediately available for reuse in the beneficiation plant because it has been used in the production of backfill slurry from tailings to provide underground mine support; because it has been entrapped in the tailings stored in the tailings storage facility (TSF), evaporated from the TSF, or leaked from pipes and (or) the TSF; and because it has been retained as moisture in the concentrate. Water retained in the interstices of the tailings and the evaporation of water from the surface of the TSF are the two most significant contributors to water loss at a conventional flotation circuit facility.
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
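The non-parametric side of the analysis above can be sketched directly: treat a continuous record as truth, rebuild the accumulation from regular snapshots at each possible phase offset, and summarize the relative errors. The synthetic hourly "rainfall" below is hypothetical; the real study used a multi-year radar data set.

```python
import numpy as np

# Hypothetical hourly rainfall for 30 days; a gamma draw mimics the
# intermittent, skewed character of rain rates.
rng = np.random.default_rng(7)
rain = rng.gamma(shape=0.1, scale=2.0, size=24 * 30)

true_total = rain.sum()
errors = []
for offset in range(12):                  # 12-hourly sampling, every phase
    sampled = rain[offset::12]            # snapshots seen by the "satellite"
    estimate = sampled.mean() * rain.size # scale snapshots to the full period
    errors.append((estimate - true_total) / true_total)

# RMS relative error over all sampling phases: one non-parametric measure
# of the sampling-related uncertainty for this domain and interval.
rms_error = float(np.sqrt(np.mean(np.square(errors))))
```

Repeating this over different space domains, accumulation periods, and sampling intervals is what lets the uncertainty be collapsed into a scaling law.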
EURRECA-Estimating selenium requirements for deriving dietary reference values.
Hurst, Rachel; Collings, Rachel; Harvey, Linda J; King, Maria; Hooper, Lee; Bouwman, Jildau; Gurinovic, Mirjana; Fairweather-Tait, Susan J
2013-01-01
Current reference values for selenium, an essential micronutrient, are based on the intake of selenium that is required to achieve maximal glutathione peroxidase activity in plasma or erythrocytes. In order to assess the evidence of relevance to setting dietary reference values for selenium, the EURRECA Network of Excellence focused on systematic searches, review, and evaluation of (i) selenium status biomarkers and evidence for relationships between intake and status biomarkers, (ii) selenium and health (including the effect of intake and/or status biomarkers on cancer risk, immune function, HIV, cognition, and fertility), (iii) bioavailability of selenium from the diet, and (iv) impact of genotype/single nucleotide polymorphisms on status or health outcomes associated with selenium. The main research outputs for selenium and future research priorities are discussed further in this review. PMID:23952089
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The largest ground-water pumpage in a single HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) accounted for the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth
EURRECA-Estimating iodine requirements for deriving dietary reference values.
Ristić-Medić, Danijela; Novaković, Romana; Glibetić, Maria; Gurinović, Mirjana
2013-01-01
Iodine is an essential component of thyroid hormones, and current recommendations for intake are based on urinary iodine excretion, assessment of thyroid size, thyroidal iodine accumulation and turnover, radioactive iodine uptake, balance studies, and epidemiological studies. Dietary iodine is rapidly and almost completely absorbed. The prevalence of inadequate iodine intake is high: 29% of the world's population lives in iodine-deficient areas and 44% of Europe remains mildly iodine deficient. To assess current data and update evidence for setting dietary recommendations for iodine, the EURRECA Network of Excellence has undertaken systematic review and evaluation of (i) the usefulness of iodine status biomarkers (ii) the relationship between iodine status biomarkers and dietary iodine intake, and (iii) the relationship between iodine intake and health outcomes (endemic goiter, hypothyroidism, and cognitive function). This review summarizes the main research outputs: the key findings of the literature review, results of the meta-analyses, and discussion of the main conclusions. Currently, data for relevant intake-status-health relationships for iodine are limited, particularly for population groups such as children under two years, pregnant women, and the elderly. The EURRECA Network developed best practice guidelines for the identification of pertinent iodine studies based on a systematic review approach. This approach aimed to identify comparable data, suitable for meta-analysis, for different countries and across all age ranges. When new data are available, the EURRECA Network best practice guidelines will provide a better understanding of iodine requirements for different health outcomes which could be used to set evidence-based dietary iodine recommendations for optimal health. PMID:23952087
19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.
Code of Federal Regulations, 2014 CFR
2014-04-01
... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...
19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.
Code of Federal Regulations, 2013 CFR
2013-04-01
... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...
19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.
Code of Federal Regulations, 2010 CFR
2010-04-01
... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...
19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.
Code of Federal Regulations, 2011 CFR
2011-04-01
... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...
19 CFR 141.102 - When deposit of estimated duties, estimated taxes, or both not required.
Code of Federal Regulations, 2012 CFR
2012-04-01
... regulations of the Bureau of Alcohol, Tobacco and Firearms (27 CFR part 251). (c) Deferral of payment of taxes on alcoholic beverages. An importer may pay on a semimonthly basis the estimated internal revenue taxes on all the alcoholic beverages entered or withdrawn for consumption during that period, under...
NASA Astrophysics Data System (ADS)
Xi, Caiping; Zhang, Shunning; Xiong, Gang; Zhao, Huichang
2016-07-01
Multifractal detrended fluctuation analysis (MFDFA) and the multifractal detrended moving average (MFDMA) algorithm have been established as two important methods for estimating the multifractal spectrum of one-dimensional random fractal signals, and both have been generalized to two-dimensional and higher-dimensional fractal signals. This paper gives a brief introduction to the two-dimensional multifractal detrended fluctuation analysis (2D-MFDFA) and two-dimensional multifractal detrended moving average (2D-MFDMA) algorithms, and a detailed description of their application to two-dimensional fractal signal processing. By applying 2D-MFDFA and 2D-MFDMA to series generated from the two-dimensional multiplicative cascading process, we systematically compare the two algorithms, for the first time, in six respects: the similarities and differences of the algorithm models, statistical accuracy, sensitivity to sample size, selection of the scaling range, choice of the q-orders, and computational cost. The results provide a valuable reference on how to choose between 2D-MFDFA and 2D-MFDMA, and how to set the parameters of the two algorithms when dealing with specific signals in practical applications.
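The one-dimensional algorithm that 2D-MFDFA generalizes can be sketched compactly: build the profile (cumulative sum), detrend it polynomially within segments of each scale, and average the segment fluctuations at each order q. The following is an illustrative 1D MFDFA sketch, not the paper's 2D code; the function name and the first-order detrending choice are our own assumptions.

```python
import numpy as np

def mfdfa_1d(x, scales, q_values):
    """Minimal 1D MFDFA sketch: returns the fluctuation function F_q(s)
    for each order q and scale s, using first-order polynomial detrending
    in non-overlapping segments."""
    profile = np.cumsum(x - np.mean(x))          # step 1: the profile
    Fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = np.empty(n_seg)                     # squared fluctuation per segment
        t = np.arange(s)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            coeffs = np.polyfit(t, seg, 1)       # local linear trend
            f2[v] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        for i, q in enumerate(q_values):
            if q == 0:                           # q = 0 uses a logarithmic average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return Fq
```

The generalized Hurst exponent h(q) is the slope of log F_q(s) versus log s; for uncorrelated white noise h(2) should come out near 0.5, which is a quick sanity check on any implementation.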
Endo, T.; Sato, S.; Yamamoto, A.
2012-07-01
Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the ¹³⁴Cs/¹³⁷Cs ratio method for measured radioactivities of ¹³⁴Cs and ¹³⁷Cs in contaminated soils within 100 km of the Fukushima Dai-ichi nuclear power plants. The measured ¹³⁴Cs/¹³⁷Cs ratio from the contaminated soil is 0.996 ± 0.07 as of March 11, 2011. Based on the ¹³⁴Cs/¹³⁷Cs ratio method, the estimated burnup of the damaged fuels is approximately 17.2 ± 1.5 GWd/tHM. It is noted that various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same ¹³⁴Cs/¹³⁷Cs ratio when run with the same evaluated nuclear data library (ENDF/B-VII.0). The void fraction effect in the depletion calculation has a major impact on the ¹³⁴Cs/¹³⁷Cs ratio compared with the differences between JENDL-4.0 and ENDF/B-VII.0. (authors)
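The ratio method hinges on decay-correcting measured activities to a common reference date (here, March 11, 2011), since ¹³⁴Cs (half-life about 2.065 y) decays much faster than ¹³⁷Cs (about 30.17 y). A minimal sketch of that correction step; the half-life values are standard nuclear-data figures we assume here, and the function name is ours:

```python
import math

T_HALF_CS134 = 2.065   # years, 134Cs half-life (assumed nuclear-data value)
T_HALF_CS137 = 30.17   # years, 137Cs half-life (assumed nuclear-data value)

def ratio_at_reference(a134, a137, dt_years):
    """Decay-correct a measured 134Cs/137Cs activity ratio back to a
    reference date dt_years *before* the measurement date."""
    lam134 = math.log(2) / T_HALF_CS134
    lam137 = math.log(2) / T_HALF_CS137
    # Activities at the earlier reference date were higher by exp(+lambda * dt).
    return (a134 * math.exp(lam134 * dt_years)) / (a137 * math.exp(lam137 * dt_years))
```

Because the ¹³⁴Cs decay constant is larger, the corrected ratio always exceeds the measured one for a positive look-back time; the burnup is then read off a code-computed curve of ¹³⁴Cs/¹³⁷Cs versus burnup.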
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation.
Visual Estimation of Spatial Requirements for Locomotion in Novice Wheelchair Users
ERIC Educational Resources Information Center
Higuchi, Takahiro; Takada, Hajime; Matsuura, Yoshifusa; Imanaka, Kuniyasu
2004-01-01
Locomotion using a wheelchair requires a wider space than does walking. Two experiments were conducted to test the ability of nonhandicapped adults to estimate the spatial requirements for wheelchair use. Participants judged from a distance whether doorlike apertures of various widths were passable or not passable. Experiment 1 showed that…
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do so, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each), and a set of sensors placed at the centre of each field at several heights as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion-sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
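At its core, the multi-source problem is linear: a forward dispersion run gives, for each sensor, the concentration contributed per unit emission of each field, and the nine source strengths then follow from the measured time-averaged concentrations by least squares. A minimal sketch under that assumption (the matrix set-up and function name are ours; the study itself runs WindTrax and FIDES in backward mode):

```python
import numpy as np

def infer_sources(D, c, c_bg=0.0):
    """Multi-source inversion sketch: D is an (n_sensors x n_sources)
    dispersion matrix of concentration per unit emission (from forward
    model runs), c the measured time-averaged concentrations, c_bg an
    optional background. Returns least-squares source strengths."""
    s, *_ = np.linalg.lstsq(D, np.asarray(c, float) - c_bg, rcond=None)
    return s
```

With more sensors than sources (12 sensors, 9 fields in the design above) and a well-conditioned D, the system is overdetermined and the noiseless strengths are recovered exactly; a non-negative solver would be the natural refinement since emissions cannot be negative.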
Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The ...
ERIC Educational Resources Information Center
McGrath, William E.
To help determine whether a new journal of library research is needed, three estimates of available research are compared with an average-sized journal in the library field. The average number of pages and articles per year (285 and 36) in sixteen primarily American library journals that publish at least an occasional research article were…
Ji, Xing-jie; Cheng, Lin; Fang, Wen-song
2015-09-01
Based on the analysis of water requirement and water deficit during the development stages of winter wheat in the recent 30 years (1981-2010) in Henan Province, the effective precipitation was calculated using the U.S. Department of Agriculture Soil Conservation method, and the water requirement (ETc) was estimated by using the FAO Penman-Monteith equation and the crop coefficient method recommended by FAO. Combined with climate change scenarios A2 (emphasis on economic development) and B2 (emphasis on sustainable development) of the Special Report on Emissions Scenarios (SRES), the spatial and temporal characteristics of the impacts of future climate change on effective precipitation, water requirement and water deficit of winter wheat were estimated. The climatic impact factors of ETc and WD were also analyzed. The results showed that under the A2 and B2 scenarios, there would be a significant increase in the anomaly percentages of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period compared with the average value from 1981 to 2010. Effective precipitation increased the most in the 2030s under the A2 and B2 scenarios, by 33.5% and 39.2%, respectively. Water requirement increased the most in the 2010s under the A2 and B2 scenarios, by 22.5% and 17.5%, respectively, and showed a significant downward trend with time. Water deficit increased the most under the A2 scenario in the 2010s, by 23.6%, and under the B2 scenario in the 2020s, by 13.0%. Partial correlation analysis indicated that solar radiation was the main cause of the future variation of ETc and WD under the A2 and B2 scenarios. The spatial distributions of effective precipitation, water requirement and water deficit of winter wheat during the whole growing period were heterogeneous because of differences in geographical and climatic environments. A possible water resource deficiency may exist in Henan Province in the future. PMID:26785550
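The FAO crop coefficient method used above reduces to simple arithmetic once the reference evapotranspiration ET0 is known: ETc = Kc x ET0, and the water deficit is the requirement not met by effective precipitation. A minimal sketch (the zero floor on the deficit is our simplification; function names are ours):

```python
def crop_water_requirement(et0, kc):
    """FAO single crop coefficient approach: ETc = Kc * ET0 (e.g. mm/day)."""
    return kc * et0

def water_deficit(etc, effective_precip):
    """Water deficit = crop water requirement minus effective precipitation,
    floored at zero (no deficit when rain covers the requirement)."""
    return max(etc - effective_precip, 0.0)
```

For example, with a reference ET of 5 mm/day and a mid-season Kc of 1.1, the crop water requirement is 5.5 mm/day; 2 mm/day of effective precipitation would leave a 3.5 mm/day deficit to be met by irrigation.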
Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi
2012-08-01
This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals over 24 h in 20 storm events, as well as weekly, in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solids (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variation (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The difference in the number of storm events required for precise load estimates between dissolved and particulate nutrients was attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling required fewer storm events than simulation based on random sampling and showed that only three storm events were needed to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design that considers discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
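The bootstrap procedure can be sketched as: resample storm events with replacement, fit a ∑L-∑Q regression for each replicate, scale up to the annual discharge, and take the CV across replicate load estimates. An illustrative sketch assuming a zero-intercept linear ∑L-∑Q relation (the paper's actual regression form may differ):

```python
import numpy as np

def bootstrap_load_cv(event_q, event_l, annual_q, n_boot=100, seed=0):
    """Bootstrap sketch of the CV of an annual-load estimate. Each
    replicate resamples storm events with replacement, fits a
    zero-intercept regression of cumulative load on cumulative
    discharge, and scales the slope up to the annual discharge."""
    rng = np.random.default_rng(seed)
    event_q = np.asarray(event_q, float)
    event_l = np.asarray(event_l, float)
    loads = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(event_q), len(event_q))
        q, l = np.cumsum(event_q[idx]), np.cumsum(event_l[idx])
        slope = np.sum(q * l) / np.sum(q * q)   # least squares through origin
        loads.append(slope * annual_q)
    loads = np.asarray(loads)
    return loads.std(ddof=1) / loads.mean()
```

Running this with event subsets of increasing size reproduces the paper's question in miniature: how many events are needed before the CV of the annual estimate drops below a chosen threshold (20% in the study).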
ERIC Educational Resources Information Center
United Nations Industrial Development Organization, Vienna (Austria).
The need to develop managerial and technical personnel in the cement, fertilizer, pulp and paper, sugar, leather and shoe, glass, and metal processing industries of various nations was studied, with emphasis on necessary steps in developing nations to relate occupational requirements to technology, processes, and scale of output. Estimates were…
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Mehri, Mehran; Bagherzadeh Kasmani, Farzad; Asghari-Moghadam, Morteza
2015-08-01
A dose-response assay was conducted using broken-line regressions to estimate the lysine (Lys) requirements of quail chicks from 21 to 35 d of age. A basal diet was formulated to be adequate in all nutrients other than Lys. Incremental levels of L-Lys.HCl were added to the basal diet at the expense of a mix of cornstarch, NaHCO3, and NaCl to create 6 experimental diets containing 0.84 to 1.59% Lys. Feed intake (FI), weight gain (WG), and feed conversion ratio (FCR) responded quadratically to incremental levels of Lys (P < 0.0001). Using the linear broken-line (LBL) model, the estimated Lys requirements for WG during the fourth and fifth wk of age were 1.25 and 1.23% of diet, respectively. The corresponding values for FCR were estimated at 1.23 and 1.26% of diet, respectively. Fitting the quadratic broken-line (QBL) model, the estimated Lys requirements for WG during the fourth and fifth wk of age were both 1.34% of diet. The corresponding values for FCR were estimated at 1.35 and 1.36% of diet, respectively. This study showed that, using the QBL model as a promising way to estimate dietary amino acid requirements, the optimal Lys level for performance of growing Japanese quail at the late stage of production might be 1.36% of diet, which is 105% of the NRC recommendation. PMID:26069252
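Broken-line models fit a response that rises to a plateau, with the breakpoint read off as the nutrient requirement. A sketch of one common LBL/QBL parameterization fitted by nonlinear least squares (the exact parameterization used in the paper may differ; parameter names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def lbl(x, L, U, R):
    """Linear broken-line: plateau L for x >= R, linear rise below R."""
    return L - U * np.maximum(R - x, 0.0)

def qbl(x, L, U, R):
    """Quadratic broken-line: plateau L for x >= R, quadratic rise below R."""
    return L - U * np.maximum(R - x, 0.0) ** 2

def fit_requirement(x, y, model, p0):
    """Fit a broken-line model; the breakpoint R is the estimated requirement."""
    params, _ = curve_fit(model, np.asarray(x, float), np.asarray(y, float), p0=p0)
    return params  # (L, U, R)
```

The QBL curve approaches its plateau tangentially, which is why it typically places the breakpoint (and hence the estimated requirement) higher than the LBL fit on the same data, consistent with the 1.34-1.36% versus 1.23-1.26% contrast reported above.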
Davey, Rachel; de Castella, F. Robert
2015-01-01
Abstract Objectives: To estimate: 1) daily energy deficit required to reduce the weight of overweight children to within normal range; 2) time required to reach normal weight for a proposed achievable (small) target energy deficit of 0.42 MJ/day; 3) impact that such an effect may have on prevalence of childhood overweight. Methods: Body mass index and fitness were measured in 31,424 Australian school children aged between 4.5 and 15 years. The daily energy deficit required to reduce weight to within normal range for the 7,747 (24.7%) overweight children was estimated. Further, for a proposed achievable target energy deficit of 0.42 MJ/day, the time required to reach normal weight was estimated. Results: About 18% of children were overweight and 6.6% obese; 69% were either sedentary or light active. If an energy deficit of 0.42 MJ/day could be achieved, 60% of overweight children would reach normal weight and the current prevalence of overweight of 24.7% (24.2%–25.1%) would be reduced to 9.2% (8.9%–9.6%) within about 15 months. Conclusions: The prevalence of overweight in Australian school children could be reduced significantly within one year if even a small daily energy deficit could be achieved by children currently classified as overweight or obese. PMID:26561382
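The core arithmetic behind estimates like these is straightforward: time to reach normal weight is the excess body energy divided by the daily deficit. A sketch of that static calculation (the energy content of excess tissue, about 32.2 MJ/kg, is our assumed value, and the calculation ignores growth and metabolic adaptation, which the paper's estimates account for):

```python
ENERGY_PER_KG = 32.2  # MJ per kg of excess body tissue (assumed value)

def days_to_normal_weight(excess_kg, deficit_mj_per_day=0.42):
    """Days needed to lose a given excess mass at a constant daily energy
    deficit, under a static energy-balance assumption."""
    return excess_kg * ENERGY_PER_KG / deficit_mj_per_day
```

For a child 2 kg above the normal-weight threshold, the proposed 0.42 MJ/day deficit gives roughly 150 days under these assumptions; in growing children the effective time is shorter, since the normal-weight threshold itself rises with height.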
Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.
1986-10-01
There are two categories of waste considered in this report. The first is spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly, including end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components that the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per average metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, with most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste. The relatively large contribution of waste hardware from BWR fuel consists mostly of non-fuel-bearing components, primarily the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.
NASA Astrophysics Data System (ADS)
Li, Qiang; Xing, Zisheng; Danielescu, Serban; Li, Sheng; Jiang, Yefang; Meng, Fan-Rui
2014-04-01
Estimation of baseflow and groundwater recharge rates is important for hydrological analysis and modelling. A new approach that combines a recursive digital filter (RDF) model with the conductivity mass balance (CMB) method is considered reliable for baseflow separation, because the combined method takes advantage of the reduced data requirement of the RDF method and the reliability of the CMB method. However, it is not clear what the minimum data requirements are for producing acceptable estimates of the RDF model parameters. In this study, a 19-year record of stream discharge and water conductivity collected from the Black Brook Watershed (BBW), NB, Canada was used to test the combined baseflow separation method and assess the variability of the model parameters over seasons. The data requirements and potential bias in the estimated baseflow index (BFI) were evaluated using conductivity data for different seasons and/or resampled data segments of various sampling durations. Results indicated that data collected during the ground-frozen season are more suitable for estimating baseflow conductivity (Cbf) and data from the snowmelt period are more suitable for estimating runoff conductivity (Cro). Relative errors of baseflow estimation were inversely proportional to the number of conductivity records. A minimum of six months of discharge and conductivity data is required to obtain reliable parameters for the current method with acceptable errors. We further found that the average annual recharge rate for the BBW was 322 mm over the past twenty years.
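The CMB partitioning at the heart of the combined method is a two-component mixing equation: the baseflow fraction of streamflow is (C - Cro) / (Cbf - Cro), where C is the measured stream conductivity and Cbf, Cro are the baseflow and runoff end-members estimated from the seasonal data described above. A minimal sketch (the clamping to the physical range [0, 1] is our own safeguard):

```python
def cmb_baseflow(q, c, c_bf, c_ro):
    """Conductivity mass balance: partition total streamflow q into its
    baseflow component using measured stream conductivity c and the
    baseflow (c_bf) and runoff (c_ro) end-member conductivities."""
    frac = (c - c_ro) / (c_bf - c_ro)
    return q * min(max(frac, 0.0), 1.0)  # clamp to the physical range
```

For example, with end-members of 400 (baseflow) and 100 (runoff) uS/cm, a stream conductivity of 300 uS/cm attributes two-thirds of the flow to baseflow.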
Establishing a method for estimating crop water requirements using the SEBAL method in Cyprus
NASA Astrophysics Data System (ADS)
Papadavid, G.; Toulios, L.; Hadjimitsis, D.; Kountios, G.
2014-08-01
Water allocation to crops has always been of great importance in the agricultural process. In this context, and under current conditions in which Cyprus has faced a severe drought over the last five years, the purpose of this study is to estimate crop water requirements in order to support irrigation management and monitor irrigation on a systematic basis for Cyprus using remote sensing techniques. The use of satellite images supported by ground measurements has provided quite accurate results. The intended purpose of this paper is to estimate the evapotranspiration (ET) of specific crops, which is the basis for irrigation scheduling, and to establish a procedure for monitoring and managing irrigation water over Cyprus, using remotely sensed data from Landsat TM/ETM+ and a sound methodology used worldwide, the Surface Energy Balance Algorithm for Land (SEBAL). The methodology set out in this paper refers to COST action ES1106 (Agri-Wat) for determining crop water requirements as part of the water footprint and virtual water trade.
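SEBAL estimates ET as the residual of the surface energy balance: the latent heat flux is LE = Rn - G - H, where net radiation Rn, soil heat flux G and sensible heat flux H are derived from the satellite imagery. A minimal sketch of that residual step (the latent-heat constant and unit conversion are standard physics; the function name and the mm/h output are our choices):

```python
LAMBDA_MJ_PER_KG = 2.45  # approximate latent heat of vaporization of water

def sebal_et_instant(rn, g, h):
    """SEBAL energy-balance residual: latent heat flux LE = Rn - G - H
    (all fluxes in W/m2), converted to an instantaneous ET rate in mm/h
    (1 kg of water over 1 m2 equals 1 mm of depth)."""
    le = rn - g - h                              # latent heat flux, W/m2 = J/s/m2
    return le * 3600.0 / (LAMBDA_MJ_PER_KG * 1e6)  # J/h/m2 over J/kg -> kg/m2/h = mm/h
```

For example, Rn = 500, G = 50 and H = 150 W/m2 leave LE = 300 W/m2, an instantaneous ET of roughly 0.44 mm/h; SEBAL then extrapolates such instantaneous values to daily ET via the evaporative fraction.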
Noren, S.R.; Udevitz, M.S.; Jay, C.V.
2012-01-01
Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns, which could affect their energy requirements. We developed a bioenergetics model to estimate the caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0-12 yr old (65-810 kg) ranged from 16,359 to 68,960 kcal d⁻¹ (74-257 kcal d⁻¹ kg⁻¹) for years with readily available sea ice, for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3,200-5,960 clams per day, equivalent to 7-8% and 14-9% of body mass per day for 5-12 and 2-4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6-7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population's response to environmental change.
Zhan, J X; Ikehata, M; Mayuzumi, M; Koizumi, E; Kawaguchi, Y; Hashimoto, T
2013-01-01
A feedforward-feedback aeration control strategy based on online oxygen requirement (OR) estimation is proposed for oxidation ditch (OD) processes, and it is further developed for intermittently aerated OD processes, which are the most popular type in Japan. For calculating OR, concentrations of influent biochemical oxygen demand (BOD) and total Kjeldahl nitrogen (TKN) are estimated online from measurements of suspended solids (SS); sometimes TKN is estimated from NH4-N. Mixed liquor suspended solids (MLSS) and temperature are used to estimate the oxygen required for endogenous respiration. A straightforward parameter named the aeration coefficient, Ka, is introduced as the only parameter that can be tuned, automatically by feedback control or manually by the operators. Simulation with an activated sludge model was performed in comparison with fixed-interval aeration, and satisfactory results were obtained for the OR control strategy. The OR control strategy has been implemented at seven full-scale OD plants, and improvements in nitrogen removal were obtained at all of them. Among them, the results obtained at the Yumoto wastewater treatment plant, where continuous aeration had been applied previously, are presented. After implementing intermittent OR control, the total nitrogen concentration was reduced from more than 5 mg/L to under 2 mg/L, and electricity consumption was reduced by 61.2% for aeration, or 21.5% for the whole plant. PMID:23823542
Shekarrizfard, Maryam; Faghih-Imani, Ahmadreza; Hatzopoulou, Marianne
2016-05-01
Air pollution in metropolitan areas is mainly caused by traffic emissions. This study presents the development of a model chain consisting of a transportation model, an emissions model, and an atmospheric dispersion model, applied to dynamically evaluate individuals' exposure to air pollution by intersecting the daily trajectories of individuals with hourly spatial variations of air pollution across the study domain. This dynamic approach is implemented in Montreal, Canada to highlight the advantages of the method for exposure analysis. The results for nitrogen dioxide (NO2), a marker of traffic-related air pollution, reveal significant differences when relying on spatially and temporally resolved concentrations combined with individuals' daily trajectories compared to a long-term average NO2 concentration at the home location. We observe that NO2 exposures based on trips and activity locations visited throughout the day were often more elevated than daily NO2 concentrations at the home location. The percentage of individuals with a lower 24-hour daily average at home than their 24-hour mobility exposure is 89.6%, of whom 31% increase their exposure by more than 10% by leaving home. On average, individuals increased their exposure by 23-44% while commuting and conducting activities out of home (compared to the daily concentration at home), regardless of air quality at their home location. We conclude that our proposed dynamic modelling approach significantly improves on traditional methods that rely on a long-term average concentration at the home location, and we shed light on the importance of using individual daily trajectories to understand exposure.
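The mobility-based exposure metric is essentially a time-weighted average of concentrations along the daily trajectory, compared against the home-based value. A minimal sketch (representing the trajectory as (hours, concentration) segments is our simplification of the hourly model output):

```python
def mobility_exposure(segments):
    """Time-weighted average exposure over a daily trajectory.
    segments: iterable of (hours_spent, concentration) pairs covering 24 h."""
    total_h = sum(h for h, _ in segments)
    return sum(h * c for h, c in segments) / total_h
```

For example, 14 h at home at 15 ppb, a 2 h commute at 40 ppb, and 8 h at work at 25 ppb give a 24-hour mobility exposure of about 20.4 ppb, roughly a third higher than the 15 ppb a home-only assessment would assign, which is the kind of gap the study quantifies at population scale.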
NASA Astrophysics Data System (ADS)
Didion, Markus; Blujdea, Viorel; Grassi, Giacomo; Hernández, Laura; Jandl, Robert; Kriiska, Kaie; Lehtonen, Aleksi; Saint-André, Laurent
2016-04-01
Globally, soils are the largest terrestrial store of carbon (C), and small changes may contribute significantly to the global C balance. Due to the potential implications for climate change, accurate and consistent estimates of C fluxes at large scales are important, as recognized, for example, in international agreements such as the United Nations Framework Convention on Climate Change (UNFCCC). Under the UNFCCC, and also under the Kyoto Protocol, C balances must be reported annually. Most measurement-based soil inventories are currently not able to detect annual changes in soil C stocks consistently across space and representatively at national scales. The use of models to obtain relevant estimates is considered an appropriate alternative under the UNFCCC and the Kyoto Protocol. Several soil carbon models have been developed, but few are suitable for consistent application across larger scales. Consistency is often limited by the lack of input data for models, which can result in biased estimates; thus, the reporting criterion of accuracy (i.e., emission and removal estimates are systematically neither over nor under true emissions or removals) may not be met. Based on a qualitative assessment of the ability to meet the criteria established for GHG reporting under the UNFCCC, including accuracy, consistency, comparability, completeness, and transparency, we identified the suitability of commonly used simulation models for estimating annual C stock changes in mineral soil in European forests. Among six discussed simulation models, we found a clear trend toward models that provide quantitatively precise site-specific estimates, which may lead to biased estimates across space. To meet the reporting needs of national GHG inventories, we conclude that there is a need for models producing qualitatively realistic results in a transparent and comparable manner. Based on the application of one model along a gradient from Boreal forests in Finland to Mediterranean forests
NASA Technical Reports Server (NTRS)
Bounoua, L.; Imhoff, M.L.; Franks, S.
2008-01-01
At the study site, for the month of July, spray irrigation resulted in an irrigation amount of about 1.4 mm per occurrence, with an average frequency of occurrence of 24.6 hours. The simulated total monthly irrigation for July was 34.85 mm. In contrast, drip irrigation resulted in less frequent irrigation events, with an average water requirement about 57% less than that simulated in the spray irrigation case. The efficiency of the drip irrigation method rests on its reduction of canopy interception loss compared to the spray irrigation method. When compared to a country-wide average estimate of irrigation water use, our numbers are quite low. We would have to revise the reported country-level estimates downward to 17% or less
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
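The quoted probabilities follow from the Poisson model: with a mean rate lam of eruptions per decade, the chance of at least one event is 1 - e^(-lam). The per-decade rates below are back-of-envelope choices consistent with the text (6.5 for VEI>=4 matches the moving-average estimate), not values taken from the paper.

```python
import math

# P(at least one eruption in a decade) under a Poisson process with mean
# rate lam events per decade: 1 - exp(-lam).
def p_at_least_one(lam):
    return 1.0 - math.exp(-lam)

print(p_at_least_one(6.5))   # VEI >= 4 at ~6.5/decade -> >99%
print(p_at_least_one(0.67))  # VEI >= 5 at an assumed ~0.67/decade -> ~49%
print(p_at_least_one(0.20))  # VEI >= 6 at an assumed ~0.20/decade -> ~18%
```

The computed values reproduce the abstract's >99 percent, ~49 percent, and ~18 percent figures, which suggests rates of roughly this size underlie them.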
NASA Astrophysics Data System (ADS)
Zhang, H.; Anderson, R. G.; Wang, D.
2011-12-01
Water availability is one of the limiting factors for sustainable production of biofuel crops. A common method for determining crop water requirement is to multiply daily potential evapotranspiration (ETo), calculated from meteorological parameters, by a crop coefficient (Kc) to obtain actual crop evapotranspiration (ETc). Generic Kc values are available for many crop types but not for sugarcane in Maui, Hawaii, which grows on a relatively unstudied biennial cycle. In this study, an algorithm was developed to estimate sugarcane Kc using the normalized difference vegetation index (NDVI) derived from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imagery. A series of ASTER NDVI maps was used to depict canopy development over time, or fractional canopy cover (fc), which was measured with a handheld multispectral camera in the fields on satellite overpass days. Canopy cover was correlated with NDVI values, and the NDVI-based canopy cover was then used to estimate Kc curves for sugarcane plants. The remotely estimated Kc and ETc values were compared and validated with ground-truth ETc measurements. The approach is a promising tool for large-scale estimation of evapotranspiration of sugarcane or other biofuel crops.
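The chain the abstract describes, NDVI to fractional canopy cover to Kc to ETc = Kc x ETo, can be sketched as below. The linear calibration constants (NDVI endpoints, Kc range) are placeholders for illustration; the actual relationships must be fitted to field data as in the study.

```python
# NDVI -> fc -> Kc -> ETc chain, with hypothetical calibration constants.

def canopy_cover_from_ndvi(ndvi, ndvi_soil=0.15, ndvi_full=0.85):
    """Linear fractional-cover estimate, clamped to [0, 1]."""
    fc = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return max(0.0, min(1.0, fc))

def kc_from_cover(fc, kc_min=0.3, kc_max=1.25):
    """Interpolate Kc between bare-soil and full-canopy values."""
    return kc_min + fc * (kc_max - kc_min)

eto = 6.0                         # reference ET, mm/day (hypothetical)
fc = canopy_cover_from_ndvi(0.60)
etc = kc_from_cover(fc) * eto     # actual crop ET, mm/day
print(fc, etc)
```

Replacing the linear fc-to-Kc interpolation with a fitted per-crop curve is exactly where the ground-truth ETc measurements in the study come in.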
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
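The article's central distinction is that averaging rates over equal distances calls for the harmonic mean, not the arithmetic mean: driving one mile at 30 mph and the next at 60 mph averages 40 mph, because equal distances weight the slower leg by more time. A minimal sketch:

```python
# Harmonic and weighted harmonic means, the tools the article applies to
# average rates of motion and fuel-economy problems.

def harmonic_mean(rates):
    return len(rates) / sum(1.0 / r for r in rates)

def weighted_harmonic_mean(rates, weights):
    """Weights are the distances (or gallons-independent miles) at each rate."""
    return sum(weights) / sum(w / r for w, r in zip(weights, rates))

print(harmonic_mean([30, 60]))                    # 40.0, not (30+60)/2 = 45
print(weighted_harmonic_mean([30, 60], [2, 1]))   # two miles slow, one fast
```

The weighted form covers the fuel-economy case: combined mpg over mixed mileage is the distance-weighted harmonic mean of the per-segment mpg figures.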
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. However, to fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
Refugee issues. Summary of WFP / UNHCR guidelines for estimating food and nutritional requirements.
1997-12-01
In line with recent recommendations by WHO and the Committee on International Nutrition, WFP and UNHCR will now use 2100 kcal/person/day as the initial energy requirement for designing food aid rations in emergencies. In an emergency situation, it is essential to establish such a value to allow for rapid planning and response to the food and nutrition requirements of an affected population. An in-depth assessment is often not possible in the early days of an emergency, and an estimated value is needed to make decisions about the immediate procurement and shipment of food. The initial level is applicable only in the early stages of an emergency. As soon as demographic, health, nutritional and food security information is available, the estimated per capita energy requirements should be adjusted accordingly. Food rations should complement any food that the affected population is able to obtain on its own through activities such as agricultural production, trade, labor, and small business. An understanding of the various mechanisms used by the population to gain access to food is essential to give an accurate estimate of food needs. Therefore, a prerequisite for the design of a longer-term ration is a thorough assessment of the degree of self-reliance and level of household food security. Frequent assessments are necessary to adequately determine food aid needs on an ongoing basis. The importance of ensuring a culturally acceptable, adequate basic ration for the affected population at the onset of an emergency is considered to be one of the basic principles in ration design. The quality of the ration provided, particularly in terms of micronutrients, is stressed in the guidelines, and levels provided will aim to conform with standards set by other technical agencies.
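The planning arithmetic implied by the guidelines starts from the 2100 kcal/person/day figure and converts a candidate ration into energy supplied and tonnage to procure. The commodity list and kcal densities below are hypothetical placeholders for illustration, not figures from the WFP/UNHCR guidelines.

```python
# Initial ration planning: check a ration against the 2100 kcal/person/day
# planning figure and convert it to procurement tonnage. Commodities and
# kcal densities are invented placeholders.

ENERGY_TARGET = 2100  # kcal/person/day, initial planning figure

def ration_energy(ration_g, kcal_per_100g):
    """Energy supplied by a ration given as {commodity: grams/person/day}."""
    return sum(g * kcal_per_100g[item] / 100.0 for item, g in ration_g.items())

def tonnes_required(ration_g, population, days):
    grams = sum(ration_g.values()) * population * days
    return grams / 1e6  # grams -> metric tonnes

ration = {"cereal": 400, "pulses": 60, "oil": 25}   # g/person/day (hypothetical)
kcal = {"cereal": 350, "pulses": 335, "oil": 885}   # kcal per 100 g (hypothetical)
print(ration_energy(ration, kcal))                  # compare with ENERGY_TARGET
print(tonnes_required(ration, population=50000, days=30))
```

A shortfall against ENERGY_TARGET, as in this invented example, is the signal to enlarge or diversify the ration, which is the adjustment loop the guidelines describe once assessment data arrive.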
Estimation of the net energy requirements for maintenance in growing and finishing pigs.
Zhang, G F; Liu, D W; Wang, F L; Li, D F
2014-07-01
The objective of this experiment was to determine the net energy (NE) requirements for maintenance of growing and finishing pigs using regression models. Thirty-six growing (27.38 ± 2.24 kg) and 36 finishing (70.25 ± 2.61 kg) barrows were used; within each phase, pigs received a corn-soybean meal diet fed at 6 levels of feed intake, calculated as 0, 20, 40, 60, 80, or 100% of the estimated ad libitum ME intake (2,400 kJ ME/kg BW^0.6·d^-1) of the pigs. Measurements were conducted on 6 pigs per feeding level and per stage of growth. After a 5-d adjustment period, barrows in the fasted treatment were kept in respiration chambers for 2 d to measure fasting heat production. Barrows in the other treatments were kept individually in respiration chambers for a 5-d balance trial followed by a 2-d fasting period. Heat production (HP) in the fed state was measured, and feces and urine were collected during the balance trial. Total HP increased (P < 0.01) with increasing feeding level. Fasting HP increased (P < 0.01) as the previous feeding level increased and was less (P = 0.012) in finishing pigs than in growing pigs when calculated per kilogram BW^0.6 per day. Using an exponential regression analysis, ME requirements for maintenance were estimated at 973 and 921 kJ/kg BW^0.6·d^-1, and NE requirements for maintenance were estimated at 758 and 732 kJ/kg BW^0.6·d^-1 for growing and finishing pigs, respectively. The efficiencies of using ME for growth and for maintenance were estimated at 66 and 78.7% for growing and finishing pigs, respectively. It is concluded that exponential regression between HP and a wide range of ME intakes may be used as a new method to determine the NE requirement for maintenance.
Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.
Ware, Colin; Trites, Andrew W; Rosen, David A S; Potvin, Jean
2016-01-01
Forces due to propulsion should approximate forces due to hydrodynamic drag for animals horizontally swimming at a constant speed with negligible buoyancy forces. Propulsive forces should also correlate with energy expenditures associated with locomotion-an important cost of foraging. As such, biologging tags containing accelerometers are being used to generate proxies for animal energy expenditures despite being unable to distinguish rotational movements from linear movements. However, recent miniaturizations of gyroscopes offer the possibility of resolving this shortcoming and obtaining better estimates of body accelerations of swimming animals. We derived accelerations using gyroscope data for swimming Steller sea lions (Eumetopias jubatus), and determined how well the measured accelerations correlated with actual swimming speeds and with theoretical drag. We also compared dive averaged dynamic body acceleration estimates that incorporate gyroscope data, with the widely used Overall Dynamic Body Acceleration (ODBA) metric, which does not use gyroscope data. Four Steller sea lions equipped with biologging tags were trained to swim alongside a boat cruising at steady speeds in the range of 4 to 10 kph. At each speed, and for each dive, we computed a measure called Gyro-Informed Dynamic Acceleration (GIDA) using a method incorporating gyroscope data with accelerometer data. We derived a new metric-Averaged Propulsive Body Acceleration (APBA), which is the average gain in speed per flipper stroke divided by mean stroke cycle duration. Our results show that the gyro-based measure (APBA) is a better predictor of speed than ODBA. We also found that APBA can estimate average thrust production during a single stroke-glide cycle, and can be used to estimate energy expended during swimming. The gyroscope-derived methods we describe should be generally applicable in swimming animals where propulsive accelerations can be clearly identified in the signal-and they should also
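The APBA metric is defined in the abstract as the average gain in speed per flipper stroke divided by the mean stroke-cycle duration, which makes it an acceleration-like quantity. A direct transcription of that definition, with invented per-stroke values:

```python
# APBA per the abstract's definition: mean per-stroke speed gain divided by
# mean stroke-cycle duration. Per-stroke numbers are invented illustrations.

def apba(speed_gains_mps, stroke_durations_s):
    mean_gain = sum(speed_gains_mps) / len(speed_gains_mps)
    mean_duration = sum(stroke_durations_s) / len(stroke_durations_s)
    return mean_gain / mean_duration  # m/s^2, a propulsive-acceleration proxy

gains = [0.30, 0.25, 0.35, 0.28]   # speed gained per stroke, m/s
durations = [0.9, 1.0, 0.8, 0.9]   # stroke-cycle durations, s
print(apba(gains, durations))
```

Extracting the per-stroke speed gains is the part that requires the gyroscope-corrected acceleration signal; this sketch only covers the averaging step once those gains are in hand.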
Williams, Rob; Krkošek, Martin; Ashe, Erin; Branch, Trevor A; Clark, Steve; Hammond, Philip S; Hoyt, Erich; Noren, Dawn P; Rosen, David; Winship, Arliss
2011-01-01
Ecosystem-based management (EBM) of marine resources attempts to conserve interacting species. In contrast to single-species fisheries management, EBM aims to identify and resolve conflicting objectives for different species. Such a conflict may be emerging in the northeastern Pacific for southern resident killer whales (Orcinus orca) and their primary prey, Chinook salmon (Oncorhynchus tshawytscha). Both species have at-risk conservation status and transboundary (Canada-US) ranges. We modeled individual killer whale prey requirements from feeding and growth records of captive killer whales and morphometric data from historic live-capture fishery and whaling records worldwide. The models, combined with caloric value of salmon, and demographic and diet data for wild killer whales, allow us to predict salmon quantities needed to maintain and recover this killer whale population, which numbered 87 individuals in 2009. Our analyses provide new information on cost of lactation and new parameter estimates for other killer whale populations globally. Prey requirements of southern resident killer whales are difficult to reconcile with fisheries and conservation objectives for Chinook salmon, because the number of fish required is large relative to annual returns and fishery catches. For instance, a U.S. recovery goal (2.3% annual population growth of killer whales over 28 years) implies a 75% increase in energetic requirements. Reducing salmon fisheries may serve as a temporary mitigation measure to allow time for management actions to improve salmon productivity to take effect. As ecosystem-based fishery management becomes more prevalent, trade-offs between conservation objectives for predators and prey will become increasingly necessary. Our approach offers scenarios to compare relative influence of various sources of uncertainty on the resulting consumption estimates to prioritise future research efforts, and a general approach for assessing the extent of conflict
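The prey-requirement calculation reduces, at its simplest, to dividing the population's energy need by the energy content of one fish. The sketch below uses the 2009 census figure of 87 whales, but the daily energy requirement and per-salmon caloric value are hypothetical placeholders, not the study's estimates (which were built from captive feeding records and morphometrics).

```python
# Back-of-envelope salmon requirement: population energy demand divided by
# energy per fish. Parameter values are hypothetical placeholders.

def salmon_required(n_whales, kcal_per_whale_per_day, kcal_per_salmon, days=365):
    total_kcal = n_whales * kcal_per_whale_per_day * days
    return total_kcal / kcal_per_salmon

# 87 whales (2009 census), assumed daily requirement, assumed Chinook value.
fish = salmon_required(87, 200_000, 15_000)
print(round(fish))  # annual Chinook consumption under these assumptions
```

Even crude inputs make the study's point visible: the resulting fish counts are large relative to annual Chinook returns, which is the source of the conflict between predator and fishery objectives.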
Tillman, Fred D; Anning, David W.
2014-01-01
The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigate over 4.5 million acres of farmland, and annually generate about 12 billion kilowatt hours of hydroelectric power. The Upper Colorado River Basin, part of the Colorado River Basin, encompasses more than 110,000 mi2 and is the source of much of the more than 9 million tons of dissolved solids that annually flow past the Hoover Dam. High dissolved-solids concentrations in the river cause substantial economic damages to users, primarily through reduced agricultural crop yields and corrosion, with damages estimated to be greater than 300 million dollars annually. In 1974, the Colorado River Basin Salinity Control Act created the Colorado River Basin Salinity Control Program to investigate and implement a broad range of salinity control measures. A 2009 study by the U.S. Geological Survey, supported by the Salinity Control Program, used the Spatially Referenced Regressions on Watershed Attributes (SPARROW) surface-water quality model to examine dissolved-solids supply and transport within the Upper Colorado River Basin. Dissolved-solids loads developed for 218 monitoring sites were used to calibrate the 2009 Upper Colorado River Basin SPARROW dissolved-solids model. This study updates and develops new dissolved-solids loading estimates for 323 Upper Colorado River Basin monitoring sites using streamflow and dissolved-solids concentration data through 2012, to support a planned SPARROW modeling effort that will investigate the contributions to dissolved-solids loads from irrigation and rangeland practices.
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Wilson, John W.; Nealy, John E.
1988-01-01
Estimates of radiation risk to the blood-forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood-forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shielding is required to reduce the annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.
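The paired figures in the abstract (5 g/sq cm of water is 5 cm, while 6.5 g/sq cm of aluminum is only about 2.4 cm) come from the standard relation between areal density and physical thickness, thickness = areal density / material density:

```python
# Convert shielding areal density (g/cm^2) to physical thickness (cm)
# using the material's mass density (g/cm^3).

def thickness_cm(areal_density_g_cm2, density_g_cm3):
    return areal_density_g_cm2 / density_g_cm3

print(thickness_cm(5.0, 1.0))    # water (rho = 1.0): 5.0 cm
print(thickness_cm(6.5, 2.70))   # aluminum (rho = 2.70): ~2.4 cm
```

Areal density is the natural unit for shielding comparisons because particle attenuation depends on the mass traversed per unit area, not the geometric thickness.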
Banerjee, Monami; Okun, Michael S; Vaillancourt, David E; Vemuri, Baba C
2016-01-01
Parkinson's disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future.
NASA Astrophysics Data System (ADS)
Ma, Jianwei; Huang, Shifeng; Li, Jiren; Li, Xiaotao; Song, Xiaoning; Leng, Pei; Sun, Yayong
2015-12-01
Soil moisture is an important parameter in research on hydrology, agriculture, and meteorology. The present study was designed to produce a near-real-time soil moisture estimation algorithm by linking optical/IR measurements to ground-measured soil moisture, which was then used to monitor regional drought. It has been found that the normalized difference vegetation index (NDVI) and land surface temperature (LST) are related to surface soil moisture; therefore, a relationship between ground-measured soil moisture and NDVI and LST can be developed. Six days of NDVI and LST data calculated from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) over Shandong province from October 2009 to May 2010 were combined with ground-measured volumetric soil moisture at different depths (10 cm, 20 cm, 40 cm, and the vertical mean over 0-40 cm) and for different soil types to determine regression relationships at a 1 km scale. Based on these regression relationships, the mean volumetric soil moisture in the vertical (0-40 cm) at 1 km resolution was calculated over Shandong province, and drought maps were then obtained. The results show that significant relationships exist between NDVI, LST, and soil moisture at different soil depths, and that the regression relationships are soil-type dependent. Moreover, the drought monitoring results agree well with the actual situation.
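The empirical step at the heart of the method is an ordinary least-squares regression of ground-measured soil moisture on the two satellite predictors, NDVI and LST, applied afterwards per pixel. The sketch below fits such a two-predictor model via the normal equations; the coefficients and data are synthetic illustrations, not the study's regressions.

```python
# OLS fit of SM = b0 + b1*NDVI + b2*LST via normal equations, solved with
# Gaussian elimination. Data are synthetic (noiseless) for illustration.

def fit_linear(X, y):
    """Least squares for y = b0 + b1*x1 + b2*x2; returns [b0, b1, b2]."""
    A = [[1.0] + row for row in X]          # design matrix with intercept
    n, p = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                      # forward elimination
        for j in range(i + 1, p):
            f = AtA[j][i] / AtA[i][i]
            AtA[j] = [a - f * c for a, c in zip(AtA[j], AtA[i])]
            Aty[j] -= f * Aty[i]
    b = [0.0] * p                           # back substitution
    for i in range(p - 1, -1, -1):
        b[i] = (Aty[i] - sum(AtA[i][j] * b[j]
                             for j in range(i + 1, p))) / AtA[i][i]
    return b

# Samples generated from SM = 0.40 + 0.20*NDVI - 0.005*LST (LST in degC).
X = [[0.2, 30], [0.4, 28], [0.6, 25], [0.3, 33], [0.5, 22], [0.7, 20]]
y = [0.40 + 0.20 * ndvi - 0.005 * lst for ndvi, lst in X]
b0, b1, b2 = fit_linear(X, y)
print(round(b0, 3), round(b1, 3), round(b2, 4))
```

Fitting separate coefficient sets per soil type, as the study found necessary, amounts to running this regression once per soil class before mapping.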
SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop
NASA Astrophysics Data System (ADS)
Zeyliger, Anatoly; Ermolaeva, Olga
2013-04-01
The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change in soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When irrigation water is used with low efficiency, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa by the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All the operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded in Fieldlook web
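The seasonal accounting described above (applied water versus PET, with any excess going to soil storage, runoff, or deep percolation) reduces to a one-line balance. A minimal sketch with invented numbers:

```python
# Seasonal water-balance logic: water applied beyond PET and any change in
# soil storage must leave as runoff and/or deep percolation.
def excess_water(applied_mm, pet_mm, delta_storage_mm=0.0):
    """Water (mm) that must go to runoff and/or deep percolation."""
    return max(0.0, applied_mm - pet_mm - delta_storage_mm)

# Low-efficiency case: 900 mm applied against 700 mm PET, 50 mm added to storage
surplus = excess_water(900.0, 700.0, 50.0)  # 150 mm lost to runoff/percolation

# High-efficiency case: applied water below PET leaves no surplus
no_surplus = excess_water(600.0, 700.0)
```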
[Estimates of trace elements requirements of children receiving total parenteral nutrition].
Ricour, C; Duhamel, J F; Gros, J; Mazière, B; Comar, D
1977-01-01
Ten children on total parenteral nutrition were studied. Plasma copper, zinc, manganese and selenium levels were determined by neutron activation and gamma spectrometry every 10 days. With a copper intake of 20 microgram/kg/24 h, the average level of 120 microgram% (94-144) was normal (N: 118 microgram +/- 11%). With a manganese intake of 40 microgram/kg/24 h, the level increased to 2.6 microgram% (1.3-4.5) (N: 1.1 microgram +/- 0.2%). With a zinc intake of 30 microgram/kg/24 h, the level decreased to 45.9 microgram% (20-63) (N: 83 microgram +/- 28%); with an intake of 50 microgram/kg/24 h the level remained below normal. With a selenium intake of 1 microgram/kg/24 h, the level decreased to 10.6 ng/ml (3.6-21.6) (N: 38.2 ng/ml +/- 11.9), but was normalized with an intake of 3 microgram/kg/24 h. From these results, with all the reservations that such estimation implies, the authors suggest that disorders due to deficit or excess of trace elements could be avoided by the following daily intakes per kg of body weight: copper 20 microgram, zinc 100 microgram, manganese 10 microgram and selenium 3 microgram, with supplementation of iron, iodine and fluoride.
EURRECA-Estimating vitamin D requirements for deriving dietary reference values.
Cashman, Kevin D; Kiely, Mairead
2013-01-01
The time course of the EURRECA from 2008 to 2012 overlapped considerably with the timeframe of the process undertaken by the North American Institute of Medicine (IOM) to revise dietary reference intakes for vitamin D and calcium (published November 2010). Therefore the aims of the vitamin D-related activities in EURRECA were formulated to address knowledge requirements that would complement the activities undertaken by the IOM and provide additional resources for risk assessors and risk management agencies charged with the task of setting dietary reference values for vitamin D. A total of three systematic reviews were carried out. The first, which pre-dated the IOM review process, identified and evaluated existing and novel biomarkers of vitamin D status and confirmed that circulating 25-hydroxyvitamin D (25(OH)D) concentration is a robust and reliable marker of vitamin D status. The second systematic review conducted a meta-analysis of the dose-response of serum 25(OH)D to vitamin D intake from randomized controlled trials (RCT) among adults to explore the most appropriate model of the vitamin D intake-serum 25(OH)D relationship to estimate requirements. The third review also carried out a meta-analysis to evaluate evidence of efficacy from RCT using foods fortified with vitamin D, and found they increased circulating 25(OH)D concentrations in a dose-dependent manner but identified a need for stronger data on the efficacy of vitamin D-fortified food on deficiency prevention and potential health outcomes, including adverse effects. Finally, narrative reviews provided estimates of the prevalence of inadequate intakes of vitamin D in adults and children from international dietary surveys, as well as a compilation of research requirements for vitamin D to inform current and future assessments of vitamin D requirements. [Supplementary materials are available for this article. Go to the publisher's online edition of Critical Reviews in Food Science and Nutrition for
NASA Technical Reports Server (NTRS)
Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah
2007-01-01
An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than are supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S.-Mexico border and in Central Asia and compared to estimates of irrigation water used.
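The "add enough precipitation at each time step to reach field capacity" idea above can be sketched with a single-bucket soil store. This toy model is an invented stand-in for SiB2, and all quantities are hypothetical:

```python
# Minimum supplemental (irrigation) water: at each step, top the soil store
# up to field capacity after rain and ET, and count the water added.
def supplemental_water(precip, et_demand, field_capacity, soil0):
    """Total added water (mm) needed to hold the soil at field capacity."""
    soil, added = soil0, 0.0
    for p, et in zip(precip, et_demand):
        soil = soil + p - et
        if soil < field_capacity:
            added += field_capacity - soil  # removes water stress this step
            soil = field_capacity
    return added

# Three time steps of rain (mm) against a constant 20 mm ET demand
demand = supplemental_water([10.0, 0.0, 5.0], [20.0, 20.0, 20.0],
                            field_capacity=100.0, soil0=100.0)
```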
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-01
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
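The detailed-balance device described above is generic: draw the chain's first point exactly from the target by rejection sampling, then run a Metropolis kernel, so every subsequent point is also exactly target-distributed and the convergence question is sidestepped. A minimal one-dimensional sketch (not the paper's genetics code; the target density is invented):

```python
import math
import random

def target(x):
    """Unnormalized target density on [0, 1]; bounded above by 1."""
    return math.exp(-8.0 * (x - 0.3) ** 2)

def rejection_sample():
    """Exact draw from the target via rejection from Uniform(0, 1)."""
    while True:
        x = random.random()
        if random.random() < target(x):  # valid since target(x) <= 1
            return x

def chain(n, step=0.1):
    """Metropolis chain whose first point already has the target law."""
    xs = [rejection_sample()]
    for _ in range(n - 1):
        x = xs[-1]
        y = x + random.uniform(-step, step)
        # Reject proposals outside [0, 1]; otherwise Metropolis accept/reject
        if 0.0 <= y <= 1.0 and random.random() < target(y) / target(x):
            x = y
        xs.append(x)
    return xs
```

By detailed balance, if the first point has the equilibrium distribution, so does every later point, which is exactly the property the abstract exploits.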
A new remote sensing procedure for the estimation of crop water requirements
NASA Astrophysics Data System (ADS)
Spiliotopoulos, M.; Loukas, A.; Mylopoulos, N.
2015-06-01
The objective of this work is the development of a new approach for the estimation of water requirements for the most important crops located in the Karla Watershed, central Greece. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) was used as a basis for the derivation of actual evapotranspiration (ET) and crop coefficient (ETrF) values from Landsat ETM+ imagery. MODIS imagery has also been used, and a spatial downscaling procedure is followed between the two sensors for the derivation of a new NDVI product with a spatial resolution of 30 m x 30 m. GER 1500 spectro-radiometric measurements were additionally conducted during the 2012 growing season. Cotton, alfalfa, corn and sugar beet fields are utilized, based on land use maps derived from previous Landsat 7 ETM+ images. A filtering process is then applied to derive NDVI values after acquiring Landsat ETM+ based reflectance values from the GER 1500 device. ETrF vs NDVI relationships are produced and then applied to the satellite-based downscaled product in order to finally derive a 30 m x 30 m daily ETrF map for the study area. The CropWat model (FAO) is then applied, taking as input the new crop coefficient values with a spatial resolution of 30 m x 30 m available for every crop. CropWat finally returns daily crop water requirements (mm) for every crop and the results are analyzed and discussed.
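The ETrF-vs-NDVI step above amounts to fitting a simple relationship on sample points and applying it to the downscaled NDVI raster. A hedged sketch with invented numbers (the study's actual relationships are crop-specific):

```python
import numpy as np

# Hypothetical calibration pairs for one crop
ndvi_samples = np.array([0.2, 0.3, 0.5, 0.6, 0.8])
etrf_samples = np.array([0.25, 0.35, 0.55, 0.65, 0.85])

# Fit a linear ETrF = slope * NDVI + intercept relationship
slope, intercept = np.polyfit(ndvi_samples, etrf_samples, 1)

# Apply it to a toy 30 m downscaled NDVI "raster" to get an ETrF map
ndvi_30m = np.array([[0.40, 0.70],
                     [0.25, 0.55]])
etrf_30m = slope * ndvi_30m + intercept
```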
Yang, Tao; Liu, Jingling; Chen, Qiuying; Zhang, Jing; Yang, Yi
2013-01-01
The temporal and spatial environmental flow requirements (EFRs) for the river ecosystem of the Haihe River Basin were analyzed based mainly on the eco-functional regionalization of available water resources. The annual EFRs for the river ecosystem of the Haihe River Basin were 47.71 × 10^8 m^3, which accounted for 18% of the average annual flow (263.9 × 10^8 m^3). The EFRs for river reaches, wetlands, and estuaries were 22.67, 15.32 and 9.72 × 10^8 m^3, respectively. Moreover, the EFRs for the river ecosystem during the wet (June to October), normal (April, May, November), and dry (December to March) periods were 29.99, 9.51 and 8.21 × 10^8 m^3, respectively. Thus, toward a more integrated water resource allocation in the Haihe River Basin, the primary effort should focus on meeting the EFRs for river systems located in protected areas during the dry period.
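As a quick arithmetic check, the figures quoted above are internally consistent: both the component EFRs and the seasonal EFRs sum to the annual total, which is about 18% of the average annual flow. A sketch (all values in 10^8 m^3, taken from the abstract):

```python
# EFR components and seasonal breakdown from the abstract (units: 10^8 m^3)
components = {"river reaches": 22.67, "wetlands": 15.32, "estuaries": 9.72}
seasonal = {"wet": 29.99, "normal": 9.51, "dry": 8.21}
annual_total = 47.71
avg_annual_flow = 263.9

# Fraction of the average annual flow claimed by the EFRs (~0.18)
share = annual_total / avg_annual_flow
```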
ERIC Educational Resources Information Center
Aduol, F. W. O.
2001-01-01
Presents model for estimation of student unit costs and staffing requirements. Begins with specification of a "staff distribution matrix" setting out proportions of staff levels in a given staff category that are needed for a degree level. Student unit cost and staffing requirements are computed through manipulations on the matrix. The model is…
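The staff distribution matrix idea above can be sketched numerically: a matrix of staff-per-student proportions by staff category and degree level, multiplied by enrolment, yields staffing requirements; applying salary rates then gives a unit-cost estimate. All categories, proportions, enrolments, and salaries below are invented for the illustration:

```python
import numpy as np

# Rows: staff categories; columns: degree levels; entries: staff per student
staff_per_student = np.array([
    [0.010, 0.015],   # professors
    [0.020, 0.030],   # lecturers
    [0.005, 0.010],   # technicians
])
enrolment = np.array([400.0, 200.0])          # students per degree level
salaries = np.array([90_000.0, 60_000.0, 30_000.0])  # per staff category

# Staffing requirement per category, total staff cost, and unit cost
staff_needed = staff_per_student @ enrolment
total_cost = salaries @ staff_needed
unit_cost = total_cost / enrolment.sum()
```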
ERIC Educational Resources Information Center
ARCUS, PETER; HEADY, EARL O.
THE PURPOSE OF THIS STUDY IS TO ESTIMATE THE MANPOWER REQUIREMENTS FOR THE NATION AND FOR 144 REGIONS, THE TYPES OF SKILLS AND WORK ABILITIES REQUIRED BY AGRICULTURE IN THE NEXT 15 YEARS, AND THE TYPES AND AMOUNTS OF EDUCATION NEEDED. THE QUANTITATIVE ANALYSIS IS BEING MADE BY METHODS APPROPRIATE TO THE PHASES OF THE STUDY--(1) INTERRELATIONS AMONG…
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objective of the current study was to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.
1982-01-01
Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.
Electrofishing effort required to estimate biotic condition in Southern Idaho Rivers
Maret, T.R.; Ott, D.S.; Herlihy, A.T.
2007-01-01
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions. Copyright by the American Fisheries Society 2007.
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
ERIC Educational Resources Information Center
Mitchem, John
1989-01-01
Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)
ERIC Educational Resources Information Center
Webber, Larry; And Others
Generalizability theory, which subsumes classical measurement theory as a special case, provides a general model for estimating the reliability of observational rating data by estimating the variance components of the measurement design. Research data from the "Heart Smart" health intervention program were analyzed as a heuristic tool. Systolic…
Number of trials required to estimate a free-energy difference, using fluctuation relations.
Yunger Halpern, Nicole; Jarzynski, Christopher
2016-05-01
The difference ΔF between free energies has applications in biology, chemistry, and pharmacology. The value of ΔF can be estimated from experiments or simulations, via fluctuation theorems developed in statistical mechanics. Calculating the error in a ΔF estimate is difficult. Worse, atypical trials dominate estimates. How many trials one should perform was estimated roughly by Jarzynski [Phys. Rev. E 73, 046105 (2006)PLEEE81539-375510.1103/PhysRevE.73.046105]. We enhance the approximation with the following information-theoretic strategies. We quantify "dominance" with a tolerance parameter chosen by the experimenter or simulator. We bound the number of trials one should expect to perform, using the order-∞ Rényi entropy. The bound can be estimated if one implements the "good practice" of bidirectionality, known to improve estimates of ΔF. Estimating ΔF from this number of trials leads to an error that we bound approximately. Numerical experiments on a weakly interacting dilute classical gas support our analytical calculations.
Technology Transfer Automated Retrieval System (TEKTRAN)
Identifying the spatial and temporal distribution of crop water requirements is a key for successful management of water resources in the dry areas. Climatic data were obtained from three automated weather stations to estimate reference evapotranspiration (ETO) in the Jordan Valley according to the...
Jiang, Shengyu; Wang, Chun; Weiss, David J
2016-01-01
Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
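The recovery criteria named above (bias and root-mean-square error between true and estimated item parameters) are straightforward to compute. A sketch with invented parameter values, not actual flexMIRT output:

```python
import math

# Hypothetical true and estimated item discrimination parameters
true_params = [1.2, 0.8, 1.5, 0.6, 1.0]
est_params = [1.1, 0.9, 1.4, 0.7, 1.1]

n = len(true_params)
# Bias: mean signed error; RMSE: root of the mean squared error
bias = sum(e - t for t, e in zip(true_params, est_params)) / n
rmse = math.sqrt(sum((e - t) ** 2 for t, e in zip(true_params, est_params)) / n)
```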
Estimated quantitative amino acid requirements for Florida pompano reared in low-salinity
Technology Transfer Automated Retrieval System (TEKTRAN)
As with most marine carnivores, Florida pompano require relatively high crude protein diets to obtain optimal growth. Precision formulations to match the dietary indispensable amino acid (IAA) pattern to a species’ requirements can be used to lower the overall dietary protein. However IAA requirem...
Spectral averaging techniques for Jacobi matrices
Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann
2008-02-15
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Nutrients and suspended sediments in streams and large rivers are two major issues facing state and federal agencies. Accurate estimates of nutrient and sediment loads are needed to assess a variety of important water-quality issues including total maximum daily loads, aquatic ec...
Shadow Radiation Shield Required Thickness Estimation for Space Nuclear Power Units
NASA Astrophysics Data System (ADS)
Voevodina, E. V.; Martishin, V. M.; Ivanovsky, V. A.; Prasolova, N. O.
The paper concerns the theoretical possibility, from the perspective of radiation safety, of astronauts visiting orbital transport vehicles based on a nuclear power unit and an electric propulsion system in Earth orbit to carry out work with the payload. The possible time of the crew's stay in the payload area of orbital transport vehicles has been estimated for different powers of the reactor, which is an integral part of the nuclear power unit.
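The trade-off behind such estimates can be sketched with a deliberately simplified model: the dose rate behind a shadow shield falls off roughly exponentially with shield thickness and scales with reactor power, so the allowed stay time follows from a crew dose limit. The attenuation coefficient, powers, dose limit, and base rate below are all invented illustration values, not figures from the paper:

```python
import math

def stay_time_hours(reactor_power_kw, shield_cm,
                    dose_limit_msv=20.0, base_rate=5.0, mu_per_cm=0.25):
    """Allowed crew stay time (h) behind a shield of given thickness.

    base_rate: unshielded dose rate per unit reactor power (mSv/h per kW);
    mu_per_cm: effective attenuation coefficient of the shield material.
    """
    rate = base_rate * reactor_power_kw * math.exp(-mu_per_cm * shield_cm)
    return dose_limit_msv / rate

t_thin = stay_time_hours(100.0, 10.0)
t_thick = stay_time_hours(100.0, 20.0)  # thicker shield -> longer stay
```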
Biology, population structure, and estimated forage requirements of lake trout in Lake Michigan
Eck, Gary W.; Wells, LaRue
1983-01-01
Data collected during successive years (1971-79) of sampling lake trout (Salvelinus namaycush) in Lake Michigan were used to develop statistics on lake trout growth, maturity, and mortality, and to quantify seasonal lake trout food and food availability. These statistics were then combined with data on lake trout year-class strengths and age-specific food conversion efficiencies to compute production and forage fish consumption by lake trout in Lake Michigan during the 1979 growing season (i.e., 15 May-1 December). An estimated standing stock of 1,486 metric tons (t) at the beginning of the growing season produced an estimated 1,129 t of fish flesh during the period. The lake trout consumed an estimated 3,037 t of forage fish, to which alewives (Alosa pseudoharengus) contributed about 71%, rainbow smelt (Osmerus mordax) 18%, and slimy sculpins (Cottus cognatus) 11%. Seasonal changes in bathymetric distributions of lake trout with respect to those of forage fish of a suitable size for prey were major determinants of the size and species compositions of fish in the seasonal diet of lake trout.
Green, A J; Smith, P; Whelan, K
2008-01-01
Estimation of resting energy expenditure (REE) involves predicting basal metabolic rate (BMR) plus adjustment for metabolic stress. The aim of this study was to investigate the methods used to estimate REE and to identify the impact of the patient's clinical condition and the dietitians' work profile on the stress factor assigned. A random sample of 115 dietitians from the United Kingdom with an interest in nutritional support completed a postal questionnaire regarding the estimation of REE for 37 clinical conditions. The Schofield equation was used by the majority (99%) of dietitians to calculate BMR; however, the stress factors assigned varied considerably with coefficients of variation ranging from 18.5 (cancer with cachexia) to 133.9 (HIV). Dietitians specializing in gastroenterology assigned a higher stress factor to decompensated liver disease than those not specializing in gastroenterology (19.3 vs 10.7, P=0.004). The results of this investigation strongly suggest that there is wide inconsistency in the assignment of stress factors within specific conditions and give rise to concern over the potential consequences in terms of under- or overfeeding that may ensue. PMID:17311053
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
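The Tikhonov-regularized minimum norm estimate discussed above has a standard closed form: for a lead field A (sensors x sources) and data b, x = A^T (A A^T + lambda I)^{-1} b. A minimal sketch with toy matrix sizes and invented lambda values, illustrating how heavier regularization shrinks the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 20

# Toy lead field, a single active source, and slightly noisy sensor data
A = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[3] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors)

def mne(A, b, lam):
    """Tikhonov-regularized minimum norm source estimate."""
    gram = A @ A.T + lam * np.eye(A.shape[0])
    return A.T @ np.linalg.solve(gram, b)

x_small_lam = mne(A, b, 1e-4)  # light regularization
x_large_lam = mne(A, b, 10.0)  # heavy regularization shrinks the solution
```

The finding in the abstract, that the best lambda for coherence was two orders of magnitude smaller than for power, corresponds to choosing different values of `lam` in this formula for the two analyses.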
A comparison of methods to estimate nutritional requirements from experimental data.
Pesti, G M; Vedenov, D; Cason, J A; Billard, L
2009-01-01
1. Research papers use a variety of methods for evaluating experiments designed to determine nutritional requirements of poultry. Growth trials result in a set of ordered pairs of data. Often, point-by-point comparisons are made between treatments using analysis of variance. This approach ignores that response variables (body weight, feed efficiency, bone ash, etc.) are continuous rather than discrete. Point-by-point analyses harvest much less than the total amount of information from the data. Regression models are more effective at gleaning information from data, but the concept of "requirements" is poorly defined by many regression models. 2. Response data from a study of the lysine requirements of young broilers was used to compare methods of determining requirements. In this study, multiple range tests were compared with quadratic polynomials (QP), broken line models with linear (BLL) or quadratic (BLQ) ascending portions, the saturation kinetics model (SK), a logistic model (LM) and a compartmental (CM) model. 3. The sum of total residuals squared was used to compare the models. The SK and LM were the best fit models, followed by the CM, BLL, BLQ, and QP models. A plot of the residuals versus nutrient intake showed clearly that the BLQ and SK models fitted the data best in the important region where the ascending portion meets the plateau. 4. The BLQ model clearly defines the technical concept of nutritional requirements as typically defined by nutritionists. However, the SK, LM and CM models better depict the relationship typically defined by economists as the "law of diminishing marginal productivity". The SK model was used to demonstrate how the law of diminishing marginal productivity can be applied to poultry nutrition, and how the "most economical feeding level" may replace the concept of "requirements". PMID:19234926
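The broken-line idea above, a linear ascending portion meeting a plateau, with the breakpoint read as "the requirement", can be sketched with a least-squares grid search over the breakpoint. The intake/response points below are invented, not the broiler data from the paper:

```python
import numpy as np

# Hypothetical dose-response data: gain rises with lysine, then plateaus
intake = np.array([0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])    # % lysine
response = np.array([38., 44., 50., 56., 60., 60., 60., 60.])  # gain, g/d

def fit_broken_line(x, y):
    """Linear-plateau fit: grid-search the breakpoint, least-squares the rest."""
    best = None
    for bp in np.linspace(x.min(), x.max(), 141):
        z = np.minimum(x, bp)  # ascending up to bp, constant beyond
        X = np.column_stack([z, np.ones_like(z)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, float(bp), coef)
    return best  # (sse, breakpoint, [slope, intercept])

sse, requirement, (slope, intercept) = fit_broken_line(intake, response)
```

The fitted breakpoint is the BLL-style "requirement"; swapping the linear segment for a quadratic gives the BLQ variant.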
Nielsen, David R; McLellan, P James; Daugulis, Andrew J
2006-08-01
The O2 requirements for biomass production and supplying maintenance energy demands during the degradation of both benzene and ethylbenzene by Achromobacter xylosoxidans Y234 were measured using a newly proposed technique involving a bioscrubber. Using this approach, relevant microbial parameter estimates were directly and simultaneously obtained via linear regression of pseudo steady-state data. For benzene and ethylbenzene, the biomass yield on O2, Y(X/O2), was estimated on a cell dry weight (CDW) basis as 1.96 +/- 0.25 mg CDW mgO2(-1) and 0.98 +/- 0.17 mg CDW mgO2(-1), while the specific rate of O2 consumption for maintenance, m(O2), was estimated as 0.041 +/- 0.008 mgO(2) mg CDW(-1) h(-1) and 0.053 +/- 0.022 mgO(2) mg CDW(-1) h(-1), respectively.
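The parameter estimation described here is, in Pirt-equation form, a linear regression: specific O2 consumption q_O2 = mu / Y(X/O2) + m(O2), so regressing q_O2 on the specific growth rate mu gives 1/Y(X/O2) as the slope and m(O2) as the intercept. A minimal sketch on made-up data; the reported benzene parameter magnitudes are used only to generate the synthetic points.

```python
import numpy as np

# Pirt-style linear relation: q_O2 = mu / Y_XO2 + m_O2
Y_XO2_true, m_O2_true = 1.96, 0.041     # mg CDW/mg O2, mg O2/(mg CDW h)

rng = np.random.default_rng(0)
mu = np.linspace(0.02, 0.20, 8)         # specific growth rate, 1/h (synthetic)
q_O2 = mu / Y_XO2_true + m_O2_true + rng.normal(0, 0.002, mu.size)

slope, intercept = np.polyfit(mu, q_O2, 1)
Y_XO2_est = 1.0 / slope                 # biomass yield on O2
m_O2_est = intercept                    # maintenance coefficient
```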
NASA Astrophysics Data System (ADS)
Papadavid, G.; Hadjimitsis, M.; Perdikou, S.; Hadjimitsis, D.; Papadavid, C.; Neophtytou, N.; Kountios, G.; Michaelides, A.
2013-08-01
Water allocation to crops has always been of great importance in the agricultural process. In this context, and with Cyprus having faced a severe drought over the last five years, the purpose of this study is to estimate crop water requirements in order to support irrigation management and to monitor irrigation on a systematic basis for Cyprus using remote sensing techniques. The use of satellite images supported by ground measurements has provided quite accurate results. The specific aim of this paper is to estimate the evapotranspiration (ET) of specific crops, which is the basis for irrigation scheduling, and to establish a procedure for monitoring and managing irrigation water over Cyprus, using remotely sensed data from Landsat TM/ETM+ and a sound methodology used worldwide, the Surface Energy Balance Algorithm for Land (SEBAL). Finally, crop water requirements derived from this research are disseminated to crop producers through a network of 3rd generation mobile phones.
Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.
2014-12-01
Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.
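A minimal numeric version of this test can be built from screw-dislocation arctangent velocity profiles. The geometry below (a symmetric, periodic five-fault array) is an idealization, not the Bay Area model; the realistic station distribution and geometry in the abstract degrade the deep-locking case more than this symmetric sketch does.

```python
import numpy as np

def block_velocities(fault_x, slip, D, W, n=2001):
    """Interseismic surface velocity from parallel screw dislocations locked
    to effective depth D: v(x) = sum of slip/pi * atan((x - xk)/D).
    Returns mean velocities of the two blocks adjacent to the fault at x=0."""
    xr = np.linspace(0.0, W, n)[1:-1]          # block right of the central fault
    xl = -xr[::-1]                             # block left of the central fault
    def v(x):
        return sum(slip / np.pi * np.arctan((x - xk) / D) for xk in fault_x)
    return v(xl).mean(), v(xr).mean()

W, s = 15.0, 10.0                              # fault spacing (km), slip rate (mm/yr)
faults = [-2 * W, -W, 0.0, W, 2 * W]           # idealized periodic array

vl, vr = block_velocities(faults, s, D=W / 3, W=W)   # effective locking depth W/3
shallow = (vr - vl) / s                        # fraction of slip rate recovered
vl, vr = block_velocities(faults, s, D=W / 2, W=W)   # effective locking depth W/2
deep = (vr - vl) / s
```

With the shallower effective locking depth the block-mean velocity difference recovers most of the input slip rate; deepening the locking smears the profile and degrades the recovery.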
Morris, Charlotte R; Nelson, Frank E; Askew, Graham N
2010-08-15
Little is known about how in vivo muscle efficiency, that is the ratio of mechanical and metabolic power, is affected by changes in locomotory tasks. One of the main problems with determining in vivo muscle efficiency is the large number of muscles generally used to produce mechanical power. Animal flight provides a unique model for determining muscle efficiency because only one muscle, the pectoralis muscle, produces nearly all of the mechanical power required for flight. In order to estimate in vivo flight muscle efficiency, we measured the metabolic cost of flight across a range of flight speeds (6-13 m s(-1)) using masked respirometry in the cockatiel (Nymphicus hollandicus) and compared it with measurements of mechanical power determined in the same wind tunnel. Similar to measurements of the mechanical power-speed relationship, the metabolic power-speed relationship had a U-shape, with a minimum at 10 m s(-1). Although the mechanical and metabolic power-speed relationships had similar minimum power speeds, the metabolic power requirements are not a simple multiple of the mechanical power requirements across a range of flight speeds. The pectoralis muscle efficiency (estimated from mechanical and metabolic power, basal metabolism and an assumed value for the 'postural costs' of flight) increased with flight speed and ranged from 6.9% to 11.2%. However, it is probable that previous estimates of the postural costs of flight have been too low and that the pectoralis muscle efficiency is higher.
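The efficiency estimate described above amounts to dividing mechanical power by the metabolic power attributed to the pectoralis after subtracting basal metabolism and an assumed postural cost. A sketch of that partitioning with hypothetical wattages (not the paper's measurements):

```python
def muscle_efficiency(p_mech, p_met, p_basal, p_postural):
    """Pectoralis efficiency: mechanical power over the metabolic power
    attributable to the flight muscle, after subtracting basal metabolism
    and an assumed 'postural cost' (all inputs in watts, hypothetical)."""
    return p_mech / (p_met - p_basal - p_postural)

# a larger assumed postural cost raises the estimated muscle efficiency,
# which is the sensitivity the abstract's closing caveat points to
eff_low_postural = muscle_efficiency(1.0, 12.0, 1.5, 1.5)
eff_high_postural = muscle_efficiency(1.0, 12.0, 1.5, 3.5)
```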
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
Valleriani, Angelo
2016-01-01
Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct. PMID:27527811
A conditional likelihood is required to estimate the selection coefficient in ancient DNA
NASA Astrophysics Data System (ADS)
Valleriani, Angelo
2016-08-01
Time-series of allele frequencies are a useful and unique set of data to determine the strength of natural selection on the background of genetic drift. Technically, the selection coefficient is estimated by means of a likelihood function built under the hypothesis that the available trajectory spans a sufficiently large portion of the fitness landscape. Especially for ancient DNA, however, often only a single such trajectory is available and the coverage of the fitness landscape is very limited. In fact, a single trajectory is more representative of a process conditioned on both its initial and its final state than of a process free to visit the available fitness landscape. Based on two models of population genetics, here we show how to build a likelihood function for the selection coefficient that takes the statistical peculiarity of single trajectories into account. We show that this conditional likelihood delivers a precise estimate of the selection coefficient also when allele frequencies are close to fixation, whereas the unconditioned likelihood fails. Finally, we discuss the fact that the traditional, unconditioned likelihood always delivers an answer, which is often unfalsifiable and appears reasonable also when it is not correct.
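The distinction between the unconditioned and the conditional likelihood can be sketched for a plain Wright-Fisher model: conditioning amounts to dividing out the total probability of reaching the observed final state. Everything below (population size, generation count, grid) is illustrative and is not the paper's setup.

```python
import numpy as np
from math import comb

def wf_matrix(N, s):
    """Wright-Fisher transition matrix for allele counts 0..N with
    selection coefficient s (binomial sampling after selection)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        p = i / N
        pp = p * (1 + s) / (1 + p * s)           # post-selection frequency
        for j in range(N + 1):
            P[i, j] = comb(N, j) * pp**j * (1 - pp)**(N - j)
    return P

def log_lik(counts, N, s, conditional=False):
    """Log-likelihood of a trajectory of allele counts; if conditional,
    divide out the probability of ending at the observed final state."""
    P = wf_matrix(N, s)
    ll = sum(np.log(P[a, b]) for a, b in zip(counts[:-1], counts[1:]))
    if conditional:
        steps = len(counts) - 1
        ll -= np.log(np.linalg.matrix_power(P, steps)[counts[0], counts[-1]])
    return ll

# one synthetic trajectory: the typical ancient-DNA situation of a single run
rng = np.random.default_rng(3)
N, s_true = 50, 0.1
traj = [10]
for _ in range(30):
    p = traj[-1] / N
    traj.append(rng.binomial(N, p * (1 + s_true) / (1 + p * s_true)))

grid = np.linspace(-0.1, 0.3, 41)
s_uncond = grid[np.argmax([log_lik(traj, N, s) for s in grid])]
s_cond = grid[np.argmax([log_lik(traj, N, s, conditional=True) for s in grid])]
```

The two maximizers can then be compared against the value used to simulate the trajectory; the paper's point is that the conditional version remains well behaved near fixation.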
Schousboe, John; Paudel, Misti; Taylor, Brent; Mah, Lih-Wen; Virnig, Beth; Ensrud, Kristine; Dowd, Bryan
2014-01-01
Background/Aims Payments to hospital providers are not solely driven by the resource requirements of individual patients, but also reflect payment policies specific to the health care payer and hospital provider. For example, Medicare adjusts payments to hospitals according to facility and local geographic characteristics that may not be relevant to studies estimating the associations of individual patient characteristics with true costs of care. We developed a method to estimate hospital costs using the diagnosis related group (DRG) payment weights on which Medicare bases hospital payments that reflect patient medical and surgical acuity. Our purpose was to compare cost estimates for hospital stays calculated using DRG payment weights to actual Medicare hospital payments. Methods We used Medicare Provider Analysis and Review (MedPAR) files and DRG weight tables linked to participant data from the Study of Osteoporotic Fractures (SOF) from 1992 through 2010. Participants were women age 65 and older recruited in three metropolitan and one rural area of the United States. Standardized hospital costs were estimated using DRG payment weights for 1,397 hospital stays (assigned 182 separate DRG codes) for 795 SOF participants for one year following a hip fracture. Cost estimates based on Medicare payments included Medicare and secondary insurer payments, copay and deductible amounts. Results The mean (SD) of inpatient DRG-based cost estimates per person-year was $16,268 ($10,058) compared to $19,937 ($15,531) for MedPAR payments. The correlation between DRG-based estimates and MedPAR payments was 0.71, and 51% of hospital stays were in different quintiles when costs were calculated based on DRG weights compared to MedPAR payments. Conclusions DRG-based cost estimates of hospital stays differ significantly from Medicare payments, which are adjusted by Medicare for facility and local geographic characteristics. These findings also may apply to studies estimating
Reznik, Ed; Chaudhary, Osman; Segrè, Daniel
2013-09-01
The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This "average enzyme principle" provides a natural methodology for jointly studying metabolism and its regulation.
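The principle is easy to verify numerically: because the Michaelis-Menten rate is linear in E(t), the substrate depleted over a fixed interval depends only on the integral (hence the average) of E over that interval. A sketch with arbitrary kinetic constants:

```python
import numpy as np

def substrate_left(E_of_t, S0=2.0, kcat=1.0, Km=0.5, T=2 * np.pi, n=100000):
    """Euler-integrate the Michaelis-Menten depletion
    dS/dt = -kcat * E(t) * S / (Km + S) over [0, T]."""
    dt, S = T / n, S0
    for k in range(n):
        S -= dt * kcat * E_of_t(k * dt) * S / (Km + S)
    return S

# two enzyme time-courses with the SAME average concentration over [0, 2*pi]
S_const = substrate_left(lambda t: 1.0)
S_wavy = substrate_left(lambda t: 1.0 + 0.8 * np.sin(t))
# and one with a LOWER average, which must deplete less substrate
S_low = substrate_left(lambda t: 0.5)
```

The constant and oscillating profiles leave (numerically) the same amount of substrate, illustrating the average enzyme principle; halving the average enzyme leaves more.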
NASA Astrophysics Data System (ADS)
Wroblewski, Thomas
2015-03-01
In the enclosure of synchrotron radiation experiments using a monochromatic beam, secondary radiation arises from two effects, namely fluorescence and scattering. While fluorescence can be regarded as isotropic, the angular dependence of Compton scattering has to be taken into account if the shielding is not to become unreasonably thick. The scope of this paper is to clarify how the different factors, from the spectral properties of the source and the attenuation coefficient of the shielding to the spectral and angular distribution of the scattered radiation and the geometry of the experiment, influence the thickness of lead required to keep the dose rate outside the enclosure below the desired threshold.
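At its crudest, the calculation reduces to inverting narrow-beam exponential attenuation; the paper's contribution is precisely the factors this sketch ignores (source spectrum, buildup, and the angular dependence of Compton scattering). All numbers below are hypothetical.

```python
import math

def lead_thickness_mm(dose_unshielded, dose_limit, mu_per_mm):
    """Narrow-beam estimate: solve dose_unshielded * exp(-mu * t) = dose_limit
    for the lead thickness t. Buildup and angular effects are ignored here."""
    return math.log(dose_unshielded / dose_limit) / mu_per_mm

# hypothetical scenario: reduce 1e6 uSv/h of secondary radiation to 1 uSv/h
mu = 8.6              # assumed linear attenuation coefficient of lead, 1/mm
t = lead_thickness_mm(1e6, 1.0, mu)
```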
Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N
2014-04-01
Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying fuzzy set theory has been proposed for the quantification of uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distance l(i) estimated by the analytical method and the fuzzy set theoretic approach are consistent with each other. The crisp values of l(i) estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that l(i) values estimated using the analytical method are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.
Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor
NASA Astrophysics Data System (ADS)
Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey
This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.
Estimates of power requirements for a manned Mars rover powered by a nuclear reactor
NASA Astrophysics Data System (ADS)
Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey
1991-01-01
This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kWe net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine (FPSE) yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle (CBC) using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kWe to 190 kg/kWe, and the total mass of the Rover vehicle varied depending upon the cruising speed.
Estimates of power requirements for a Manned Mars Rover powered by a nuclear reactor
NASA Technical Reports Server (NTRS)
Morley, Nicholas J.; El-Genk, Mohamed S.; Cataldo, Robert; Bloomfield, Harvey
1991-01-01
This paper assesses the power requirement for a Manned Mars Rover vehicle. Auxiliary power needs are fulfilled using a hybrid solar photovoltaic/regenerative fuel cell system, while the primary power needs are met using an SP-100 type reactor. The primary electric power needs, which include 30-kW(e) net user power, depend on the reactor thermal power and the efficiency of the power conversion system. Results show that an SP-100 type reactor coupled to a Free Piston Stirling Engine yields the lowest total vehicle mass and lowest specific mass for the power system. The second lowest mass was for an SP-100 reactor coupled to a Closed Brayton Cycle using He/Xe as the working fluid. The specific mass of the nuclear reactor power system, including a man-rated radiation shield, ranged from 150 kg/kW(e) to 190 kg/kW(e), and the total mass of the Rover vehicle varied depending upon the cruising speed.
Bonnor, W.B.
1987-05-01
The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.
Stein, A.; Scullion, T.
1988-02-01
In the early 1980s the Texas State Department of Highways and Public Transportation implemented its Pavement Evaluation System. The system was designed to (a) document trends in network condition and (b) generate a one-year estimate of rehabilitation funding. The information generated by the system was used for many purposes, including funding requests, project prioritization and documenting the consequences of changes in funding levels. However, a limitation of the system was its inability to project future conditions and make multi-year needs estimates. This is the subject of the report. Regression equations were built for each major distress type from a pavement data base containing a 10-year history of condition trends from over 350 random sections in Texas. These equations were used to age individual sections that did not qualify for maintenance or rehabilitation in a particular year. A simple decision tree was developed to estimate the maintenance requirements if rehabilitation is not warranted. The decision tree represents the opinions of experienced maintenance engineers. A case study and sensitivity analysis are presented.
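The aging-plus-decision-tree logic can be sketched as follows; the deterioration rate and cutoff scores are invented for illustration and are not the report's fitted values or the engineers' actual rules.

```python
def age_section(score, annual_loss=0.93):
    """Hypothetical regression-style aging: carry a section's condition
    score forward one year (the report fit one curve per distress type)."""
    return score * annual_loss

def recommended_action(score):
    """Toy stand-in for the experts' decision tree; cutoffs are invented."""
    if score >= 80:
        return "no work"
    if score >= 60:
        return "preventive maintenance"
    return "rehabilitation candidate"

# project a section forward until it falls out of the "no work" band
score, years = 95.0, 0
while recommended_action(score) == "no work":
    score, years = age_section(score), years + 1
```

Repeating this projection across all sections in the network, year by year, is what turns a one-year funding estimate into the multi-year needs estimate the report targets.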
MODEL AVERAGING BASED ON KULLBACK-LEIBLER DISTANCE
Zhang, Xinyu; Zou, Guohua; Carroll, Raymond J.
2016-01-01
This paper proposes a model averaging method based on Kullback-Leibler distance under a homoscedastic normal error term. The resulting model average estimator is proved to be asymptotically optimal. When combining least squares estimators, the model average estimator is shown to have the same large sample properties as the Mallows model average (MMA) estimator developed by Hansen (2007). We show via simulations that, in terms of mean squared prediction error and mean squared parameter estimation error, the proposed model average estimator is more efficient than the MMA estimator and the estimator based on model selection using the corrected Akaike information criterion in small sample situations. A modified version of the new model average estimator is further suggested for the case of heteroscedastic random errors. The method is applied to a data set from the Hong Kong real estate market.
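For intuition, a brute-force version of the closely related Mallows criterion (which the paper's estimator matches asymptotically when combining least-squares estimators) can be sketched as follows, assuming a known error variance, nested candidate models, and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma = 100, 1.0
X = rng.normal(size=(n, 4))
beta = np.array([1.0, 0.5, 0.2, 0.0])        # weak coefficients: averaging helps
y = X @ beta + rng.normal(0, sigma, n)

def fitted(m):
    """Least-squares fit using the first m columns; return (fitted values, d.o.f.)."""
    Xm = X[:, :m]
    return Xm @ np.linalg.pinv(Xm) @ y, m

fits = [fitted(m) for m in (1, 2, 3, 4)]

# Mallows criterion C(w) = ||y - sum_m w_m * yhat_m||^2 + 2 sigma^2 sum_m w_m k_m,
# minimized here by brute force over a grid on the weight simplex
best = None
grid = np.linspace(0, 1, 21)
for w1 in grid:
    for w2 in grid:
        for w3 in grid:
            w4 = 1.0 - w1 - w2 - w3
            if w4 < -1e-9:
                continue
            w = np.array([w1, w2, w3, max(w4, 0.0)])
            yhat = sum(wi * f for wi, (f, _) in zip(w, fits))
            C = np.sum((y - yhat) ** 2) + 2 * sigma**2 * sum(
                wi * k for wi, (_, k) in zip(w, fits))
            if best is None or C < best[0]:
                best = (C, w)
C_min, w_opt = best
```

By construction the averaged fit is never worse, under the criterion, than the full model alone; the paper's contribution is a KL-based criterion with proven asymptotic optimality and a heteroscedastic extension.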
A RAPID NON-DESTRUCTIVE METHOD FOR ESTIMATING ABOVEGROUND BIOMASS OF SALT MARSH GRASSES
Understanding the primary productivity of salt marshes requires accurate estimates of biomass. Unfortunately, these estimates vary enough within and among salt marshes to require large numbers of replicates if the averages are to be statistically meaningful. Large numbers of repl...
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Dillon, Christina B.; Fitzgerald, Anthony P.; Kearney, Patricia M.; Perry, Ivan J.; Rennie, Kirsten L.; Kozarski, Robert; Phillips, Catherine M.
2016-01-01
Introduction Objective methods like accelerometers are feasible for large studies and may quantify variability in day-to-day physical activity better than self-report. The variability between days suggests that day of the week cannot be ignored in the design and analysis of physical activity studies. The purpose of this paper is to investigate the optimal number of days needed to obtain reliable estimates of weekly habitual physical activity using the wrist-worn GENEActiv accelerometer. Methods Data are from a subsample of the Mitchelstown cohort; 475 (44.6% males; mean age 59.6±5.5 years) middle-aged Irish adults. Participants wore the wrist GENEActiv accelerometer for 7 consecutive days. Data were collected at 100Hz and summarised into a signal magnitude vector using 60s epochs. Each time interval was categorised according to intensity based on validated cut-offs. Spearman pairwise correlations determined the association between days of the week. Repeated measures ANOVA examined differences in average minutes across days. Intraclass correlations examined the proportion of variability between days, and the Spearman-Brown formula estimated the intra-class reliability coefficient associated with combinations of 1–7 days. Results Three hundred and ninety-seven adults (59.7±5.5yrs) had valid accelerometer data. Overall, men were most sedentary on weekends while women spent more time in sedentary behaviour on Sunday through Tuesday. Post hoc analysis found sedentary behaviour and light activity levels on Sunday to differ from all other days in the week. Analysis revealed that more than 1 day of monitoring is necessary to achieve acceptable reliability. The monitoring duration required for reliable estimates varied across intensity categories (sedentary (3 days), light (2 days), moderate (2 days), vigorous activity (6 days) and MVPA (2 days)). Conclusion These findings provide knowledge into the behavioural variability in weekly activity patterns of middle-aged adults. Since Sunday
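The Spearman-Brown step can be sketched directly: the reliability of a k-day average follows from the single-day reliability, and inverting the formula gives the monitoring duration needed for a target reliability. The single-day reliability used below is hypothetical, not one of the study's ICCs.

```python
def spearman_brown(r1, k):
    """Reliability of a k-day average given single-day reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

def days_needed(r1, target=0.8):
    """Smallest number of monitoring days whose average meets the target."""
    k = 1
    while spearman_brown(r1, k) < target:
        k += 1
    return k

# e.g. with a hypothetical single-day reliability of 0.45, five days of
# wear are needed to reach the conventional 0.8 criterion
days = days_needed(0.45)
```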
Spatial limitations in averaging social cues
Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
Spatial limitations in averaging social cues.
Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
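The equivalent-noise decomposition the authors fit can be sketched as follows: discrimination thresholds follow sqrt((sigma_int^2 + sigma_ext^2) / n), and internal noise plus effective sample size are recovered by fitting that curve to thresholds measured at several external-noise levels. The parameter values below are invented and the thresholds are noiseless, so the fit is exact.

```python
import numpy as np

def en_threshold(sigma_ext, sigma_int, n_samp):
    """Equivalent-noise model: threshold for judging the mean of n_samp
    pooled elements under internal plus external noise."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

# synthetic thresholds from assumed parameters, then a brute-force grid fit
sig_int_true, n_true = 4.0, 8
ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
thr = en_threshold(ext, sig_int_true, n_true)

best = None
for si in np.linspace(0.5, 10.0, 96):
    for n in range(1, 33):
        sse = np.sum((np.log(en_threshold(ext, si, n)) - np.log(thr)) ** 2)
        if best is None or sse < best[0]:
            best = (sse, si, n)
_, sig_int_est, n_est = best
```

In the study's terms, higher fitted internal noise and smaller effective sample size for gaze than for head rotation is what "less efficient averaging" means.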
Mandok, K S; Kay, J K; Greenwood, S L; Edwards, G R; Roche, J R
2013-06-01
Fifty-three nonlactating, pregnant Holstein-Friesian and Holstein-Friesian × Jersey cross dairy cows were grouped into 4 cohorts (n=15, 12, 13, and 13) and offered 1 of 3 allowances of fresh, cut pasture indoors for 38 ± 2 d (mean ± SD). Cows were released onto a bare paddock after their meal until the following morning. Animals were blocked by age (6 ± 2 yr), day of gestation (208 ± 17 d), and body weight (BW; 526 ± 55 kg). The 3 pasture allowances [low: 7.5 kg of dry matter (DM), medium: 10.1 kg of DM, or high: 12.4 kg of DM/cow per day] were offered in individual stalls to determine the estimated DM and metabolizable energy (ME) intake required for zero energy balance. Individual cow DM intake was determined daily and body condition score was assessed once per week. Cow BW was recorded once per week in cohorts 1 and 2, and 3 times per week in cohorts 3 and 4. Low, medium, and high allowance treatments consumed 7.5, 9.4, and 10.6 kg of DM/cow per day [standard error of the difference (SED)=0.26 kg of DM], and BW gain, including the conceptus, was 0.2, 0.6, and 0.9 kg/cow per day (SED=0.12 kg), respectively. The ME content of the pasture was estimated from in vitro true digestibility and by near infrared spectroscopy. Total ME requirements for maintenance, pregnancy, and limited activity were 1.07 MJ of ME/kg of measured metabolic BW per day. This is more than 45% greater than current recommendations. Differences may be due to an underestimation of ME requirements for maintenance or pregnancy, an overestimation of diet metabolizability, or a combination of these. Further research is necessary to determine the reasons for the greater ME requirements measured in the present study, but the results are important for on-farm decisions regarding feed allocation for nonlactating, pregnant dairy cows.
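The zero-energy-balance design can be sketched by regressing the reported treatment-mean BW gains on intakes and extrapolating to zero gain; this crude sketch ignores the pregnancy, activity and ME-density terms the study accounted for.

```python
import numpy as np

# Treatment-mean intakes and BW gains from the abstract
intake = np.array([7.5, 9.4, 10.6])   # kg DM/cow per day
gain = np.array([0.2, 0.6, 0.9])      # kg BW/cow per day (incl. conceptus)

slope, intercept = np.polyfit(intake, gain, 1)
zero_balance_intake = -intercept / slope   # intake at zero BW gain, kg DM/day
```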
Code of Federal Regulations, 2010 CFR
2010-01-01
... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? If the amount of the estimated residual value you rely upon to satisfy the full payout...
Code of Federal Regulations, 2012 CFR
2012-01-01
... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? If the amount of the estimated residual value you rely upon to satisfy the full payout...
Code of Federal Regulations, 2013 CFR
2013-01-01
... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? If the amount of the estimated residual value you rely upon to satisfy the full payout...
Code of Federal Regulations, 2011 CFR
2011-01-01
... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? If the amount of the estimated residual value you rely upon to satisfy the full payout...
Code of Federal Regulations, 2014 CFR
2014-01-01
... residual value greater than 25% of the original cost of the leased property? 714.5 Section 714.5 Banks and... required if you rely on an estimated residual value greater than 25% of the original cost of the leased property? If the amount of the estimated residual value you rely upon to satisfy the full payout...
Hancke, Kasper; Dalsgaard, Tage; Sejr, Mikael Kristian; Markager, Stiig; Glud, Ronnie Nøhr
2015-01-01
Accurate quantification of pelagic primary production is essential for quantifying the marine carbon turnover and the energy supply to the food web. Knowing the electron requirement (Κ) for carbon (C) fixation (ΚC) and oxygen (O2) production (ΚO2), variable fluorescence has the potential to quantify primary production in microalgae, thereby increasing the spatial and temporal resolution of measurements compared to traditional methods. Here we quantify ΚC and ΚO2 through measures of Pulse Amplitude Modulated (PAM) fluorometry, C fixation and O2 production in an Arctic fjord (Godthåbsfjorden, W Greenland). Through short- (2h) and long-term (24h) experiments, rates of electron transfer (ETRPSII), C fixation and/or O2 production were quantified and compared. Absolute rates of ETR were derived by accounting for Photosystem II light absorption and spectral light composition. Two-hour incubations revealed a linear relationship between ETRPSII and gross 14C fixation (R2 = 0.81) during light-limited photosynthesis, giving a ΚC of 7.6 ± 0.6 (mean ± S.E.) mol é (mol C)−1. Diel net rates also demonstrated a linear relationship between ETRPSII and C fixation, giving a ΚC of 11.2 ± 1.3 mol é (mol C)−1 (R2 = 0.86). For net O2 production the electron requirement was lower than for net C fixation, giving 6.5 ± 0.9 mol é (mol O2)−1 (R2 = 0.94). This, however, is still an electron requirement 1.6 times higher than the theoretical minimum for O2 production [i.e. 4 mol é (mol O2)−1]. The discrepancy is explained by respiratory activity and non-photochemical electron requirements, and the variability is discussed. In conclusion, the bio-optical method and derived electron requirement support conversion of ETR to units of C or O2, paving the road for improved spatial and temporal resolution of primary production estimates. PMID:26218096
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore, averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
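As a reference point for the AIC-weight practice being critiqued, Akaike weights are computed from AIC differences as follows (a generic sketch with made-up AIC values, not data from the paper):

```python
import math

# Made-up AIC values for three candidate models (illustrative only)
aic = [100.0, 101.2, 104.5]

deltas = [a - min(aic) for a in aic]         # AIC differences from the best model
raw = [math.exp(-d / 2.0) for d in deltas]   # relative likelihoods of the models
total = sum(raw)
weights = [r / total for r in raw]           # Akaike weights, sum to 1
```

Cade's point is that averaging regression coefficients with these weights is invalid under multicollinearity, even though the weights themselves are well defined at the model level.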
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Shi, Meng; Zang, Jianjun; Li, Zhongchao; Shi, Chuanxin; Liu, Ling; Zhu, Zhengpeng; Li, Defa
2015-10-01
This experiment was conducted to determine the optimal standardized ileal digestible lysine (SID Lys) level in diets fed to primiparous sows during lactation. A total of 150 (Landrace × Large White) crossbred gilts (weighing 211.1 ± 3.5 kg with a litter size of 11.1 ± 0.2) were fed lactation diets (3325 kcal metabolizable energy (ME)/kg) containing SID Lys levels of 0.76, 0.84, 0.94, 1.04 or 1.14% through a 28-day lactation. Gilts were allocated to treatments based on their body weight and backfat thickness 48 h after farrowing. Gilt body weight loss was significantly (P < 0.05) decreased by increasing dietary SID Lys levels. Fitted broken-line (P < 0.05) and quadratic plot (P < 0.05) analyses of body weight loss indicated that the optimal SID Lys for primiparous sows was 0.85 and 1.01%, respectively. Average daily feed intake (ADFI), weaning-to-estrus interval and subsequent conception rate were not affected by dietary SID Lys levels. Increasing dietary lysine had no effect on litter performance. Protein content in milk was increased by dietary SID Lys (P < 0.05). Dietary SID Lys tended to increase concentrations of serum insulin-like growth factor I (P = 0.066). These results indicate that the optimal dietary SID Lys level for lactating gilts was at least 0.85%, which approaches the recommendation of 0.84% estimated by the National Research Council (2012).
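The broken-line (linear-plateau) analysis mentioned above can be sketched as a grid search over candidate breakpoints, fitting ordinary least squares at each. The data and candidate list below are hypothetical; the study would have used dedicated statistical software:

```python
import numpy as np

def broken_line_fit(x, y, candidates):
    """Fit y = a + b*min(x, x0) by grid search over candidate breakpoints x0."""
    best = None
    for x0 in candidates:
        z = np.minimum(x, x0)
        A = np.column_stack([np.ones_like(z), z])     # design matrix [1, z]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # OLS intercept and slope
        sse = float(((A @ coef - y) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, x0, coef)
    return best[1], best[2]  # breakpoint, (intercept, slope)

# Hypothetical body-weight-loss response that plateaus at SID Lys = 0.85%
x = np.array([0.76, 0.84, 0.94, 1.04, 1.14])
y = 20.0 - 10.0 * np.minimum(x, 0.85)
x0, (a, b) = broken_line_fit(x, y, candidates=[0.80, 0.85, 0.90, 0.95])
```

The recovered breakpoint x0 is the "optimal" dose in the broken-line sense: the response stops changing beyond it.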
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....
Muggeo, Vito M R
2010-08-15
We discuss some issues relevant to the paper by Clegg and co-authors published in Statistics in Medicine (28, 3670-3682). Emphasis is on computation of the variance of the sum of products of two estimates, slopes and breakpoints. PMID:20680988
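The variance of a product of two estimates, such as a slope times a breakpoint, is commonly approximated with the first-order delta method; a generic sketch (the numbers are made up):

```python
def var_product(a, b, var_a, var_b, cov_ab):
    """First-order delta-method variance of the product a*b:
    Var(ab) ~ b^2*Var(a) + a^2*Var(b) + 2ab*Cov(a, b)."""
    return b**2 * var_a + a**2 * var_b + 2.0 * a * b * cov_ab

# Example: slope a = 2.0, breakpoint b = 5.0, with assumed (co)variances
v = var_product(2.0, 5.0, 0.04, 0.25, 0.01)  # -> 2.2
```

For a sum of such products, the covariances between the products would also enter; the sketch covers only a single term.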
Cosmological ensemble and directional averages of observables
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna
2015-07-01
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A Site-sPecific Agricultural water Requirement and footprint Estimator (SPARE:WATER 1.0)
NASA Astrophysics Data System (ADS)
Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.
2013-07-01
The agricultural water footprint addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). By considering site-specific properties when calculating the crop water footprint, this methodology can be used to support decision making in the agricultural sector on local to regional scale. We therefore developed the spatial decision support system SPARE:WATER that allows us to quantify green, blue and grey water footprints on regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation methods rather than optimal conditions to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water is defined as the water needed to leach out salt from the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept where we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km3 yr-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional estimation of crop water footprints.
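The green/blue/grey decomposition described above can be sketched per grid cell as follows. The function name, the min/max partition of crop water use, and the simple leaching-based grey-water term are illustrative assumptions, not the SPARE:WATER implementation:

```python
def crop_water_components(et_c, eff_rain, leaching_req, irr_efficiency):
    """Illustrative per-area water components (e.g. mm per season) for one cell.

    et_c: crop evapotranspiration; eff_rain: effective rainfall;
    leaching_req: water needed to flush salts from the root zone;
    irr_efficiency: irrigation efficiency in (0, 1] (below 1 inflates
    the gross blue-water withdrawal, as in the abstract).
    """
    green = min(et_c, eff_rain)                 # consumed rainfall
    net_irrigation = max(et_c - eff_rain, 0.0)  # unmet crop demand
    blue = net_irrigation / irr_efficiency      # gross irrigation withdrawal
    grey = leaching_req                         # salt-leaching water
    return green, blue, grey

g, b, gr = crop_water_components(650.0, 150.0, 40.0, 0.8)
```

Aggregating such cell values over political boundaries, catchments or irrigation districts gives the regional footprints the abstract describes.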
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction
Ahlfors, Seppo P.; Hinrichs, Hermann
2016-01-01
Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
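A much-simplified sketch of the template-subtraction idea, without the weighting and resampling steps that distinguish RMAS: epoch the MEG trace around each ECG R-peak, average the epochs into a template, and subtract it at each peak.

```python
import numpy as np

def template_subtract(meg, r_peaks, half_win):
    """Subtract the average cardiac epoch around each R-peak (simplified;
    RMAS additionally weights epochs and resamples for sub-sample alignment)."""
    ok = [p for p in r_peaks if p - half_win >= 0 and p + half_win <= len(meg)]
    epochs = np.array([meg[p - half_win:p + half_win] for p in ok])
    template = epochs.mean(axis=0)          # estimated cardiac artefact shape
    cleaned = meg.astype(float).copy()
    for p in ok:
        cleaned[p - half_win:p + half_win] -= template
    return cleaned

# Synthetic trace: an identical artefact at three 'R-peaks' is removed exactly
artefact = np.array([0.0, 1.0, -2.0, 1.0, 0.0, 0.0])
meg = np.zeros(100)
for p in (20, 50, 80):
    meg[p - 3:p + 3] += artefact
cleaned = template_subtract(meg, [20, 50, 80], half_win=3)
```

On real data the artefact varies cycle to cycle and is asynchronous with the sampling grid, which is exactly what the resampling step of RMAS addresses.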
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
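The paper's constrained Lagrange-multiplier scheme is more elaborate, but the basic relationship it refines, namely that more scans are needed for tighter precision, higher confidence, or a noisier (lower-dose) measurement, can be illustrated with the textbook normal-approximation sample size for estimating a mean. This sketch is not the paper's method:

```python
import math

def sample_size(cv, rel_precision, z=1.96):
    """Scans needed so the estimated mean is within +/- rel_precision
    (as a fraction of the mean) with ~95% confidence, given a coefficient
    of variation cv. Textbook normal approximation, not the paper's
    constrained MOSFET-calibration scheme."""
    return math.ceil((z * cv / rel_precision) ** 2)

n = sample_size(cv=0.10, rel_precision=0.05)  # -> 16
```

The qualitative behavior matches the abstract: halving the anticipated ED roughly doubles the coefficient of variation and quadruples the required number of scans.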
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
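The average-index integral described above is easy to reproduce numerically. The center and edge indices are the abstract's assumed values; the power-law exponent p is an illustrative assumption here:

```python
# Index gradient modelled as a power function of normalized distance x
# from the lens center: n(x) = n_c - (n_c - n_e) * x**p  (p assumed below)
n_c, n_e, p = 1.415, 1.37, 5.0

# Average axial index = integral of n(x) dx over [0, 1]; midpoint rule
N = 100_000
dx = 1.0 / N
numeric = sum((n_c - (n_c - n_e) * ((i + 0.5) * dx) ** p) * dx for i in range(N))

# Closed form of the same integral: n_c - (n_c - n_e) / (p + 1)
analytic = n_c - (n_c - n_e) / (p + 1.0)
```

With these values the average comes out near 1.408, in the range the abstract reports for younger lenses; the age dependence enters through changes in the gradient profile (here, p).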
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effect modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which complete and explicit dose-response relationships are observed (a) in all experiments or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
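Meta-analytic averaging of per-experiment estimates is typically inverse-variance weighted; a minimal fixed-effect sketch with made-up estimates (the paper's strategy, including random-effects extensions, is richer than this):

```python
def meta_average(estimates, std_errors):
    """Fixed-effect inverse-variance weighted average and its standard error."""
    w = [1.0 / se**2 for se in std_errors]          # precision weights
    total = sum(w)
    mean = sum(wi * e for wi, e in zip(w, estimates)) / total
    return mean, (1.0 / total) ** 0.5

# Made-up per-experiment log-EC50 estimates and standard errors
mean, se = meta_average([1.0, 1.2, 0.9], [0.1, 0.2, 0.1])
```

Precise experiments dominate the average, and the pooled standard error shrinks as experiments accumulate, which is why excluding incomplete dose-response curves (case b) still leaves a usable pooled estimate.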
Bounding quantum gate error rate based on reported average fidelity
NASA Astrophysics Data System (ADS)
Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
Time-averaging water quality assessment
Reddy, L.S.; Ormsbee, L.E.; Wood, D.J.
1995-07-01
While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.
Averaging Internal Consistency Reliability Coefficients
ERIC Educational Resources Information Center
Feldt, Leonard S.; Charter, Richard A.
2006-01-01
Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
ERIC Educational Resources Information Center
Barker, James L.; And Others
This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…
Koltun, G.F.
2001-01-01
This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the
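The report relies on nonsequential mass-curve techniques, but the simpler sequential (Rippl) mass-curve idea behind storage-demand analysis can be sketched as the largest cumulative shortfall of inflow below demand. This is a simplified stand-in, not the report's method:

```python
def required_storage(inflows, demand):
    """Sequential mass-curve (Rippl) estimate: the reservoir must cover the
    worst cumulative deficit of inflow below a constant demand rate."""
    deficit = 0.0
    worst = 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)  # surplus refills storage
        worst = max(worst, deficit)
    return worst

# Demand of 4 units per period against a variable inflow sequence
storage = required_storage([10.0, 0.0, 0.0, 10.0], demand=4.0)  # -> 8.0
```

Nonsequential analysis generalizes this by ranking low-flow sequences by duration and frequency rather than following the historical order, which is how the report attaches recurrence intervals to the storage requirements.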
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; Irons, James; Spruce, Joseph P.; Underwood, Lauren W.; Pagnutti, Mary
2006-01-01
This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for thermal remote sensed data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, and direct measures of spatial resolution are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Andersson, K G; Mikkelsen, T; Astrup, P; Thykier-Nielsen, S; Jacobsen, L H; Hoe, S C; Nielsen, S P
2009-12-01
The ARGOS decision support system is currently being extended to enable estimation of the consequences of terror attacks involving chemical, biological, nuclear and radiological substances. This paper presents elements of the framework that will be applied in ARGOS to calculate the dose contributions from contaminants dispersed in the atmosphere after a 'dirty bomb' explosion. Conceptual methodologies are presented which describe the various dose components on the basis of knowledge of time-integrated contaminant air concentrations. The aerosolisation and atmospheric dispersion in a city of the different types of contaminants conceivable in a 'dirty bomb' are also discussed. PMID:19427717
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
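The latitude-dependent dwell time behind the Tierra del Fuego example follows from orbital geometry: a circular orbit lingers near its turning latitudes, roughly ±55° for GPS. A minimal sketch of that idealized latitude density (not the paper's semi-analytic tool) is:

```python
import math

def subsatellite_latitude_density(lat_deg, incl_deg=55.0):
    """Relative time a circular-orbit satellite spends near a given
    sub-satellite latitude: f(phi) proportional to
    cos(phi) / sqrt(sin^2(i) - sin^2(phi)) for |phi| < i.
    An illustrative, idealized model, not an exact GPS ephemeris."""
    phi = math.radians(lat_deg)
    i = math.radians(incl_deg)
    if abs(phi) >= i:
        return 0.0  # a satellite never flies over latitudes beyond its inclination
    return math.cos(phi) / math.sqrt(math.sin(i) ** 2 - math.sin(phi) ** 2)
```

Evaluating this near 54° latitude versus 20° latitude gives a density ratio of several, consistent with the abstract's observation that satellites are far more often overhead near the inclination limit than in the tropics.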
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners (a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]), with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
PMID:20129047
Bleiwas, Donald I.
2011-01-01
To produce materials from mine to market it is necessary to overcome obstacles that include the force of gravity, the strength of molecular bonds, and technological inefficiencies. These challenges are met by the application of energy to accomplish the work that includes the direct use of electricity, fossil fuel, and manual labor. The tables and analyses presented in this study contain estimates of electricity consumption for the mining and processing of ores, concentrates, intermediate products, and industrial and refined metallic commodities on a kilowatt-hour per unit basis, primarily the metric ton or troy ounce. Data contained in tables pertaining to specific currently operating facilities are static, as the amount of electricity consumed to process or produce a unit of material changes over time for a great number of reasons. Estimates were developed from diverse sources that included feasibility studies, company-produced annual and sustainability reports, conference proceedings, discussions with government and industry experts, journal articles, reference texts, and studies by nongovernmental organizations.
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
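As a mathematical aside (standard harmonic-approximation expressions, not reproduced from the paper itself), the quantity being approximated is the thermal expectation value over vibrational states, and the mean-square amplitude of each phonon mode that a thermal line must reproduce is:

```latex
\langle O \rangle(T) \;=\; \frac{1}{\mathcal{Z}} \sum_{s} e^{-E_s/k_B T}\,
  \langle \Phi_s | \hat{O} | \Phi_s \rangle ,
\qquad
\langle u_\nu^2 \rangle(T) \;=\; \frac{\hbar}{2\omega_\nu}
  \coth\!\left(\frac{\hbar\omega_\nu}{2 k_B T}\right).
```

A thermal line is then a single configuration, parametrized by T, whose mode amplitudes track these mean-square values, so that evaluating O along it approximates the full vibrational average with far fewer samples.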
Hall, D H; Drury, D; Gronow, J R; Rosevear, A; Pollard, S J T; Smith, R
2006-12-01
Introduction of the EU Landfill Directive is having a significant impact on waste management in the UK and in other member states that have relied on landfilling. This paper considers the length of the aftercare period required by the municipal solid waste streams that the UK will most probably generate following implementation of the Landfill Directive. Data were derived from literature to identify properties of residues from the most likely treatment processes and the probable management times these residues will require within the landfill environment were then modelled. Results suggest that for chloride the relevant water quality standard (250 mg l^-1) will be achieved with a management period of 40 years and for lead (0.1 mg l^-1), 240 years. This has considerable implications for the sustainability of landfill and suggests that current timescales for aftercare of landfills may be inadequate.
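The aftercare timescales above can be illustrated with a first-order flushing model, C(t) = C0 exp(-k t): the management period is the time needed for leachate concentration to decay to the water quality standard. The function and example parameters below are hypothetical, not the paper's model.

```python
import math

def years_to_reach_standard(c0_mg_l, c_std_mg_l, k_per_year):
    """Years for leachate concentration to decay from c0 to the water
    quality standard under simple first-order flushing
    C(t) = C0 * exp(-k t).  All parameter values are hypothetical."""
    return math.log(c0_mg_l / c_std_mg_l) / k_per_year

# e.g. chloride starting at a hypothetical 2500 mg/l, standard 250 mg/l,
# flushing rate k = 0.06 per year: ~38 years, the same order as the
# paper's 40-year chloride estimate
```

Slower-flushing contaminants such as lead correspond to much smaller effective k, which is why their modelled management periods run to centuries.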
Flowers, W L; Alhusen, H D
1992-03-01
A study was conducted to examine effects of mating systems composed of natural service (NS) and AI in swine on farrowing rate, litter size, and labor requirements. Sows and gilts were bred once per day via one of the following treatments (d 1/d 2): NS/NS, NS/AI, AI/AI, and NS/none. Gilts bred with NS/AI, AI/AI, and NS/NS had higher (P < .05) farrowing rates than gilts bred with NS/none matings. Similarly, farrowing rates were higher (P < .05) in NS/AI than in NS/NS gilts. Numbers of pigs born alive were greater (P < .05) in NS/NS, NS/AI, and AI/AI than in NS/none gilts. In sows, a treatment × time interaction (P < .01) was present for farrowing rate. In the AI/AI treatment, farrowing rate increased (P < .01) from 70.0% (wk 1 through 3) to 88.5% (wk 4 through 10). Farrowing rates were 87.3, 93.2, and 76.0% in the NS/NS, NS/AI, and NS/none groups, respectively, and did not change (P = .72) over time. Sows bred via NS/NS and NS/AI had larger litters (P < .05) than NS/none sows. In the present study, if four or more sows and gilts were bred, then AI required less (P < .05) time per animal than NS. Furthermore, gilts required more (P < .05) time for breeding than sows. Results from this study demonstrate that gilts and sows responded differently to combinations of NS and AI in terms of reproductive performance. In addition, differences in labor requirements per sow or gilt between NS and AI matings were dependent on parity and daily breeding demands. PMID:1563988
Sabljic, A
2001-04-01
The molecular connectivity indices (MCIs) have been successfully used for over 20 years in quantitative structure activity relationships (QSAR) modelling in various areas of physics, chemistry, biology, drug design, and environmental sciences. With this review, we hope to assist present and future QSAR practitioners to apply MCIs more wisely and more critically. First, we have described the methods of calculation and systematics of MCIs. This section should be helpful in rational selection of MCIs for QSAR modelling. Then we have presented our long-term experience in the application of MCIs through several characteristic and successful QSAR models for estimating partitioning and chromatographic properties of persistent organic pollutants (POPs). We have also analysed the trends in calculated MCIs and discussed their physical interpretation. In conclusion, several practical recommendations and warnings, based on our research experience, have been given for the application of MCIs in the QSAR modelling. PMID:11302582
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 false Average moisture content. Section 51.2561 ... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
New applications for high average power beams
NASA Astrophysics Data System (ADS)
Neau, E. L.; Turman, B. N.; Patterson, E. L.
1993-06-01
The technology base formed by the development of high peak power simulators, laser drivers, FEL's, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.
Averaging of globally coupled oscillators
NASA Astrophysics Data System (ADS)
Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt
1992-03-01
We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: the observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.
Averaging inhomogeneous cosmologies - a dialogue.
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Emissions averaging top option for HON compliance
Kapoor, S. )
1993-05-01
In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.
Zou, C X; Lively, F O; Wylie, A R G; Yan, T
2016-04-01
Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization from grass silage diets. Cows were housed in cubicle accommodation for 17 days, and then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaptation, followed by immediate transfer to indirect, open-circuit respiration calorimeters for 3 days, with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P > 0.05) between the genotypes for energy intakes, energy outputs or energy use efficiency, or for methane emission rates (methane emissions per unit of dry matter intake or energy intake), or for N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r^2 = 0.52; P < 0.001) indicated net energy requirements for maintenance of 0.386, 0.392 and 0.375 MJ/kg^0.75 for LF+ST, LF and ST, respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r^2 = 0.59; P < 0.001). There were positive linear relationships between N intake and N outputs in manure, and manure N accounted for 0.923 of the N intake. The present results provide approaches to predict maintenance energy requirement, methane emission and manure N output for suckler cows; further information is required to evaluate their application in a wide range of suckler production systems.
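The maintenance estimate above comes from regressing energy retention on ME intake. One common way such a value is read off is as the intake at zero retention; the sketch below uses entirely made-up data points and is not the study's calculation, whose details may differ.

```python
import numpy as np

# Hypothetical (ME intake, energy retention) pairs, MJ per kg^0.75 per day
mei = np.array([0.40, 0.55, 0.70, 0.85, 1.00])
er = np.array([-0.10, 0.00, 0.11, 0.20, 0.31])

# Least-squares line: energy retention = slope * intake + intercept
slope, intercept = np.polyfit(mei, er, 1)

# Intake at zero retention approximates the maintenance requirement
me_maintenance = -intercept / slope
```

With these invented numbers the x-intercept lands near 0.55 MJ/kg^0.75; the study's reported net energy maintenance values were around 0.38-0.39 MJ/kg^0.75.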
Considerations for applying VARSKIN mod 2 to skin dose calculations averaged over 10 cm2.
Durham, James S
2004-02-01
VARSKIN Mod 2 is a DOS-based computer program that calculates the dose to skin from beta and gamma contamination either directly on skin or on material in contact with skin. The default area for calculating the dose is 1 cm2. Recently, the U.S. Nuclear Regulatory Commission issued new guidelines for calculating shallow dose equivalent from skin contamination that require the dose to be averaged over 10 cm2. VARSKIN Mod 2 was not fully designed to calculate beta or gamma dose estimates averaged over 10 cm2, even though the program allows the user to calculate doses averaged over 10 cm2. This article explains why VARSKIN Mod 2 overestimates the beta dose when applied to 10 cm2 areas, describes a manual method for correcting the overestimate, and explains how to perform reasonable gamma dose calculations averaged over 10 cm2. The article also describes upgrades underway in Varskin 3. PMID:14744063
Oluwole, Akinola S.; Ekpo, Uwem F.; Karagiannis-Voules, Dimitrios-Alexios; Abe, Eniola M.; Olamiju, Francisca O.; Isiyaku, Sunday; Okoronkwo, Chukwu; Saka, Yisa; Nebe, Obiageli J.; Braide, Eka I.; Mafiana, Chiedu F.; Utzinger, Jürg; Vounatsou, Penelope
2015-01-01
Background The acceleration of the control of soil-transmitted helminth (STH) infections in Nigeria, emphasizing preventive chemotherapy, has become imperative in light of the global fight against neglected tropical diseases. Predictive risk maps are an important tool to guide and support control activities. Methodology STH infection prevalence data were obtained from surveys carried out in 2011 using standard protocols. Data were geo-referenced and collated in a nationwide, geographic information system database. Bayesian geostatistical models with remotely sensed environmental covariates and variable selection procedures were utilized to predict the spatial distribution of STH infections in Nigeria. Principal Findings We found that hookworm, Ascaris lumbricoides, and Trichuris trichiura infections are endemic in 482 (86.8%), 305 (55.0%), and 55 (9.9%) locations, respectively. Hookworm and A. lumbricoides infection co-exist in 16 states, while the three species are co-endemic in 12 states. Overall, STHs are endemic in 20 of the 36 states of Nigeria, including the Federal Capital Territory of Abuja. The observed prevalence at endemic locations ranged from 1.7% to 51.7% for hookworm, from 1.6% to 77.8% for A. lumbricoides, and from 1.0% to 25.5% for T. trichiura. Model-based predictions ranged from 0.7% to 51.0% for hookworm, from 0.1% to 82.6% for A. lumbricoides, and from 0.0% to 18.5% for T. trichiura. Our models suggest that day land surface temperature and dense vegetation are important predictors of the spatial distribution of STH infection in Nigeria. In 2011, a total of 5.7 million (13.8%) school-aged children were predicted to be infected with STHs in Nigeria. Mass treatment at the local government area level for annual or bi-annual treatment of the school-aged population in Nigeria in 2011, based on World Health Organization prevalence thresholds, were estimated at 10.2 million tablets. Conclusions/Significance The predictive risk maps and estimated
Mafe, Oluwakemi A.T.; Davies, Scott M.; Hancock, John; Du, Chenyu
2015-01-01
This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second generation ethanol. A dilute acid pretreatment process reported by National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change of internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model on other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly. PMID:26109752
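The energy balance described above can be caricatured as sensible heat to bring the biomass-water slurry to reaction temperature plus latent heat for any water evaporated. Everything below (the heat capacity values, and the neglect of reaction heat, losses, and heat recovery) is an assumed simplification for illustration, not NREL's model.

```python
def pretreatment_heat_demand_mj(biomass_kg, water_kg, t_in_c, t_react_c,
                                evap_kg=0.0):
    """Rough sensible + latent heat demand for a dilute-acid
    pretreatment step.  Hypothetical simplification: ignores reaction
    energy, heat losses, work terms and heat integration."""
    CP_WATER = 4.18e-3    # MJ/(kg K)
    CP_BIOMASS = 1.3e-3   # MJ/(kg K), assumed value for dry biomass
    H_VAP = 2.26          # MJ/kg, latent heat of water vaporization
    dt = t_react_c - t_in_c
    sensible = (water_kg * CP_WATER + biomass_kg * CP_BIOMASS) * dt
    return sensible + evap_kg * H_VAP
```

Even this crude form shows why the solids loading rate dominates the sensitivity analysis: at higher loading, less water must be heated per kilogram of biomass, so the energy demand per unit of sugar falls.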
Weiss, Carlos O.; Cappola, Anne R.; Varadhan, Ravi; Fried, Linda P.
2012-01-01
Objectives Resting metabolic rate (RMR) is the largest component of total energy expenditure. It has not been studied in old-old adults living in the community, though abnormalities in RMR may play a critical role in the development of the clinical syndrome of frailty. The objective was to measure RMR and examine the association of measured RMR with frailty status and compare it to expected RMR generated by a predictive equation. Design Physiologic sub-study conducted as a home visit within an observational cohort study. Setting Baltimore City and County, Maryland. Participants 77 women age 83–93 years enrolled in the Women’s Health and Aging Study II. Measurements RMR with indirect calorimetry; frailty status; fat-free mass; ambient and body temperature; expected RMR via the Mifflin-St. Jeor equation. Results Average RMR was 1119 kcal/d (s.d.± 205; range 595–1560). Agreement between observed and expected RMR was biased and very poor (between-subject coefficient of variation 38.0%, 95%CI: 35.1–40.8). Variability of RMR was increased in frail subjects (heteroscedasticity F test P value=0.02). Both low and high RMR were associated with being frail (Odds Ratio 5.4, P value=0.04) and slower self-selected walking speed (P value<0.001) after adjustment for covariates. Conclusion Equations to predict RMR that are not validated in old-old adults appear to correlate poorly with measured RMR. RMR values are highly variable among old-old women, with deviations from the mean predicting clinical frailty. These exploratory findings suggest a pathway to clinical frailty through either high or low RMR. PMID:22985142
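The expected RMR in the study came from the Mifflin-St Jeor equation. The function below encodes the commonly published coefficients of that equation; treat it as a sketch rather than the study's exact implementation.

```python
def mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, female=True):
    """Predicted resting metabolic rate (kcal/day) from the
    Mifflin-St Jeor equation: 10*wt + 6.25*ht - 5*age,
    minus 161 for women or plus 5 for men."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base - 161.0 if female else base + 5.0
```

For an 85-year-old woman of 60 kg and 160 cm this predicts 1014 kcal/d, in the neighborhood of the measured mean of 1119 kcal/d; as the abstract notes, however, agreement at the individual level in this old-old cohort was poor.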
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
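At its core, the correlation area method produces a weighted mean areal value: each measurement contributes in proportion to the basin area it best represents. The sketch below shows only this final combination step with illustrative weights; deriving the weights from measurement accuracy and the variable's spatial statistics is the substance of the paper's method.

```python
import numpy as np

def weighted_areal_mean(values, weights):
    """Combine point, line and areal measurements into one mean areal
    value.  The weights here stand in for the correlation areas
    assigned to each measurement (illustrative only)."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize weights to unit sum
    return float(np.dot(w, values))
```

For example, a remotely sensed areal estimate covering half the basin would carry twice the weight of each of two gauge points representing a quarter of the basin apiece.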
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Mon Weather Rev 133:1155-1174, 2005), Raftery et al. recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
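A compact sketch of the EM training loop for Gaussian BMA (one weight per ensemble member and a single shared variance) is given below. This is a deliberate simplification for illustration: it is neither the full Raftery et al. scheme nor the DREAM sampler.

```python
import numpy as np

def bma_em(forecasts, obs, iters=200):
    """Minimal EM sketch for Gaussian Bayesian model averaging.
    forecasts: (n_times, n_models) array; obs: (n_times,) array.
    Returns per-model weights and a single shared variance."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)                       # uniform initial weights
    s2 = max(float(np.var(obs - forecasts.mean(axis=1))), 1e-6)
    for _ in range(iters):
        # E-step: responsibility of each model for each observation
        d2 = (obs[:, None] - forecasts) ** 2
        dens = w * np.exp(-0.5 * d2 / s2) / np.sqrt(2 * np.pi * s2)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and the shared variance
        w = z.mean(axis=0)
        s2 = max(float((z * d2).sum() / n), 1e-6)
    return w, s2
```

Run on synthetic data where one member tracks the observations and another is mostly noise, the weight of the accurate member converges toward one, which is the behavior EM-based BMA training relies on.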
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Averaging Robertson-Walker cosmologies
NASA Astrophysics Data System (ADS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
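For reference, the scalar averaging formalism invoked here is usually written, in its irrotational-dust form (which the paper generalizes to multiple fluids), as a domain average plus a kinematical backreaction term entering an averaged Friedmann-like equation:

```latex
\langle f \rangle_{\mathcal{D}}
  = \frac{1}{V_{\mathcal{D}}} \int_{\mathcal{D}} f \,\sqrt{\det h}\; d^3x ,
\qquad
\mathcal{Q}_{\mathcal{D}}
  = \frac{2}{3}\!\left( \langle \theta^2 \rangle_{\mathcal{D}}
      - \langle \theta \rangle_{\mathcal{D}}^2 \right)
    - 2 \langle \sigma^2 \rangle_{\mathcal{D}} ,
```

```latex
3 \left( \frac{\dot a_{\mathcal{D}}}{a_{\mathcal{D}}} \right)^{\!2}
  = 8\pi G \,\langle \rho \rangle_{\mathcal{D}}
    - \tfrac{1}{2} \langle \mathcal{R} \rangle_{\mathcal{D}}
    - \tfrac{1}{2} \mathcal{Q}_{\mathcal{D}} .
```

Here θ is the expansion scalar, σ² the shear scalar, and ⟨ℛ⟩ the averaged spatial curvature; the quadratic terms in Q are the source of the small deviations quantified in the abstract.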
NASA Astrophysics Data System (ADS)
Multsch, S.; Al-Rumaikhani, Y. A.; Frede, H.-G.; Breuer, L.
2013-01-01
The water footprint accounting method addresses the quantification of water consumption in agriculture, whereby three types of water to grow crops are considered, namely green water (consumed rainfall), blue water (irrigation from surface or groundwater) and grey water (water needed to dilute pollutants). Most current water footprint assessments focus on the global to continental scale. We therefore developed the spatial decision support system SPARE:WATER, which allows green, blue and grey water footprints to be quantified at the regional scale. SPARE:WATER is programmed in VB.NET, with geographic information system functionality implemented by the MapWinGIS library. Water requirements and water footprints are assessed on a grid basis and can then be aggregated for spatial entities such as political boundaries, catchments or irrigation districts. We assume inefficient irrigation rather than optimal conditions, to account for irrigation methods with efficiencies other than 100%. Furthermore, grey water can be defined as the water needed to leach salt out of the rooting zone in order to maintain soil quality, an important management task in irrigation agriculture. Apart from a thorough representation of the modelling concept, we provide a proof of concept in which we assess the agricultural water footprint of Saudi Arabia. The entire water footprint is 17.0 km^3 yr^-1 for 2008, with a blue water dominance of 86%. Using SPARE:WATER we are able to delineate regional hot spots as well as crop types with large water footprints, e.g. sesame or dates. Results differ from previous studies of national-scale resolution, underlining the need for regional water footprint assessments.
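The grey water component follows the standard dilution logic of water footprint accounting: the volume needed to dilute a leached pollutant load down to the ambient quality standard. The sketch below uses that generic formulation with hypothetical inputs; note that SPARE:WATER additionally treats grey water as a salt-leaching requirement, which this sketch does not cover.

```python
def grey_water_footprint_m3_per_t(applied_kg_ha, leach_frac,
                                  c_max_mg_l, c_nat_mg_l, yield_t_ha):
    """Grey water footprint (m^3 per tonne of crop) via the standard
    dilution formulation: leached load divided by the allowable
    concentration increment, per unit yield.  Inputs are hypothetical."""
    load_kg_ha = leach_frac * applied_kg_ha          # pollutant reaching water
    # kg/ha divided by (mg/l == g/m^3): factor 1000 converts kg to g
    dilution_m3_ha = load_kg_ha * 1000.0 / (c_max_mg_l - c_nat_mg_l)
    return dilution_m3_ha / yield_t_ha
```

For instance, 100 kg/ha of applied nitrogen with a 10% leaching fraction, a 50 mg/l standard, zero natural concentration, and a 5 t/ha yield gives 40 m^3 of grey water per tonne.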
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized
NASA Astrophysics Data System (ADS)
D'Urso, Guido; Maltese, Antonino; Palladino, Mario
2014-10-01
An efficient use of water for irrigation is a challenging task. From an agronomical point of view, it requires establishing the optimal amount of water to be supplied, at the correct time, based on the phenological phase and the spatial distribution of water stress. Indeed, knowledge of the actual water stress is essential for agronomic decisions: vineyards need to be managed to maintain a moderate water stress, thus allowing berry quality and quantity to be optimized. Methods for quickly quantifying where, when, and to what extent vines begin to experience water stress are beneficial. Traditional point-based methodologies, such as those based on the Scholander pressure chamber, even if well established, are time-consuming and do not give a comprehensive picture of vineyard water deficit. Earth Observation (E.O.) based methodologies promise to achieve a synoptic overview of water stress. Some E.O. data, indeed, sense the territory in the thermal part of the spectrum and, as is well recognized, leaf radiometric temperature is related to plant water status. However, current satellite sensors lack the spatial resolution needed to detect pure canopy pixels; thus, the pixel radiometric temperature characterizes the whole soil-vegetation system, in variable proportions. On the other hand, due to limits of current crop dusters, there is no need to characterize the water stress distribution at plant scale, and a coarser spatial characterization would be sufficient. The research aims to assess to what extent: 1) E.O. based canopy radiometric temperature can be used, straightforwardly, to detect plant water status; 2) E.O. based canopy transpiration would be more suitable (or not) to describe the spatial variability in plant water stress. To these aims: 1) radiometric canopy temperature measured in situ, and derived from a two-source energy balance model applied on airborne data, were compared with in situ leaf water potential from freshly cut leaves; 2) two
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
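The basic operation of such a program, averaging many time-aligned records so that uncorrelated noise cancels while the coherent signal remains, can be sketched as follows. This is a minimal illustration; the synthetic sine signal and all names are assumptions, not taken from the documented software.

```python
import numpy as np

def ensemble_average(records):
    """Average a stack of time-aligned records; uncorrelated noise
    falls as 1/sqrt(N) while the coherent signal is preserved."""
    records = np.asarray(records, dtype=float)
    return records.mean(axis=0)

# Demonstration with a synthetic periodic signal plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
records = [clean + rng.normal(0.0, 0.5, t.size) for _ in range(200)]
avg = ensemble_average(records)

# Residual noise in the average is far smaller than in a single record.
print(np.std(records[0] - clean) > 3 * np.std(avg - clean))
```

With 200 records, the residual noise standard deviation drops by roughly a factor of 14 (sqrt(200)) relative to a single record.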
Annual average radon concentrations in California residences.
Liu, K S; Hayward, S B; Girman, J R; Moed, B A; Huang, F Y
1991-09-01
A study was conducted to determine the annual average radon concentrations in California residences, to determine the approximate fraction of the California population regularly exposed to radon concentrations of 4 pCi/l or greater, and to the extent possible, to identify regions of differing risk for high radon concentrations within the state. Annual average indoor radon concentrations were measured with passive (alpha track) samplers sent by mail and deployed by home occupants, who also completed questionnaires on building and occupant characteristics. For the 310 residences surveyed, concentrations ranged from 0.10 to 16 pCi/l, with a geometric mean of whole-house (bedroom and living room) average concentrations of 0.85 pCi/l and a geometric standard deviation of 1.91. A total of 88,000 California residences (0.8 percent) were estimated to have radon concentrations exceeding 4 pCi/l. When the state was divided into six zones based on geology, significant differences in geometric mean radon concentrations were found between several of the zones. Zones with high geometric means were the Sierra Nevada mountains, the valleys east of the Sierra Nevada, the central valley (especially the southern portion), and Ventura and Santa Barbara Counties. Zones with low geometric means included most coastal counties and the portion of the state from Los Angeles and San Bernardino Counties south.
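The summary statistics reported above (a geometric mean of 0.85 pCi/l with a geometric standard deviation of 1.91) are the exponentiated mean and standard deviation of the log-transformed concentrations. A minimal sketch with synthetic readings (not the study data):

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric standard deviation:
    exponentiated mean and SD of the log-transformed data."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean_log = sum(logs) / n
    var_log = sum((x - mean_log) ** 2 for x in logs) / (n - 1)
    return math.exp(mean_log), math.exp(math.sqrt(var_log))

# Synthetic example (not the survey data): lognormal-like readings in pCi/l.
readings = [0.3, 0.5, 0.7, 0.85, 1.0, 1.3, 2.0, 4.5]
gm, gsd = geometric_stats(readings)
print(round(gm, 2), round(gsd, 2))
```

The geometric mean is the natural location statistic for radon data because indoor concentrations are approximately lognormally distributed.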
Fast Optimal Transport Averaging of Neuroimaging Data.
Gramfort, A; Peyré, G; Cuturi, M
2015-01-01
Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex and the between-subjects variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of the entropic smoothing approach used by. It leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
Miller, M.R.; Eadie, J. McA
2006-01-01
We examined the allometric relationship between resting metabolic rate (RMR; kJ day-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (aMass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations. © The Cooper Ornithological Society 2006.
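The allometric fit RMR = aMass^b is conventionally obtained by ordinary least squares on log-log axes, where the slope gives the exponent b and the back-transformed intercept gives the coefficient a. A sketch with synthetic species means generated near the reported coefficients (a = 422, b = 0.74); the mass values and scatter are invented for illustration:

```python
import numpy as np

# Synthetic species means generated from the reported waterfowl fit
# RMR = 422 * Mass**0.74 (kJ/day, kg) with multiplicative scatter.
rng = np.random.default_rng(1)
mass = np.array([0.35, 0.6, 1.0, 1.5, 2.8, 4.2, 6.5, 9.0])
rmr = 422 * mass**0.74 * np.exp(rng.normal(0, 0.05, mass.size))

# OLS on log-log axes: the slope estimates the exponent b and the
# back-transformed intercept estimates a, the RMR at mass = 1 kg.
b, log_a = np.polyfit(np.log(mass), np.log(rmr), 1)
a = np.exp(log_a)
print(round(b, 2), round(a))
```

Fitting in log space corresponds to assuming multiplicative (lognormal) error, the usual assumption for allometric scaling data.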
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter, so it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a different extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
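Conventional TDA, the baseline that FTDA improves on, stacks whole-period segments of the signal and averages them; components synchronous with the period are retained and everything else cancels. A minimal sketch of that baseline (not the proposed FTDA; signal parameters are illustrative):

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Conventional TDA: reshape the signal into whole-period segments
    and average them. Acts as a comb filter passing the rotation
    frequency and its harmonics."""
    n_seg = len(signal) // period_samples
    segments = signal[: n_seg * period_samples].reshape(n_seg, period_samples)
    return segments.mean(axis=0)

rng = np.random.default_rng(2)
period = 64
t = np.arange(period * 100)
periodic = np.sin(2 * np.pi * t / period)
noisy = periodic + rng.normal(0, 1.0, t.size)
avg = time_domain_average(noisy, period)
print(avg.shape)
```

Note this sketch assumes the period is an integer number of samples; when it is not, truncating each segment introduces exactly the period cutting error (PCE) that the FTDA is designed to avoid.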
Aberration averaging using point spread function for scanning projection systems
NASA Astrophysics Data System (ADS)
Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi
2000-07-01
Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Protein Requirements during Aging.
Courtney-Martin, Glenda; Ball, Ronald O; Pencharz, Paul B; Elango, Rajavel
2016-01-01
Protein recommendations for elderly, both men and women, are based on nitrogen balance studies. They are set at 0.66 and 0.8 g/kg/day as the estimated average requirement (EAR) and recommended dietary allowance (RDA), respectively, similar to young adults. This recommendation is based on single linear regression of available nitrogen balance data obtained at test protein intakes close to or below zero balance. Using the indicator amino acid oxidation (IAAO) method, we estimated the protein requirement in young adults and in both elderly men and women to be 0.9 and 1.2 g/kg/day as the EAR and RDA, respectively. This suggests that there is no difference in requirement on a gender basis or on a per kg body weight basis between younger and older adults. The requirement estimates however are ~40% higher than the current protein recommendations on a body weight basis. They are also 40% higher than our estimates in young men when calculated on the basis of fat free mass. Thus, current recommendations may need to be re-assessed. Potential rationale for this difference includes a decreased sensitivity to dietary amino acids and increased insulin resistance in the elderly compared with younger individuals. PMID:27529275
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Mayya, Y S
2012-01-01
Measuring the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of a quality assurance program. Owing to their ready availability in radiotherapy departments, Farmer-type ionization chambers are also used to determine the strength of HDR (192)Ir brachytherapy sources. The use of a Farmer-type ionization chamber requires the estimation of the scatter correction factor along with the positioning error (c) and the constant of proportionality (f) to determine the strength of HDR (192)Ir brachytherapy sources. A simplified approach based on a least squares method was developed for estimating the values of f and M(s). The seven distance method was followed to record the ionization chamber readings for parameterization of f and M(s). Analytically calculated values of M(s) were used to determine the room scatter correction factor (K(sc)). Monte Carlo simulations were also carried out to calculate f and K(sc) to verify the magnitude of the parameters determined by the proposed analytical approach. The value of f determined using the simplified analytical approach was found to be in excellent agreement with the Monte Carlo simulated value (within 0.7%). Analytically derived values of K(sc) were also found to be in good agreement with the Monte Carlo calculated values (within 1.47%). Being far simpler than the presently available methods of evaluating f, the proposed analytical approach can be adopted for routine use by clinical medical physicists to estimate f by hand calculations.
Bingham, Daniel D; Costa, Silvia; Clemes, Stacy A; Routen, Ash C; Moore, Helen J; Barber, Sally E
2016-10-01
This study presents a worked example of a stepped process to reliably estimate the habitual physical activity and sedentary time of a sample of young children. A total of 299 children (2.9 ± 0.6 years) were recruited. Outcome variables were daily minutes of total physical activity, sedentary time, moderate to vigorous physical activity and proportional values of each variable. In total, 282 (94%) provided 3 h of accelerometer data on ≥1 day and were included in a 6-step process: Step-1: determine minimum wear-time; Step-2: process 7-day-data; Step-3: determine the inclusion of a weekend day; Step-4: examine day-to-day variability; Step-5: calculate single day intraclass correlation (ICC) (2,1); Step-6: calculate number of days required to reach reliability. Following the process, the results were: Step-1: 6 h was estimated as minimum wear-time of a standard day. Step-2: 98 (32%) children had ≥6 h wear on 7 days. Step-3: no differences were found between weekdays and weekend days (P ≥ 0.05). Step-4: no differences were found in day-to-day variability (P ≥ 0.05). Step-5: single day ICCs (2,1) ranged from 0.48 (total physical activity and sedentary time) to 0.53 (proportion of moderate to vigorous physical activity). Step-6: to reach reliability (ICC = 0.7), 3 days were required for all outcomes. In conclusion, following a 7-day wear protocol, ≥6 h on any 3 days was found to have acceptable reliability. The stepped process offers researchers a method to derive a sample-specific wear-time criterion.
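Step 6, estimating the number of days needed to reach a target reliability from a single-day ICC, is commonly done with the Spearman-Brown prophecy formula. Assuming that is the method used here, the reported result of 3 days follows directly from the single-day ICCs of 0.48 to 0.53:

```python
import math

def days_required(single_day_icc, target_icc=0.7):
    """Spearman-Brown prophecy: number of repeated measurements k
    needed so that the reliability of their average reaches the
    target ICC, rounded up to whole days."""
    k = (target_icc * (1 - single_day_icc)) / (single_day_icc * (1 - target_icc))
    return math.ceil(k)

print(days_required(0.48), days_required(0.53))  # → 3 3
```

Both endpoints of the reported ICC range give k ≈ 2.1-2.5, which rounds up to the 3 days stated in the abstract.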
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548... moisture content determination. (a) Determining average moisture content of the lot is not a requirement of... drawn composite sample. Official certification shall be based on the air-oven method or other...
2013-01-01
Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As another goal we estimated a threshold for the displacements of the most important bony landmarks related to the target volumes requiring immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) to the vertebrae in the middle of planning target volume (PTV) (MID_PTV) and ii) minimizing maximal position error for the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on the van Herk’s formula and clinical data by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate the displacements of the subregions were approximately two times larger than were needed to compensate setup errors for rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, applied image guidance protocol and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5mm were sufficient with the MID_PTV match only through application of daily 2D imaging and the threshold of 4 mm to correct systematic displacement of a subregion. Conclusions Adequate setup margins depend remarkably on the subregions related to the target volume. When the systematic 3D
The effects of sampling and internal noise on the representation of ensemble average size.
Im, Hee Yeon; Halberda, Justin
2013-02-01
Increasing numbers of studies have explored human observers' ability to rapidly extract statistical descriptions from collections of similar items (e.g., the average size and orientation of a group of tilted Gabor patches). Determining whether these descriptions are generated by mechanisms that are independent from object-based sampling procedures requires that we investigate how internal noise, external noise, and sampling affect subjects' performance. Here we systematically manipulated the external variability of ensembles and used variance summation modeling to estimate both the internal noise and the number of samples that affected the representation of ensemble average size. The results suggest that humans sample many more than one or two items from an array when forming an estimate of the average size, and that the internal noise that affects ensemble processing is lower than the noise that affects the processing of single objects. These results are discussed in light of other recent modeling efforts and suggest that ensemble processing of average size relies on a mechanism that is distinct from segmenting individual items. This ensemble process may be more similar to texture processing.
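Variance summation modeling assumes the variance of the observed average estimate is the sum of internal noise and the external ensemble variance divided by the effective number of samples: sigma_obs^2 = sigma_int^2 + sigma_ext^2 / n. A sketch of recovering n and sigma_int from estimates at several external-noise levels (all numbers illustrative, not from the study):

```python
import numpy as np

def fit_variance_summation(sigma_ext, sigma_obs):
    """Fit sigma_obs^2 = sigma_int^2 + sigma_ext^2 / n by linear
    least squares: y = sigma_obs^2, x = sigma_ext^2, so the slope
    is 1/n and the intercept is sigma_int^2."""
    slope, intercept = np.polyfit(np.asarray(sigma_ext) ** 2,
                                  np.asarray(sigma_obs) ** 2, 1)
    return 1.0 / slope, np.sqrt(intercept)

# Illustrative observed noise generated with n = 8 samples, sigma_int = 2.
ext = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
obs = np.sqrt(2.0**2 + ext**2 / 8.0)
n, s_int = fit_variance_summation(ext, obs)
print(round(n), round(s_int, 1))
```

Plotting observed variance against external variance makes the two parameters separable: the intercept isolates internal noise and the slope isolates the number of samples.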
A new approach to high-order averaging
NASA Astrophysics Data System (ADS)
Chartier, P.; Murua, A.; Sanz-Serna, J. M.
2012-09-01
We present a new approach to perform high-order averaging in oscillatory periodic or quasi-periodic dynamical systems. The averaged system is expressed in terms of (i) scalar coefficients that are universal, i.e. independent of the system under consideration and (ii) basis functions that may be written in an explicit, systematic way in terms of the derivatives of the Fourier coefficients of the vector field being averaged. The coefficients may be recursively computed in a simple fashion. This approach may be used to obtain exponentially small error estimates, as those first derived by Neishtadt for the periodic case and Simó in the quasi-periodic scenario.
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
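The detection rule described, accepting an estimate within 0.75 to 1.25 times the running average of nonzero estimates since the last reset and resetting after too many consecutive corrections, can be sketched as follows. Substituting the running average for a rejected estimate and the reset limit of 3 are assumptions for illustration; the patent does not fix those details here.

```python
class PitchTracker:
    """Detect and correct outlier pitch period estimates: accept
    values within 0.75-1.25 of the running average of estimates
    accepted since the last reset."""

    def __init__(self, max_corrections=3):   # reset limit is assumed
        self.history = []      # accepted nonzero estimates
        self.corrections = 0   # consecutive correction count

    def process(self, estimate):
        if not self.history:
            self.history.append(estimate)
            return estimate
        avg = sum(self.history) / len(self.history)
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self.history.append(estimate)
            self.corrections = 0
            return estimate
        self.corrections += 1
        if self.corrections > 3:
            # Likely a speaker change: discard the old average.
            self.history = [estimate]
            self.corrections = 0
            return estimate
        return avg             # assumed correction: substitute the average

tracker = PitchTracker()
print([tracker.process(p) for p in [80, 82, 200, 81]])  # → [80, 82, 81.0, 81]
```

The spurious 200-sample estimate is rejected (it falls outside 60.75-101.25) and replaced by the running average, while the surrounding estimates pass through unchanged.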
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Inferring average generation via division-linked labeling.
Weber, Tom S; Perié, Leïla; Duffy, Ken R
2016-08-01
For proliferating cells subject to both division and death, how can one estimate the average generation number of the living population without continuous observation or a division-diluting dye? In this paper we provide a method for cell systems such that at each division there is an unlikely, heritable one-way label change that has no impact other than to serve as a distinguishing marker. If the probability of label change per cell generation can be determined and the proportion of labeled cells at a given time point can be measured, we establish that the average generation number of living cells can be estimated. Crucially, the estimator does not depend on knowledge of the statistics of cell cycle, death rates or total cell numbers. We explore the estimator's features through comparison with physiologically parameterized stochastic simulations and extrapolations from published data, using it to suggest new experimental designs.
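A drastically simplified illustration of the idea (not the authors' estimator, which additionally handles death and cell-cycle statistics): if each division independently changes the label with probability p, a cell at generation g is unlabeled with probability (1 - p)^g, so a measured labeled fraction L yields g = ln(1 - L) / ln(1 - p).

```python
import math

def average_generation(p_label, labeled_fraction):
    """Naive inversion: unlabeled fraction (1 - p)**g implies
    g = ln(1 - L) / ln(1 - p). Assumes no death and a homogeneous
    generation number, unlike the estimator in the paper."""
    return math.log(1.0 - labeled_fraction) / math.log(1.0 - p_label)

# With a 1% label-change rate per division and 9.6% of cells labeled,
# the population has undergone roughly ten divisions.
g = average_generation(0.01, 0.096)
print(round(g, 1))
```

The appeal of division-linked labeling is visible even in this toy form: only the per-division label-change probability and one snapshot of the labeled proportion are needed, not total cell counts or continuous observation.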
Code of Federal Regulations, 2013 CFR
2013-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...
Code of Federal Regulations, 2011 CFR
2011-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...
Code of Federal Regulations, 2014 CFR
2014-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...
Code of Federal Regulations, 2012 CFR
2012-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant...
Assessing the total uncertainty on average sediment export measurements
NASA Astrophysics Data System (ADS)
Vanmaercke, Matthias
2015-04-01
Sediment export measurements from rivers are usually subject to large uncertainties. Although many case studies have focussed on specific aspects influencing these uncertainties (e.g. the sampling procedure, laboratory analyses, sampling frequency, load calculation method, duration of the measuring period), very few studies provide an integrated assessment of the total uncertainty resulting from these different sources of errors. Moreover, the findings of these studies are commonly difficult to apply, as they require specific details on the applied measuring method that are often unreported. As a result, the overall uncertainty on reported average sediment export measurements remains difficult to assess. This study aims to address this gap. Based on Monte Carlo simulations on a large dataset of daily sediment export measurements (> 100 catchments and > 2000 catchment-years of observations), the most dominant sources of uncertainties are explored. Results show that uncertainties on average sediment export values (over multiple years) are mainly controlled by the sampling frequency and the duration of the measuring period. Measuring errors on individual sediment concentration or runoff discharge samples have an overall smaller influence. Depending on the sampling strategy used (e.g. uniform or flow-proportional), the load calculation procedure can also cause significant biases in the obtained results. A simple method is proposed that allows estimating the total uncertainty on sediment export values, based on commonly reported information (e.g. the catchment area, measuring period, number of samples taken, load calculation procedure used). An application of this method shows that total uncertainties on annual sediment export measurements can easily exceed 200%. It is shown that this has important consequences for the calibration and validation of sediment export models.
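The dominant role of sampling frequency can be illustrated with a small Monte Carlo experiment of the kind the study performs (all data synthetic): draw a year of skewed daily loads, subsample at fixed intervals, and compare the spread of the resulting annual-load estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def sampling_error(daily_loads, interval, n_trials=2000):
    """Relative error distribution of the annual load estimated from
    samples taken every `interval` days with random start offsets."""
    true_total = daily_loads.sum()
    errors = []
    for _ in range(n_trials):
        start = rng.integers(0, interval)
        sampled = daily_loads[start::interval]
        estimate = sampled.mean() * daily_loads.size
        errors.append((estimate - true_total) / true_total)
    return np.array(errors)

# Synthetic year of strongly skewed daily sediment loads (tonnes/day).
loads = rng.lognormal(mean=1.0, sigma=1.5, size=365)
frequent = sampling_error(loads, interval=3)
sparse = sampling_error(loads, interval=30)

# Sparser sampling widens the spread of the annual-load estimates.
print(np.std(sparse) > np.std(frequent))
```

Because daily loads are heavily skewed, missing (or catching) a few high-flow days dominates the error budget, which is why sampling frequency outweighs per-sample measurement error.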
Cammack, P; Harris, J M
2016-06-19
Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes' views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators.This article is part of the themed issue 'Vision in our three-dimensional world'.
A procedure to average 3D anatomical structures.
Subramanya, K; Dean, D
2000-12-01
Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
Two levels of Bayesian model averaging for optimal control of stochastic systems
NASA Astrophysics Data System (ADS)
Darwen, Paul J.
2013-02-01
Bayesian model averaging provides the best possible estimate of a model, given the data. This article uses that approach twice: once to get a distribution of plausible models of the world, and again to find a distribution of plausible control functions. The resulting ensemble gives control instructions different from simply taking the single best-fitting model and using it to find a single lowest-error control function for that single model. The only drawback is, of course, the need for more computer time: this article demonstrates that the required computer time is feasible. The test problem here is from flood control and risk management.
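The building block used twice in the article — a posterior-weighted average over candidate models — can be sketched in a few lines. This is a minimal illustration of one level of Bayesian model averaging; the function name and toy inputs are assumptions, not taken from the article:

```python
import math

def bma_estimate(predictions, log_likelihoods, priors):
    """Posterior-weighted average of per-model predictions.

    Posterior weight of model k: p(M_k | D) is proportional to
    p(D | M_k) * p(M_k); weights are normalized to sum to one.
    """
    weights = [math.exp(ll) * p for ll, p in zip(log_likelihoods, priors)]
    total = sum(weights)
    return sum(w / total * y for w, y in zip(weights, predictions))
```

With equal likelihoods and priors this reduces to the plain mean of the model predictions; unequal likelihoods tilt the estimate toward better-fitting models.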
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
Code of Federal Regulations, 2014 CFR
2014-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2010 CFR
2010-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2011 CFR
2011-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2012 CFR
2012-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2013 CFR
2013-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
A K-fold Averaging Cross-validation Procedure
Jung, Yoonsuh; Hu, Jianhua
2015-01-01
Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross-validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross-validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross-validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
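The core idea — fit one candidate model per fold and average the K resulting models — can be sketched as below. Plain least-squares line fitting stands in here for the per-fold model selection, so the details are assumptions rather than the authors' procedure:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def kfold_average_fit(xs, ys, k=5):
    # fit one candidate model per hold-out fold on the remaining data,
    # then average the K models' parameters (the averaging step that
    # reduces variance in the proposed procedure)
    idx = list(range(len(xs)))
    folds = [set(idx[i::k]) for i in range(k)]
    models = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        models.append(fit_line([xs[i] for i in train],
                               [ys[i] for i in train]))
    a = sum(m[0] for m in models) / k
    b = sum(m[1] for m in models) / k
    return a, b
```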
NASA Astrophysics Data System (ADS)
Cresswell, A. J.; Sanderson, D. C. W.
2009-08-01
The use of difference spectra, with a filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve signal to background, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure utilising more intelligent filters and spatial averaging of the background are identified.
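The rolling-average background filter described above can be sketched as follows: each spectrum has the channel-wise mean of the preceding few spectra subtracted, so that anomalies stand out against the local background. Window size and data layout here are assumptions:

```python
def difference_spectra(spectra, window=5):
    """Subtract a rolling-average background from each spectrum.

    `spectra` is a list of channel lists; the background for spectrum i
    is the channel-wise mean of the previous `window` spectra.
    """
    out = []
    for i in range(window, len(spectra)):
        n_ch = len(spectra[i])
        background = [sum(s[c] for s in spectra[i - window:i]) / window
                      for c in range(n_ch)]
        out.append([cur - bg for cur, bg in zip(spectra[i], background)])
    return out
```

In a steady radiation environment the difference spectra hover around zero; a source passing the detector appears as a positive excess in the affected channels.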
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-10-20
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time-average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time-average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small-scale specimen using a time-average microscopic TV holography system.
Impact location estimation in anisotropic structures
NASA Astrophysics Data System (ADS)
Zhou, Jingru; Mathews, V. John; Adams, Daniel O.
2015-03-01
Impacts are major causes of in-service damage in aerospace structures. Therefore, impact location estimation techniques are necessary components of Structural Health Monitoring (SHM). In this paper, we consider impact location estimation in anisotropic composite structures using acoustic emission signals arriving at a passive sensor array attached to the structure. Unlike many published location estimation algorithms, the algorithm presented in this paper does not require the waveform velocity profile for the structure. Rather, the method employs time-of-arrival information to jointly estimate the impact location and the average signal transmission velocities from the impact to each sensor on the structure. The impact location and velocities are estimated as the solution of a nonlinear optimization problem with multiple quadratic constraints. The optimization problem is solved by using first-order optimality conditions. Numerical simulations as well as experimental results demonstrate the ability of the algorithm to accurately estimate the impact location using acoustic emission signals.
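One hedged way to realize such a joint estimate is a brute-force search over candidate impact locations and average wave speeds that minimizes squared time-of-arrival residuals. The paper solves a constrained nonlinear optimization problem instead, so the sketch below only illustrates the idea, with a single shared speed rather than per-sensor velocities:

```python
import math

def locate_impact(sensors, times, grid, speeds):
    """Grid search for the impact location and average wave speed.

    `sensors` are (x, y) positions, `times` the acoustic-emission
    arrival times; the implied emission time t0 is eliminated using
    the first sensor. Illustrative only: the published method jointly
    estimates per-sensor velocities via constrained optimization.
    """
    best = None
    for (x, y) in grid:
        dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
        for v in speeds:
            t0 = times[0] - dists[0] / v
            err = sum((t - (t0 + d / v)) ** 2
                      for t, d in zip(times, dists))
            if best is None or err < best[0]:
                best = (err, (x, y), v)
    return best[1], best[2]
```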
Modifying SEBAL ET Algorithm to account for advection by using daily averages of weather data
NASA Astrophysics Data System (ADS)
Mkhwanazi, M. M.; Chavez, J. L.
2013-12-01
The use of Remote Sensing (RS) in crop evapotranspiration (ET) estimation is aimed at improving agricultural water management. The Surface Energy Balance Algorithm for Land (SEBAL) is one of several methods that have been developed for this purpose. This has been a preferred model as it requires minimal climate data. However, it has a noted downside of underestimating ET under advective conditions. This is primarily due to the use of evaporative fraction (EF) to extrapolate instantaneous ET to daily values, with the assumption that EF is constant throughout the day. A modified SEBAL model was used in this study, which requires daily averages of weather data to estimate advection, which is then introduced into the 24-hour ET sub-model of SEBAL. The study was carried out in southeastern Colorado, a semi-arid area where afternoon advection is a common feature. ET estimated using the original and modified SEBAL was compared to the lysimeter-measured ET. Results showed that the modified SEBAL algorithm performed better overall in estimating daily ET, especially on days when there was advection. On non-advective days, the original SEBAL was more accurate. It is therefore recommended that the modified SEBAL be used only on advective days, and guidelines to help identify such days were proposed.
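The constant-evaporative-fraction extrapolation that the abstract identifies as the source of underestimation can be sketched as follows. Variable names, units, and the latent-heat constant are generic assumptions, not SEBAL's exact formulation:

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (typical value)

def daily_et_mm(le_inst, rn_inst, g_inst, rn_24, g_24=0.0):
    """Extrapolate instantaneous ET to a daily value via a constant EF.

    EF = LE / (Rn - G) at the satellite overpass is assumed to hold
    all day -- the assumption that breaks down under advection.
    Fluxes in W/m^2; returns ET in mm/day.
    """
    ef = le_inst / (rn_inst - g_inst)
    le_24 = ef * (rn_24 - g_24)          # daily-average latent heat flux
    return le_24 * 86400.0 / LAMBDA_V    # kg/m^2/day, i.e. mm/day
```

Under afternoon advection the true daily latent heat flux exceeds what the overpass EF implies, which is why the modified model adds an advection term estimated from daily-average weather data.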
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
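Computing ZC as an elemental ratio is a one-line formula. The abstract does not spell out the coefficients, so those below are assumed from standard oxidation-state bookkeeping (H counts -1 against carbon; N, O, and S count +3, +2, and +2):

```python
def zc(c, h, n=0, o=0, s=0, z=0):
    """Average carbon oxidation state of C_c H_h N_n O_o S_s, net charge z.

    Coefficients follow conventional electronegativity bookkeeping;
    treat them as an assumption, not a quotation from the paper.
    """
    return (z - h + 3 * n + 2 * o + 2 * s) / c
```

Methane (CH4) gives -4 and carbon dioxide (CO2) gives +4, the two extremes of carbon's oxidation range; glycine (C2H5NO2) averages +1.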
Calculating Free Energies Using Average Force
NASA Technical Reports Server (NTRS)
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
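In the unconstrained case, the free energy profile follows by integrating the negative mean force along the selected coordinate. A minimal numerical sketch using the trapezoid rule (the sampling of the mean force itself is assumed done elsewhere):

```python
def free_energy_profile(xi, mean_force):
    """F(xi_k) - F(xi_0) = -integral of <F> d(xi), trapezoid rule.

    `xi` are coordinate values, `mean_force` the average instantaneous
    force measured at each of them.
    """
    prof = [0.0]
    for i in range(1, len(xi)):
        step = 0.5 * (mean_force[i] + mean_force[i - 1]) * (xi[i] - xi[i - 1])
        prof.append(prof[-1] - step)
    return prof
```

For a harmonic coordinate with mean force -x the recovered profile is x^2/2, as expected.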
Average oxidation state of carbon in proteins.
Dick, Jeffrey M
2014-11-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Edeling, W.N.; Cinnella, P.; Dwight, R.P.
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
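Collating per-scenario posteriors into one stochastic estimate amounts, in the simplest moment-based view, to taking the mean and variance of a weighted mixture. The sketch below is an assumption about the form of that collation, not the paper's exact algebra:

```python
def mixture_mean_var(means, variances, weights):
    """Mean and variance of a weighted mixture of distributions.

    `means`/`variances` are per-scenario posterior moments of the QoI,
    `weights` the (normalized) scenario probabilities.
    """
    mu = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (v + (m - mu) ** 2)
              for w, v, m in zip(weights, variances, means))
    return mu, var
```

Note that scenario disagreement inflates the mixture variance through the (m - mu)^2 term, which is how between-scenario variability enters the error estimate.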
Spatially averaged flow over a wavy boundary revisited
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
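The inference step being tested — fit a logarithmic law to the spatially averaged velocities and read off a boundary shear stress — can be sketched as below. The constants and the least-squares fit are generic assumptions; the paper's point is that this estimate can be inaccurate over bed forms:

```python
import math

KAPPA = 0.41     # von Karman constant
RHO = 1000.0     # water density, kg/m^3

def shear_stress_from_profile(z, u):
    """Fit u = (u*/kappa) * ln(z/z0) and return tau = rho * u*^2.

    Linear least squares of u against ln(z): the slope is u*/kappa.
    """
    x = [math.log(zi) for zi in z]
    n = len(x)
    mx, mu = sum(x) / n, sum(u) / n
    slope = (sum((xi - mx) * (ui - mu) for xi, ui in zip(x, u))
             / sum((xi - mx) ** 2 for xi in x))
    ustar = KAPPA * slope
    return RHO * ustar ** 2
```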
Optimum orientation versus orientation averaging description of cluster radioactivity
NASA Astrophysics Data System (ADS)
Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.
2016-07-01
While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of 232,233,234U and 236,238Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the cluster preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases with the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the preformation probability of α (S_α^ave) obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on S_α^opt that was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
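A centralized power iteration, together with the linear average-consensus primitive that a decentralized variant would use for its global computations, can be sketched as below. Graph, step size, and iteration counts are illustrative; the actual GPM splits these steps across nodes:

```python
def consensus_average(values, neighbors, steps=200, eps=0.1):
    """Linear average-consensus iteration on an undirected graph.

    Each node repeatedly moves toward its neighbors' values; on a
    connected graph with small enough eps, all nodes converge to the
    global average using only local exchanges.
    """
    v = list(values)
    for _ in range(steps):
        v = [v[i] + eps * sum(v[j] - v[i] for j in neighbors[i])
             for i in range(len(v))]
    return v

def dominant_eigenvalue(R, iters=200):
    """Centralized power method for the largest eigenvalue of R.

    In a decentralized version, the matrix-vector product and the
    normalization constant would each be assembled from consensus
    averages of local node quantities.
    """
    n = len(R)
    x = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        y = [sum(R[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(val) for val in y)
        x = [val / lam for val in y]
    return lam
```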
Schousboe, John T; Paudel, Misti L; Taylor, Brent C; Mau, Lih-Wen; Virnig, Beth A; Ensrud, Kristine E; Dowd, Bryan E
2014-01-01
Objective: To compare cost estimates for hospital stays calculated using diagnosis-related group (DRG) weights to actual Medicare payments. Data Sources/Study Setting: Medicare MedPAR files and DRG tables linked to participant data from the Study of Osteoporotic Fractures (SOF) from 1992 through 2010. Participants were women age 65 and older recruited in three metropolitan and one rural area of the United States. Study Design: Costs were estimated using DRG payment weights for 1,397 hospital stays for 795 SOF participants for 1 year following a hip fracture. Medicare cost estimates included Medicare and secondary insurer payments, and copay and deductible amounts. Principal Findings: The mean (SD) of inpatient DRG-based cost estimates per person-year were $16,268 ($10,058) compared with $19,937 ($15,531) for MedPAR payments. The correlation between DRG-based estimates and MedPAR payments was 0.71, and 51 percent of hospital stays were in different quintiles when costs were calculated based on DRG weights compared with MedPAR payments. Conclusions: DRG-based cost estimates of hospital stays differ significantly from Medicare payments, which are adjusted by Medicare for facility and local geographic characteristics. DRG-based cost estimates may be preferable for analyses when facility and local geographic variation could bias assessment of associations between patient characteristics and costs. PMID:24461126
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Liu, X T; Ma, W F; Zeng, X F; Xie, C Y; Thacker, P A; Htoo, J K; Qiao, S Y
2015-10-01
.68 using a linear broken-line model and 0.72 using a quadratic model. Carcass traits and muscle quality were not influenced by SID Val:Lys ratio. In conclusion, the dietary SID Val:Lys ratios required for 26- to 46-, 49- to 70-, 71- to 92-, and 94- to 119-kg pigs were estimated to be 0.62, 0.66, 0.67, and 0.68, respectively, using a linear broken-line model and 0.71, 0.72, 0.73, and 0.72, respectively, using a quadratic model.
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
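The two averaging primitives that DMA composes — a running time average maintained during the DNS and a block volume average onto the coarser grid — can be sketched in one dimension. The coupling correlations, which the full method adds as source terms on the coarser mesh, are omitted here:

```python
def running_time_average(avg, sample, n_seen):
    """Incrementally update a running mean after n_seen prior samples."""
    return avg + (sample - avg) / (n_seen + 1)

def volume_average(fine, factor):
    """Block-average a 1-D fine-grid field onto a grid coarser by `factor`."""
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]
```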
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2005-01-01
Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on the Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
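The distinction between the two averages compared above is compact enough to state in code: the Favre (density-weighted) mean differs from the Reynolds mean whenever density and velocity fluctuations correlate, and coincides with it for constant density. A minimal sketch over sampled data:

```python
def reynolds_average(u):
    # plain ensemble/time mean of the velocity samples
    return sum(u) / len(u)

def favre_average(rho, u):
    # density-weighted (Favre) mean: <rho*u> / <rho>
    return sum(r * v for r, v in zip(rho, u)) / sum(rho)
```

The abstract's finding is that for these unheated jets the two averages of axial velocity differ only slightly, i.e. the density-velocity correlation term is small.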
Time-average TV holography for vibration fringe analysis
Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2009-06-01
Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that will reduce the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... such services in compliance with its geographic rate averaging and rate integration obligations... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED)...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Averaging. 1037.710 Section 1037.710 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW HEAVY-DUTY MOTOR VEHICLES Averaging, Banking, and Trading for Certification §...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
Averaged equations for distributed Josephson junction arrays
NASA Astrophysics Data System (ADS)
Bennett, Matthew; Wiesenfeld, Kurt
2004-06-01
We use an averaging method to study the dynamics of a transmission line studded by Josephson junctions. The averaged system is used as a springboard for studying experimental strategies which rely on spatial non-uniformity to achieve enhanced synchronization. A reduced model for the near resonant case elucidates in physical terms the key to achieving stable synchronized dynamics.
AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN
The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...
New results on averaging theory and applications
NASA Astrophysics Data System (ADS)
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e. the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated with that non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros to study their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
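The core recipe of first-order averaging can be shown numerically: for x' = ε F(t, x) with F T-periodic, form the averaged function f(z) = (1/T) ∫₀ᵀ F(t, z) dt and locate its simple zeros. The toy right-hand side below is chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy periodic system x' = eps * F(t, x); T = 2*pi.
T = 2.0 * np.pi
F = lambda t, x: np.cos(t) ** 2 - x

def averaged(z):
    """First-order averaged function f(z) = (1/T) * int_0^T F(t, z) dt."""
    val, _ = quad(lambda t: F(t, z), 0.0, T)
    return val / T

# Here f(z) = 1/2 - z analytically, so the unique zero is z* = 1/2,
# and it is simple since f'(z*) = -1 != 0: for small eps the system has
# a periodic solution near x = 1/2.
z_star = brentq(averaged, 0.0, 1.0)
```

The result in the abstract extends this picture to the degenerate case where the Jacobian of f vanishes at the zero, which the classical theorem does not cover.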
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H{sub 0}, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
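The lower-threshold idea can be sketched with a stationary GEV fit (the report uses a non-stationary one). Since SciPy's `genextreme` is parameterized for block maxima, daily minima are negated before fitting, a standard trick; the synthetic response times and the 1st-percentile choice are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic daily *minimum* response times, in milliseconds (illustrative).
rng = np.random.default_rng(2)
daily_minima = 50.0 + rng.gumbel(loc=5.0, scale=2.0, size=365)

# Fit a GEV to the negated minima (minima of X are maxima of -X).
shape, loc, scale = genextreme.fit(-daily_minima)

# 1st-percentile lower threshold of X, mapped back to the original sign:
# P(X < q) = 0.01  <=>  P(-X > -q) = 0.01  <=>  -q = ppf(0.99) of -X.
lower_threshold = -genextreme.ppf(0.99, shape, loc=loc, scale=scale)
```

Daily minima falling below `lower_threshold` would then be flagged as candidate anomalies, e.g. transactions that returned early because they failed.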
Condition monitoring of gearboxes using synchronously averaged electric motor signals
NASA Astrophysics Data System (ADS)
Ottewill, J. R.; Orkisz, M.
2013-07-01
Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements, is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
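The synchronous signal average itself is straightforward once a once-per-revolution reference is available: resample each revolution onto a common angular grid and average point-by-point, which attenuates everything not locked to the shaft. A minimal sketch, assuming ideal once-per-rev marks (in the paper these would come from an encoder or be estimated from the current spectrum):

```python
import numpy as np

def synchronous_average(signal, rev_indices, n_points=256):
    """Angle-domain synchronous average of a rotating-machine signal.

    signal:      1-D array of vibration or stator-current samples
    rev_indices: sample indices of successive once-per-revolution events
    Each revolution is linearly resampled onto a common angular grid and
    the revolutions are averaged, suppressing non-shaft-locked content."""
    grid = np.linspace(0.0, 1.0, n_points, endpoint=False)
    revolutions = []
    for start, stop in zip(rev_indices[:-1], rev_indices[1:]):
        frac = np.linspace(0.0, 1.0, stop - start, endpoint=False)
        revolutions.append(np.interp(grid, frac, signal[start:stop]))
    return np.mean(revolutions, axis=0)

# Synthetic test: a 3rd-order shaft-locked component buried in noise.
rng = np.random.default_rng(3)
n_rev, samples_per_rev = 50, 200
angle = np.arange(n_rev * samples_per_rev) * 2.0 * np.pi / samples_per_rev
x = np.sin(3 * angle) + 0.5 * rng.normal(size=angle.size)
marks = np.arange(0, n_rev * samples_per_rev + 1, samples_per_rev)

avg = synchronous_average(x, marks)  # noise drops roughly as 1/sqrt(n_rev)
```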
Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.
Alvarez-Castro, José M; Yang, Rong-Cai
2012-01-01
Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
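For the simplest setting the abstract builds on, one locus, two alleles, the textbook formulas are easy to state. The sketch below assumes Hardy-Weinberg (random mating), where the average excess equals the average effect; the generalized contrasts developed in the paper relax exactly this kind of simplification.

```python
# One-locus, two-allele quantities under random mating, with genotypic
# values +a (A1A1), d (A1A2), -a (A2A2) and p = freq(A1), q = 1 - p.

def average_effect_of_substitution(p, a, d):
    """Fisher's average effect of an allele substitution: alpha = a + d(q - p).
    Under random mating this also equals the average excess."""
    q = 1.0 - p
    return a + d * (q - p)

def additive_variance(p, a, d):
    """Additive genetic variance: V_A = 2 p q alpha^2."""
    q = 1.0 - p
    alpha = average_effect_of_substitution(p, a, d)
    return 2.0 * p * q * alpha ** 2
```

With no dominance (d = 0) the average effect reduces to a regardless of allele frequency; with dominance it shifts as p moves, which is why the effect and the excess must be distinguished once random mating is dropped.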
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-01-01
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must meet the minimum driving range requirements established by the Secretary of Transportation (49 CFR... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS...
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
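One plausible reading of the proposed weighting is that each element is weighted by the electrons it contributes per unit mass of compound, e_i ∝ w_i·Z_i/A_i, normalized to sum to one; this formula and the pyrite example are assumptions of the sketch, not taken verbatim from the paper.

```python
def electron_fractions(mass_fractions, Z, A):
    """Electron fractions from mass fractions w, atomic numbers Z, and
    atomic weights A: e_i = (w_i * Z_i / A_i) / sum_j (w_j * Z_j / A_j)."""
    raw = [w * z / a for w, z, a in zip(mass_fractions, Z, A)]
    total = sum(raw)
    return [r / total for r in raw]

# Example: FeS2 (pyrite) -- Fe and S mass fractions.
w = [0.4655, 0.5345]   # Fe, S
Z = [26, 16]
A = [55.845, 32.06]
ef = electron_fractions(w, Z, A)
# Sulfur contributes more electrons per unit mass here (higher Z/A),
# so its electron fraction exceeds its mass-fraction-based weight ratio.
```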
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793
Average-passage flow model development
NASA Technical Reports Server (NTRS)
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Total pressure averaging in pulsating flows
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Average Passenger Occupancy (APO) in Your Community.
ERIC Educational Resources Information Center
Stenstrup, Al
1995-01-01
Provides details of an activity in which students in grades 4-10 determine the Average Passenger Occupancy (APO) in their community and develop, administer, and analyze a survey to determine attitudes toward carpooling. (DDR)
Averaging Sampled Sensor Outputs To Detect Failures
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.
1990-01-01
Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
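The described technique, smoothing sampled sensor outputs by consecutive averaging and flagging departures from an expected band, amounts to a moving-average monitor. A minimal sketch; the window length and tolerance band are illustrative, not taken from the NASA brief.

```python
from collections import deque

class RunningAverageMonitor:
    """Smooth noisy sensor samples with a moving average and flag a
    failure when the smoothed value leaves a tolerance band."""

    def __init__(self, window=16, low=0.0, high=1.0):
        self.buf = deque(maxlen=window)  # last `window` samples
        self.low, self.high = low, high

    def update(self, sample):
        """Ingest one sample; return (smoothed value, failure flag)."""
        self.buf.append(sample)
        avg = sum(self.buf) / len(self.buf)
        return avg, not (self.low <= avg <= self.high)

mon = RunningAverageMonitor(window=8, low=-0.5, high=0.5)
```

Because the average, not the raw sample, is compared against the band, isolated noise spikes are tolerated while a sustained drift, the signature of a failing sensor or component, trips the flag.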
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
ERIC Educational Resources Information Center
Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.
2003-01-01
Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)
ERIC Educational Resources Information Center
Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew
2002-01-01
Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
Average Soil Water Retention Curves Measured by Neutron Radiography
Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
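Fitting the van Genuchten equation to an average retention curve is a small nonlinear least-squares problem. A sketch assuming the common restriction m = 1 - 1/n and synthetic retention points in place of the radiography data:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, alpha, n):
    """Relative saturation Se as a function of matric potential head h
    (van Genuchten model with m = 1 - 1/n):
    Se = [1 + (alpha*|h|)^n]^(-m)."""
    return (1.0 + (alpha * np.abs(h)) ** n) ** (1.0 / n - 1.0)

# Synthetic "average" retention points standing in for the measured data.
h = np.linspace(1.0, 200.0, 30)      # matric potential head, cm of water
true_alpha, true_n = 0.05, 4.0       # illustrative parameter values
se = van_genuchten(h, true_alpha, true_n)

params, _ = curve_fit(van_genuchten, h, se, p0=[0.02, 2.0])
alpha_fit, n_fit = params
```

In the study, fits like this to the radiography-derived averages and to the replicated hanging-water-column data gave statistically indistinguishable parameters.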
Miyauchi, Masaatsu; Hirai, Chizuko; Nakajima, Hideaki
2013-01-01
Although the importance of solar radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been determined in Japan. This study attempted to identify the time of solar exposure required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 3.5 min of solar exposure are required to produce 5.5 μg vitamin D3 per 600 cm2 skin corresponding to the area of a face and the back of a pair of hands without ingestion from foods. In contrast, it took 76.4 min to produce the same quantity of vitamin D3 at Sapporo in December, at noon under a cloudless sky. The necessary exposure time varied considerably with the time of the day. For Tsukuba at noon in December, 22.4 min were required, but 106.0 min were required at 09:00 and 271.3 min were required at 15:00 for the same meteorological conditions. Naha receives high levels of ultraviolet radiation allowing vitamin D3 synthesis almost throughout the year.
NASA Astrophysics Data System (ADS)
Nakajima, Hideaki; Miyauchi, Masaatsu; Hirai, Chizuko
2013-04-01
After the discovery of the Antarctic ozone hole, the negative effect of exposing the human body to harmful solar ultraviolet (UV) radiation became widely known. However, exposure to UV radiation also has a positive effect, namely vitamin D synthesis. Although the importance of solar UV radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been well determined. This study attempted to identify the solar exposure time required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha, in Japan) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 2.3 min of solar exposure are required to produce 5.5 μg of vitamin D3 per 600 cm2 of skin. This quantity of vitamin D represents the recommended intake for an adult according to the Ministry of Health, Labour and Welfare and the 2010 Japanese Dietary Reference Intakes (DRIs). In contrast, it took 49.5 min to produce the same amount of vitamin D3 at Sapporo, in the northern part of Japan, in December at noon under a cloudless sky. The necessary exposure time varied considerably with the time of day. For Tsukuba at noon in December, 14.5 min were required, but 68.7 min were required at 09:00 and 175.8 min at 15:00 under the same meteorological conditions. Naha receives high levels of UV radiation, allowing vitamin D3 synthesis almost throughout the year. Based on these results, we are further developing an index that quantifies the UV exposure time necessary to produce the required amount of vitamin D3 from UV radiation data.
Hendrie, Gilly A; Ridoutt, Brad G; Wiedmann, Thomas O; Noakes, Manny
2014-01-08
Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there are insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient-rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient-poor "non-core" foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. Reducing non-core foods and consuming the recommended serves of core foods are strategies that may achieve benefits for both population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe.
NASA Astrophysics Data System (ADS)
Marius Mudd, Simon; Harel, Marie-Alice; Hurst, Martin D.; Grieve, Stuart W. D.; Marrero, Shasta M.
2016-08-01
We report a new program for calculating catchment-averaged denudation rates from cosmogenic nuclide concentrations. The method (Catchment-Averaged denudatIon Rates from cosmogenic Nuclides: CAIRN) bundles previously reported production scaling and topographic shielding algorithms. In addition, it calculates production and shielding on a pixel-by-pixel basis. We explore the effect of sampling frequency across both azimuth (Δθ) and altitude (Δϕ) angles for topographic shielding and show that in high relief terrain a relatively high sampling frequency is required, with a good balance achieved between accuracy and computational expense at Δθ = 8° and Δϕ = 5°. CAIRN includes both internal and external uncertainty analysis, and is packaged in freely available software in order to facilitate easily reproducible denudation rate estimates. CAIRN calculates denudation rates but also automates catchment averaging of shielding and production, and thus can be used to provide reproducible input parameters for the CRONUS family of online calculators.
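The azimuth/altitude sampling discussed above enters through a horizon-based shielding sum. A minimal Python sketch, assuming a uniform azimuth grid and the commonly used Dunne-style exponent m = 2.3 (an assumption for illustration; CAIRN's exact formulation may differ):

```python
import numpy as np

def topographic_shielding(horizon_deg, m=2.3):
    """Topographic shielding factor from horizon elevation angles sampled
    around the azimuth:
        S = 1 - (dphi / 2*pi) * sum_i sin(theta_i)**(m + 1)
    horizon_deg: horizon elevation angle (degrees) at each azimuth sample."""
    horizon = np.radians(np.asarray(horizon_deg, dtype=float))
    dphi = 2 * np.pi / len(horizon)  # azimuth step (radians), uniform grid
    return 1.0 - dphi / (2 * np.pi) * np.sum(np.sin(horizon) ** (m + 1))
```

A flat horizon gives S = 1 (no shielding); finer azimuth sampling (smaller Δθ) better resolves a rugged horizon, which is the accuracy/cost trade-off the paper quantifies.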
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, centered detrending moving average (DMA) analysis with a simple moving average has been demonstrated to perform well when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends, than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of the moving-averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
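The core trick behind such fast algorithms, a recurrence that slides the window sum instead of re-summing it, can be sketched for a simple (zeroth-order) moving average; this illustrates the O(n) versus O(n·w) point only, not the higher-order DMA code itself:

```python
import numpy as np

def moving_average_recurrence(x, w):
    """Centered simple moving average in O(n) using the recurrence
    S[i+1] = S[i] + x[i+w] - x[i] instead of re-summing each window."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    half = w // 2
    out = np.full(n, np.nan)          # edges left undefined
    s = x[:w].sum()                   # initial window sum
    for i in range(n - w + 1):
        out[i + half] = s / w
        if i + w < n:
            s += x[i + w] - x[i]      # recurrence: slide window by one
    return out
```

The naive version recomputes each window sum from scratch, so its cost grows with both the data length and the window size; the recurrence makes the cost essentially independent of `w`.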
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
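The lag-scanning procedure implied above (a trailing moving average of the economic series, correlated against the literary series for several candidate windows) can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def trailing_average(series, window):
    """Moving average over the `window` values up to and including each point."""
    s = np.asarray(series, dtype=float)
    return np.array([s[i - window + 1:i + 1].mean()
                     for i in range(window - 1, len(s))])

def best_window(literary, economic, windows):
    """Correlate the literary series against a trailing average of the
    economic series for each candidate window; return the best window."""
    lit_full = np.asarray(literary, dtype=float)
    best_w, best_r = None, -np.inf
    for w in windows:
        econ_avg = trailing_average(economic, w)
        r = np.corrcoef(lit_full[w - 1:], econ_avg)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```

In the paper's setting the scan over windows peaks at 11 years; the sketch simply makes explicit what "goodness of fit of the moving average" means operationally.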
Technology Transfer Automated Retrieval System (TEKTRAN)
Magnesium deprivation increased the inflammatory neuropeptide substance P and the inflammatory cytokines TNFα and IL-1β in bone of rats; the effects of deprivation were more marked at 6 months than at 3 months in rats fed 50% of the magnesium requirement (Rude et al., Osteoporos Int. 17:1022, 2006). D...
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
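A hedged sketch of the general idea: a 'long only' rule that enters on an upward price/MA cross and exits when price breaches a ratcheting threshold. The specific threshold choice here (the running maximum of the moving average since entry) is an illustrative assumption, not the paper's exact rule:

```python
import numpy as np

def long_only_signals(price, ma_window=20):
    """Return a 0/1 position series: enter when price crosses above its
    trailing moving average; exit when price falls below a trailing stop
    that ratchets up with the moving average while in the position."""
    price = np.asarray(price, dtype=float)
    ma = np.convolve(price, np.ones(ma_window) / ma_window, mode="valid")
    ma = np.concatenate([np.full(ma_window - 1, np.nan), ma])  # align with price
    position = np.zeros(len(price), dtype=int)
    in_pos, stop = False, -np.inf
    for t in range(1, len(price)):
        if np.isnan(ma[t]):
            continue
        if not in_pos and price[t] > ma[t] and price[t - 1] <= ma[t - 1]:
            in_pos, stop = True, ma[t]      # enter on upward cross
        elif in_pos:
            stop = max(stop, ma[t])         # trailing stop only moves up
            if price[t] < stop:
                in_pos = False              # exit on stop breach
        position[t] = int(in_pos)
    return position
```

Because the stop never falls, a position exits earlier on a sharp drawdown than the plain cross-down rule would, which is the mechanism behind the smaller maximum drawdown reported above.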
Successive averages of firmly nonexpansive mappings
Flam, S.
1994-12-31
The problem considered here is to find common fixed points of (possibly infinitely) many firmly nonexpansive self-mappings in a Hilbert space. For this purpose we use averaged relaxations of the original mappings, the averages being Bochner integrals with respect to chosen measures. Judicious choices of such measures serve to enhance the convergence towards common fixed points. Since projection operators onto closed convex sets are firmly nonexpansive, the methods explored are applicable to solving convex feasibility problems. In particular, by varying the measures our analysis encompasses recent developments of so-called block-iterative algorithms. We demonstrate convergence theorems which cover and extend many known results.
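Since projections onto closed convex sets are firmly nonexpansive, the simplest instance of the scheme is iterating an equally weighted average of two projections; a minimal sketch with two halfplanes in the plane, whose intersection the iterates approach:

```python
import numpy as np

def proj_A(p):
    """Metric projection onto the halfplane A = {x : x[0] >= 1}."""
    return np.array([max(p[0], 1.0), p[1]])

def proj_B(p):
    """Metric projection onto the halfplane B = {x : x[1] >= 1}."""
    return np.array([p[0], max(p[1], 1.0)])

def averaged_projections(p, iters=60):
    """Iterate the equally weighted average of the two projections; the
    iterates converge to a common fixed point, i.e. a point of A and B."""
    for _ in range(iters):
        p = 0.5 * (proj_A(p) + proj_B(p))
    return p
```

Starting from the origin, each coordinate follows x ← (1 + x)/2 until it reaches 1, so the iterates converge geometrically to the corner (1, 1) of the intersection.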
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
Evaluating Methods for Constructing Average High-Density Electrode Positions
Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.
2014-01-01
Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128 channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713
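Once electrode sets have been registered to a common template space, the average configuration and its standard errors reduce to per-electrode statistics; a minimal sketch (the point-set registration step itself is omitted, and the function name is illustrative):

```python
import numpy as np

def average_electrode_positions(coords):
    """coords: array (participants, electrodes, 3), already registered to a
    common template space. Returns the per-electrode mean location and the
    standard error of the mean for each coordinate."""
    coords = np.asarray(coords, dtype=float)
    mean = coords.mean(axis=0)
    sem = coords.std(axis=0, ddof=1) / np.sqrt(coords.shape[0])
    return mean, sem
```

Small standard errors then indicate that the registered electrode positions are consistent across participants, which is the criterion the study reports.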
40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?
Code of Federal Regulations, 2010 CFR
2010-07-01
... average toxics value determined? 80.825 Section 80.825 Protection of Environment ENVIRONMENTAL PROTECTION... Gasoline Toxics Performance Requirements § 80.825 How is the refinery or importer annual average toxics value determined? (a) The refinery or importer annual average toxics value is calculated as...
42 CFR 417.590 - Computation of the average of the per capita rates of payment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Computation of the average of the per capita rates... of the average of the per capita rates of payment. (a) Computation by the HMO or CMP. As indicated in... benefits required under § 417.592, weighted averages of those per capita rates must be computed...
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
Code of Federal Regulations, 2012 CFR
2012-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
Code of Federal Regulations, 2011 CFR
2011-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
Code of Federal Regulations, 2010 CFR
2010-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
Code of Federal Regulations, 2013 CFR
2013-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?
Code of Federal Regulations, 2014 CFR
2014-07-01
... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
42 CFR 417.590 - Computation of the average of the per capita rates of payment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 3 2014-10-01 2014-10-01 false Computation of the average of the per capita rates....590 Computation of the average of the per capita rates of payment. (a) Computation by the HMO or CMP... additional benefits required under § 417.592, weighted averages of those per capita rates must be...
42 CFR 417.590 - Computation of the average of the per capita rates of payment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 3 2013-10-01 2013-10-01 false Computation of the average of the per capita rates....590 Computation of the average of the per capita rates of payment. (a) Computation by the HMO or CMP... additional benefits required under § 417.592, weighted averages of those per capita rates must be...
42 CFR 417.590 - Computation of the average of the per capita rates of payment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 3 2012-10-01 2012-10-01 false Computation of the average of the per capita rates....590 Computation of the average of the per capita rates of payment. (a) Computation by the HMO or CMP... additional benefits required under § 417.592, weighted averages of those per capita rates must be...
Discrete Models of Fluids: Spatial Averaging, Closure, and Model Reduction
Panchenko, Alexander; Tartakovsky, Alexandre; Cooper, Kevin
2014-03-06
The main question addressed in the paper is how to obtain closed form continuum equations governing spatially averaged dynamics of semi-discrete ODE models of fluid flow. In the presence of multiple small scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy balance equations of mass, momentum and energy. These equations are exact, but they do not form a continuum model in the true sense of the word because calculation of stress and heat flux requires solving the underlying ODE system. To produce continuum equations that can be simulated without resolving micro-scale dynamics, we developed a closure method based on the use of regularized deconvolutions. We mostly deal with non-linear averaging suitable for Lagrangian particle solvers, but consider Eulerian linear averaging where appropriate. The results of numerical experiments show good agreement between our closed form flux approximations and their exact counterparts.
Two-Stage Bayesian Model Averaging in Endogenous Variable Models.
Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E
2014-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed.
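For context, the classic 2SLS estimator that 2SBMA extends can be sketched in a few lines; the variable names and the simulated single-instrument setup are illustrative:

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z):
    """Classic 2SLS: stage 1 regresses the endogenous regressor on the
    instruments Z; stage 2 regresses y on the stage-1 fitted values."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])
    beta1, *_ = np.linalg.lstsq(Z1, X_endog, rcond=None)  # stage 1
    X_hat = Z1 @ beta1                                    # fitted regressor
    X2 = np.column_stack([np.ones(n), X_hat])
    beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)        # stage 2
    return beta2  # [intercept, slope]
```

With a confounder entering both the regressor and the outcome, plain OLS of y on x is biased, while the instrumented estimate recovers the structural slope; 2SBMA adds model averaging over which instruments and covariates enter each stage.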
78 FR 76241 - Rescission of Quarterly Financial Reporting Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
... Regulatory Review,'' 76 FR 3821 (Jan. 21, 2011), which required agencies, among other things, to prepare..., among other things, to prepare plans for reviewing existing rules. The rule eliminates the quarterly.... Table ES-1 displays the average annual net costs and benefits of the rule. Table ES-1--Estimated...
42 CFR 441.303 - Supporting documentation required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... used for hospital, NF, or ICF/IID placement; (3) The agency's procedure to ensure the maintenance of... waiver program. G = the estimated annual average per capita Medicaid cost for hospital, NF, or ICF/IID... require the level of care provided in an ICF/IID as determined by the State on the basis of an...
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Average configuration of the induced venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J × B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Cryo-Electron Tomography and Subtomogram Averaging.
Wan, W; Briggs, J A G
2016-01-01
Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
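The signal-to-noise rationale for subtomogram averaging, noise falling roughly as 1/sqrt(N) over N aligned copies, can be illustrated with synthetic volumes (alignment errors and missing-wedge effects are ignored in this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=(16, 16, 16))  # stand-in for the true 3D density

# 100 "subtomograms": perfectly aligned copies of the signal plus heavy noise
subtomos = [signal + rng.normal(scale=4.0, size=signal.shape) for _ in range(100)]

average = np.mean(subtomos, axis=0)     # the subtomogram average

def noise_rms(volume):
    """Root-mean-square deviation from the noise-free reference."""
    return float(np.sqrt(np.mean((volume - signal) ** 2)))

# Noise RMS drops roughly as 1/sqrt(N): ~4.0 for one copy vs ~0.4 for N=100.
```

In practice the alignment itself must be estimated from the noisy data, which is why classification and iterative refinement are part of the workflow described above.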
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s₀ and w₀ are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982), but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average-power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. 'Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers"' (Alice in Wonderland, L. Carroll)
Averaging cross section data so we can fit it
Brown, D.
2014-10-23
The ^{56}Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
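The Lorentzian smoothing described above can be sketched as a weighted moving average with a Lorentzian kernel; the energy grid, kernel half-width, and synthetic "cross section" below are illustrative assumptions, not EMPIRE's implementation.

```python
import numpy as np

def lorentzian_smooth(E, sigma, gamma):
    """Smooth a cross section sigma(E) with a Lorentzian of HWHM gamma.

    Each smoothed point is a Lorentzian-weighted average of the raw data;
    the weights are renormalized on the finite energy grid.
    """
    E = np.asarray(E, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    smoothed = np.empty_like(sigma)
    for i, e0 in enumerate(E):
        w = gamma / ((E - e0) ** 2 + gamma ** 2)  # Lorentzian weights
        smoothed[i] = np.sum(w * sigma) / np.sum(w)
    return smoothed

# Illustrative fluctuating "cross section" above 500 keV
E = np.linspace(0.5, 2.0, 601)             # MeV
truth = 1.0 + 0.3 * E                      # slowly varying part (barns)
sigma = truth + 0.2 * np.sin(200 * E)      # resonance-like fluctuations
smooth = lorentzian_smooth(E, sigma, gamma=0.05)
```

Because the kernel half-width (0.05 MeV here) spans several fluctuation periods, the rapid oscillations average out while the slowly varying part survives.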
Synthesizing average 3D anatomical shapes using deformable templates
NASA Astrophysics Data System (ADS)
Christensen, Gary E.; Johnson, Hans J.; Haller, John W.; Melloy, Jenny; Vannier, Michael W.; Marsh, Jeffrey L.
1999-05-01
A major task in diagnostic medicine is to determine whether an individual has a normal or abnormal anatomy by examining medical images such as MRI, CT, etc. Unfortunately, there are few quantitative measures that a physician can use to discriminate between normal and abnormal beyond a handful of length, width, height, and volume measurements. In fact, there is no definition/picture of what normal anatomical structures (such as the brain) look like, let alone of normal anatomical variation. The goal of this work is to synthesize average 3D anatomical shapes using deformable templates. We present a method for empirically estimating the average shape and variation of a set of 3D medical image data sets collected from a homogeneous population of topologically similar anatomies. Results are shown for synthesizing the average brain image volume from a set of six normal adults and synthesizing the average skull/head image volume from a set of five 3-4 month old infants with sagittal synostosis.
A database of age-appropriate average MRI templates.
Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze
2016-01-01
This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural and neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3-month intervals through 1 year; 6-month intervals through 19.5 years; 5-year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial-volume estimates for use as segmentation priors, and by a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).
Miller, M.R.; Eadie, J. McA
2006-01-01
Breeding densities and migration periods of Common Snipe in Colorado were investigated in 1974-75. Sites studied were near Fort Collins and in North Park, both in north central Colorado; in the Yampa Valley in northwestern Colorado; and in the San Luis Valley in south central Colorado... Estimated densities of breeding snipe based on censuses conducted during May 1974 and 1975 were, by region: 1.3-1.7 snipe/ha near Fort Collins; 0.6 snipe/ha in North Park; 0.5-0.7 snipe/ha in the Yampa Valley; and 0.5 snipe/ha in the San Luis Valley. Overall mean densities were 0.6 and 0.7 snipe/ha in 1974 and 1975, respectively. On individual study sites, densities of snipe ranged from 0.2 to 2.1 snipe/ha. Areas with shallow, stable, discontinuous water levels, sparse, short vegetation, and soft organic soils had the highest densities... Twenty-eight nests were located, having a mean clutch size of 3.9 eggs. Estimated onset of incubation ranged from 2 May through 4 July. Most nests were initiated in May... Spring migration extended from late March through early May. Highest densities of snipe were recorded in all regions during l&23 April. Fall migration was underway by early September and was completed by mid-October, with highest densities occurring about the third week in September. High numbers of snipe noted in early August may have been early migrants or locally produced juveniles concentrating on favorable feeding areas.
The role of the harmonic vector average in motion integration.
Johnston, Alan; Scarfe, Peter
2013-01-01
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
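A minimal sketch of the HVA computation may help: each local velocity is mapped to its "inverse" vector v/|v|^2, the ordinary vector average is taken, and the mean is mapped back through the same inversion. This formulation is our reading of the abstract, not code from the paper. For local normal velocities sampled symmetrically about the global motion direction (an unbiased sample), the HVA recovers the global velocity exactly:

```python
import numpy as np

def harmonic_vector_average(velocities):
    """HVA of a set of 2-D local velocity vectors (assumed formulation).

    Map each velocity v to v/|v|^2, take the ordinary vector average,
    then map the mean back through the same inversion.
    """
    v = np.asarray(velocities, dtype=float)
    inv = v / np.sum(v ** 2, axis=1, keepdims=True)   # v -> v / |v|^2
    m = inv.mean(axis=0)
    return m / np.dot(m, m)                           # invert the mean

# Local normal velocities of a contour translating with global velocity V:
# v_i = (V . n_i) n_i for contour normals n_i, sampled symmetrically
# about the direction of V.
V = np.array([3.0, 1.0])
deltas = np.deg2rad([-70, -40, -10, 10, 40, 70])
angles = deltas + np.arctan2(V[1], V[0])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
local = (normals @ V)[:, None] * normals
est = harmonic_vector_average(local)   # recovers V for this unbiased sample
```

The plain vector average of `local` underestimates the global speed, as the abstract notes, while the HVA returns it exactly for this symmetric sample.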
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
Average entanglement for Markovian quantum trajectories
Vogelsberger, S.; Spehner, D.
2010-11-15
We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.
From cellular doses to average lung dose.
Hofmann, W; Winkler-Heil, R
2015-11-01
Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions.
High-average-power exciplex laser system
NASA Astrophysics Data System (ADS)
Sentis, M.
The LUX high-average-power high-PRF exciplex laser (EL) system being developed at the Institut de Mecanique des Fluides de Marseille is characterized, and some preliminary results are presented. The fundamental principles and design criteria of ELs are reviewed, and the LUX components are described and illustrated, including a closed-circuit subsonic wind tunnel and a 100-kW-average power 1-kHz-PRF power pulser providing avalanche-discharge preionization by either an electron beam or an X-ray beam. Laser energy of 50 mJ has been obtained at wavelength 308 nm in the electron-beam mode (14.5 kV) using a 5300/190/10 mixture of Ne/Xe/HCl at pressure 1 bar.
Apparent and average accelerations of the Universe
Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu
2008-10-15
In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaitre-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and of stationary optimal strategies for both players. For the nonzero-sum case, the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.
Iterative methods based upon residual averaging
NASA Technical Reports Server (NTRS)
Neuberger, J. W.
1980-01-01
Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.
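As a toy illustration of the idea of subtracting an average of residuals (a sketch in the spirit of the abstract, not Neuberger's actual Hilbert-space method), the snippet below solves a 1-D boundary value problem by repeatedly forming the residual, locally averaging it, and subtracting a scaled copy from the current approximation:

```python
import numpy as np

# Illustrative only: solve -u'' = f on (0,1), u(0) = u(1) = 0, by repeatedly
# subtracting a locally *averaged* residual from the current approximation.
n = 15
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution is sin(pi x)

u = np.zeros(n)
for _ in range(400):
    # residual of the centered-difference approximation to -u'' = f
    up = np.concatenate(([0.0], u, [0.0]))
    r = (2 * up[1:-1] - up[:-2] - up[2:]) / h ** 2 - f
    # local (1/4, 1/2, 1/4) average of the residual
    rp = np.concatenate(([0.0], r, [0.0]))
    r_avg = 0.25 * rp[:-2] + 0.5 * rp[1:-1] + 0.25 * rp[2:]
    u -= h ** 2 * r_avg                   # step size chosen for stability
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With this step size every error mode is damped (the averaging suppresses the highest frequencies), so the iterate converges to the discrete solution, leaving only the O(h^2) discretization error.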
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
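The queue model above is easy to simulate: since overtaking is impossible, each car travels at the minimum of its own preferred speed and the preferred speeds of all cars ahead of it. A short sketch (the speed range and car count are arbitrary choices, not from the article):

```python
import random

def queue_speeds(max_speeds):
    """Actual speeds on a no-overtaking road.

    max_speeds is ordered front of queue first; each car travels at the
    minimum of its own preferred speed and those of every car ahead.
    """
    speeds, slowest = [], float("inf")
    for v in max_speeds:
        slowest = min(slowest, v)       # running minimum so far
        speeds.append(slowest)
    return speeds

random.seed(1)
prefs = [random.uniform(60, 120) for _ in range(1000)]  # km/h, illustrative
actual = queue_speeds(prefs)
mean_pref = sum(prefs) / len(prefs)
mean_actual = sum(actual) / len(actual)
```

The non-trivial point of the article shows up immediately: `mean_actual` sits close to the slowest preferred speed, far below the plain mean of the preferences.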
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Geomagnetic effects on the average surface temperature
NASA Astrophysics Data System (ADS)
Ballatore, P.
Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary and geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
Disk-averaged synthetic spectra of Mars.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
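The averaging procedure described, i.e. stacking the same calendar month from every year and averaging per grid cell while ignoring invalid cells, can be sketched as follows. The array sizes, temperatures, and NaN flagging below are illustrative, not the actual AVHRR MCSST format:

```python
import numpy as np

# Toy version of the climatological averaging described above: stack the
# "January" image from each year and average per grid cell, ignoring
# cells flagged invalid (NaN, standing in for cloud/data dropouts).
rng = np.random.default_rng(0)
years, ny, nx = 10, 4, 5
january = 20.0 + rng.normal(0, 0.5, size=(years, ny, nx))  # deg C
january[rng.random(january.shape) < 0.2] = np.nan          # dropouts

climatology = np.nanmean(january, axis=0)     # 10-year January average
valid_counts = np.sum(~np.isnan(january), axis=0)
```

As the abstract notes, averaging across years fills in grid cells that lack valid data in any single year (a cell stays empty only if it is invalid in all ten years) and suppresses interannual variability.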
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
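Averaging many phase measurements for improved resolution has a well-known subtlety: near the ±π wrap an arithmetic mean is badly biased, so phase angles are usually averaged as unit phasors. The sketch below illustrates this circular mean on synthetic data; it is a generic technique, not the phasemeter's actual algorithm:

```python
import cmath
import math
import random

def circular_mean(phases):
    """Average phase angles by summing unit phasors (safe across the wrap)."""
    s = sum(cmath.exp(1j * p) for p in phases)
    return cmath.phase(s)

random.seed(2)
true_phase = 3.1                       # rad, deliberately near the +/-pi wrap
noisy = [true_phase + random.gauss(0, 0.05) for _ in range(10000)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in noisy]

est = circular_mean(wrapped)           # close to 3.1
naive = sum(wrapped) / len(wrapped)    # badly biased by wrapped samples
```

With 10,000 measurements per second, as in the abstract's example, the noise on the averaged phase shrinks by roughly a factor of 100 relative to a single cycle, while the naive arithmetic mean of wrapped angles lands nowhere near the true phase.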
Oguchi, Masahiro; Fuse, Masaaki
2015-02-01
Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distributions. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
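The simplification described, fixing the shape parameter of the lifespan distribution, can be sketched with a Weibull distribution, a common choice for product lifespans: once the shape is held constant, a single observed quantity such as the average lifespan determines the scale parameter. The shape value and average lifespan below are illustrative assumptions, not the paper's estimates:

```python
import math

def weibull_mean(scale, shape):
    """Mean lifespan of a Weibull(shape, scale) distribution."""
    return scale * math.gamma(1.0 + 1.0 / shape)

def scale_from_mean(mean_lifespan, shape):
    """With the shape fixed (the simplification above), the average
    lifespan alone pins down the whole distribution."""
    return mean_lifespan / math.gamma(1.0 + 1.0 / shape)

SHAPE = 2.5                          # illustrative fixed shape parameter
lam = scale_from_mean(14.0, SHAPE)   # e.g. a 14-year observed average lifespan
```

This is why holding the shape constant removes the need for detailed age-profile data: one summary statistic per country and year suffices to recover the full distribution.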
NASA Technical Reports Server (NTRS)
Plitau, Denis; Prasad, Narasimha S.
2012-01-01
The Active Sensing of CO2 Emissions over Nights Days and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned to be included in the framework. The description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
NASA Astrophysics Data System (ADS)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-08-01
Lithium-ion battery systems employed in high-power applications such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored: the state of charge (SoC), the state of health (capacity fade determination, SoH), and the state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. This first paper investigates the general requirements on battery management systems (BMS) for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses with respect to the identified requirements for on-line applications. Special emphasis is placed on real-time capability and memory-optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. A battery model is then presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
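A minimal sketch of an exponentially weighted recursive least squares estimator of the kind described may clarify the on-line identification idea. It is fitted here to a deliberately simple battery model V = OCV - R*I on synthetic data; the model structure, noise level, and forgetting factor are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One step of exponentially weighted (forgetting-factor) RLS."""
    phi = phi.reshape(-1, 1)
    denom = lam + (phi.T @ P @ phi).item()
    k = (P @ phi) / denom                  # gain vector
    err = y - (phi.T @ theta).item()       # prediction error
    theta = theta + k.ravel() * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Synthetic measurements from V = OCV - R*I with OCV = 3.7 V, R = 50 mOhm
rng = np.random.default_rng(3)
OCV, R = 3.7, 0.05
theta = np.zeros(2)                        # running estimate of [OCV, R]
P = np.eye(2) * 1e3                        # large initial covariance
for _ in range(500):
    I = rng.uniform(-2.0, 2.0)             # excitation current, amps
    V = OCV - R * I + rng.normal(0, 1e-3)  # noisy terminal voltage
    theta, P = rls_update(theta, P, np.array([1.0, -I]), V)
```

The constant-time, small-memory update per sample is what makes this family of estimators attractive for the low-cost microcontrollers mentioned above; the forgetting factor lets the estimate track slow parameter drift such as capacity or power fade.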
Temporal averaging of atmospheric turbulence-induced optical scintillation.
Yura, H T; Beck, S M
2015-08-24
Based on the Rytov approximation, we have developed, for weak scintillation conditions, a general expression for the temporally averaged variance of irradiance. The present analysis provides, for what we believe is the first time, a firm theoretical basis for the often-observed reduction of irradiance fluctuations of an optical beam due to atmospheric turbulence. Accurate elementary analytic approximations are presented here for plane, spherical and beam waves for predicting the averaging times required to obtain an arbitrary value of the ratio of the standard deviation to the mean of an optical beam propagating through an arbitrary path in the atmosphere. In particular, a novel application of differential absorption measurement for the purpose of measuring column-integrated concentrations of various so-called greenhouse gas (GHG) atmospheric components is considered; the results of our analysis indicate that relatively short averaging times, on the order of a few seconds, are required to reduce the irradiance fluctuations to a value precise enough for GHG measurements of value to climate-related studies.
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous-wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high-efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high-flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.
CD SEM metrology macro CD technology: beyond the average
NASA Astrophysics Data System (ADS)
Bunday, Benjamin D.; Michelson, Di K.; Allgair, John A.; Tam, Aviram; Chase-Colin, David; Dajczman, Asaf; Adan, Ofer; Har-Zvi, Michael
2005-05-01
Downscaling of semiconductor fabrication technology requires ever-tighter control of the production process. The CD-SEM, being the major image-based critical dimension metrology tool, is constantly being improved in order to fulfill these requirements. One of the methods used for increasing precision is averaging over several or many (ideally identical) features, usually referred to as "Macro CD". In this paper, we show that there is much more to Macro CD technology (metrics characterizing an arbitrary array of similar features within a single SEM image) than just the average. A large amount of data is accumulated from a single scan of a SEM image, providing informative and statistically valid local process characterization. As opposed to other technologies, Macro CD not only provides extremely precise average metrics, but also allows for the reporting of full information on each of the measured features and of various statistics (such as the variability) on all currently reported CD-SEM metrics. We present the mathematical background behind Macro CD technology and the opportunity for reducing the number of sites for SPC, along with providing enhanced-sensitivity CD metrics.
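The statistical leverage behind Macro CD can be illustrated in a few lines: averaging N nominally identical features improves the precision of the mean CD roughly as 1/sqrt(N), while the per-feature values remain available for variability metrics. The feature count, nominal CD, and noise level below are illustrative numbers, not data from the paper:

```python
import numpy as np

# Toy "Macro CD": one image contains many nominally identical features;
# report both the (very precise) average CD and per-feature statistics.
rng = np.random.default_rng(4)
n_features = 64
true_cd = 45.0                                            # nm, nominal CD
measured = true_cd + rng.normal(0, 0.6, size=n_features)  # per-feature noise

macro_cd = measured.mean()         # average over the whole array
local_var = measured.std(ddof=1)   # variability metric, also reportable
# single-feature precision ~0.6 nm; the average improves as ~1/sqrt(64)
```

The same single scan thus yields both a precise average for SPC and the full per-feature distribution, which is the point the abstract makes against reporting the average alone.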
Estimation of Radar Cross Section of a Target under Track
NASA Astrophysics Data System (ADS)
Jung, Young-Hun; Hong, Sun-Mog; Choi, Seung Ho
2010-12-01
In allocating a radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required so that the transmitted power can be adjusted to realize the desired SNR. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution is proposed based on a generalized expectation-maximization (GEM) algorithm. The estimation accuracy of the approach is compared to that of a previously reported procedure.
Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works
NASA Astrophysics Data System (ADS)
Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha
2015-04-01
Among the many approaches to landslide stability analysis, several studies have used shear strength parameters obtained from laboratory shear tests together with the limit equilibrium method. Most concluded that the average strength parameters, i.e. the average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated by back analysis agreed with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, others have reported that the residual shear strength measured with a torsional ring-shear apparatus is lower than the average strength calculated by back analysis. One reason why applying the residual shear strength alone in stability analysis underestimates the safety factor is that the condition of a landslide slip surface can be heterogeneous: it may consist of portions that have already reached residual conditions alongside portions that have not. To accommodate such differences in slip surface condition, it is worth first gaining an appropriate picture of the heterogeneous nature of the actual slip surface, so that measured shear strength values can be selected more suitably for the stability calculation of landslides. In the present study, a procedure for determining the average strength parameters acting along the slip surface is presented through stability calculations for reactivated landslides in the Shimajiri mudstone area of Okinawa, Japan. The average strength parameters along the slip surfaces have been estimated using the results of laboratory shear tests on slip surface/zone soils, together with a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the
Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.
2015-01-01
Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the ‘genetic benefits’ account of facial averageness, but cast doubt on others. PMID:26858521
NASA Astrophysics Data System (ADS)
Ogata, Hidehiko; Noda, Tomoyuki; Sakamoto, Yasufumi; Shinotsuka, Masanori; Kamada, Osamu; Nakamura, Kazuaki
The pavement rate of farm roads, which are important for agricultural production, the distribution of agricultural products, and rural life, is low. Many farm roads do not provide adequate traveling performance, traveling comfort, or protection of agricultural products from damage during transportation. Maintenance, including improvement of the pavement rate, must be carried out economically on the basis of the service environment, the surrounding environment, and the function required of each kind of farm road. In this research, the problems of farm roads in a paddy area were extracted from a questionnaire sent to a land improvement district as administrator, and the conditions that should be taken into consideration in farm road maintenance were clarified. The main problem with farm roads is deformation of the road surface, and the main request is a long period between necessary repairs. Moreover, the present performance of the ground and road surface of sediment-paved farm roads was evaluated. A positive correlation was found between the standard deviation of the modulus of elasticity of the soil and surface roughness, and a negative correlation between the modulus of elasticity of the soil in the rut and rutting depth.
Fluctuations of wavefunctions about their classical average
NASA Astrophysics Data System (ADS)
Benet, L.; Flores, J.; Hernández-Saldaña, H.; Izrailev, F. M.; Leyvraz, F.; Seligman, T. H.
2003-02-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Collimation of average multiplicity in QCD jets
NASA Astrophysics Data System (ADS)
Arleo, François; Pérez Ramos, Redamy
2009-11-01
The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.
Average characteristics of partially coherent electromagnetic beams.
Seshadri, S R
2000-04-01
Average characteristics of partially coherent electromagnetic beams are treated with the paraxial approximation. Azimuthally or radially polarized, azimuthally symmetric beams and linearly polarized dipolar beams are used as examples. The change in the mean squared width of the beam from its value at the location of the beam waist is found to be proportional to the square of the distance in the propagation direction. The proportionality constant is obtained in terms of the cross-spectral density as well as its spatial spectrum. The use of the cross-spectral density has advantages over the use of its spatial spectrum.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
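The contour-deformation idea in this abstract can be illustrated numerically: evaluating a Green's function at complex energy E + iη replaces each sharp level with a Lorentzian of half-width η while preserving its integrated weight. A minimal sketch, with illustrative level energies chosen for demonstration (not values from the model):

```python
import numpy as np

# Illustrative sharp levels (stand-ins for bound states / resonances).
levels = np.array([-2.0, -0.5, 0.3])
eta = 0.05  # Lorentzian half-width, chosen for numerical convenience

E = np.linspace(-3.0, 1.5, 4001)
# Density of states from the imaginary part of G(E + i*eta):
#   rho(E) = -(1/pi) * Im sum_n 1/(E + i*eta - E_n)
G = np.sum(1.0 / (E[:, None] + 1j * eta - levels[None, :]), axis=1)
rho = -G.imag / np.pi

# Each delta function becomes a smooth Lorentzian; the integrated
# weight (number of states) is approximately preserved.
n_states = np.sum(rho) * (E[1] - E[0])
```

Because the broadened features are smooth, no resonance tracking or bound-state search is needed on the real axis.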
Ensemble estimators for multivariate entropy estimation
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O.
2015-01-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^{-γ/d}), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster, dimension-invariant rate of O(T^{-1}). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. PMID:25897177
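The base estimators being combined can be sketched with the standard Kozachenko-Leonenko k-NN entropy estimator. The paper chooses the ensemble weights by solving a convex program; the minimal illustration below uses uniform weights instead (an assumption for brevity, not the paper's optimal choice):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th nearest neighbor (index 0 is the point itself).
    eps = tree.query(x, k=k + 1)[0][:, k]
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 1))

# Ensemble: a weighted affine combination of base estimators with
# weights summing to one (here uniform, not the optimized weights).
ks = [3, 5, 10]
weights = np.full(len(ks), 1.0 / len(ks))
h_ens = sum(w * kl_entropy(x, k) for w, k in zip(weights, ks))
```

For a standard normal sample the combined estimate should land near the true differential entropy 0.5·ln(2πe) ≈ 1.419 nats.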
H∞ control of switched delayed systems with average dwell time
NASA Astrophysics Data System (ADS)
Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay
2013-12-01
This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than those seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
Robust myelin water quantification: averaging vs. spatial filtering.
Jones, Craig K; Whittall, Kenneth P; MacKay, Alex L
2003-07-01
The myelin water fraction is calculated, voxel-by-voxel, by fitting decay curves from a multi-echo data acquisition. Curve-fitting algorithms require a high signal-to-noise ratio to separate T(2) components in the T(2) distribution. This work compared the effect of averaging, during acquisition, to data postprocessed with a noise reduction filter. Forty regions, from five volunteers, were analyzed. A consistent decrease in the myelin water fraction variability with no bias in the mean was found for all 40 regions. Images of the myelin water fraction of white matter were more contiguous and had fewer "holes" than images of myelin water fractions from unfiltered echoes. Spatial filtering was effective for decreasing the variability in myelin water fraction calculated from 4-average multi-echo data.
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
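The Machine Mode x Beam Mode matrix described above is, at its core, a lookup table from (beam path, power class) to an enforced average power limit, with unprogrammed combinations inhibiting the beam. A hypothetical sketch (the mode assignments and limit values below are invented for illustration, not the FEL's actual table):

```python
# Hypothetical 8x8 mode matrix: 8 Machine Modes (beam path) x
# 8 Beam Modes (power class) -> allowed maximum average power in watts.
# None marks a combination outside the programmed matrix: beam inhibited.
NO_BEAM = None
limits = [[NO_BEAM] * 8 for _ in range(8)]
limits[0][0] = 0.0   # illustrative: "no beam" class on path 0
limits[2][3] = 1e3   # illustrative: diagnostic path, 1 kW class
limits[5][7] = 2e6   # illustrative: full path, 2 MW class

def allowed_power(machine_mode: int, beam_mode: int) -> float:
    """Return the enforced average-power limit; 0.0 (inhibit) if the
    combination is outside the programmed matrix."""
    limit = limits[machine_mode][beam_mode]
    return 0.0 if limit is NO_BEAM else limit
```

In hardware this lookup lives in gate arrays and gates the drive laser; the sketch only shows the safe-by-default lookup logic.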
Motional averaging in a superconducting qubit.
Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S
2013-01-01
Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011
Optimum Low Thrust Elliptic Orbit Transfer Using Numerical Averaging
NASA Astrophysics Data System (ADS)
Tarzi, Zahi Bassem
Low-thrust electric propulsion is increasingly being used for spacecraft missions, primarily due to its high propellant efficiency. Since analytical solutions for general low-thrust transfers are not available, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations, thus allowing preliminary trajectories to be obtained for transfers which involve many revolutions about the primary body. This method considers minimum fuel transfers using first order averaging to obtain the fuel optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as minimum periapsis, are implemented and the equations are averaged numerically using a Gaussian quadrature. The use of numerical averaging allows more complex orbital perturbations to be added without great difficulty. Orbital perturbations due to solar radiation pressure, atmospheric drag, a non-spherical central body, and third body gravitational effects have been included. These perturbations have not been considered by previous methods using analytical averaging. Thrust limitations due to shadowing have also been considered in this study. To allow for faster convergence of a wider range of problems, the solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum fuel problem. Thus, this method can be quickly applied to many different types of transfers which may include various perturbations. Results from this model are shown to provide a reduction in the propellant mass required over previous minimum fuel solutions.
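The numerical averaging step can be sketched as fixed-order Gauss-Legendre quadrature of an element rate over one revolution. The integrand below is a smooth stand-in with a known average, not the paper's equinoctial rate equations:

```python
import numpy as np

def orbit_average(rate, order=32):
    """Average rate(M) over one revolution in mean anomaly M using
    Gauss-Legendre quadrature: (1/2pi) * integral_0^{2pi} rate(M) dM."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    M = np.pi * (nodes + 1.0)  # map quadrature nodes [-1, 1] -> [0, 2pi]
    # Jacobian dM/dn = pi; dividing by 2pi leaves a factor 1/2.
    return 0.5 * np.sum(weights * rate(M))

# Stand-in integrand: the orbit average of sin^2(M) is exactly 1/2.
avg = orbit_average(lambda M: np.sin(M) ** 2)
```

Replacing the stand-in with the actual perturbed element rates is what lets additional perturbations be added without new analytical averaging.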
Discrete models of fluids: spatial averaging, closure and model reduction
Panchenko, Alexander; Tartakovsky, Alexandre M.; Cooper, Kevin
2014-04-15
We consider semidiscrete ODE models of single-phase fluids and two-fluid mixtures. In the presence of multiple fine-scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy exact balance equations of mass, momentum, and energy. These equations do not form a satisfactory continuum model because evaluation of stress and heat flux requires solving the underlying ODEs. To produce continuum equations that can be simulated without resolving microscale dynamics, we recently proposed a closure method based on the use of regularized deconvolution. Here we continue the investigation of deconvolution closure with the long term objective of developing consistent computational upscaling for multiphase particle methods. The structure of the fine-scale particle solvers is reminiscent of molecular dynamics. For this reason we use nonlinear averaging introduced for atomistic systems by Noll, Hardy, and Murdoch-Bedeaux. We also consider a simpler linear averaging originally developed in large eddy simulation of turbulence. We present several simple but representative examples of spatially averaged ODEs, where the closure error can be analyzed. Based on this analysis we suggest a general strategy for reducing the relative error of approximate closure. For problems with periodic highly oscillatory material parameters we propose a spectral boosting technique that augments the standard deconvolution and helps to correctly account for dispersion effects. We also conduct several numerical experiments, one of which is a complete mesoscale simulation of a stratified two-fluid flow in a channel. In this simulation, the operation count per coarse time step scales sublinearly with the number of particles.
Homodyne measurement of the average photon number
NASA Astrophysics Data System (ADS)
Webb, J. G.; Ralph, T. C.; Huntington, E. H.
2006-03-01
We describe a scheme for measurement of the mean photon flux at an arbitrary optical sideband frequency using homodyne detection. Experimental implementation of the technique requires an acousto-optic modulator in addition to the homodyne detector, and does not require phase locking. The technique exhibits polarization, frequency, and spatial mode selectivity, as well as much improved speed, resolution, and dynamic range when compared to linear photodetectors and avalanche photodiodes, with potential application to quantum-state tomography and information encoding using an optical frequency basis. Experimental data also support a quantum-mechanical description of vacuum noise.
Homodyne measurement of the average photon number
Webb, J. G.; Huntington, E. H.; Ralph, T. C.
2006-03-15
We describe a scheme for measurement of the mean photon flux at an arbitrary optical sideband frequency using homodyne detection. Experimental implementation of the technique requires an acousto-optic modulator in addition to the homodyne detector, and does not require phase locking. The technique exhibits polarization, frequency, and spatial mode selectivity, as well as much improved speed, resolution, and dynamic range when compared to linear photodetectors and avalanche photodiodes, with potential application to quantum-state tomography and information encoding using an optical frequency basis. Experimental data also support a quantum-mechanical description of vacuum noise.
Non-self-averaging in Ising spin glasses and hyperuniversality.
Lundow, P H; Campbell, I A
2016-01-01
Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L) = [K_d ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996)]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_max are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_max peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario. PMID:26871035
Non-self-averaging in Ising spin glasses and hyperuniversality
NASA Astrophysics Data System (ADS)
Lundow, P. H.; Campbell, I. A.
2016-01-01
Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L) = [K_d ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_max are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_max peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.
High average power linear induction accelerator development
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.
Average Gait Differential Image Based Human Recognition
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient and less memory-intensive feature extraction method in gait-based recognition. PMID:24895648
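The AGDI construction, accumulating absolute differences of adjacent silhouette frames, can be sketched in a few lines (the 2DPCA feature-extraction stage is omitted; the frame data below are a toy example):

```python
import numpy as np

def agdi(frames):
    """Average gait differential image: mean absolute difference of
    adjacent binary silhouette frames (frames has shape T x H x W)."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # T-1 adjacent-frame differences
    return diffs.mean(axis=0)                # accumulate and normalize

# Toy sequence: a 1-pixel-wide vertical bar shifting right each frame.
T, H, W = 4, 5, 6
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, :, t] = 1.0
feature = agdi(frames)
```

Pixels that change often between frames (the moving edges) get large AGDI values, which is how the feature keeps the kinetic information alongside the static silhouette.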
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
[Quetelet, the average man and medical knowledge].
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:24141918
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
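The weighted harmonic average at the core of the GEN can be written down directly. The toy below shows only the averaging step, not the paper's full recursive node-to-node definition, and the values are invented for illustration:

```python
def weighted_harmonic_average(values, weights):
    """Weighted harmonic average: sum(w) / sum(w / x).
    Heavily weighted small values dominate, unlike the arithmetic mean."""
    return sum(weights) / sum(w / x for w, x in zip(weights, values))

# One strong (heavily weighted) close connection dominates two weak
# distant ones: the result stays near the small value 1.0.
close = weighted_harmonic_average([1.0, 10.0, 10.0], [5.0, 1.0, 1.0])
```

This dominance of strong short connections is what lets the measure separate topologies that shortest-path distance treats as identical.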
Scaling crossover for the average avalanche shape
NASA Astrophysics Data System (ADS)
Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.
2010-03-01
Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Rong, Y; Sillick, M; Gregson, C M
2009-01-01
Dextrose equivalent (DE) value is the most common parameter used to characterize the molecular weight of maltodextrins. Its theoretical value is inversely proportional to number average molecular weight (M(n)), providing a theoretical basis for correlations with physical properties important to food manufacturing, such as: hygroscopicity, the glass transition temperature, and colligative properties. The use of freezing point osmometry to measure DE and M(n) was assessed. Measurements were made on a homologous series of malto-oligomers as well as a variety of commercially available maltodextrin products with DE values ranging from 5 to 18. Results on malto-oligomer samples confirmed that freezing point osmometry provided a linear response with number average molecular weight. However, noncarbohydrate species in some commercial maltodextrin products were found to be in high enough concentration to interfere appreciably with DE measurement. Energy dispersive spectroscopy showed that sodium and chloride were the major ions present in most commercial samples. Osmolality was successfully corrected using conductivity measurements to estimate ion concentrations. The conductivity correction factor appeared to be dependent on the concentration of maltodextrin. Equations were developed to calculate corrected values of DE and M(n) based on measurements of osmolality, conductivity, and maltodextrin concentration. This study builds upon previously reported results through the identification of the major interfering ions and provides an osmolality correction factor that successfully accounts for the influence of maltodextrin concentration on the conductivity measurement. The resulting technique was found to be rapid, robust, and required no reagents.
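The inverse proportionality between theoretical DE and M(n) follows from DE being defined relative to glucose (DE 100, MW 180.16 g/mol), i.e. DE ≈ 100 × 180.16 / M(n). A small sketch of this standard relation (the conductivity-based correction developed in the study is not reproduced here):

```python
GLUCOSE_MW = 180.16  # g/mol; glucose itself has DE = 100 by definition

def de_from_mn(mn: float) -> float:
    """Theoretical dextrose equivalent from number-average MW."""
    return 100.0 * GLUCOSE_MW / mn

def mn_from_de(de: float) -> float:
    """Number-average molecular weight implied by a theoretical DE."""
    return 100.0 * GLUCOSE_MW / de

# A DE 10 maltodextrin corresponds to Mn of about 1802 g/mol.
mn = mn_from_de(10.0)
```

This is why a colligative measurement such as freezing point depression, which responds to the number of dissolved molecules, maps linearly onto DE.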
The average size and temperature profile of quasar accretion disks
Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Motta, V.; Falco, E.
2014-03-01
We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrowband photometry, we have been able to remove contamination from the weakly microlensed broad emission lines, extinction, and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling (r{sub s} ∝λ {sup p}, corresponding to a disk temperature profile of T∝r {sup –1/p}) of p=0.75{sub −0.2}{sup +0.2} and a Bayesian estimate of p = 0.8 ± 0.2, which are significantly smaller than the prediction of the thin disk theory (p = 4/3). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of r{sub s}=4.5{sub −1.2}{sup +1.5} lt-day at a rest frame wavelength of λ = 1026 Å for microlenses with a mean mass of M = 1 M {sub ☉}, in agreement with previous results, and larger than expected from thin disk theory.